WorldWideScience

Sample records for exponential decay models

  1. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. In contrast to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
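
    As an illustration of the kind of case study this book is built around, here is a minimal sketch (not taken from the book) of solving the decay ODE u' = -a*u with the Forward Euler scheme and verifying first-order convergence; all names and parameter values are illustrative.

```python
import math

def forward_euler_decay(I, a, T, dt):
    """Solve u' = -a*u, u(0) = I, on [0, T] with the Forward Euler scheme."""
    n = int(round(T / dt))
    u = [I]
    for _ in range(n):
        # Forward Euler update: u_{k+1} = u_k - a*dt*u_k
        u.append(u[-1] * (1 - a * dt))
    return u

# Verification: the error against the exact solution I*exp(-a*T)
# should shrink as dt is refined.
I, a, T = 1.0, 2.0, 1.0
errors = []
for dt in (0.1, 0.05, 0.025):
    u = forward_euler_decay(I, a, T, dt)
    errors.append(abs(u[-1] - I * math.exp(-a * T)))
# Forward Euler is first order: halving dt roughly halves the error.
```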

  2. An exponential decay model for mediation.

    Science.gov (United States)

    Fritz, Matthew S

    2014-10-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models, including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.

  3. Does proton decay follow the exponential law

    International Nuclear Information System (INIS)

    Sanchez-Gomez, J.L.; Alvarez-Estrada, R.F.; Fernandez, L.A.

    1984-01-01

    In this paper, we discuss the exponential law for proton decay. By using a simple model based upon SU(5) GUT and the current theories of hadron structure, we explicitly show that the corrections to the Wigner-Weisskopf approximation are quite negligible for present-day protons, so that their eventual decay should follow the exponential law. Previous works are critically analyzed. (orig.)

  4. Stretched-exponential decay functions from a self-consistent model of dielectric relaxation

    International Nuclear Information System (INIS)

    Milovanov, A.V.; Rasmussen, J.J.; Rypdal, K.

    2008-01-01

    There are many materials whose dielectric properties are described by a stretched exponential, the so-called Kohlrausch-Williams-Watts (KWW) relaxation function. Its physical origin and statistical-mechanical foundation have been a matter of debate in the literature. In this Letter we suggest a model of dielectric relaxation, which naturally leads to a stretched exponential decay function. Some essential characteristics of the underlying charge conduction mechanisms are considered. A kinetic description of the relaxation and charge transport processes is proposed in terms of equations with time-fractional derivatives

  5. The distance-decay function of geographical gravity model: Power law or exponential law?

    International Nuclear Information System (INIS)

    Chen, Yanguang

    2015-01-01

    Highlights: •The distance-decay exponent of the gravity model is a fractal dimension. •Entropy maximization accounts for the gravity model based on power-law decay. •Allometric scaling relations relate gravity models with spatial interaction models. •The four-parameter gravity models have dual mathematical expressions. •The inverse power law is the most probable distance-decay function. -- Abstract: The distance-decay function of the geographical gravity model is originally an inverse power law, which suggests a scaling process in spatial interaction. However, the distance exponent of the model cannot be reasonably explained with ideas from Euclidean geometry. This results in a dimension dilemma in geographical analysis. Consequently, a negative exponential function was used to replace the inverse power function to serve as the distance-decay function. But a new puzzle arose: the exponential-based gravity model goes against the first law of geography. This paper is devoted to solving these kinds of problems by mathematical reasoning and empirical analysis. The new findings are as follows. First, the distance exponent of the gravity model is demonstrated to be a fractal dimension using the geometric measure relation. Second, the similarities and differences between the gravity models and spatial interaction models are revealed using allometric relations. Third, a four-parameter gravity model possesses a symmetrical expression, and we need dual gravity models to describe spatial flows. The observational data of China's cities and regions (29 elements indicative of 841 data points) in 2010 are employed to verify the theoretical inferences. A conclusion can be reached that the geographical gravity model based on power-law decay is more suitable for analyzing large, complex, and scale-free regional and urban systems. This study lends further support to the suggestion that the underlying rationale of fractal structure is entropy maximization. Moreover
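
    The power-law versus exponential distance-decay question above can be probed with a simple synthetic experiment: fit both hypotheses by linear regression in log-log and semi-log coordinates and compare residuals. This is an illustrative sketch, not the paper's analysis; the data and all names are invented.

```python
import math

def linfit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, rss)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

# Synthetic interaction strengths following an inverse power law T(r) = K * r^-alpha.
K, alpha = 100.0, 2.0
r = [float(k) for k in range(1, 21)]
T = [K * ri ** -alpha for ri in r]

# Power-law hypothesis: linear in log-log coordinates.
_, slope_pow, rss_pow = linfit([math.log(ri) for ri in r],
                               [math.log(Ti) for Ti in T])
# Exponential hypothesis: linear in semi-log coordinates.
_, slope_exp, rss_exp = linfit(r, [math.log(Ti) for Ti in T])
# The log-log fit recovers -alpha exactly; the semi-log fit leaves residuals.
```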

  6. Is Radioactive Decay Really Exponential?

    OpenAIRE

    Aston, Philip J.

    2012-01-01

    Radioactive decay of an unstable isotope is widely believed to be exponential. This view is supported by experiments on rapidly decaying isotopes but is more difficult to verify for slowly decaying isotopes. The decay of 14C can be calibrated over a period of 12,550 years by comparing radiocarbon dates with dates obtained from dendrochronology. It is well known that this approach shows that radiocarbon dates of over 3,000 years are in error, which is generally attributed to past variation in ...

  7. Mechanistic formulation of a linear-quadratic-linear (LQL) model: Split-dose experiments and exponentially decaying sources

    International Nuclear Information System (INIS)

    Guerrero, Mariana; Carlone, Marco

    2010-01-01

    Purpose: In recent years, several models were proposed that modify the standard linear-quadratic (LQ) model to make the predicted survival curve linear at high doses. Most of these models are purely phenomenological and can only be applied in the particular case of acute doses per fraction. The authors consider a mechanistic formulation of a linear-quadratic-linear (LQL) model in the case of split-dose experiments and exponentially decaying sources. This model provides a comprehensive description of radiation response for arbitrary dose rate and fractionation with only one additional parameter. Methods: The authors use a compartmental formulation of the LQL model from the literature. They analytically solve the model's differential equations for the case of a split-dose experiment and for an exponentially decaying source. They compare the solutions of the survival fraction with the standard LQ equations and with the lethal-potentially lethal (LPL) model. Results: In the case of the split-dose experiment, the LQL model predicts a recovery ratio as a function of dose per fraction that deviates from the square law of the standard LQ. The survival fraction as a function of time between fractions follows an exponential law similar to that of the LQ but adds a multiplicative factor to the LQ parameter β. The LQL solution for the split-dose experiment is very close to the LPL prediction. For the decaying source, the differences between the LQL and the LQ solutions are negligible when the half-life of the source is much larger than the characteristic repair time, which is the clinically relevant case. Conclusions: The compartmental formulation of the LQL model can be used for arbitrary dose rates and provides a comprehensive description of dose response. When the survival fraction for acute doses is linear at high dose, a deviation from the square-law formula of the recovery ratio for split doses is also predicted.
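
    The "square law" of the standard LQ model mentioned above can be written down directly: for two well-separated fractions of d/2, the recovery ratio relative to a single acute dose d equals exp(beta*d^2/2). A small sketch with illustrative parameter values (not the authors' LQL model):

```python
import math

def lq_survival(d, alpha=0.3, beta=0.03):
    """Standard LQ cell survival for an acute dose d: S = exp(-alpha*d - beta*d**2).
    The alpha/beta values here are illustrative, not fitted data."""
    return math.exp(-alpha * d - beta * d ** 2)

def recovery_ratio(d, alpha=0.3, beta=0.03):
    """Survival after two well-separated fractions of d/2 divided by survival
    after one acute dose d; for the standard LQ model this is exp(beta*d**2/2)."""
    split = lq_survival(d / 2, alpha, beta) ** 2  # full repair between fractions
    return split / lq_survival(d, alpha, beta)

rr = recovery_ratio(10.0)  # equals exp(0.03 * 100 / 2) = exp(1.5)
```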

  8. Comparison of the predictions of the LQ and CRE models for normal tissue damage due to biologically targeted radiotherapy with exponentially decaying dose rates

    International Nuclear Information System (INIS)

    O'Donoghue, J.A.; West of Scotland Health Boards, Glasgow

    1989-01-01

    For biologically targeted radiotherapy, organ dose rates may be complex functions of time, related to the biodistribution kinetics of the delivery vehicle and radiolabel. The simplest situation is where dose rates are exponentially decaying functions of time. Two normal tissue isoeffect models enable the effects of exponentially decaying dose rates to be addressed. These are the extension of the linear-quadratic model and the cumulative radiation effect model. This communication compares the predictions of these models. (author). 14 refs.; 1 fig

  9. Demonstration of the exponential decay law using beer froth

    International Nuclear Information System (INIS)

    Leike, A.

    2002-01-01

    The volume of beer froth decays exponentially with time. This property is used to demonstrate the exponential decay law in the classroom. The decay constant depends on the type of beer and can be used to differentiate between different beers. The analysis shows in a transparent way the techniques of data analysis commonly used in science - consistency checks of theoretical models with the data, parameter estimation and determination of confidence intervals. (author)
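
    The classroom exercise above amounts to estimating the decay constant from froth-volume measurements by a semi-log linear fit. A minimal sketch with synthetic, noise-free data (the 276 s time constant and the sample values are illustrative, not measurements from the paper):

```python
import math

def decay_constant(times, volumes):
    """Least-squares estimate of tau in V(t) = V0 * exp(-t / tau),
    obtained from a linear fit of ln(V) against t."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(math.log(v) for v in volumes) / n
    slope = sum((t - mt) * (math.log(v) - mv) for t, v in zip(times, volumes)) / \
            sum((t - mt) ** 2 for t in times)
    return -1.0 / slope

# Illustrative froth heights (arbitrary units) sampled every 15 s from an
# exact exponential with tau = 276 s.
times = [15.0 * k for k in range(9)]
volumes = [17.0 * math.exp(-t / 276.0) for t in times]

tau = decay_constant(times, volumes)  # recovers the 276 s time constant
```

    With real (noisy) data, the residuals of the same fit provide the consistency check and confidence intervals the abstract alludes to.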

  10. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Science.gov (United States)

    Zhang, Jin-Yu; Meng, Xiang-Bing; Xu, Wei; Zhang, Wei; Zhang, Yong

    2014-01-01

    This paper proposes a new thermal-wave image sequence compression algorithm that combines a double exponential decay fitting model with a differential evolution algorithm. The fitting-compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the study investigated the fitting-compression performance under long time series and the improved model, and validated the algorithm by practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method. PMID:24696649

  11. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Directory of Open Access Journals (Sweden)

    Jin-Yu Zhang

    2014-01-01

    This paper proposes a new thermal-wave image sequence compression algorithm that combines a double exponential decay fitting model with a differential evolution algorithm. The fitting-compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the study investigated the fitting-compression performance under long time series and the improved model, and validated the algorithm by practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.

  12. Statistical analysis of time-resolved emission from ensembles of semiconductor quantum dots: Interpretation of exponential decay models

    DEFF Research Database (Denmark)

    Van Driel, A.F.; Nikolaev, I.S.; Vergeer, P.

    2007-01-01

    We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters … and the intensity in an emission decay curve are not proportional, but the density is a time integral of the intensity. The integral relation is crucial to correctly interpret non-single-exponential decay. We derive the proper normalization for both a discrete and a continuous distribution of rates, where every … decay component is multiplied by its radiative decay rate. A central result of our paper is the derivation of the emission decay curve when both radiative and nonradiative decays are independently distributed. In this case, the well-known emission quantum efficiency can no longer be expressed …
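
    The integral relation stressed in this abstract (the density of excited emitters is the time integral of the intensity, so each decay component of the intensity is weighted by its rate) can be checked numerically. A small sketch with an invented two-component ensemble:

```python
import math

# Two-component ensemble: excited-emitter density n(t) = sum_i c_i * exp(-g_i * t).
c = [0.7, 0.3]   # component weights (invented)
g = [1.0, 0.2]   # decay rates (invented)

def density(t):
    return sum(ci * math.exp(-gi * t) for ci, gi in zip(c, g))

def intensity(t):
    # I(t) = -dn/dt: each component of the intensity is weighted by its rate g_i.
    return sum(ci * gi * math.exp(-gi * t) for ci, gi in zip(c, g))

def integral_of_intensity(t, upper=200.0, steps=200000):
    """Trapezoidal integral of I over [t, upper]; the tail beyond `upper`
    is negligible for these rates."""
    h = (upper - t) / steps
    s = 0.5 * (intensity(t) + intensity(upper))
    for k in range(1, steps):
        s += intensity(t + k * h)
    return s * h

# The density at time t equals the remaining time integral of the intensity,
# so amplitudes read off an intensity fit must be divided by their rates.
```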

  13. Quantum Zeno effect for exponentially decaying systems

    International Nuclear Information System (INIS)

    Koshino, Kazuki; Shimizu, Akira

    2004-01-01

    The quantum Zeno effect - suppression of decay by frequent measurements - was believed to occur only when the response of the detector is so quick that the initial tiny deviation from the exponential decay law is detectable. However, we show that it can occur even for exactly exponentially decaying systems, for which this condition is never satisfied, by considering a realistic case where the detector has a finite energy band of detection. The conventional theories correspond to the limit of an infinite bandwidth. This implies that the Zeno effect occurs more widely than expected thus far

  14. Wealth distribution, Pareto law, and stretched exponential decay of money: Computer simulations analysis of agent-based models

    Science.gov (United States)

    Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf

    2018-01-01

    We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
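
    The kinetic exchange trading model studied above can be sketched in a few lines: pairs of agents trade their non-saved wealth at a random split, conserving total wealth in a closed system. This is a generic sketch of the model class with invented parameters, not the authors' simulation:

```python
import random

def kinetic_exchange(n_agents=500, steps=200000, saving=0.5, seed=42):
    """Kinetic wealth-exchange model with a fixed saving propensity: in each
    trade a random pair pools its non-saved wealth and splits the pool at a
    random fraction."""
    rng = random.Random(seed)
    w = [1.0] * n_agents          # everyone starts with unit wealth
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        pot = (1 - saving) * (w[i] + w[j])   # wealth put on the table
        eps = rng.random()                   # random split of the pot
        w[i] = saving * w[i] + eps * pot
        w[j] = saving * w[j] + (1 - eps) * pot
    return w

w = kinetic_exchange()
total = sum(w)  # total wealth is conserved in a closed system (no trap agents)
```

    Adding "trap" agents that absorb wealth, as in the paper, would break this conservation and produce the decay of total wealth discussed above.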

  15. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: a model.

    Science.gov (United States)

    Zylberberg, Ariel; Dehaene, Stanislas; Mindlin, Gabriel B; Sigman, Mariano

    2009-01-01

    Behavioral observations suggest that multiple sensory elements can be maintained for a short time, forming a perceptual buffer which fades after a few hundred milliseconds. Only a subset of this perceptual buffer can be accessed under top-down control and broadcasted to working memory and consciousness. In turn, single-cell studies in awake-behaving monkeys have identified two distinct waves of response to a sensory stimulus: a first transient response largely determined by stimulus properties and a second wave dependent on behavioral relevance, context and learning. Here we propose a simple biophysical scheme which bridges these observations and establishes concrete predictions for neurophysiological experiments in which the temporal interval between stimulus presentation and top-down allocation is controlled experimentally. Inspired by single-cell observations, the model involves a first transient response and a second stage of amplification and retrieval, which are implemented biophysically by distinct operational modes of the same circuit, regulated by external currents. We explicitly investigated the neuronal dynamics, the memory trace of a presented stimulus and the probability of correct retrieval, when these two stages were bracketed by a temporal gap. The model correctly predicts the dependence of performance on response times in interference experiments, suggesting that sensory buffering does not require a specific dedicated mechanism and establishing a direct link between biophysical manipulations and behavioral observations that leads to concrete predictions.

  16. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: a model

    Directory of Open Access Journals (Sweden)

    Ariel Zylberberg

    2009-03-01

    Behavioral observations suggest that multiple sensory elements can be maintained for a short time, forming a perceptual buffer which fades after a few hundred milliseconds. Only a subset of this perceptual buffer can be accessed under top-down control and broadcasted to working memory and consciousness. In turn, single-cell studies in awake-behaving monkeys have identified two distinct waves of response to a sensory stimulus: a first transient response largely determined by stimulus properties and a second wave dependent on behavioral relevance, context and learning. Here we propose a simple biophysical scheme which bridges these observations and establishes concrete predictions for neurophysiological experiments in which the temporal interval between stimulus presentation and top-down allocation is controlled experimentally. Inspired by single-cell observations, the model involves a first transient response and a second stage of amplification and retrieval, which are implemented biophysically by distinct operational modes of the same circuit, regulated by external currents. We explicitly investigated the neuronal dynamics, the memory trace of a presented stimulus and the probability of correct retrieval, when these two stages were bracketed by a temporal gap. The model correctly predicts the dependence of performance on response times in interference experiments, suggesting that sensory buffering does not require a specific dedicated mechanism and establishing a direct link between biophysical manipulations and behavioral observations that leads to concrete predictions.

  17. Wegner estimate and localization for alloy-type models with sign-changing exponentially decaying single-site potentials

    Science.gov (United States)

    Leonhardt, Karsten; Peyerimhoff, Norbert; Tautenhahn, Martin; Veselić, Ivan

    2015-05-01

    We study Schrödinger operators on L2(ℝd) and ℓ2(ℤd) with a random potential of alloy-type. The single-site potential is assumed to be exponentially decaying but not necessarily of fixed sign. In the continuum setting, we require a generalized step-function shape. Wegner estimates are bounds on the average number of eigenvalues in an energy interval of finite box restrictions of these types of operators. In the described situation, a Wegner estimate, which is polynomial in the volume of the box and linear in the size of the energy interval, holds. We apply the established Wegner estimate as an ingredient for a localization proof via multiscale analysis.

  18. Is the basic law of radioactive decay exponential?

    International Nuclear Information System (INIS)

    Gopych, P.M.; Zalyubovskii, I.I.

    1988-01-01

    Basic theoretical approaches to explaining the observed exponential nature of the decay law are discussed, together with the hypothesis that it is not exponential. The significance of this question and its connection with fundamental problems of modern physics are considered. Results of experiments investigating the form of the decay law are given

  19. Exponential decay for solutions to semilinear damped wave equation

    KAUST Repository

    Gerbi, Stéphane

    2011-10-01

    This paper is concerned with decay estimates of solutions to the semilinear wave equation with strong damping in a bounded domain. Introducing an appropriate Lyapunov function, we prove that when the damping is linear, we can find initial data for which the solution decays exponentially. This result improves an early one in [4].

  20. Notes on spectrum and exponential decay in nonautonomous evolutionary equations

    Directory of Open Access Journals (Sweden)

    Christian Pötzsche

    2016-08-01

    We first determine the dichotomy (Sacker-Sell) spectrum for certain nonautonomous linear evolutionary equations induced by a class of parabolic PDE systems. Having this information at hand, we underline the applicability of our second result: if the widths of the gaps in the dichotomy spectrum are bounded away from $0$, then one can rule out the existence of super-exponentially decaying (i.e., slow) solutions of semi-linear evolutionary equations.

  1. Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.

    Science.gov (United States)

    Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K

    2012-06-01

    In this paper we revisited the problem of persistence length of polyelectrolytes. We performed a series of Molecular Dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz when used without adjustable parameters could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.

  2. An empirical test of pseudo random number generators by means of an exponential decaying process

    International Nuclear Information System (INIS)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A.; Mora F, L.E.

    2007-01-01

    Empirical tests of pseudo-random number generators based on processes or physical models have been used successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo-random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo-random number generators commonly used in physics. (Author)
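
    The idea of testing a generator against an exponential decay process can be sketched as follows: simulate the survival of N0 atoms with per-step decay probability p and compare the survivor curve with the expected law N(t) = N0*(1-p)^t. An illustrative sketch with invented parameters, not the authors' methodology:

```python
import random

def simulate_decay(n0, p, steps, seed=7):
    """Simulate radioactive decay with a PRNG: each atom survives a time
    step with probability 1 - p. Returns the survivor count after each step."""
    rng = random.Random(seed)
    counts = [n0]
    n = n0
    for _ in range(steps):
        n = sum(1 for _ in range(n) if rng.random() >= p)
        counts.append(n)
    return counts

N0, P = 20000, 0.01
counts = simulate_decay(N0, P, steps=200)

# A generator of good quality reproduces the expected exponential law
# within statistical fluctuations; large systematic deviations flag the PRNG.
max_rel_err = max(abs(c - N0 * (1 - P) ** t) / (N0 * (1 - P) ** t)
                  for t, c in enumerate(counts))
```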

  3. Exponential decay rate of the power spectrum for solutions of the Navier--Stokes equations

    International Nuclear Information System (INIS)

    Doering, C.R.; Titi, E.S.

    1995-01-01

    Using a method developed by Foias and Temam [J. Funct. Anal. 87, 359 (1989)], exponential decay of the spatial Fourier power spectrum for solutions of the incompressible Navier--Stokes equations is established and explicit rigorous lower bounds on a small length scale defined by the exponential decay rate are obtained

  4. A method for searching the possible deviations from exponential decay law

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Tran Vien Ha

    1993-01-01

    A continuous kinetic function approach is proposed for analyzing experimental decay curves. In the case of purely exponential behaviour, the values of the kinetic function are the same at different ages of the investigated radionuclide. A deviation from the main decay curve could be found by comparing experimental kinetic function values with those obtained in the purely exponential case. (author). 12 refs

  5. Complex degradation processes lead to non-exponential decay patterns and age-dependent decay rates of messenger RNA.

    Directory of Open Access Journals (Sweden)

    Carlus Deneke

    Experimental studies on mRNA stability have established several, qualitatively distinct decay patterns for the amount of mRNA within the living cell. Furthermore, a variety of different and complex biochemical pathways for mRNA degradation have been identified. The central aim of this paper is to bring together both the experimental evidence about the decay patterns and the biochemical knowledge about the multi-step nature of mRNA degradation in a coherent mathematical theory. We first introduce a mathematical relationship between the mRNA decay pattern and the lifetime distribution of individual mRNA molecules. This relationship reveals that the mRNA decay patterns at steady state expression level must obey a general convexity condition, which applies to any degradation mechanism. Next, we develop a theory, formulated as a Markov chain model, that recapitulates some aspects of the multi-step nature of mRNA degradation. We apply our theory to experimental data for yeast and explicitly derive the lifetime distribution of the corresponding mRNAs. Thereby, we show how to extract single-molecule properties of an mRNA, such as the age-dependent decay rate and the residual lifetime. Finally, we analyze the decay patterns of the whole translatome of yeast cells and show that yeast mRNAs can be grouped into three broad classes that exhibit three distinct decay patterns. This paper provides both a method to accurately analyze non-exponential mRNA decay patterns and a tool to validate different models of degradation using decay data.
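
    The central point above (multi-step degradation makes the single-molecule decay rate age-dependent) can be illustrated by simulating a minimal sequential-step Markov chain, whose lifetimes are Erlang-distributed. All parameters are invented:

```python
import random

def multistep_lifetimes(n=50000, steps=3, rate=1.0, seed=3):
    """Lifetimes of molecules degraded through `steps` sequential first-order
    reactions: each step takes an exponentially distributed time, so the
    total lifetime is Erlang-distributed rather than exponential."""
    rng = random.Random(seed)
    return [sum(rng.expovariate(rate) for _ in range(steps)) for _ in range(n)]

lifetimes = multistep_lifetimes()

def survival(ts, t):
    """Empirical survival function: fraction of molecules older than t."""
    return sum(1 for x in ts if x > t) / len(ts)

def hazard(ts, t, dt=0.1):
    """Empirical age-dependent decay rate over the window [t, t + dt]."""
    s0, s1 = survival(ts, t), survival(ts, t + dt)
    return (s0 - s1) / (s0 * dt)

# For a one-step (exponential) process the hazard is constant; here it rises
# with age, because young molecules are unlikely to have completed all steps.
h_young = hazard(lifetimes, 0.2)
h_old = hazard(lifetimes, 3.0)
```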

  6. Improvement of the exponential experiment system for the automatic and accurate measurement of the exponential decay constant

    International Nuclear Information System (INIS)

    Shin, Hee Sung; Jang, Ji Woon; Lee, Yoon Hee; Hwang, Yong Hwa; Kim, Ho Dong

    2004-01-01

    The previous exponential experiment system has been improved for automatic and accurate axial movement of the neutron source and detector by attaching an automatic control system which consists of a Programmable Logic Controller (PLC) and a stepping motor set. The automatic control program which controls the MCA and PLC consistently has also been developed on the basis of the GENIE 2000 library. The exponential experiments have been carried out for Kori 1 unit spent fuel assemblies, C14, J14 and G23, and the Kori 2 unit spent fuel assembly, J44, using the improved systematic measurement system. As a result, the average exponential decay constants for the 4 assemblies are determined to be 0.1302, 0.1267, 0.1247, and 0.1210, respectively, with the application of Poisson regression

  7. Double-Exponentially Decayed Photoionization in CREI Effect: Numerical Experiment on 3D H2+

    International Nuclear Information System (INIS)

    Feng, Li; Ting-Ying, Wang; Gui-Zhong, Zhang; Wang-Hua, Xiang; Hill, W. T., III

    2008-01-01

    On the platform of the 3D H2+ system, we perform a numerical simulation of its photoionization rate under excitation by weak to intense laser intensities with varying pulse durations and wavelengths. A novel method is proposed for calculating the photoionization rate: a double exponential decay of the ionization probability is best suited for fitting this rate. Confirmation of the well-documented charge-resonance-enhanced ionization (CREI) effect at medium laser intensity and the finding of ionization saturation at high light intensity corroborate the robustness of the suggested double-exponential decay process. Surveying the spatial and temporal variations of the electron wavefunctions uncovers a mechanism for the double-exponentially decayed photoionization probability as the onset of electron ionization along an extra degree of freedom. Hence, the new method clarifies the origins of peak features in the photoionization rate versus internuclear separation. It is believed that this multi-exponentially decayed ionization mechanism is applicable to systems with more degrees of motion

  8. Exponential decay and exponential recovery of modal gains in high count rate channel electron multipliers

    International Nuclear Information System (INIS)

    Hahn, S.F.; Burch, J.L.

    1980-01-01

    A series of data on high count rate channel electron multipliers revealed an initial drop and subsequent recovery of gains in exponential fashion. The FWHM of the pulse height distribution at the initial stage of testing can be used as a good criterion for the selection of operating bias voltage of the channel electron multiplier

  9. Stimulation Efficiency With Decaying Exponential Waveforms in a Wirelessly Powered Switched-Capacitor Discharge Stimulation System.

    Science.gov (United States)

    Lee, Hyung-Min; Howell, Bryan; Grill, Warren M; Ghovanloo, Maysam

    2018-05-01

    The purpose of this study was to test the feasibility of using a switched-capacitor discharge stimulation (SCDS) system for electrical stimulation, and, subsequently, determine the overall energy saved compared to a conventional stimulator. We have constructed a computational model by pairing an image-based volume conductor model of the cat head with cable models of corticospinal tract (CST) axons and quantified the theoretical stimulation efficiency of rectangular and decaying exponential waveforms, produced by conventional and SCDS systems, respectively. Subsequently, the model predictions were tested in vivo by activating axons in the posterior internal capsule and recording evoked electromyography (EMG) in the contralateral upper arm muscles. Compared to rectangular waveforms, decaying exponential waveforms with time constants >500 μs were predicted to require 2%-4% less stimulus energy to directly activate models of CST axons and 0.4%-2% less stimulus energy to evoke EMG activity in vivo. Using the calculated wireless input energy of the stimulation system and the measured stimulus energies required to evoke EMG activity, we predict that an SCDS implantable pulse generator (IPG) will require 40% less input energy than a conventional IPG to activate target neural elements. A wireless SCDS IPG that is more energy efficient than a conventional IPG will reduce the size of an implant, require that less wireless energy be transmitted through the skin, and extend the lifetime of the battery in the external power transmitter.

  10. Exponential decay for solutions to semilinear damped wave equation

    KAUST Repository

    Gerbi, Stéphane; Said-Houari, Belkacem

    2011-01-01

    This paper is concerned with decay estimate of solutions to the semilinear wave equation with strong damping in a bounded domain. Intro- ducing an appropriate Lyapunov function, we prove that when the damping is linear, we can find initial data

  11. On the relation between Lyapunov exponents and exponential decay of correlations

    International Nuclear Information System (INIS)

    Slipantschuk, Julia; Bandtlow, Oscar F; Just, Wolfram

    2013-01-01

    Chaotic dynamics with sensitive dependence on initial conditions may result in exponential decay of correlation functions. We show that for one-dimensional interval maps the corresponding quantities, that is, Lyapunov exponents and exponential decay rates, are related. More specifically, for piecewise linear expanding Markov maps observed via piecewise analytic functions, we show that the decay rate is bounded above by twice the Lyapunov exponent, that is, we establish lower bounds for the subleading eigenvalue of the corresponding Perron–Frobenius operator. In addition, we comment on similar relations for general piecewise smooth expanding maps. (paper)

  12. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the ? distribution

    Science.gov (United States)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing a MRS forward response with responses from three other approaches outlining
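The stretched-exponential approximation described above can be illustrated in a few lines: fitting V(t) = V0·exp(−(t/T*)^C) to a synthetic two-component decay. This is a hedged sketch, not the authors' inversion code; the component amplitudes, relaxation times, and the use of scipy's curve_fit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "multi-exponential" NMR decay: two relaxation components.
t = np.linspace(0.01, 1.0, 200)          # seconds
signal = 60.0 * np.exp(-t / 0.15) + 40.0 * np.exp(-t / 0.40)

def stretched_exp(t, v0, t_star, c):
    """V(t) = V0 * exp(-(t/T*)^C), with stretching exponent 0 < C <= 1."""
    return v0 * np.exp(-((t / t_star) ** c))

# One extra parameter (c) relative to a mono-exponential fit.
popt, _ = curve_fit(stretched_exp, t, signal, p0=[100.0, 0.2, 0.9],
                    bounds=([0.0, 1e-3, 0.1], [1e3, 10.0, 1.0]))
v0_fit, t_star_fit, c_fit = popt
```

A single stretched exponential tracks the two-component decay closely while estimating the initial amplitude (here close to the true 100), illustrating why only one extra parameter suffices.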

  13. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevincoakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S.; Huber, M.G. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); Huffer, C.R.; Huffman, P.R. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Marley, D.E. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Mumm, H.P. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); O' Shaughnessy, C.M. [University of North Carolina at Chapel Hill, 120 E. Cameron Ave., CB #3255, Chapel Hill, NC 27599 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Schelhammer, K.W. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Thompson, A.K.; Yue, A.T. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States)

    2016-03-21

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.
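The kind of bias the authors guard against can be sketched with a toy Monte Carlo: an exponential decay risk competing with a non-exponential (here Weibull) wall-loss risk. All numbers are illustrative assumptions, not the NIST experiment's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau_decay = 880.0                        # illustrative "true" lifetime (s)
t_decay = rng.exponential(tau_decay, n)  # exponential survival: decay events
t_wall = 2000.0 * rng.weibull(0.5, n)    # non-exponential survival: wall losses

# Competing risks: the particle is lost at whichever event happens first.
t_obs = np.minimum(t_decay, t_wall)

# Naively treating the observed times as a pure exponential sample biases the
# lifetime estimate low (the MLE of an exponential mean is the sample mean).
tau_naive = t_obs.mean()
bias = tau_decay - tau_naive             # systematic shortfall
</```

The shortfall is the systematic effect a competing-risks analysis must quantify; ignoring the non-exponential channel silently shortens the fitted lifetime.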

  14. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    International Nuclear Information System (INIS)

    Coakley, K.J.; Dewey, M.S.; Huber, M.G.; Huffer, C.R.; Huffman, P.R.; Marley, D.E.; Mumm, H.P.; O'Shaughnessy, C.M.; Schelhammer, K.W.; Thompson, A.K.; Yue, A.T.

    2016-01-01

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.

  15. THE Ep EVOLUTIONARY SLOPE WITHIN THE DECAY PHASE OF 'FAST RISE AND EXPONENTIAL DECAY' GAMMA-RAY BURST PULSES

    International Nuclear Information System (INIS)

    Peng, Z. Y.; Ma, L.; Yin, Y.; Zhao, X. H.; Fang, L. M.; Bao, Y. Y.

    2009-01-01

    Employing two samples of 56 and 59 well-separated fast rise and exponential decay gamma-ray burst pulses, whose spectra are fitted by the Band spectrum and the Compton model, respectively, we have investigated the evolutionary slope of Ep (where Ep is the peak energy in the νFν spectrum) with time during the pulse decay phase. The bursts in the samples were observed by the Burst and Transient Source Experiment on the Compton Gamma Ray Observatory. We first test the Ep evolutionary slope during the pulse decay phase predicted by Lu et al. based on the model of highly symmetric expanding fireballs, in which the curvature effect of the expanding fireball surface is the key factor. We find that the evolutionary slopes are normally distributed for both samples and concentrated around the values of 0.73 and 0.76 for the Band and Compton models, respectively, in good agreement with the theoretical expectation of Lu et al. However, inconsistent with their results, the intrinsic spectra of most bursts may follow the Comptonized or thermal synchrotron spectrum rather than the Band spectrum. The relationships between the evolutionary slope and the spectral parameters are also checked. We show that the slope is correlated with the Ep of the time-integrated spectra as well as with the photon flux, but anticorrelated with the lower-energy index α. In addition, a correlation between the slope and the intrinsic Ep derived using the pseudo-redshift is also identified. The mechanisms of these correlations are currently unclear, and theoretical interpretations are required.

  16. MODEL RADIOACTIVE RADON DECAY

    Directory of Open Access Journals (Sweden)

    R.I. Parovik

    2012-06-01

    Full Text Available A model of the radioactive decay of radon (222Rn) in a sample is presented. The model assumes that the decay probability of radon and its half-life depend on the fractal properties of the geological environment. Dependencies of the decay parameters on the fractal dimension of the medium are obtained.

  17. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    Science.gov (United States)

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p bi-exponential DWI model • Acquisition of six b values is sufficient to obtain accurate DDC and α.

  18. Non-accretive Schrodinger operators and exponential decay of their eigenfunctions

    Czech Academy of Sciences Publication Activity Database

    Krejčiřík, David; Raymond, N.; Royer, J.; Siegl, Petr

    2017-01-01

    Roč. 221, č. 2 (2017), s. 779-802 ISSN 0021-2172 R&D Projects: GA ČR(CZ) GA14-06818S Institutional support: RVO:61389005 Keywords : non-self-adjoint electromagnetic Schrodinger operators * Dirichlet realisation * Agmon-type exponential decay Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.796, year: 2016

  19. Exponential rate of correlation decay for characters in a three-parameter class of toral skew endomorphisms

    International Nuclear Information System (INIS)

    Siboni, S.

    1998-01-01

    A detailed analysis of the correlation decay for characters in a three-parameter class of mappings of the 2-torus onto itself is presented. As these mappings are the natural extension of toral transformations previously considered in a model of modulated diffusion, they have the structure of a skew product between the Bernoulli endomorphism Bp(x) and a translation on T1. The family of characters for which correlation decay occurs is fully characterized for any choice of the parameters, and the decay is proved to be exponential, with an analytically computable rate. This improves a previous result by W. Parry, provides a lower bound on the spectral radius of a Perron-Frobenius operator introduced by the same author in his proof, and answers positively the conjecture that the poorer the rational approximation of the coupling parameter of the map, the faster the decay rate.

  20. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    Science.gov (United States)

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
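A minimal simulation of the linear-variance ARCH process described above; the parameters are illustrative assumptions, and the exponential tail with α = 2/b is the paper's analytical result, which this sketch only exercises, not verifies.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 0.5
n = 100_000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = a + b * abs(y[t - 1])        # linear ARCH: sigma^2(y) = a + b|y|
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

# With exponentially decaying tails ~exp(-2|y|/b), extremes stay far more
# moderate than for the power-law-tailed standard (quadratic) ARCH model.
std_y = y.std()
```

The stationary standard deviation stays close to 1 here, since the variance feedback is only linear in |y|.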

  1. The true quantum face of the "exponential" decay: Unstable systems in rest and in motion

    Science.gov (United States)

    Urbanowski, K.

    2017-12-01

    Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ≫ τ, and that P0(t) exhibits inverse power-law behavior in the late-time region, for times longer than the so-called crossover time T ≫ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). A more detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form in any time interval, including times smaller than or of the order of the lifetime τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from the classical standard considerations.
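The notion of a crossover time T can be made concrete with a toy model: take P0(t) ≈ exp(−t/τ) + ε(τ/t)² and locate numerically where the power-law tail overtakes the exponential. The functional form and the tail amplitude ε are illustrative assumptions, not the paper's.

```python
import math

tau = 1.0     # lifetime (arbitrary units)
eps = 1e-6    # illustrative amplitude of the late-time power-law tail

def excess(t):
    """Exponential term minus the power-law tail; zero at the crossover."""
    return math.exp(-t / tau) - eps * (tau / t) ** 2

# Bracket the crossover and bisect: excess(5) > 0, excess(30) < 0.
lo, hi = 5.0, 30.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess(mid) > 0:
        lo = mid
    else:
        hi = mid
T = 0.5 * (lo + hi)   # crossover time, in units of tau
```

With these numbers the crossover lands near 20 lifetimes; smaller tail amplitudes push T later still, which is why the deviations are hard to observe.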

  2. Transient accelerating scalar models with exponential potentials

    International Nuclear Information System (INIS)

    Cui Wen-Ping; Zhang Yang; Fu Zheng-Wen

    2013-01-01

    We study a known class of scalar dark energy models in which the potential has an exponential term and the current accelerating era is transient. We find that, although a decelerating era will return in the future, when extrapolating the model back to earlier stages (z ≳ 4), scalar dark energy becomes dominant over matter. So these models do not have the desired tracking behavior, and the predicted transient period of acceleration cannot be adopted into the standard scenario of the Big Bang cosmology. When couplings between the scalar field and matter are introduced, the models still have the same problem; only the time when deceleration returns will be varied. To achieve re-deceleration, one has to turn to alternative models that are consistent with the standard Big Bang scenario.

  3. Methods for the analysis of complex fluorescence decays: sum of Becquerel functions versus sum of exponentials

    International Nuclear Information System (INIS)

    Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard

    2013-01-01

    Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)

  4. Exponential Decay Metrics of Topical Tetracaine Hydrochloride Administration Describe Corneal Anesthesia Properties Mechanistically.

    Science.gov (United States)

    Ethington, Jason; Goldmeier, David; Gaynes, Bruce I

    2017-03-01

    To identify pharmacodynamic (PD) and pharmacokinetic (PK) metrics that aid in the mechanistic understanding of dosage considerations for prolonged corneal anesthesia. A rabbit model using 0.5% tetracaine hydrochloride was used to induce corneal anesthesia in conjunction with Cochet-Bonnet anesthesiometry. Metrics were derived describing PD-PK parameters of the time-dependent domain of recovery in corneal sensitivity. Curve fitting used a 1-phase exponential dissociation paradigm assuming a 1-compartment PK model. Derivation of metrics including half-life and mean ligand residence time, tau (τ), was performed by nonlinear regression. Bioavailability was determined by the area under the curve of the dose-response relationship with varying drop volumes. Maximal corneal anesthesia maintained a plateau, with a recovery inflection at the approximate time of the predicted corneal drug half-life. The PDs of recovery of corneal anesthesia were consistent with a first-order drug elimination rate. The mean ligand residence time (tau, τ) was 41.7 minutes, and the half-life was 28.89 minutes. The mean estimated corneal elimination rate constant (ke) was 0.02402 per minute. Duration of corneal anesthesia ranged from 55 to 58 minutes. There was no difference in time-domain PD area under the curve between drop volumes. Use of a small drop volume of a topical anesthetic (as low as 11 μL) is bioequivalent to a conventional drop size and seems to optimize dosing regimens with little effect on ke. Prolongation of corneal anesthesia may therefore be best achieved by administration of small drop volumes at time intervals corresponding to the half-life of drug decay from the corneal compartment.
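The reported metrics are internally consistent with first-order kinetics; a quick check, using the rate constant quoted in the abstract:

```python
import math

ke = 0.02402                # corneal elimination rate constant, 1/min
t_half = math.log(2) / ke   # first-order half-life = ln(2)/ke
tau = 1.0 / ke              # mean residence time = 1/ke
```

This reproduces the reported ~28.9 min half-life and ~41.7 min mean residence time to within rounding, as expected for a one-compartment first-order model.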

  5. TRANSMUTED EXPONENTIATED EXPONENTIAL DISTRIBUTION

    OpenAIRE

    MEROVCI, FATON

    2013-01-01

    In this article, we generalize the exponentiated exponential distribution using the quadratic rank transmutation map studied by Shaw et al. [6] to develop a transmuted exponentiated exponential distribution. The properties of this distribution are derived and the estimation of the model parameters is discussed. An application to a real data set is finally presented for illustration.

  6. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    Science.gov (United States)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  7. The splitted laser beam filamentation in interaction of laser and an exponential decay inhomogeneous underdense plasma

    International Nuclear Information System (INIS)

    Xia Xiongping; Yi Lin; Xu Bin; Lu Jianduo

    2011-01-01

    The splitted beam filamentation in the interaction of a laser with an exponentially decaying inhomogeneous underdense plasma is investigated. Based on the Wentzel-Kramers-Brillouin (WKB) approximation and paraxial/nonparaxial ray theory, simulation results show that the steady beam width and single beam filamentation along the propagation distance in the paraxial case are due to the influence of ponderomotive nonlinearity. In the nonparaxial case, the influence of the off-axial terms α00 and α02 (the departure of the beam from the Gaussian nature) and S02 (the departure from the spherical nature) results in more complicated ponderomotive nonlinearity and in changes of the channel density and refractive index, which lead to the formation of two or three splitted beam filaments and the self-distortion of the beam width. In addition, the influence of several parameters on the two- and three-beam filamentation is discussed.

  8. Localization in nonuniform media: Exponential decay of the late-time Ginzburg-Landau impulse response

    International Nuclear Information System (INIS)

    Smith, E.

    1998-01-01

    Instanton methods have been used, in the context of a classical Ginzburg-Landau field theory, to compute the averaged density of states and probability Green close-quote s function for electrons scattered by statistically uniform site energy perturbations. At tree level, all states below some critical energy appear localized, and all states above extended. The same methods are applied here to macroscopically nonuniform systems, for which it is shown that localized and extended states can be coupled through a tunneling barrier created by the instanton background. Both electronic and acoustic systems are considered. An incoherent exponential decay is predicted for the late-time impulse response in both cases, valid for long-wavelength nonuniformity, and scaling relations are derived for the decay time constant as a function of energy or frequency and spatial dimension. The acoustic results are found to lie within a range of scaling relations obtained empirically from measurements of seismic coda, suggesting a connection between the universal properties of localization and the robustness of the observed scaling. The relation of instantons to the acoustic coherent-potential approximation is demonstrated in the recovery of the uniform limit. copyright 1998 The American Physical Society

  9. Exponential Decay Nonlinear Regression Analysis of Patient Survival Curves: Preliminary Assessment in Non-Small Cell Lung Cancer

    Science.gov (United States)

    Stewart, David J.; Behrens, Carmen; Roth, Jack; Wistuba, Ignacio I.

    2010-01-01

    Background For processes that follow first order kinetics, exponential decay nonlinear regression analysis (EDNRA) may delineate curve characteristics and suggest processes affecting curve shape. We conducted a preliminary feasibility assessment of EDNRA of patient survival curves. Methods EDNRA was performed on Kaplan-Meier overall survival (OS) and time-to-relapse (TTR) curves for 323 patients with resected NSCLC and on OS and progression-free survival (PFS) curves from selected publications. Results and Conclusions In our resected patients, TTR curves were triphasic with a “cured” fraction of 60.7% (half-life [t1/2] >100,000 months), a rapidly-relapsing group (7.4%, t1/2=5.9 months) and a slowly-relapsing group (31.9%, t1/2=23.6 months). OS was uniphasic (t1/2=74.3 months), suggesting an impact of co-morbidities; hence, tumor molecular characteristics would more likely predict TTR than OS. Of 172 published curves analyzed, 72 (42%) were uniphasic, 92 (53%) were biphasic, 8 (5%) were triphasic. With first-line chemotherapy in advanced NSCLC, 87.5% of curves from 2-3 drug regimens were uniphasic vs only 20% of those with best supportive care or 1 drug (p<0.001). 54% of curves from 2-3 drug regimens had convex rapid-decay phases vs 0% with fewer agents (p<0.001). Curve convexities suggest that discontinuing chemotherapy after 3-6 cycles “synchronizes” patient progression and death. With postoperative adjuvant chemotherapy, the PFS rapid-decay phase accounted for a smaller proportion of the population than in controls (p=0.02) with no significant difference in rapid-decay t1/2, suggesting adjuvant chemotherapy may move a subpopulation of patients with sensitive tumors from the relapsing group to the cured group, with minimal impact on time to relapse for a larger group of patients with resistant tumors. In untreated patients, the proportion of patients in the rapid-decay phase increased (p=0.04) while rapid-decay t1/2 decreased (p=0.0004) with increasing
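The multiphasic decomposition EDNRA performs can be sketched on synthetic data: generate a biphasic survival curve, then recover the phase fractions and half-lives by nonlinear least squares. The fractions, half-lives, noise level, and use of scipy's curve_fit are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def biphasic(t, f, h1, h2):
    """Survival: fraction f with half-life h1, fraction 1-f with half-life h2."""
    return f * 0.5 ** (t / h1) + (1.0 - f) * 0.5 ** (t / h2)

t = np.linspace(0.0, 120.0, 121)                     # months
rng = np.random.default_rng(2)
s_obs = biphasic(t, 0.4, 6.0, 60.0) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(biphasic, t, s_obs, p0=[0.5, 10.0, 40.0],
                    bounds=([0.0, 0.1, 0.1], [1.0, 1e3, 1e3]))
f_fit, h1_fit, h2_fit = popt
h_fast, h_slow = sorted([h1_fit, h2_fit])            # order the two phases
f_fast = f_fit if h1_fit < h2_fit else 1.0 - f_fit   # fraction in the fast phase
```

Recovering a rapidly-relapsing fraction with a short half-life alongside a slow phase mirrors the curve decomposition reported above.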

  10. Testing the count rate performance of the scintillation camera by exponential attenuation: Decaying source; Multiple filters

    International Nuclear Information System (INIS)

    Adams, R.; Mena, I.

    1988-01-01

    An algorithm and two FORTRAN programs have been developed to evaluate the count rate performance of scintillation cameras from count rates reduced exponentially, either by a decaying source or by filtration. The first method is used with short-lived radionuclides such as 191mIr or 191mAu. The second implements a National Electrical Manufacturers' Association (NEMA) protocol in which the count rate from a source of 191mTc is attenuated by a varying number of copper filters stacked over it. The count rate at each data point is corrected for deadtime loss after assigning an arbitrary deadtime (tau). A second-order polynomial equation is fitted to the logarithms of the net count rate values: ln(R) = A + BT + CT², where R is the net corrected count rate (cps) and T is the elapsed time (or the filter thickness in the NEMA method). Depending on C, tau is incremented or decremented iteratively, and the count rate corrections and curve fittings are repeated until C approaches zero, indicating a correct value of the deadtime (tau). The program then plots the measured count rate versus the corrected count rate values

  11. EXPALS, Least Square Fit of Linear Combination of Exponential Decay Function

    International Nuclear Information System (INIS)

    Douglas Gardner, C.

    1980-01-01

    1 - Description of problem or function: This program fits by least squares a function which is a linear combination of real exponential decay functions: y(k) = sum over j of a(j)·exp(-lambda(j)·k). Values of the independent variable k and the dependent variable y(k) are specified as input data. Weights may be specified as input information or set by the program (w(k) = 1/y(k)). 2 - Method of solution: The Prony-Householder iteration method is used. For unequally-spaced data, a number of interpolation options are provided. This revision includes an option to call a differential correction subroutine, REFINE, to improve the approximation to unequally-spaced data when equal-interval interpolation is faulty. If convergence is achieved, the probable errors in the computed parameters are also calculated. 3 - Restrictions on the complexity of the problem: Generally, it is desirable to have at least 10n observations, where n equals the number of terms, and to input k+n significant figures if k significant figures are expected.
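A stripped-down version of the Prony step for two equally spaced exponential terms can clarify the method (the Householder orthogonalization, weighting, and REFINE machinery of EXPALS are omitted; numpy is used for the linear algebra):

```python
import numpy as np

# Equally spaced samples of y(k) = a1*exp(-l1*k) + a2*exp(-l2*k).
a = np.array([3.0, 1.5])
lam = np.array([0.2, 0.05])
k = np.arange(30, dtype=float)
y = (a[:, None] * np.exp(-lam[:, None] * k)).sum(axis=0)

# Prony: y[k+2] = p1*y[k+1] + p2*y[k]; solve for (p1, p2) by least squares.
A = np.column_stack([y[1:-1], y[:-2]])
p1, p2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]

# Roots of z^2 - p1*z - p2 give the ratios r_j = exp(-lambda_j).
r = np.roots([1.0, -p1, -p2]).real
lam_est = np.sort(-np.log(r))[::-1]   # recovered decay constants, descending

# Amplitudes follow from a second, linear least-squares fit.
V = np.exp(-np.outer(k, lam_est))
a_est = np.linalg.lstsq(V, y, rcond=None)[0]
```

On noise-free, equally spaced data this recovers both decay constants and amplitudes essentially exactly; the interpolation and refinement options in EXPALS exist precisely because real data are noisy and often unequally spaced.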

  12. The photoluminescence of a fluorescent lamp: didactic experiments on the exponential decay

    Science.gov (United States)

    Onorato, Pasquale; Gratton, Luigi; Malgieri, Massimiliano; Oss, Stefano

    2017-01-01

    The lifetimes of the photoluminescent compounds contained in the coating of fluorescent compact lamps are usually measured using specialised instruments, including pulsed lasers and/or spectrofluorometers. Here we discuss how some low cost apparatuses, based on the use of either sensors for the educational lab or commercial digital photo cameras, can be employed to the same aim. The experiments do not require that luminescent phosphors are hazardously extracted from the compact fluorescent lamp, that also contains mercury. We obtain lifetime measurements for specific fluorescent elements of the bulb coating, in good agreement with the known values. We also address the physical mechanisms on which fluorescence lamps are based in a simplified way, suitable for undergraduate students; and we discuss in detail the physics of the lamp switch-off by analysing the time dependent spectrum, measured through a commercial fiber-optic spectrometer. Since the experiment is not hazardous in any way, requires a simple setup up with instruments which are commonly found in educational labs, and focuses on the typical features of the exponential decay, it is suitable for being performed in the undergraduate laboratory.

  13. Single exponential decay waveform; a synergistic combination of electroporation and electrolysis (E2) for tissue ablation

    Directory of Open Access Journals (Sweden)

    Nina Klein

    2017-04-01

    Full Text Available Background: Electrolytic ablation and electroporation-based ablation are minimally invasive, non-thermal surgical technologies that employ electrical currents and electric fields to ablate undesirable cells in a volume of tissue. In this study, we explore the attributes of a new tissue ablation technology that simultaneously delivers a synergistic combination of electroporation and electrolysis (E2). Method: A new device that delivers a controlled dose of electroporation fields and electrolysis currents in the form of a single exponential decay waveform (EDW) was applied to the pig liver, and the effect of various parameters on the extent of tissue ablation was examined with histology. Results: Histological analysis shows that E2 delivered as an EDW can produce tissue ablation in volumes of clinical significance, using electrical and temporal parameters which, if used in electroporation or electrolysis separately, cannot ablate the tissue. Discussion: The E2 combination has advantages over the three basic technologies of non-thermal ablation: electrolytic ablation, electrochemical ablation (reversible electroporation with injection of drugs) and irreversible electroporation. E2 ablates clinically relevant volumes of tissue in a shorter period of time than electrolysis and electroporation, without the need to inject drugs as in reversible electroporation or to use paralyzing anesthesia as in irreversible electroporation.

  14. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
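
    The simplest of the techniques named above, Simple Exponential Smoothing, is a one-line recursion; a minimal sketch (the sample rate series is made up, not the paper's data):

```python
def simple_exponential_smoothing(series, alpha):
    """Level recursion s_t = alpha * y_t + (1 - alpha) * s_{t-1}, with s_0 = y_0.
    The one-step-ahead forecast at time t is s_t."""
    smoothed = [series[0]]
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

# Illustrative daily exchange-rate series (placeholder values)
rates = [4.20, 4.25, 4.22, 4.30, 4.28]
level = simple_exponential_smoothing(rates, alpha=0.3)
forecast_next = level[-1]
```

    Double smoothing and Holt-Winters add trend (and seasonal) recursions on top of this same level equation.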

  15. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region.

  16. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer procedure is described for fitting radioimmunological standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained by more sophisticated models.
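
    A radioimmunoassay standard curve of the exponential type can be fitted and inverted in a few lines; a sketch assuming the simple form counts ≈ B0·exp(−k·conc) (the function names and calibration values are illustrative, not from the paper):

```python
import math

def fit_standard_curve(concs, counts):
    """Fit counts ~ B0 * exp(-k * conc) via least squares on ln(counts)."""
    ys = [math.log(c) for c in counts]
    n = len(concs)
    mx = sum(concs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, ys)) / \
            sum((x - mx) ** 2 for x in concs)
    k = -slope
    b0 = math.exp(my + k * mx)
    return b0, k

def read_concentration(count, b0, k):
    """Invert the fitted curve to obtain the concentration of an unknown sample."""
    return math.log(b0 / count) / k

# Illustrative calibration points (arbitrary units)
concs = [1.0, 2.0, 4.0, 8.0]
counts = [1000.0 * math.exp(-0.5 * c) for c in concs]
b0, k = fit_standard_curve(concs, counts)
```

    Once B0 and k are known, each unknown sample's count rate is mapped to a concentration by the inverse function.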

  17. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

    We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w0-wa' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.

  18. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult…

  19. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, contrary to what classical dynamics predicts. Moreover, at short times the decay follows a fractional power law, which differs from the predictions of perturbative quantum methods. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  20. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.

  1. CMB constraints on β-exponential inflationary models

    Science.gov (United States)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

    We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.

  2. Non-exponential extinction of radiation by fractional calculus modelling

    International Nuclear Information System (INIS)

    Casasanta, G.; Ciani, D.; Garra, R.

    2012-01-01

    Possible deviations from exponential attenuation of radiation in a random medium have recently been studied in several works. These deviations from the classical Beer-Lambert law were justified from a stochastic point of view by Kostinski (2001). In his model he introduced spatial correlation among the random variables, i.e. a space memory. In this note we introduce a different approach, including a memory formalism in the classical Beer-Lambert law through fractional calculus modelling. We find a generalized Beer-Lambert law in which the exponential memoryless extinction is only a special case of non-exponential extinction solutions described by Mittag-Leffler functions. We also justify this result from a stochastic point of view, using the space-fractional Poisson process. Moreover, we discuss some concrete advantages of this approach from an experimental point of view, giving an estimate of the deviation from the exponential extinction law as the optical depth varies. This is also an interesting model for understanding the meaning of the fractional derivative as an instrument to transmit the randomness of microscopic dynamics to the macroscopic scale.
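
    The one-parameter Mittag-Leffler function that replaces the exponential here can be evaluated, for moderate arguments, by truncating its defining series E_α(z) = Σ_k z^k / Γ(αk + 1); α = 1 recovers classical Beer-Lambert extinction exp(z). A rough sketch (the truncation length is an ad hoc choice, not a production evaluator):

```python
import math

def mittag_leffler(alpha, z, terms=60):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).
    Adequate only for moderate |z|."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 recovers exponential (Beer-Lambert) extinction at optical depth 2
attenuation_classical = mittag_leffler(1.0, -2.0)
# alpha < 1 gives the slower, non-exponential extinction of the generalized law
attenuation_fractional = mittag_leffler(0.9, -2.0)
```

    Comparing the two values at a given optical depth gives exactly the kind of deviation estimate the abstract mentions.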

  3. Neutron decay, semileptonic hyperon decay and the Cabibbo model

    International Nuclear Information System (INIS)

    Siebert, H.W.

    1989-01-01

    The decay rates and form factor ratios of neutron decay and semileptonic hyperon decays are compared in the framework of the Cabibbo model. The results indicate SU(3) symmetry breaking. The Kobayashi-Maskawa matrix element V_us determined from these decays is in good agreement with the value determined from K→πeν decays, and with unitarity of the KM matrix. (orig.)

  4. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model, which can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications…

  5. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content involved in the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational tool to provide context in the teaching-learning process of exponential functions for these students. To this end, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was adopted as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  6. Dark energy exponential potential models as curvature quintessence

    International Nuclear Information System (INIS)

    Capozziello, S; Cardone, V F; Piedipalumbo, E; Rubano, C

    2006-01-01

    It has been recently shown that, under some general conditions, it is always possible to find a fourth-order gravity theory capable of reproducing the same dynamics as a given dark energy model. Here, we discuss this approach for a dark energy model with a scalar field evolving under the action of an exponential potential. In the absence of matter, such a potential can be recovered from a fourth-order theory via a conformal transformation. Including the matter term, the function f(R) entering the generalized gravity Lagrangian can be reconstructed according to the dark energy model

  7. Galilean invariance in the exponential model of atomic collisions

    International Nuclear Information System (INIS)

    del Pozo, A.; Riera, A.; Yaez, M.

    1986-01-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  8. Analysis of projectile motion: A comparative study using fractional operators with power law, exponential decay and Mittag-Leffler kernel

    Science.gov (United States)

    Gómez-Aguilar, J. F.; Escobar-Jiménez, R. F.; López-López, M. G.; Alvarado-Martínez, V. M.

    2018-03-01

    In this paper, two-dimensional projectile motion was studied; two cases were considered: in the first, there is no air resistance, and in the second, a resisting medium characterized by a constant k is present. The study was carried out using fractional calculus. The solution was obtained using fractional operators with power law, exponential decay and Mittag-Leffler kernels in the range γ ∈ (0,1]. These operators were considered in the Liouville-Caputo sense in order to use physical initial conditions with a known physical interpretation. The range and the maximum height of the projectile were obtained using these derivatives. With the aim of exploring the validity of the obtained results, we compared our results with experimental data given in the literature. A multi-objective particle swarm optimization approach was used to generate Pareto-optimal solutions for the parameters k and γ for different fixed values of the velocity v0 and angle θ. The results showed some relevant qualitative differences between the use of the power law, exponential decay and Mittag-Leffler law.

  9. Measurement of reactivity in ADS reactors considering an exponential decay after an interruption in the external proton source

    Energy Technology Data Exchange (ETDEWEB)

    Henrice Junior, Edson; Gonçalves, Alessandro C., E-mail: ejunior@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Palma, Daniel A.P., E-mail: dapalma@cnen.gov.br [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro - RJ (Brazil)

    2017-07-01

    The online monitoring of reactivity in ADS reactors is of paramount importance for the operation of such systems. This work is dedicated to the prediction of reactivity from the decay of the neutron population after a pulse from the external source. For that, a pulse from an external source in an ADS reactor was simulated with the Serpent reactor physics code. From the data obtained, it was possible to perform a fit based on a combination of exponentials. The coefficient of the exponential for the dominating term of the sum of exponentials is compared to the simplified solution of the neutron diffusion equation, thus obtaining the reactivity. The fitting method used has the advantage of not requiring equally spaced data and of being easily programmable, waiving the use of specific software for linear fits. The preliminary results of the research showed a 750 pcm deviation from the value of -3,630 pcm obtained through point kinetics, and should therefore be the object of further study. (author)
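
    The key step, extracting the dominant (slowest) exponent from a multi-exponential trace, can be sketched with a log-linear fit over the tail of the trace, where the slow mode prevails. The fitting window and the two-exponential test trace below are illustrative choices, not the paper's Serpent data:

```python
import math

def dominant_decay_constant(times, counts, tail_fraction=0.5):
    """Estimate the slowest decay constant of a sum of exponentials by a
    log-linear least-squares fit over the tail of the trace."""
    start = int(len(times) * (1.0 - tail_fraction))
    ts = times[start:]
    ys = [math.log(c) for c in counts[start:]]
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
            sum((t - mt) ** 2 for t in ts)
    return -slope

# Two-mode trace: the fast component dies out quickly, slow constant is 1.0
ts = [0.1 * k for k in range(100)]
cs = [5.0 * math.exp(-1.0 * t) + 50.0 * math.exp(-20.0 * t) for t in ts]
alpha = dominant_decay_constant(ts, cs)
```

    As in the paper's approach, the samples need not be equally spaced; only pairs (t, n(t)) are required.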

  10. Measurement of reactivity in ADS reactors considering an exponential decay after an interruption in the external proton source

    International Nuclear Information System (INIS)

    Henrice Junior, Edson; Gonçalves, Alessandro C.

    2017-01-01

    The online monitoring of reactivity in ADS reactors is of paramount importance for the operation of such systems. This work is dedicated to the prediction of reactivity from the decay of the neutron population after a pulse from the external source. For that, a pulse from an external source in an ADS reactor was simulated with the Serpent reactor physics code. From the data obtained, it was possible to perform a fit based on a combination of exponentials. The coefficient of the exponential for the dominating term of the sum of exponentials is compared to the simplified solution of the neutron diffusion equation, thus obtaining the reactivity. The fitting method used has the advantage of not requiring equally spaced data and of being easily programmable, waiving the use of specific software for linear fits. The preliminary results of the research showed a 750 pcm deviation from the value of -3,630 pcm obtained through point kinetics, and should therefore be the object of further study. (author)

  11. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  12. Exponential decay of GC content detected by strand-symmetric substitution rates influences the evolution of isochore structure.

    Science.gov (United States)

    Karro, J E; Peifer, M; Hardison, R C; Kollmann, M; von Grünberg, H H

    2008-02-01

    The distribution of guanine and cytosine nucleotides throughout a genome, or the GC content, is associated with numerous features in mammals; understanding the pattern and evolutionary history of GC content is crucial to our efforts to annotate the genome. The local GC content is decaying toward an equilibrium point, but the causes and rates of this decay, as well as the value of the equilibrium point, remain topics of debate. By comparing the results of two methods for estimating local substitution rates, we identify 620 Mb of the human genome in which the rates of the various types of nucleotide substitutions are the same on both strands. These strand-symmetric regions show an exponential decay of local GC content at a pace determined by local substitution rates. DNA segments subjected to higher rates experience disproportionately accelerated decay and are AT rich, whereas segments subjected to lower rates decay more slowly and are GC rich. Although we are unable to draw any conclusions about causal factors, the results support the hypothesis proposed by Khelifi, Meunier, Duret and Mouchiroud (2006. GC content evolution of the human and mouse genomes: insights from the study of processed pseudogenes in regions of different recombination rates. J Mol Evol 62:745-752) that the isochore structure has been reshaped over time. If rate variation were a determining factor, then the current isochore structure of mammalian genomes could result from the local differences in substitution rates. We predict that under current conditions strand-symmetric portions of the human genome will stabilize at an average GC content of 30% (considerably less than the current 42%), thus confirming that the human genome has not yet reached equilibrium.
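
    The decay toward equilibrium described here takes the standard relaxation form GC(t) = GC_eq + (GC_0 − GC_eq)·e^(−rt). A minimal sketch; the 42% current and 30% equilibrium values come from the abstract, while the rate is an arbitrary placeholder:

```python
import math

def gc_content(t, gc0, gc_eq, rate):
    """Exponential relaxation of GC content toward its equilibrium value."""
    return gc_eq + (gc0 - gc_eq) * math.exp(-rate * t)

# Current average 42% relaxing toward the predicted 30% equilibrium
now = gc_content(0.0, 0.42, 0.30, rate=0.01)
far_future = gc_content(1e6, 0.42, 0.30, rate=0.01)
```

    Regions with higher substitution rates simply carry a larger `rate`, which is why they decay disproportionately faster toward the AT-rich side.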

  13. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data from the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error, computed using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rate series. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rate series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
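
    The three accuracy measures used in the comparison are straightforward to compute; a minimal sketch with made-up actual/forecast values:

```python
def mse(actual, forecast):
    """Mean Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error (percent); assumes no zero actual values."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [10.0, 20.0]
forecast = [12.0, 18.0]
```

    The model (ARIMA or exponential smoothing) with the smaller values across these three metrics is declared the better forecaster for the series in question.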

  14. Exponential decay of radiocesium activity in mushrooms with the help of acetic acid

    International Nuclear Information System (INIS)

    Kunova, V.; Dvorak, P.; Benova, K.

    2004-01-01

    The gross activity of radiocesium in food from environmental ecosystems is decreasing more slowly than expected, and the topic therefore repeatedly attracts public attention. Mushrooms, game and forest fruits belong to this category. Interest in this problem also follows from the substantial tightening of the admissible levels of radioactive contamination of food ( 137 Cs and 134 Cs) after the Chernobyl accident: the limit set by public notice in the Czech Republic is 600 Bq/kg, in unity with the European Union. Possibilities to decrease the content of radiocesium in food can therefore be sought. Mushrooms in particular accumulate considerable quantities of radiocesium. Samples of Boletus badius in three different conditions, coming from two different localities, were examined. The activity of radiocesium was detected by gamma-spectrometry (Canberra). To decrease the radiocesium content, elution in a 2% solution of acetic acid was used. The curves obtained by graphic analysis have an exponential nature. (authors)

  15. Exponential random graph models for networks with community structure.

    Science.gov (United States)

    Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

    2013-09-01

    Although community structure organization is an important characteristic of real-world networks, most of the traditional network models fail to reproduce this feature. Therefore, the models are useless as benchmark graphs for testing community detection algorithms. They are also inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we mainly build upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.

  16. Contribution of mono-exponential, bi-exponential and stretched exponential model-based diffusion-weighted MR imaging in the diagnosis and differentiation of uterine cervical carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Meng; Yu, Xiaoduo; Chen, Yan; Ouyang, Han; Zhou, Chunwu [Chinese Academy of Medical Sciences, Department of Diagnostic Radiology, Cancer Institute and Hospital, Peking Union Medical College, Beijing (China); Wu, Bing; Zheng, Dandan [GE MR Research China, Beijing (China)

    2017-06-15

    To investigate the potential of various metrics derived from mono-exponential model (MEM), bi-exponential model (BEM) and stretched exponential model (SEM)-based diffusion-weighted imaging (DWI) in diagnosing and differentiating the pathological subtypes and grades of uterine cervical carcinoma. 71 newly diagnosed patients with cervical carcinoma (50 cases of squamous cell carcinoma [SCC] and 21 cases of adenocarcinoma [AC]) and 32 healthy volunteers received DWI with multiple b values. The apparent diffusion coefficient (ADC), pure molecular diffusion (D), pseudo-diffusion coefficient (D*), perfusion fraction (f), water molecular diffusion heterogeneity index (alpha), and distributed diffusion coefficient (DDC) were calculated and compared between tumour and normal cervix, and among different pathological subtypes and grades. All of the parameters except alpha were significantly lower in cervical carcinoma than in normal cervical stroma. SCC showed lower ADC, D, f and DDC values and a higher D* value than AC; the D and DDC values of SCC and the ADC and D values of AC were lower in the poorly differentiated group than in the well-moderately differentiated group. Compared with MEM, diffusion parameters from BEM and SEM may offer additional information in cervical carcinoma diagnosis and in predicting pathological tumour subtypes and grades, with f and D showing particular promise. (orig.)
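
    The three signal models compared in this study have simple closed forms for the normalized signal S(b)/S0; a sketch of each (any parameter values passed in are illustrative):

```python
import math

def mono_exp(b, adc):
    """Mono-exponential model: S/S0 = exp(-b * ADC)."""
    return math.exp(-b * adc)

def bi_exp(b, f, d_star, d):
    """Bi-exponential (IVIM) model with perfusion fraction f,
    pseudo-diffusion coefficient D* and pure diffusion coefficient D."""
    return f * math.exp(-b * d_star) + (1.0 - f) * math.exp(-b * d)

def stretched_exp(b, ddc, alpha):
    """Stretched exponential model with distributed diffusion coefficient DDC
    and heterogeneity index alpha in (0, 1]."""
    return math.exp(-((b * ddc) ** alpha))
```

    Both generalizations collapse back to the mono-exponential form: the bi-exponential when f = 0 and the stretched exponential when alpha = 1, which is why MEM is the baseline against which BEM and SEM are compared.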

  17. Galilean invariance in the exponential model of atomic collisions

    Energy Technology Data Exchange (ETDEWEB)

    del Pozo, A.; Riera, A.; Yaez, M.

    1986-11-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  18. Exponential critical-state model for magnetization of hard superconductors

    International Nuclear Information System (INIS)

    Chen, D.; Sanchez, A.; Munoz, J.S.

    1990-01-01

    We have calculated the initial magnetization curves and hysteresis loops for hard type-II superconductors based on the exponential-law model, J_c(H_i) = k exp(-|H_i|/H_0), where k and H_0 are constants. After discussing the general behavior of penetrated supercurrents in an infinitely long column specimen, we define a general cross-sectional shape based on two equal circles of radius a, which can be rendered into a circle, a rectangle, or many other shapes. With increasing parameter p (= ka/H_0), the computed M-H curves show obvious differences from those computed from Kim's model and approach the results of a simple infinitely narrow square pulse J_c(H_i). For high-T_c superconductors, our results can be applied to the study of the magnetic properties and the critical-current density of single crystals, as well as to the determination of the intergranular critical-current density from magnetic measurements.
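
    The exponential-law constitutive relation quoted above is directly computable; a trivial sketch with placeholder constants (the magnetization curves themselves require the full critical-state field integration, which is not attempted here):

```python
import math

def j_c(h_i, k, h0):
    """Exponential critical-state law: J_c(H_i) = k * exp(-|H_i| / H0)."""
    return k * math.exp(-abs(h_i) / h0)

# Placeholder values for k and H0 (arbitrary units)
j_zero_field = j_c(0.0, 1.0e8, 100.0)  # equals k at zero internal field
```

    Note that J_c depends only on |H_i|, so the law is symmetric in the field direction, and it reduces toward Bean's constant-J_c model as H0 grows large.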

  19. Evaluation of multi-exponential curve fitting analysis of oxygen-quenched phosphorescence decay traces for recovering microvascular oxygen tension histograms

    NARCIS (Netherlands)

    Bezemer, Rick; Faber, Dirk J.; Almac, Emre; Kalkman, Jeroen; Legrand, Matthieu; Heger, Michal; Ince, Can

    2010-01-01

    Although it is generally accepted that oxygen-quenched phosphorescence decay traces can be analyzed using the exponential series method (ESM), its application until now has been limited to a few (patho)physiological studies, probably because the reliability of the recovered oxygen tension (pO(2))

  20. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived by using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions. (paper)

  1. An empirical test of pseudo random number generators by means of an exponentially decaying process

    Energy Technology Data Exchange (ETDEWEB)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A. [Facultad de Fisica e Inteligencia Artificial, Universidad Veracruzana, A.P. 475, Xalapa, Veracruz (Mexico); Mora F, L.E. [CIMAT, A.P. 402, 36000 Guanajuato (Mexico)]. e-mail: hcoronel@uv.mx

    2007-07-01

    Empirical tests for pseudo random number generators based on the use of processes or physical models have been applied successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
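
    One way to realize such a test is to simulate a decay process with the generator under scrutiny and compare the survivor counts against the expected exponential law n(t) = n0·(1 − p)^t. The sketch below uses Python's built-in Mersenne Twister as a stand-in for the generators examined in the paper:

```python
import random

def simulate_decay(n0, p, steps, rng):
    """Monte Carlo decay: each surviving particle decays with probability p
    per time step, so the expected survivor count is n0 * (1 - p) ** t."""
    counts = [n0]
    n = n0
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if rng.random() < p)
        counts.append(n)
    return counts

counts = simulate_decay(10000, 0.1, 20, random.Random(42))
# A good generator keeps this trace statistically close to 10000 * 0.9**t.
```

    A statistical test (e.g. chi-square on the deviations from the expected curve) then flags generators whose traces depart systematically from the exponential law.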

  2. The Exponential Distribution and the Application to Markov Models ...

    African Journals Online (AJOL)

    ... are close to zero, and very long times are increasingly unlikely. That is, the most likely values are considered to be clustered about the mean, and large deviations from the mean are viewed as increasingly unlikely. If this characteristic of the negative exponential distribution seems incompatible with the application one has ...

  3. Focus Article: Oscillatory and long-range monotonic exponential decays of electrostatic interactions in ionic liquids and other electrolytes: The significance of dielectric permittivity and renormalized charges

    Science.gov (United States)

    Kjellander, Roland

    2018-05-01

    A unified treatment of oscillatory and monotonic exponential decays of interactions in electrolytes is displayed, which highlights the role of dielectric response of the fluid in terms of renormalized (effective) dielectric permittivity and charges. An exact, but physically transparent statistical mechanical formalism is thereby used, which is presented in a systematic, pedagogical manner. Both the oscillatory and monotonic behaviors are given by an equation for the decay length of screened electrostatic interactions that is very similar to the classical expression for the Debye length. The renormalized dielectric permittivities, which have similar roles for electrolytes as the dielectric constant has for pure polar fluids, consist in general of several entities with different physical meanings. They are connected to dielectric response of the fluid on the same length scale as the decay length of the screened interactions. Only in cases where the decay length is very long, these permittivities correspond approximately to a dielectric response in the long-wavelength limit, like the dielectric constant for polar fluids. Experimentally observed long-range exponentially decaying surface forces are analyzed as well as the oscillatory forces observed for short to intermediate surface separations. Both occur in some ionic liquids and in concentrated as well as very dilute electrolyte solutions. The coexisting modes of decay are in general determined by the bulk properties of the fluid and not by the solvation of the surfaces; in the present cases, they are given by the behavior of the screened Coulomb interaction of the bulk fluid. The surface-fluid interactions influence the amplitudes and signs or phases of the different modes of the decay, but not their decay lengths and wavelengths. The similarities between some ionic liquids and very dilute electrolyte solutions as regards both the long-range monotonic and the oscillatory decays are analyzed.

  4. Recent developments in exponential random graph (p*) models for social networks

    NARCIS (Netherlands)

    Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa

    This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over

  5. Vector meson decays in the chiral bag model

    International Nuclear Information System (INIS)

    Maxwell, O.V.; Jennings, B.K.

    1985-01-01

    Vector meson decays are examined in a model where a confined quark and antiquark annihilate, producing a pair of elementary pseudoscalar mesons. Two versions of the pseudoscalar meson-quark interaction are employed, one where the coupling is restricted to the bag surface and one where it extends throughout the bag volume. Energy conservation is ensured in the model through insertion of exponential factors containing the bag energy at each interaction vertex. To guarantee momentum conservation, a wave-packet description is utilized in which the decay widths are normalized by a factor involving the overlap of the initial bag state with the confined qq̄ state of zero momentum. With either interaction, the model yields a value for the ρ width that exceeds the empirical width by a factor of two. For the K* and φ mesons, the computed widths depend strongly on the interaction employed. Implications of these results for chiral bag models are discussed. (orig.)

  6. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    International Nuclear Information System (INIS)

    Kjellander, Roland

    2006-01-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r⁻⁶ interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r⁻¹ and the dispersion r⁻⁶ potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r⁻⁶ potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r⁻⁸, for large r. The pair distribution function acquires, at the same time, an r⁻¹⁰ decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r⁻⁶ interaction is switched on. When the Coulomb interaction is turned off but the dispersion r⁻⁶ pair potential is kept, the decay of the pair distribution function for large r goes over from the r⁻¹⁰ to an r⁻⁶ behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.
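The screening behaviour discussed in the two records above reduces, in the dilute limit, to classical Debye screening. As a concrete reference point, here is a minimal sketch (not from either paper; the function name and default arguments are our own) computing the Debye length for a symmetric z:z electrolyte:

```python
import numpy as np
from scipy.constants import epsilon_0, k as k_B, e, N_A

def debye_length(conc_molar, z=1, eps_r=78.4, T=298.15):
    """Debye screening length (m) of a symmetric z:z electrolyte.

    conc_molar: salt concentration in mol/L; eps_r: relative permittivity
    of the solvent (default: water at 25 degrees C)."""
    n = conc_molar * 1000 * N_A                      # ion number density, 1/m^3
    kappa_sq = 2 * n * (z * e) ** 2 / (epsilon_0 * eps_r * k_B * T)
    return 1.0 / np.sqrt(kappa_sq)
```

For a 0.1 M 1:1 aqueous salt at room temperature this gives roughly 0.96 nm, the familiar textbook value; the renormalized decay-length equation of the records above reduces to this expression when the effective permittivity and charges equal their bare values.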

  7. Esscher transforms and the minimal entropy martingale measure for exponential Lévy models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Sgarra, C.

    In this paper we offer a systematic survey and comparison of the Esscher martingale transform for linear processes, the Esscher martingale transform for exponential processes, and the minimal entropy martingale measure for exponential Lévy models and present some new results in order to give...

  8. Meson decays in a quark model

    International Nuclear Information System (INIS)

    Roberts, W.; Silvestre-Brac, B.

    1998-01-01

    A recent model of hadron states is extended to include meson decays. We find that the overall success of the model is quite good. Possible improvements to the model are suggested. copyright 1997 The American Physical Society

  9. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming; Yuan, Ying

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we

  10. Δ-decay in the Skyrme model

    International Nuclear Information System (INIS)

    Verschelde, H.

    1988-01-01

    The Δ-decay matrix element is calculated while carefully paying attention to ordering problems. The decay width obtained is too large by a factor of four. Arguments are given that this discrepancy is not a defect of the Skyrme model but a consequence of the rigid rotor quantization. (orig.)

  11. Decay rates of quarkonia and potential models

    International Nuclear Information System (INIS)

    Rai, Ajay Kumar; Pandya, J N; Vinodkumar, P C

    2005-01-01

    The decay rates of cc̄ and bb̄ mesons have been studied with contributions from different correction terms. The corrections, based on hard processes involved in the decays, are quantitatively studied in the framework of different phenomenological potential models

  12. Double beta decay and neutrino mass models

    Energy Technology Data Exchange (ETDEWEB)

    Helo, J.C. [Universidad Técnica Federico Santa María, Centro-Científico-Tecnológico de Valparaíso, Casilla 110-V, Valparaíso (Chile); Hirsch, M. [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Ota, T. [Department of Physics, Saitama University, Shimo-Okubo 255, 338-8570 Saitama-Sakura (Japan); Santos, F.A. Pereira dos [Departamento de Física, Pontifícia Universidade Católica do Rio de Janeiro,Rua Marquês de São Vicente 225, 22451-900 Gávea, Rio de Janeiro (Brazil)

    2015-05-19

    Neutrinoless double beta decay makes it possible to constrain lepton-number-violating extensions of the standard model. If neutrinos are Majorana particles, the mass mechanism will always contribute to the decay rate; however, it is not a priori guaranteed to be the dominant contribution in all models. Here, we discuss whether the mass mechanism dominates or not from the theory point of view. We classify all possible (scalar-mediated) short-range contributions to the decay rate according to the loop level at which the corresponding models will generate Majorana neutrino masses, and discuss the expected relative size of the different contributions to the decay rate in each class. Our discussion is general for models based on the SM gauge group but does not cover models with an extended gauge group. We also work out in some detail the phenomenology of one concrete 2-loop model in which both the mass mechanism and the short-range diagram might lead to competitive contributions.

  13. Magnetic field decay in model SSC dipoles

    International Nuclear Information System (INIS)

    Gilbert, W.S.; Althaus, R.F.; Barale, P.J.; Benjegerdes, R.W.; Green, M.A.; Green, M.I.; Scanlan, R.M.

    1988-08-01

    We have observed that some of our model SSC dipoles have long-time-constant decays of the magnetic field harmonics, with amplitudes large enough to result in significant beam loss if they are not corrected. The magnets were run at constant current at the SSC injection field level of 0.3 tesla for one to three hours and changes in the magnetic field were observed. One explanation for the observed field decay is time-dependent superconductor magnetization. Another explanation involves flux creep or flux flow. Data are presented on how the decay changes with previous flux history. Similar magnets with different Nb-Ti filament spacings and matrix materials have different long-time field decay. A theoretical model using proximity coupling and flux creep for the observed field decay is discussed. 10 refs., 5 figs., 2 tabs

  14. Vacuum decay in a soluble model

    International Nuclear Information System (INIS)

    Camargo Filho, A.F. de; Shellard, R.C.; Marques, G.C.

    1983-03-01

    A field-theoretical model is studied, where the decay rate of the false vacuum can be computed up to the first quantum corrections in both the high-temperature and zero-temperature limits. It is found that the dependence of the decay rate on the height and width of the potential barrier does not follow the same simple area rule as in the quantum-mechanical case. Furthermore, its behaviour is strongly model-dependent. (Author)

  15. A generalized exponential time series regression model for electricity prices

    DEFF Research Database (Denmark)

    Haldrup, Niels; Knapik, Oskar; Proietti, Tomasso

    on the estimated model, the best linear predictor is constructed. Our modeling approach provides good fit within sample and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better...

  16. Separation of type and grade in cervical tumours using non-mono-exponential models of diffusion-weighted MRI

    International Nuclear Information System (INIS)

    Winfield, Jessica M.; Collins, David J.; Morgan, Veronica A.; DeSouza, Nandita M.; Orton, Matthew R.; Ind, Thomas E.J.; Attygalle, Ayoma; Hazell, Steve

    2017-01-01

    Assessment of empirical diffusion-weighted MRI (DW-MRI) models in cervical tumours to investigate whether fitted parameters distinguish between types and grades of tumours. Forty-two patients (24 squamous cell carcinomas, 14 well/moderately differentiated, 10 poorly differentiated; 15 adenocarcinomas, 13 well/moderately differentiated, two poorly differentiated; three rare types) were imaged at 3 T using nine b-values (0 to 800 s mm⁻²). Mono-exponential, stretched exponential, kurtosis, statistical, and bi-exponential models were fitted. Model preference was assessed using Bayesian Information Criterion analysis. Differences in fitted parameters between tumour types/grades and correlation between fitted parameters were assessed using two-way analysis of variance and Pearson's linear correlation coefficient, respectively. Non-mono-exponential models were preferred by 83 % of tumours with bi-exponential and stretched exponential models preferred by the largest numbers of tumours. Apparent diffusion coefficient (ADC) and diffusion coefficients from non-mono-exponential models were significantly lower in poorly differentiated tumours than well/moderately differentiated tumours. α (stretched exponential), K (kurtosis), f and D* (bi-exponential) were significantly different between tumour types. Strong correlation was observed between ADC and diffusion coefficients from other models. Non-mono-exponential models were preferred to the mono-exponential model in DW-MRI data from cervical tumours. Parameters of non-mono-exponential models showed significant differences between types and grades of tumours. (orig.)

  17. Separation of type and grade in cervical tumours using non-mono-exponential models of diffusion-weighted MRI

    Energy Technology Data Exchange (ETDEWEB)

    Winfield, Jessica M.; Collins, David J.; Morgan, Veronica A.; DeSouza, Nandita M. [The Royal Marsden NHS Foundation Trust, MRI Unit, Sutton, Surrey (United Kingdom); The Institute of Cancer Research, Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, London (United Kingdom); Orton, Matthew R. [The Institute of Cancer Research, Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, London (United Kingdom); Ind, Thomas E.J. [The Royal Marsden NHS Foundation Trust, Gynaecology Unit, London (United Kingdom); Attygalle, Ayoma; Hazell, Steve [The Royal Marsden NHS Foundation Trust, Department of Histopathology, London (United Kingdom)

    2017-02-15

    Assessment of empirical diffusion-weighted MRI (DW-MRI) models in cervical tumours to investigate whether fitted parameters distinguish between types and grades of tumours. Forty-two patients (24 squamous cell carcinomas, 14 well/moderately differentiated, 10 poorly differentiated; 15 adenocarcinomas, 13 well/moderately differentiated, two poorly differentiated; three rare types) were imaged at 3 T using nine b-values (0 to 800 s mm⁻²). Mono-exponential, stretched exponential, kurtosis, statistical, and bi-exponential models were fitted. Model preference was assessed using Bayesian Information Criterion analysis. Differences in fitted parameters between tumour types/grades and correlation between fitted parameters were assessed using two-way analysis of variance and Pearson's linear correlation coefficient, respectively. Non-mono-exponential models were preferred by 83 % of tumours with bi-exponential and stretched exponential models preferred by the largest numbers of tumours. Apparent diffusion coefficient (ADC) and diffusion coefficients from non-mono-exponential models were significantly lower in poorly differentiated tumours than well/moderately differentiated tumours. α (stretched exponential), K (kurtosis), f and D* (bi-exponential) were significantly different between tumour types. Strong correlation was observed between ADC and diffusion coefficients from other models. Non-mono-exponential models were preferred to the mono-exponential model in DW-MRI data from cervical tumours. Parameters of non-mono-exponential models showed significant differences between types and grades of tumours. (orig.)
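The mono-exponential and stretched exponential fits described in the two records above can be reproduced on synthetic data with standard least-squares tools. A rough sketch (the b-value grid mirrors the study's 0-800 s mm⁻² range, but the signal and parameter values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 100, 200, 300, 500, 650, 800.0])  # s/mm^2

def mono(b, s0, adc):
    # S(b) = S0 * exp(-b * ADC)
    return s0 * np.exp(-b * adc)

def stretched(b, s0, d, alpha):
    # S(b) = S0 * exp(-(b * D)^alpha), alpha in (0, 1]
    return s0 * np.exp(-(b * d) ** alpha)

# Synthetic "tissue" with genuinely stretched-exponential decay plus noise:
rng = np.random.default_rng(0)
signal = stretched(b, 1.0, 1.2e-3, 0.7) + rng.normal(0, 0.005, b.size)

p_mono, _ = curve_fit(mono, b, signal, p0=[1, 1e-3])
p_str, _ = curve_fit(stretched, b, signal, p0=[1, 1e-3, 0.9],
                     bounds=([0, 0, 0.3], [2, 5e-3, 1.0]))
```

Comparing the residual sums of squares (penalized for the extra parameter, as the Bayesian Information Criterion in the study does) then shows the preference for the non-mono-exponential model.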

  18. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    Science.gov (United States)

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi
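The core idea of GA-MLR, namely that amplitudes entering the model linearly can be solved exactly by linear regression while only the nonlinear decay rates need a global search, can be sketched on a toy biexponential decay (a crude stochastic search stands in for the GA here, followed by a local polish; all names and values are ours):

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
# Toy data: y(t) = 2 exp(-0.5 t) + 1 exp(-2 t) + noise
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t) + rng.normal(0, 0.01, t.size)

def residual_ss(rates):
    """For candidate nonlinear rates, solve the linear amplitudes exactly
    by least squares (the 'MLR' step) and return the residual sum of squares."""
    X = np.exp(-np.outer(t, rates))              # one column per decay rate
    amps, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((X @ amps - y) ** 2), amps

# Crude stochastic search over the two nonlinear rates (GA stand-in):
best = (np.inf, None)
for _ in range(5000):
    rates = rng.uniform(0.05, 5.0, size=2)
    ss, _ = residual_ss(rates)
    if ss < best[0]:
        best = (ss, rates)

# Local polish of the best candidate:
res = minimize(lambda r: residual_ss(r)[0], best[1], method="Nelder-Mead")
```

Because the amplitudes are never part of the searched parameter vector, the search space is two-dimensional instead of four-dimensional, which is exactly the acceleration the abstract attributes to GA-MLR (and to variable projection).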

  19. Pair correlation function decay in models of simple fluids that contain dispersion interactions.

    Science.gov (United States)

    Evans, R; Henderson, J R

    2009-11-25

    We investigate the intermediate- and longest-range decay of the total pair correlation function h(r) in model fluids where the inter-particle potential decays as −r⁻⁶, as is appropriate to real fluids in which dispersion forces govern the attraction between particles. It is well known that such interactions give rise to a term in q³ in the expansion of ĉ(q), the Fourier transform of the direct correlation function. Here we show that the presence of the r⁻⁶ tail changes significantly the analytic structure of ĉ(q) from that found in models where the inter-particle potential is short ranged. In particular the pure imaginary pole at q = iα₀, which generates monotonic-exponential decay of rh(r) in the short-ranged case, is replaced by a complex (pseudo-exponential) pole at q = iα₀ + α₁ whose real part α₁ is negative and generally very small in magnitude. Near the critical point α₁ ∼ −α₀², and we show how classical Ornstein-Zernike behaviour of the pair correlation function is recovered on approaching the mean-field critical point. Explicit calculations, based on the random phase approximation, enable us to demonstrate the accuracy of asymptotic formulae for h(r) in all regions of the phase diagram and to determine a pseudo-Fisher-Widom (pFW) line. On the high density side of this line, intermediate-range decay of rh(r) is exponentially damped-oscillatory and the ultimate long-range decay is power-law, proportional to r⁻⁶, whereas on the low density side this damped-oscillatory decay is sub-dominant to both monotonic-exponential and power-law decay. Earlier analyses did not identify the pseudo-exponential pole and therefore the existence of the pFW line. Our results enable us to write down the generic wetting potential for a 'real' fluid exhibiting both short-ranged and dispersion interactions. The monotonic-exponential decay of correlations associated with the pseudo-exponential pole introduces additional terms into

  20. Rare top quark decays in extended models

    International Nuclear Information System (INIS)

    Gaitan, R.; Miranda, O. G.; Cabral-Rosetti, L. G.

    2006-01-01

    Flavor changing neutral current (FCNC) decays t → H⁰ + c, t → Z + c, and H⁰ → t + c̄ are discussed in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions, where FCNC decays may take place at tree level and are only suppressed by the mixing between ordinary top and charm quarks, which is poorly constrained by current experimental values. The non-manifest case is also briefly discussed

  1. Top quark decays in extended models

    International Nuclear Information System (INIS)

    Gaitan, R.; Cabral-Rosetti, L.G.

    2011-01-01

    We evaluate the FCNC decays t → H⁰ + c at tree level and t → γ + c at one-loop level in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions; in the first case, the FCNC decays occur at tree level and are only suppressed by the mixing between ordinary top and charm quarks. (author)

  2. Modeling volatile organic compounds sorption on dry building materials using double-exponential model

    International Nuclear Information System (INIS)

    Deng, Baoqing; Ge, Di; Li, Jiajia; Guo, Yuan; Kim, Chang Nyung

    2013-01-01

    A double-exponential surface sink model for VOCs sorption on building materials is presented. Here, the diffusion of VOCs in the material is neglected and the material is viewed as a surface sink. The VOCs concentration in the air adjacent to the material surface is introduced and assumed to always maintain equilibrium with the material-phase concentration. It is assumed that the sorption can be described by mass transfer between the room air and the air adjacent to the material surface. The mass transfer coefficient is evaluated from the empirical correlation, and the equilibrium constant can be obtained by linear fitting to the experimental data. The present model is validated through experiments in small and large test chambers. The predicted results accord well with the experimental data in both the adsorption stage and desorption stage. The model avoids the ambiguity of model constants found in other surface sink models and is easy to scale up

  3. The Exponential Model for the Spectrum of a Time Series: Extensions and Applications

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Luati, Alessandra

    The exponential model for the spectrum of a time series and its fractional extensions are based on the Fourier series expansion of the logarithm of the spectral density. The coefficients of the expansion form the cepstrum of the time series. After deriving the cepstrum of important classes of time...

  4. Solar system tests of scalar field models with an exponential potential

    International Nuclear Information System (INIS)

    Paramos, J.; Bertolami, O.

    2008-01-01

    We consider a scenario where the dynamics of a scalar field is ruled by an exponential potential, such as those arising from some quintessence-type models, and aim at obtaining phenomenological manifestations of this entity within our Solar System. To do so, we assume a perturbative regime, derive the perturbed Schwarzschild metric, and extract the relevant post-Newtonian parameters.

  5. Asymptotic Estimates of Gerber-Shiu Functions in the Renewal Risk Model with Exponential Claims

    Institute of Scientific and Technical Information of China (English)

    Li WEI

    2012-01-01

    This paper continues to study the asymptotic behavior of Gerber-Shiu expected discounted penalty functions in the renewal risk model as the initial capital becomes large. Under the assumption that the claim-size distribution is exponential, we establish an explicit asymptotic formula. Some straightforward consequences of this formula match existing results in the field.

  6. Exponential law as a more compatible model to describe orbits of planetary systems

    Directory of Open Access Journals (Sweden)

    M Saeedi

    2012-12-01

    According to the Titius-Bode law, orbits of planets in the solar system obey a geometric progression. Many investigations have been launched to improve this law. In this paper, we apply square and exponential models to the planets of the solar system, the moons of planets, and some extrasolar systems, and compare them with each other.
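An exponential orbit law of the kind compared in this record says aₙ ≈ a₀kⁿ, so ln aₙ is linear in the planet index. A quick sketch fitting this to the well-known solar-system semi-major axes (the asteroid-belt slot of the original Titius-Bode formulation is deliberately omitted here, which worsens the fit somewhat):

```python
import numpy as np

# Semi-major axes in AU, Mercury through Neptune:
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537, 19.19, 30.07])
n = np.arange(a.size)

# Fit ln(a_n) = ln(a_0) + n ln(k) by ordinary least squares:
slope, intercept = np.polyfit(n, np.log(a), 1)
k = np.exp(slope)   # common ratio of the fitted geometric progression
```

The fitted ratio k comes out near 1.9, i.e. each orbit is roughly twice as large as the previous one, which is the geometric-progression structure the record's exponential model formalizes.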

  7. A model for dark energy decay

    Energy Technology Data Exchange (ETDEWEB)

    Abdalla, Elcio, E-mail: eabdalla@usp.br [Instituto de Física, Universidade de São Paulo, CP 66318, 05315-970, São Paulo (Brazil); Graef, L.L., E-mail: leilagraef@usp.br [Instituto de Física, Universidade de São Paulo, CP 66318, 05315-970, São Paulo (Brazil); Wang, Bin, E-mail: wang_b@sjtu.edu.cn [INPAC and Department of Physics, Shanghai Jiao Tong University, 200240 Shanghai (China)

    2013-11-04

    We discuss a model of nonperturbative decay of dark energy. We suggest the possibility that this model can provide a mechanism from the field theory to realize the energy transfer from dark energy into dark matter, which is the requirement to alleviate the coincidence problem. The advantage of the model is the fact that it accommodates a mean life compatible with the age of the universe. We also argue that supersymmetry is a natural set up, though not essential.

  8. On the violation of the exponential decay law in atomic physics: ab initio calculation of the time-dependence of the He⁻ 1s2p² ⁴P non-stationary state

    International Nuclear Information System (INIS)

    Nicolaides, C.A.; Mercouris, T.

    1996-01-01

    The detailed time dependence of the decay of a three-electron autoionizing state close to threshold has been obtained ab initio by solving the time-dependent Schrödinger equation (TDSE). The theory allows the definition and computation of energy-dependent matrix elements in terms of the appropriate N-electron wavefunctions, representing the localized initial state, Ψ₀, the stationary scattering states of the continuous spectrum, U(ε), and the localized excited states, Ψₙ, of the effective Hamiltonian QHQ, where Q ≡ 1 − |Ψ₀⟩⟨Ψ₀|. The time-dependent wavefunction is expanded over these states and the resulting coupled equations with time-dependent coefficients (in the thousands) are solved to all orders by a Taylor series expansion technique. The robustness of the method was verified by using a model interaction in analytic form and comparing the results from two different methods for integrating the TDSE (appendix B). For the physically relevant application, the chosen state was the He⁻ 1s2p² ⁴P shape resonance, about which very accurate theoretical and experimental information exists. Calculations using accurate wavefunctions and an energy grid of 20,000 points in the range 0.0-21.77 eV show that the effective interaction depends on energy in a state-specific manner, thereby leading to state-specific characteristics of non-exponential decay (NED) over about 6 × 10⁴ au of time, from which a width of Γ = 5.2 meV and a lifetime of 1.26 × 10⁻¹³ s are deduced. The results suggest that either in this state or in other autoionizing states close to threshold, NED may have a sufficient presence to make the violation of the law of exponential decay observable. (Author)

  9. Firing patterns in the adaptive exponential integrate-and-fire model.

    Science.gov (United States)

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
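The two model equations (membrane potential with an exponential spike-initiation term, plus an adaptation current w) are straightforward to integrate with forward Euler. A sketch using the tonic-spiking parameter set published by Brette and Gerstner (2005); the spike-detection threshold at 0 mV and the function interface are our own choices:

```python
import numpy as np

def adex(I_amp, T=0.5, dt=1e-4):
    """Forward-Euler integration of the adaptive exponential
    integrate-and-fire neuron; returns the spike count for a
    constant input current I_amp (A) over T seconds."""
    # Parameters in SI units (Brette & Gerstner 2005, tonic spiking):
    C, gL, EL = 281e-12, 30e-9, -70.6e-3
    VT, DT = -50.4e-3, 2e-3
    tauw, a, b, Vr = 144e-3, 4e-9, 80.5e-12, -70.6e-3
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I_amp) / C
        dw = (a*(V - EL) - w) / tauw
        V += dt * dV
        w += dt * dw
        if V > 0.0:                 # spike: reset V, increment adaptation
            V, w, spikes = Vr, w + b, spikes + 1
    return spikes
```

Sweeping a, b, tauw and Vr over the ranges given in the paper is what produces the different firing patterns (bursting, adaptation, tonic spiking) catalogued in its phase diagram.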

  10. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement.

    Science.gov (United States)

    Gustman, Alan L; Steinmeier, Thomas L

    2012-06-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used.
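The behavioural difference the paper exploits can be seen directly in the discount functions: an exponential discounter applies the same one-period discount at every horizon, while a hyperbolic discounter is markedly more impatient between "now" and "next period" than between two distant periods. A sketch (parameter values are arbitrary):

```python
def exponential(t, delta=0.96):
    # Standard exponential discounting: D(t) = delta^t
    return delta ** t

def hyperbolic(t, k=0.25):
    # Hyperbolic discounting: D(t) = 1 / (1 + k t)
    return 1.0 / (1.0 + k * t)

# One-period discount ratios D(t+1)/D(t) at a near and a far horizon:
exp_near = exponential(1) / exponential(0)
exp_far = exponential(11) / exponential(10)
hyp_near = hyperbolic(1) / hyperbolic(0)
hyp_far = hyperbolic(11) / hyperbolic(10)
```

The exponential ratios are identical at both horizons, while the hyperbolic near-term ratio is much smaller than the far-term one; this "present bias" is what makes the hyperbolic discounter's valuation of a real annuity, and hence the policy simulations, differ from the exponential case.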

  11. Modeling of the pyrolysis of biomass under parabolic and exponential temperature increases using the Distributed Activation Energy Model

    International Nuclear Information System (INIS)

    Soria-Verdugo, Antonio; Goos, Elke; Arrieta-Sanagustín, Jorge; García-Hernando, Nestor

    2016-01-01

    Highlights: • Pyrolysis of biomass under parabolic and exponential temperature profiles is modeled. • The model is based on a simplified Distributed Activation Energy Model. • 4 biomasses are analyzed in TGA with parabolic and exponential temperature increases. • Deviations between the model prediction and TGA measurements are under 5 °C. - Abstract: A modification of the simplified Distributed Activation Energy Model is proposed to simulate the pyrolysis of biomass under parabolic and exponential temperature increases. The pyrolysis of pine wood, olive kernel, thistle flower and corncob was experimentally studied in a TGA Q500 thermogravimetric analyzer. The results of the measurements of nine different parabolic and exponential temperature increases for each sample were employed to validate the models proposed. The deviation between the experimental TGA measurements and the estimation of the reacted fraction during the pyrolysis of the four samples under parabolic and exponential temperature increases was lower than 5 °C for all the cases studied. The models derived in this work to describe the pyrolysis of biomass with parabolic and exponential temperature increases were found to be in good agreement with the experiments conducted in a thermogravimetric analyzer.
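A generic DAEM evaluation under an arbitrary temperature programme can be written as a double numerical integral over time and a Gaussian activation-energy distribution; the parabolic heating profile below mirrors the kind of non-linear temperature increase studied in the record, but all kinetic parameters are invented for illustration and this is the full DAEM, not the paper's simplified version:

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

R = 8.314  # gas constant, J/(mol K)

def daem_alpha(t, T_of_t, A=1e13, E0=200e3, sigma=20e3):
    """Converted fraction alpha(t) for a Gaussian activation-energy
    distribution f(E) (mean E0, std sigma, J/mol) and first-order
    Arrhenius kinetics under temperature programme T_of_t(t) in K."""
    E = np.linspace(E0 - 4*sigma, E0 + 4*sigma, 201)           # J/mol grid
    f = np.exp(-(E - E0)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))
    T = T_of_t(t)
    k = A * np.exp(-E[None, :] / (R * T[:, None]))             # k(t, E)
    psi = cumulative_trapezoid(k, t, axis=0, initial=0.0)      # int k dt'
    unreacted = trapezoid(f[None, :] * np.exp(-psi), E, axis=1)
    return 1.0 - unreacted

# Parabolic temperature increase from 300 K to 1100 K over one hour:
t = np.linspace(0.0, 3600.0, 2000)
alpha_vals = daem_alpha(t, lambda t: 300.0 + 800.0 * (t / 3600.0) ** 2)
```

Replacing the lambda with an exponential temperature profile reproduces the other heating case treated in the record; the simplification the paper builds on amounts to approximating the inner (time) integral analytically.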

  12. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    Science.gov (United States)

    Strauß, Magdalena E; Mezzetti, Maura; Leorato, Samantha

    2017-05-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms.
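The MESS specification replaces the matrix inverse (I − ρW)⁻¹ of SAR-type models with a matrix exponential, e^{αW}y = Xβ + ε, which is always invertible since (e^{αW})⁻¹ = e^{−αW}. A toy sketch on a ring lattice (data-generating values are ours, and α is treated as known so that the estimation step reduces to ordinary least squares rather than the paper's Bayesian scheme):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 50

# Row-normalised spatial weight matrix: each site's neighbours on a ring.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

alpha, beta = 0.8, 2.0
X = rng.normal(size=(n, 1))
eps = rng.normal(0, 0.1, size=(n, 1))

# MESS data-generating process:  e^{alpha W} y = X beta + eps
y = expm(-alpha * W) @ (X * beta + eps)

# With alpha known, transforming y back yields an ordinary regression:
z = expm(alpha * W) @ y
beta_hat = np.linalg.lstsq(X, z, rcond=None)[0].item()
```

In the full model α is unknown and estimated jointly with β, but the transformation step shown here is exactly what each likelihood or posterior evaluation performs.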

  13. B decays and models for CP violation

    International Nuclear Information System (INIS)

    He, Xiao Gang

    1995-12-01

    The decay modes B → ππ, ψK_S, K⁻D, πK and ηK are promising channels to study the unitarity triangle of the CP-violating Cabibbo-Kobayashi-Maskawa (CKM) matrix. The consequences of these measurements in the Weinberg model are discussed. It is shown that measurements of CP violation in B decays can be used to distinguish the Standard Model from the Weinberg model and that the following different mechanisms for CP violation can be distinguished: 1) CP is violated in the CKM sector only; 2) CP is violated spontaneously in the Higgs sector only; and 3) CP is violated in both the CKM and Higgs sectors. 27 refs., 4 figs

  14. Lepton radiative decays in supersymmetric standard model

    International Nuclear Information System (INIS)

    Volkov, G.G.; Liparteliani, A.G.

    1988-01-01

    Radiative decays of charged leptons l_i → l_j γ(γ*) are discussed in the framework of the supersymmetric generalization of the standard model. The most general form of the form factors for the one-loop vertex function is written down, and the widths of the radiative decays are calculated. Scalar lepton masses are estimated at maximal mixing angle in the scalar sector, starting from the present upper limit on the branching ratio of the decay μ → eγ. For maximal mixing and the least mass degeneracy between scalar leptons of different generations, the lower limit m_ẽ > 1.5 TeV is obtained for the scalar electron mass. The scalar neutrino mass is O(1) TeV if the charged gaugino is lighter than the scalar neutrino. The result is sensitive to the choice of the lepton mixing angle in the scalar sector: decreasing sin²θ by an order of magnitude can relax the bound on the scalar electron mass by more than a factor of 3. In the latter case direct observation of scalar electrons at a 1 × 1 TeV e⁺e⁻ collider becomes possible

  15. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  16. Terahertz double-exponential model for adsorption of volatile organic compounds in active carbon

    International Nuclear Information System (INIS)

    Zhu, Jing; Zhan, Honglei; Miao, Xinyang; Zhao, Kun; Zhou, Qiong

    2017-01-01

    To evaluate diffusion-controlled adsorption and the diffusion rate, a mathematical model was built in this letter on the basis of the double-exponential kinetics model and the terahertz amplitude. The double-exponential-THz model describes a two-step mechanism controlled by diffusion: a rapid step involving external and internal diffusion, followed by a slow step controlled by intraparticle diffusion. The concentration gradient drives the organic molecules to diffuse rapidly to the external surface of the adsorbent. The solute molecules then transfer across the liquid film, after which intraparticle diffusion proceeds at a rate determined by molecular size and by the affinity between the organics and the activated carbon. (paper)
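The two-step kinetics described here are commonly written as q(t) = qe − A1·exp(−k1·t) − A2·exp(−k2·t), with one fast and one slow rate constant. A minimal fitting sketch on synthetic uptake data (the parameter values and the use of `scipy.optimize.curve_fit` are illustrative assumptions, not taken from the letter):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, qe, a1, k1, a2, k2):
    """q(t) = qe - a1*exp(-k1*t) - a2*exp(-k2*t): a fast step (k1)
    followed by a slow, intraparticle-diffusion-limited step (k2)."""
    return qe - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

# Synthetic uptake curve (illustrative parameters, arbitrary units)
t = np.linspace(0.0, 200.0, 80)
true_params = (10.0, 6.0, 0.30, 4.0, 0.015)
q_obs = double_exponential(t, *true_params)

# Recover the parameters from the synthetic data
popt, _ = curve_fit(double_exponential, t, q_obs,
                    p0=(9.0, 5.0, 0.2, 3.0, 0.01))
```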

  17. Intravoxel incoherent motion diffusion-weighted imaging in the liver: comparison of mono-, bi- and tri-exponential modelling at 3.0-T

    International Nuclear Information System (INIS)

    Cercueil, Jean-Pierre; Petit, Jean-Michel; Nougaret, Stephanie; Pierredon-Foulongne, Marie-Ange; Schembri, Valentina; Delhom, Elisabeth; Guiu, Boris; Soyer, Philippe; Fohlen, Audrey; Schmidt, Sabine; Denys, Alban; Aho, Serge

    2015-01-01

    To determine whether a mono-, bi- or tri-exponential model best fits the intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) signal of normal livers. The pilot and validation studies were conducted in 38 and 36 patients with normal livers, respectively. The DWI sequence was performed using single-shot echoplanar imaging with 11 (pilot study) and 16 (validation study) b values. In each study, data from all patients were used to model the IVIM signal of normal liver. Diffusion coefficients (D_i ± standard deviations) and their fractions (f_i ± standard deviations) were determined from each model. The models were compared using the extra sum-of-squares test and information criteria. The tri-exponential model provided a better fit than both the bi- and mono-exponential models. The tri-exponential IVIM model determined three diffusion compartments: a slow (D_1 = 1.35 ± 0.03 × 10⁻³ mm²/s; f_1 = 72.7 ± 0.9 %), a fast (D_2 = 26.50 ± 2.49 × 10⁻³ mm²/s; f_2 = 13.7 ± 0.6 %) and a very fast (D_3 = 404.00 ± 43.7 × 10⁻³ mm²/s; f_3 = 13.5 ± 0.8 %) diffusion compartment [results from the validation study]. The very fast compartment contributed to the IVIM signal only for b values ≤ 15 s/mm². The tri-exponential model provided the best fit for IVIM signal decay in the liver over the 0-800 s/mm² range. In IVIM analysis of normal liver, a third very fast (pseudo)diffusion component might be relevant. (orig.)
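Using the validation-study estimates quoted in the abstract, a short sketch confirms that the very fast compartment is only visible at the lowest b values (normalising by the sum of the quoted fractions is an assumption for illustration):

```python
import math

# Validation-study estimates from the abstract
D = (1.35e-3, 26.50e-3, 404.00e-3)   # mm^2/s: slow, fast, very fast
f = (0.727, 0.137, 0.135)            # corresponding signal fractions

def ivim_signal(b):
    """Normalised tri-exponential IVIM signal S(b)/S(0)."""
    return sum(fi * math.exp(-b * Di) for fi, Di in zip(f, D))

def very_fast_share(b):
    """Share of the measured signal carried by the D3 compartment."""
    return f[2] * math.exp(-b * D[2]) / ivim_signal(b)

share_b0 = very_fast_share(0.0)    # ~13.5 % of the signal at b = 0
share_b15 = very_fast_share(15.0)  # already negligible at b = 15 s/mm^2
```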

  18. A new cellular automata model of traffic flow with negative exponential weighted look-ahead potential

    Science.gov (United States)

    Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye

    2016-10-01

    With the development of traffic systems, issues such as traffic jams are becoming more and more serious, and an efficient traffic flow theory is needed to guide the overall control, organization and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential, so that the potential of vehicles closer to the driver carries a greater weight, the model better captures the driver's random decision-making, which is based on the traffic environment the driver is facing. The fundamental diagrams for different weighting parameters are obtained by numerical simulation, which shows that the negative exponential weighting coefficient has an obvious effect on the high-density traffic flux. The complex high-density nonlinear traffic behavior is also reproduced by the simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
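The weighting idea can be illustrated with a toy cellular automaton in which cells closer to a vehicle contribute exponentially more to its look-ahead potential; this is a deliberately simplified sketch, not the authors' exact update rules:

```python
import math
import random

def step(occ, lam=0.5, horizon=5, rng=random.Random(1)):
    """One parallel update of a toy CA: a car hops one cell forward with
    probability exp(-U), where U is the negative-exponentially weighted
    count of occupied cells ahead (illustration only, not the paper's
    exact rules)."""
    L = len(occ)
    new = [0] * L
    for i in range(L):
        if not occ[i]:
            continue
        ahead = (i + 1) % L
        u = sum(math.exp(-lam * j) * occ[(i + j) % L]
                for j in range(1, horizon + 1))
        if not occ[ahead] and rng.random() < math.exp(-u):
            new[ahead] = 1   # closer vehicles weigh more in U, so dense
        else:                # traffic ahead makes hopping unlikely
            new[i] = 1
    return new

road = [1, 0, 0, 1, 0, 0, 0, 0]
for _ in range(20):
    road = step(road)
```

The parallel update conserves the number of vehicles, since each car either stays put or moves into a cell that was empty.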

  19. Rényi statistics for testing composite hypotheses in general exponential models

    Czech Academy of Sciences Publication Activity Database

    Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor

    2004-01-01

    Roč. 38, č. 2 (2004), s. 133-147 ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others:BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords: natural exponential models * Lévy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004

  20. The Poisson-exponential regression model under different latent activation schemes

    OpenAIRE

    Louzada, Francisco; Cancho, Vicente G; Barriga, Gladys D.C

    2012-01-01

    In this paper, a new family of survival distributions is presented. It is derived by considering that the latent number of failure causes follows a Poisson distribution and the time for these causes to be activated follows an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected value of the number of causes and on the failure rate function. Infer...

  1. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.

    Directory of Open Access Journals (Sweden)

    Maarten Marsman

    Full Text Available The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without requiring anything beyond the ability to simulate from the model. Our focus is on statistical models in the Exponential Family, and we use two simple models from educational measurement to illustrate the contribution.

  2. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    Science.gov (United States)

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of the memristor and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functionals and differential inequality techniques. It is worth noting that the methods used in this paper also apply to fuzzy models of complex networks and general neural networks. Numerical simulations are provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. A new approach to the extraction of single exponential diode model parameters

    Science.gov (United States)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on the integration of the data, that make it possible to isolate the effects of each of the model parameters. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determination of the parameters.
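The integration idea can be sketched as follows: for a diode obeying V = n·Vt·ln(1 + I/Is) + I·Rs, the auxiliary function P(I) = V − (1/I)·∫₀^I V dI' approaches n·Vt + (Rs/2)·I for I much larger than Is, so a straight-line fit of P against I recovers Rs and n while isolating them from Is. A numerical sketch on synthetic data (the parameter values are illustrative, and the paper's exact auxiliary functions may differ from this one):

```python
import numpy as np

n_true, Rs_true, Is, Vt = 1.8, 5.0, 1e-9, 0.02585

# Synthetic forward I-V curve of a single-exponential diode with series
# resistance: V = n*Vt*ln(1 + I/Is) + I*Rs (exact inverse of the model)
I = np.logspace(-6, np.log10(0.05), 200)
V = n_true * Vt * np.log(1.0 + I / Is) + I * Rs_true

# Auxiliary function P(I) = V - (1/I)*integral(V dI') -> n*Vt + (Rs/2)*I
G = np.concatenate(([0.0], np.cumsum(0.5 * (V[1:] + V[:-1]) * np.diff(I))))
P = V - G / I

mask = I > 1e-3   # region where truncating the integral at I[0] is harmless
slope, intercept = np.polyfit(I[mask], P[mask], 1)
Rs_est = 2.0 * slope    # slope is Rs/2
n_est = intercept / Vt  # intercept is n*Vt
```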

  4. Two warehouse inventory model for deteriorating item with exponential demand rate and permissible delay in payment

    Directory of Open Access Journals (Sweden)

    Kaliraman Naresh Kumar

    2017-01-01

    Full Text Available A two warehouse inventory model for deteriorating items is considered with exponential demand rate and permissible delay in payment. Shortage is not allowed and the deterioration rate is constant. In the model, one warehouse is rented and the other is owned. The rented warehouse offers better storage facilities than the owned warehouse, but is charged more. The objective of this model is to find the best replenishment policies for minimizing the total relevant inventory cost. A numerical illustration and sensitivity analysis are provided.

  5. Stretched exponential profiles of photoluminescence decays related to localized states in InGaAsN/GaAs single-quantum wells

    International Nuclear Information System (INIS)

    Nakayama, M.; Iguchi, Y.; Nomura, K.; Hashimoto, J.; Yamada, T.; Takagishi, S.

    2007-01-01

    We have investigated photoluminescence (PL) dynamics related to localized states in In_xGa_(1-x)As_(1-y)N_y/GaAs single-quantum wells (SQWs) with a constant In content of x = 0.32 and various N contents of y = 0, 0.004, and 0.008. In order to determine the intrinsic band-edge energy, we used photoreflectance (PR) spectroscopy, which is sensitive to the optical transitions at critical points. From systematic measurements of the PL and PR spectra, it is demonstrated that the slight incorporation of nitrogen considerably disorders the band-edge states of the InGaAsN SQWs, resulting in the formation of localized states, so-called band-tail states. We find that the PL-decay profile related to the localized states generally exhibits a stretched exponential behavior peculiar to a disordered system at low temperatures, which means that randomness of alloy potential fluctuations including nitrogen dominates the PL dynamics

  6. Granular compaction and stretched exponentials - Experiments and a numerical stochastic model

    Directory of Open Access Journals (Sweden)

    Nicolas Maxime

    2017-01-01

    Full Text Available We present a stochastic model to investigate the compaction kinetics of a granular material submitted to vibration. The model is compared to experimental results obtained with glass beads and with a cohesive powder. We also propose a physical interpretation of the characteristic time τ and the exponent β of the stretched exponential function widely used to represent the granular compaction kinetics, and we show that the characteristic time is proportional to the number of grains to move. The exponent β is expressed as a logarithmic compaction rate.

  7. B decays and models for CP violation

    International Nuclear Information System (INIS)

    He, X.

    1996-01-01

    The decay modes B → ππ, ψK_S, K⁻D, πK, and ηK are promising channels to study the unitarity triangle of the CP-violating Cabibbo-Kobayashi-Maskawa (CKM) matrix. In this paper I study the consequences of these measurements in the Weinberg model. I show that using the same set of measurements, the following different mechanisms for CP violation can be distinguished: (1) CP is violated in the CKM sector only; (2) CP is violated spontaneously in the Higgs sector only; and (3) CP is violated in both the CKM and Higgs sectors. copyright 1996 The American Physical Society

  8. Yield shear stress model of magnetorheological fluids based on exponential distribution

    International Nuclear Information System (INIS)

    Guo, Chu-wen; Chen, Fei; Meng, Qing-rui; Dong, Zi-xin

    2014-01-01

    The magnetic chain model that considers the interaction between the particles and the external magnetic field in a magnetorheological fluid has been widely accepted. Based on the chain model, a yield shear stress model of magnetorheological fluids was proposed by introducing the exponential distribution to describe the distribution of angles between the direction of the magnetic field and the chains formed by the magnetic particles. The main influencing factors are considered in the model, such as magnetic flux density, intensity of the magnetic field, particle size, volume fraction of particles, and the angle of the magnetic chain. The effect of magnetic flux density on the yield shear stress is discussed. The yield stress of aqueous Fe3O4 magnetorheological fluids with volume fractions of 7.6% and 16.2% was measured by a device of our own design. The results indicate that the proposed model can be used to calculate the yield shear stress with acceptable errors. - Highlights: • A yield shear stress model of magnetorheological fluids was proposed. • An exponential distribution is used to describe the distribution of magnetic chain angles. • Experimental and predicted results were in good agreement for two types of MR fluid

  9. Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy*

    Directory of Open Access Journals (Sweden)

    Kostyantyn Kechedzhi

    2016-05-01

    Full Text Available Real-life quantum computers are inevitably affected by intrinsic noise resulting in dissipative nonunitary dynamics realized by these devices. We consider an open-system quantum annealing algorithm optimized for such a realistic analog quantum device, one which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm considering a p-spin model that allows for a mean-field quasiclassical solution and, at the same time, demonstrates the first-order phase transition and exponential degeneracy of states typical of spin glasses. We demonstrate that the finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is instrumental for the quantum speedup in this model, which is possible because of the unusual nonmonotonic temperature dependence of the quantum-tunneling action in this model, where the most efficient transition rate corresponds to zero temperature. This model calculation is the first analytically tractable example in which an open-system quantum annealing algorithm outperforms simulated annealing and which can, in principle, be realized using an analog quantum computer.

  10. Fracture analysis of a central crack in a long cylindrical superconductor with exponential model

    Science.gov (United States)

    Zhao, Yu Feng; Xu, Chi

    2018-05-01

    The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by the electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) as functions of the dimensionless parameter p and the crack length a/R are numerically simulated for the zero-field-cooling (ZFC) and field-cooling (FC) processes using the finite element method (FEM) and assuming a persistent current flow. As the applied field B_a decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits trends in the SIFs different from those obtained with the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.

  11. Consequences of models for monojet events from Z boson decay

    International Nuclear Information System (INIS)

    Baer, H.; Komamiya, S.; Hagiwara, K.

    1985-02-01

    Three models for the monojet events with large missing transverse momentum observed at the CERN p̄p collider are studied: i) Z decay into a neutral lepton pair where one of the pair decays within the detector while the other escapes, ii) Z decay into two distinct neutral scalars where the lighter one is long lived, and iii) Z decay into two distinct higgsinos where the lighter one is long lived. The first model necessarily gives observable decay-in-flight signals. Consequences of the latter two models are investigated in both p̄p collisions at CERN and e⁺e⁻ annihilation at PETRA/PEP energies. (orig.)

  12. A novel approach to modelling non-exponential spin glass relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Pickup, R.M. [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom)]. E-mail: r.cywinski@leeds.ac.uk; Cywinski, R. [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Pappas, C. [Hahn-Meitner Institut, Glienicker Strasse 100, 14109 Berlin (Germany)

    2007-07-15

    A probabilistic cluster model, originally proposed by Weron to explain the universal power law of dielectric relaxation, is shown to account for the non-exponential relaxation in spin glasses above T_g. Neutron spin echo spectra measured for the cluster glass compound Co55Ga45 are well described by the Weron relaxation function, φ(t) = φ_0 [1 + k(t/τ)^β]^(-1/k), with the interaction parameter k scaling linearly with the non-Curie-Weiss susceptibility.
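In the k → 0 limit the Weron function reduces to the Kohlrausch stretched exponential φ_0·exp(−(t/τ)^β), which is easy to verify numerically (parameter values illustrative):

```python
import math

def weron(t, phi0=1.0, k=0.1, tau=1.0, beta=0.5):
    """Weron cluster-model relaxation:
    phi(t) = phi0 * (1 + k*(t/tau)**beta)**(-1/k)."""
    return phi0 * (1.0 + k * (t / tau) ** beta) ** (-1.0 / k)

def stretched_exp(t, phi0=1.0, tau=1.0, beta=0.5):
    """Kohlrausch stretched exponential, the k -> 0 limit of the Weron form."""
    return phi0 * math.exp(-((t / tau) ** beta))

# Shrinking the interaction parameter k collapses the Weron function
# onto the stretched exponential
gap_large_k = abs(weron(2.0, k=0.5) - stretched_exp(2.0))
gap_small_k = abs(weron(2.0, k=1e-4) - stretched_exp(2.0))
```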

  13. Exponential attractors for a Cahn-Hilliard model in bounded domains with permeable walls

    Directory of Open Access Journals (Sweden)

    Ciprian G. Gal

    2006-11-01

    Full Text Available In a previous article [7], we proposed a model of phase separation in a binary mixture confined to a bounded region which may be contained within porous walls. The boundary conditions were derived from a mass conservation law and variational methods. In the present paper, we study the problem further. Using a Faedo-Galerkin method, we obtain the existence and uniqueness of a global solution to our problem, under more general assumptions than those in [7]. We then study its asymptotic behavior and prove the existence of an exponential attractor (and thus of a global attractor) with finite dimension.

  14. A method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
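The procedure described above (a linear fit of ln y for the initial nominal estimates, followed by iterated Taylor-series corrections, i.e. Gauss-Newton) can be sketched for a single-term decay model y = a·exp(−b·t); the two-parameter model and the synthetic data are illustrative assumptions:

```python
import numpy as np

def fit_exponential(t, y, tol=1e-10, max_iter=50):
    """Fit y ~ a*exp(-b*t): linearise the model about the current
    estimate, solve the least-squares correction, and iterate."""
    # Initial nominal estimates from a linear fit of ln(y) vs t
    b0, log_a0 = np.polyfit(t, np.log(y), 1)
    a, b = np.exp(log_a0), -b0
    for _ in range(max_iter):
        model = a * np.exp(-b * t)
        # Jacobian of the model w.r.t. (a, b)
        J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
        r = y - model
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)  # correction step
        a, b = a + delta[0], b + delta[1]
        if np.max(np.abs(delta)) < tol:
            break
    return a, b

t = np.linspace(0.0, 5.0, 40)
y = 3.0 * np.exp(-0.7 * t)          # noiseless synthetic decay data
a_hat, b_hat = fit_exponential(t, y)
```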

  15. Diffusion-weighted MR imaging of pancreatic cancer: A comparison of mono-exponential, bi-exponential and non-Gaussian kurtosis models.

    Science.gov (United States)

    Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos

    2016-01-01

    To compare two Gaussian diffusion-weighted MRI (DWI) models, including mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5 T; b values: 0, 50, 150, 200, 300, 600 and 1000 s/mm²). Mean values of the DWI-derived metrics ADC, D, D*, f, K and D_K were calculated from multiple regions of interest in all tumours and non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and D_K showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The areas under the curve for ADC, D, D*, f, K, and D_K were 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and D_K could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
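Because the kurtosis model is log-quadratic in b, ln S(b) = ln S0 − b·D + (b·D)²·K/6, the metrics D and K can be read off a simple polynomial fit. A sketch on noiseless synthetic data using the abstract's b values (the tissue parameters are illustrative):

```python
import numpy as np

b = np.array([0.0, 50.0, 150.0, 200.0, 300.0, 600.0, 1000.0])  # s/mm^2
D_true, K_true, S0 = 1.5e-3, 0.9, 1.0   # illustrative tissue values

# Kurtosis signal model: ln S(b) = ln S0 - b*D + (b*D)**2 * K/6
S = S0 * np.exp(-b * D_true + (b * D_true) ** 2 * K_true / 6.0)

# ln S is quadratic in b, so a degree-2 polynomial fit recovers D and K
c2, c1, c0 = np.polyfit(b, np.log(S), 2)
D_est = -c1                   # linear coefficient is -D
K_est = 6.0 * c2 / D_est**2   # quadratic coefficient is D^2 * K / 6
```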

  16. A production inventory model with exponential demand rate and reverse logistics

    Directory of Open Access Journals (Sweden)

    Ritu Raj

    2014-08-01

    Full Text Available The objective of this paper is to develop an integrated production inventory model for reworkable items with an exponential demand rate. This is a three-layer supply chain model from the perspectives of the supplier, the producer and the retailer: the supplier delivers raw material to the producer, and the producer delivers finished goods to the retailer. We consider perfect and imperfect quality products, product reliability and the reworking of imperfect items. After screening, defective items are reworked at a cost just after the regular manufacturing schedule. The manufacturing system starts by producing perfect items; after some time it may pass from an "in-control" to an "out-of-control" state, which is handled by a reverse logistics technique. This paper considers the effects of business strategies such as the optimum order size of raw material, an exponential demand rate, a demand-dependent production rate, idle times and reverse logistics in an integrated marketing system. Mathematica is used to develop the optimal solution for the production rate and raw material order that maximizes the expected average profit. A numerical example and sensitivity analysis are presented to validate the model.

  17. Radiative decays of vector mesons in the chiral bag model

    International Nuclear Information System (INIS)

    Tabachenko, A.N.

    1988-01-01

    A new model of the radiative π-meson decays of vector mesons is proposed within the chiral bag model. The quark-π-meson interaction has the form of a pseudoscalar coupling and is located on the bag surface. The vector meson decay width depends on the quark masses, the π-meson decay constant, the radius of the bag, and the free parameter Z_2, which specifies the disappearance of the bag during the decay. The results obtained for the ω- and ρ-decay widths are in satisfactory agreement with experiment

  18. Research and realization of ultrasonic gas flow rate measurement based on ultrasonic exponential model.

    Science.gov (United States)

    Zheng, Dandan; Hou, Huirang; Zhang, Tao

    2016-04-01

    For ultrasonic gas flow rate measurement based on an ultrasonic exponential model, when the noise frequency is close to that of the desired signal (similar-frequency noise), or when the received signal amplitude is small and unstable at large flow rates, the genetic-ant colony optimization-3cycles algorithm may converge to a local optimum and measurement accuracy may be degraded. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By locating the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence avoided. Moreover, a DN100 flow rate measurement system using the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise is present and the flow rate is large, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, with a measurement accuracy of 0.5% and a low transition velocity of 0.3 m/s. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. A FAST SEGMENTATION ALGORITHM FOR C-V MODEL BASED ON EXPONENTIAL IMAGE SEQUENCE GENERATION

    Directory of Open Access Journals (Sweden)

    J. Hu

    2017-09-01

    Full Text Available For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" are solved when extracting the coastline, through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial value of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens coastline extraction time, while removing non-coastline bodies and improving the identification precision of the main coastline, thereby automating the coastline segmentation process.

  20. a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Science.gov (United States)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" are solved when extracting the coastline, through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial value of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens coastline extraction time, while removing non-coastline bodies and improving the identification precision of the main coastline, thereby automating the coastline segmentation process.

  1. Bayesian Exponential Smoothing.

    OpenAIRE

    Forbes, C.S.; Snyder, R.D.; Shami, R.S.

    2000-01-01

    In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.
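The single-source-of-error state space model behind this approach can be written y_t = l_{t-1} + e_t, l_t = l_{t-1} + alpha*e_t, whose point predictions coincide with classical simple exponential smoothing; a minimal sketch (the Bayesian treatment of the smoothing parameter and the error variance is not reproduced here):

```python
def simple_exponential_smoothing(y, alpha, level0):
    """Single-source-of-error state space form of simple exponential
    smoothing: y_t = l_{t-1} + e_t,  l_t = l_{t-1} + alpha*e_t.
    Returns the one-step-ahead forecasts and the final level."""
    level = level0
    forecasts = []
    for obs in y:
        forecasts.append(level)    # forecast of y_t is l_{t-1}
        error = obs - level        # the single source of error e_t
        level += alpha * error     # state update
    return forecasts, level

series = [10.0, 12.0, 11.0, 13.0, 12.5]
fcst, final_level = simple_exponential_smoothing(series, alpha=0.5,
                                                 level0=10.0)
```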

  2. Dynamics of quintessence models of dark energy with exponential coupling to dark matter

    International Nuclear Information System (INIS)

    Gonzalez, Tame; Leon, Genly; Quiros, Israel

    2006-01-01

    We explore quintessence models of dark energy which exhibit non-minimal coupling between the dark matter and dark energy components of the cosmic fluid. The kind of coupling chosen is inspired by scalar-tensor theories of gravity. We impose a suitable dynamics of the expansion allowing us to derive exact Friedmann-Robertson-Walker solutions once the coupling function is given as input. Self-interaction potentials of single and double exponential types emerge as a result of our choice of the coupling function. The stability and existence of the solutions are discussed in some detail. Although, in general, models with appropriate interaction between the components of the cosmic mixture are useful for handling the coincidence problem, in the present study this problem cannot be avoided due to the choice of the solution-generating ansatz

  3. Option pricing under stochastic volatility: the exponential Ornstein–Uhlenbeck model

    International Nuclear Information System (INIS)

    Perelló, Josep; Masoliver, Jaume; Sircar, Ronnie

    2008-01-01

    We study the pricing problem for a European call option when the volatility of the underlying asset is random and follows the exponential Ornstein–Uhlenbeck model. The random diffusion model proposed is a two-dimensional market process that takes a log-Brownian motion to describe the price dynamics and an Ornstein–Uhlenbeck subordinated process to describe the randomness of the log-volatility. We derive an approximate option price that is valid when (i) the fluctuations of the volatility are larger than its normal level, (ii) the volatility presents a slow driving force toward its normal level and, finally, (iii) the market price of risk is a linear function of the log-volatility. We study the resulting European call price and its implied volatility for a range of parameters consistent with daily Dow Jones index data
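The two-dimensional market process can be sketched with a plain Euler scheme: an Ornstein–Uhlenbeck log-volatility Y driving a log-Brownian price (all parameter values are illustrative, and the correlation between the two noises is set to zero for simplicity):

```python
import math
import random

def simulate_expou(n_steps=10000, dt=1e-3, alpha=1.0, m=0.2,
                   sigma_bar=0.15, s0=100.0, seed=42):
    """Euler scheme for the exponential Ornstein-Uhlenbeck SV model:
    Y follows an OU process and the asset volatility is sigma_bar*exp(Y).
    Parameter names and values are illustrative; the two Brownian
    motions are taken independent for simplicity."""
    rng = random.Random(seed)
    s, y = s0, 0.0
    for _ in range(n_steps):
        vol = sigma_bar * math.exp(y)
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        s += s * vol * dw1                 # driftless log-Brownian price
        y += -alpha * y * dt + m * dw2     # OU-driven log-volatility
    return s, y

s_T, y_T = simulate_expou()
```

The stationary standard deviation of Y is m/sqrt(2*alpha), so the simulated log-volatility stays within a few multiples of that scale.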

  4. Breast lesion characterization using whole-lesion histogram analysis with stretched-exponential diffusion model.

    Science.gov (United States)

    Liu, Chunling; Wang, Kun; Li, Xiaodan; Zhang, Jine; Ding, Jie; Spuhler, Karl; Duong, Timothy; Liang, Changhong; Huang, Chuan

    2018-06-01

Diffusion-weighted imaging (DWI) has been studied in breast imaging and can provide more information about diffusion, perfusion and other physiological interests than standard pulse sequences. The stretched-exponential model has previously been shown to be more reliable than conventional DWI techniques, but different diagnostic sensitivities were found from study to study. This work investigated the characteristics of whole-lesion histogram parameters derived from the stretched-exponential diffusion model for benign and malignant breast lesions, compared them with the conventional apparent diffusion coefficient (ADC), and further determined which histogram metrics can best be used to differentiate malignant from benign lesions. This was a prospective study. Seventy females were included in the study. Multi-b-value DWI was performed on a 1.5T scanner. Histogram parameters of whole lesions for the distributed diffusion coefficient (DDC), heterogeneity index (α), and ADC were calculated by two radiologists and compared among benign lesions, ductal carcinoma in situ (DCIS), and invasive carcinoma confirmed by pathology. Nonparametric tests were performed for comparisons among invasive carcinoma, DCIS, and benign lesions. Comparisons of receiver operating characteristic (ROC) curves were performed to show the ability to discriminate malignant from benign lesions. The majority of histogram parameters (mean/min/max, skewness/kurtosis, 10th-90th percentile values) from DDC, α, and ADC were significantly different among invasive carcinoma, DCIS, and benign lesions. DDC10% (area under the curve [AUC] = 0.931), ADC10% (AUC = 0.893), and αmean (AUC = 0.787) were found to be the best metrics in differentiating benign from malignant tumors among all histogram parameters derived from DDC, ADC, and α, respectively. The combination of DDC10% and αmean, using logistic regression, yielded the highest sensitivity (90.2%) and specificity (95.5%). DDC10% and αmean derived from
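The stretched-exponential signal model underlying the DDC and α parameters above is S(b) = S0·exp(-(b·DDC)^α). As a minimal sketch, the noise-free model can be fitted by log-log linearization; the b-values and tissue parameters below are illustrative, not taken from the study.

```python
import numpy as np

def stretched_exp(b, S0, DDC, alpha):
    """Stretched-exponential DWI signal: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return S0 * np.exp(-(b * DDC) ** alpha)

def fit_stretched_exp(b, S):
    S0 = S[b == 0][0]                   # the b = 0 acquisition gives S0 directly
    mask = b > 0
    x = np.log(b[mask])
    y = np.log(-np.log(S[mask] / S0))   # ln(-ln(S/S0)) = alpha*ln(b) + alpha*ln(DDC)
    alpha, intercept = np.polyfit(x, y, 1)
    DDC = np.exp(intercept / alpha)
    return S0, DDC, alpha

b = np.array([0, 200, 500, 1000, 1500, 2000], dtype=float)  # b-values in s/mm^2
signal = stretched_exp(b, 1.0, 1.2e-3, 0.75)                # synthetic noise-free lesion
S0_hat, DDC_hat, alpha_hat = fit_stretched_exp(b, signal)
```

With noisy clinical data a bounded nonlinear least-squares fit per voxel would replace the log-log regression, followed by whole-lesion histogram statistics of the voxelwise DDC and α maps.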

  5. Hybrid model for the decay of nuclear giant resonances

    International Nuclear Information System (INIS)

    Hussein, M.S.

    1986-12-01

The decay properties of nuclear giant multipole resonances are discussed within a hybrid model that incorporates, in a unitary consistent way, both the coherent and statistical features. It is suggested that the 'direct' decay of the GR is described with continuum first RPA and the statistical decay calculated with a modified Hauser-Feshbach model. Application is made to the decay of the giant monopole resonance in ²⁰⁸Pb. Suggestions are made concerning the calculation of the mixing parameter using the statistical properties of the shell model eigenstates at high excitation energies. (Author)

  6. Modeling the pre-industrial roots of modern super-exponential population growth.

    Science.gov (United States)

    Stutz, Aaron Jonas

    2014-01-01

To Malthus, rapid human population growth, so evident in 18th Century Europe, was obviously unsustainable. In his Essay on the Principle of Population, Malthus cogently argued that environmental and socioeconomic constraints on population rise were inevitable. Yet, he penned his essay on the eve of the global census size reaching one billion, as nearly two centuries of super-exponential increase were taking off. Introducing a novel extension of J. E. Cohen's hallmark coupled difference equation model of human population dynamics and carrying capacity, this article examines just how elastic population growth limits may be in response to demographic change. The revised model involves a simple formalization of how consumption costs influence carrying capacity elasticity over time. Recognizing that complex social resource-extraction networks support ongoing consumption-based investment in family formation and intergenerational resource transfers, it is important to consider how consumption has impacted the human environment and demography, especially as global population has become very large. Sensitivity analysis of the consumption-cost model's fit to historical population estimates, modern census data, and 21st Century demographic projections supports a critical conclusion. The recent population explosion was systemically determined by long-term, distinctly pre-industrial cultural evolution. It is suggested that modern globalizing transitions in technology, susceptibility to infectious disease, information flows and accumulation, and economic complexity were endogenous products of much earlier biocultural evolution of family formation's embeddedness in larger, hierarchically self-organizing cultural systems, which could potentially support high population elasticity of carrying capacity. Modern super-exponential population growth cannot be considered separately from long-term change in the multi-scalar political economy that connects family formation and
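The coupled difference-equation idea referenced above can be sketched as follows: population P grows logistically toward a carrying capacity K, while K itself rises with each population increment. The functional forms and parameter values here are illustrative assumptions in the spirit of Cohen's model, not the article's calibrated consumption-cost extension.

```python
def simulate(P0=1.0, K0=2.0, r=0.03, c=1.5, steps=500):
    """Coupled difference equations: logistic population growth with an
    elastic carrying capacity that rises by c units per unit of population growth."""
    P, K = P0, K0
    history = [(P, K)]
    for _ in range(steps):
        P_next = P + r * P * (1.0 - P / K)   # logistic step toward the current K
        K = K + c * (P_next - P)             # K is pushed up by the population increment
        P = P_next
        history.append((P, K))
    return history

traj = simulate()
```

With c > 1 the capacity always stays ahead of the population, so growth never saturates; this is the kind of high "population elasticity of carrying capacity" regime the abstract associates with super-exponential growth.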

  7. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    Science.gov (United States)

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size; t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ. The current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
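A minimal sketch of the personalized exponential idea above: if each wound shrinks as w(t) = w0·exp(-k·t) with its own rate k, then k can be estimated per patient by log-linear regression and the r-fold reduction time is t(r-fold) = ln(r)/k. The measurements below are synthetic illustrations, not the study's data, and the full pMEE additionally pools patients through random effects.

```python
import numpy as np

def fit_rate(times, sizes):
    """Per-wound log-linear least squares: log w = log w0 - k * t, return k."""
    slope, _ = np.polyfit(np.asarray(times, float), np.log(sizes), 1)
    return -slope

def r_fold_time(k, r):
    """Time for the wound to shrink to 1/r of its initial size."""
    return np.log(r) / k

times = [0, 7, 14, 28]          # days since baseline (synthetic)
sizes = [10.0, 7.4, 5.5, 3.0]   # wound area in cm^2 (synthetic)
k = fit_rate(times, sizes)
t_half = r_fold_time(k, 2.0)    # days until the wound halves in size
```

Each new stereophotogrammetric evaluation adds a point to the regression, which is one way to see why the abstract notes that the prediction improves with each additional evaluation.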

  8. Orphan Drug Pricing: An Original Exponential Model Relating Price to the Number of Patients

    Directory of Open Access Journals (Sweden)

    Andrea Messori

    2016-10-01

Full Text Available In managing drug prices at the national level, orphan drugs represent a special case because the price of these agents is higher than that determined according to value-based principles. A common practice is to set the orphan drug price in an inverse relationship with the number of patients, so that the price increases as the number of patients decreases. Determination of prices in this context generally has a purely empirical nature, but a theoretical basis would be needed. The present paper describes an original exponential model that manages the relationship between price and number of patients for orphan drugs. Three real examples are analysed in detail (eculizumab, bosentan, and a data set of 17 orphan drugs published in 2010). These analyses have been aimed at identifying some objective criteria to rationally inform this relationship between prices and patients and at converting these criteria into explicit quantitative rules.
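One simple way to encode such an inverse exponential price-patient relationship is price(N) = P_max·exp(-k·N), with k calibrated from two anchor prices. This is a hedged sketch of the general idea only; P_max, k, and the anchor points are hypothetical and are not the article's fitted values.

```python
import math

def price(n_patients, p_max=500_000.0, k=2e-4):
    """Hypothetical exponential price rule: price falls as patient count rises."""
    return p_max * math.exp(-k * n_patients)

def calibrate_k(n1, p1, n2, p2):
    """Solve p1/p2 = exp(k*(n2 - n1)) for the decay constant k."""
    return math.log(p1 / p2) / (n2 - n1)

# Two hypothetical anchor drugs: 1,000 patients at 400k and 10,000 patients at 50k.
k = calibrate_k(1_000, 400_000.0, 10_000, 50_000.0)
```

Once k is fixed from observed price-patient pairs, the curve gives an explicit quantitative rule of the kind the abstract calls for, against which proposed prices can be benchmarked.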

  9. Exponential Cardassian universe

    International Nuclear Information System (INIS)

    Liu Daojun; Sun Changbo; Li Xinzhou

    2006-01-01

The prospect of explaining cosmological observations without requiring new energy sources is certainly worthy of investigation. In this Letter, a new class of Cardassian models, called exponential Cardassian models, for the late-time universe is investigated in the context of the spatially flat FRW universe scenario. We fit the exponential Cardassian models to current type Ia supernovae data and find that they are consistent with the observations. Furthermore, we point out that the equation-of-state parameter for the effective dark fluid component in exponential Cardassian models can naturally cross the cosmological-constant divide w = -1 that observations mildly favor, without introducing exotic material that violates the weak energy condition.

  10. Intravoxel water diffusion heterogeneity MR imaging of nasopharyngeal carcinoma using stretched exponential diffusion model

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Vincent; Khong, Pek Lan [University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, Queen Mary Hospital, Pok Fu Lam (China); Lee, Victor Ho Fun; Lam, Ka On; Sze, Henry Chun Kin [University of Hong Kong, Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, Queen Mary Hospital, Pok Fu Lam (China); Chan, Queenie [Philips Healthcare, Hong Kong, Shatin, New Territories (China)

    2015-06-01

To determine the utility of the stretched exponential diffusion model in characterisation of the water diffusion heterogeneity in different tumour stages of nasopharyngeal carcinoma (NPC). Fifty patients with newly diagnosed NPC were prospectively recruited. Diffusion-weighted MR imaging was performed using five b values (0-2,500 s/mm²). Respective stretched exponential parameters (DDC, distributed diffusion coefficient; and alpha (α), water heterogeneity) were calculated. Patients were stratified into low and high tumour stage groups based on the American Joint Committee on Cancer (AJCC) staging for determination of the predictive powers of DDC and α using t test and ROC curve analyses. The mean ± standard deviation values were DDC = 0.692 ± 0.199 (×10⁻³ mm²/s) for the low stage group vs 0.794 ± 0.253 (×10⁻³ mm²/s) for the high stage group; α = 0.792 ± 0.145 for the low stage group vs 0.698 ± 0.155 for the high stage group. α was significantly lower in the high stage group while DDC was negatively correlated. DDC and α were both reliable independent predictors (p < 0.001), with α being more powerful. Optimal cut-off values were (sensitivity, specificity, positive likelihood ratio, negative likelihood ratio) DDC = 0.692 ×10⁻³ mm²/s (94.4 %, 64.3 %, 2.64, 0.09), α = 0.720 (72.2 %, 100 %, -, 0.28). The heterogeneity index α is robust and can potentially help in staging and grading prediction in NPC. (orig.)

  11. Intravoxel water diffusion heterogeneity MR imaging of nasopharyngeal carcinoma using stretched exponential diffusion model

    International Nuclear Information System (INIS)

    Lai, Vincent; Khong, Pek Lan; Lee, Victor Ho Fun; Lam, Ka On; Sze, Henry Chun Kin; Chan, Queenie

    2015-01-01

To determine the utility of the stretched exponential diffusion model in characterisation of the water diffusion heterogeneity in different tumour stages of nasopharyngeal carcinoma (NPC). Fifty patients with newly diagnosed NPC were prospectively recruited. Diffusion-weighted MR imaging was performed using five b values (0-2,500 s/mm²). Respective stretched exponential parameters (DDC, distributed diffusion coefficient; and alpha (α), water heterogeneity) were calculated. Patients were stratified into low and high tumour stage groups based on the American Joint Committee on Cancer (AJCC) staging for determination of the predictive powers of DDC and α using t test and ROC curve analyses. The mean ± standard deviation values were DDC = 0.692 ± 0.199 (×10⁻³ mm²/s) for the low stage group vs 0.794 ± 0.253 (×10⁻³ mm²/s) for the high stage group; α = 0.792 ± 0.145 for the low stage group vs 0.698 ± 0.155 for the high stage group. α was significantly lower in the high stage group while DDC was negatively correlated. DDC and α were both reliable independent predictors (p < 0.001), with α being more powerful. Optimal cut-off values were (sensitivity, specificity, positive likelihood ratio, negative likelihood ratio) DDC = 0.692 ×10⁻³ mm²/s (94.4 %, 64.3 %, 2.64, 0.09), α = 0.720 (72.2 %, 100 %, -, 0.28). The heterogeneity index α is robust and can potentially help in staging and grading prediction in NPC. (orig.)

  12. Modeling Radionuclide Decay Chain Migration Using HYDROGEOCHEM

    Science.gov (United States)

    Lin, T. C.; Tsai, C. H.; Lai, K. H.; Chen, J. S.

    2014-12-01

Nuclear technology has been employed for energy production for several decades. Although people receive many benefits from nuclear energy, there are inevitably environmental pollution and human health threats posed by radioactive material releases from nuclear waste disposed in geological repositories or accidental releases of radionuclides from nuclear facilities. Theoretical studies have been undertaken to understand the transport of radionuclides in subsurface environments because radionuclide transport in groundwater is one of the main pathways in exposure scenarios for the intake of radionuclides. Radionuclide transport in groundwater can be predicted using analytical solutions as well as numerical models. In this study, we simulate the transport of a radionuclide decay chain using HYDROGEOCHEM. The simulated results are verified against an analytical solution available in the literature. Excellent agreement between the numerical simulation and the analytical solution is observed for a wide spectrum of concentrations. HYDROGEOCHEM is a useful tool for assessing the ecological and environmental impact of accidental radionuclide releases such as the Fukushima nuclear disaster, where multiple radionuclides leaked through the reactor, subsequently contaminating the local groundwater and ocean seawater in the vicinity of the nuclear plant.
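The decay-chain source terms at the heart of such simulations can be sketched with a three-member chain N1 → N2 → N3 (stable) integrated by a forward-Euler step. The decay constants below are illustrative; a transport code such as HYDROGEOCHEM would couple these kinetics to advection and dispersion, which are omitted here.

```python
import numpy as np

def decay_chain(lmbda1=0.05, lmbda2=0.02, N0=1000.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration of dN1/dt = -l1*N1, dN2/dt = l1*N1 - l2*N2,
    dN3/dt = l2*N2 (stable end member). Units are arbitrary."""
    n_steps = int(t_end / dt)
    N = np.array([N0, 0.0, 0.0])
    for _ in range(n_steps):
        d1 = -lmbda1 * N[0]
        d2 = lmbda1 * N[0] - lmbda2 * N[1]
        d3 = lmbda2 * N[1]
        N = N + dt * np.array([d1, d2, d3])
    return N

N = decay_chain()
```

Because the right-hand sides sum to zero, the total inventory is conserved exactly, which is a convenient sanity check; the numerical profile can also be verified against the analytical Bateman solution, as the abstract describes.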

  13. Dynamics of the exponential integrate-and-fire model with slow currents and adaptation.

    Science.gov (United States)

    Barranca, Victor J; Johnson, Daniel C; Moyher, Jennifer L; Sauppe, Joshua P; Shkarayev, Maxim S; Kovačič, Gregor; Cai, David

    2014-08-01

    In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under the external drive consisting of AMPA-type conductance pulses using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, firing rate, and jump size of the slow current at the neuron's spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce bifurcation properties of the HH neurons with slow currents, which include spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and infinite number of spikes with increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. Dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For the form of external drive resembling realistic, AMPA-like synaptic conductance response to incoming action potentials, the EIF model affords great savings of computation time as compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale, point-neuron models in which spike-frequency adaptation is important.
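A minimal sketch of an EIF neuron with one slow adaptation current is given below: the membrane voltage has the usual exponential spike-initiation term, and each spike increments a slow current that decays between spikes, producing spike-frequency adaptation. Parameter values are generic textbook choices, not those fitted to the HH models in the study.

```python
import numpy as np

def simulate_eif(I_ext=2.0, T=500.0, dt=0.01):
    """Exponential integrate-and-fire neuron with a slow adaptation current w.
    Returns the spike times (ms) under a constant external drive."""
    C, gL, EL = 1.0, 0.1, -65.0   # capacitance, leak conductance, leak reversal
    VT, DT = -50.0, 2.0           # soft threshold and spike slope factor
    V_cut, V_reset = 0.0, -65.0   # numerical spike cutoff and reset voltage
    tau_w, b = 100.0, 0.05        # adaptation time constant and per-spike jump
    V, w = EL, 0.0
    spike_times = []
    for step in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I_ext) / C
        V += dt * dV
        w += dt * (-w / tau_w)
        if V >= V_cut:            # spike: record, reset voltage, bump adaptation
            spike_times.append(step * dt)
            V = V_reset
            w += b
    return spike_times

spikes = simulate_eif()
```

Under constant drive the accumulating adaptation current lengthens successive interspike intervals, which is the spike-frequency adaptation the abstract says such EIF-type models reproduce.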

  14. Modeling of Single Event Transients With Dual Double-Exponential Current Sources: Implications for Logic Cell Characterization

    Science.gov (United States)

    Black, Dolores A.; Robinson, William H.; Wilcox, Ian Z.; Limbrick, Daniel B.; Black, Jeffrey D.

    2015-08-01

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. The parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
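The dual double-exponential idea above can be sketched directly: the injected SET current is the sum of two double-exponential pulses, one prompt and one slower, and the collected charge is the time integral of the total current. Amplitudes and time constants below are illustrative assumptions, not extracted cell parameters.

```python
import numpy as np

def double_exp(t, I0, tau_rise, tau_fall):
    """Classic double-exponential current pulse, zero for t < 0."""
    t = np.asarray(t, dtype=float)
    return I0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise)) * (t >= 0)

def dual_double_exp(t, I1=1.0e-3, tr1=5e-12, tf1=50e-12,
                       I2=0.2e-3, tr2=50e-12, tf2=500e-12):
    """Sum of a prompt pulse and a slower, lower-amplitude pulse."""
    return double_exp(t, I1, tr1, tf1) + double_exp(t, I2, tr2, tf2)

t = np.linspace(0.0, 2e-9, 2001)              # 0-2 ns at 1 ps resolution
i = dual_double_exp(t)
# collected charge = integral of the current pulse (trapezoidal rule)
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))
```

In a characterization flow, the six parameters would be fitted per logic cell and loading condition, then the waveform injected at the struck node in circuit simulation.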

  15. Tests of the standard electroweak model in beta decay

    Energy Technology Data Exchange (ETDEWEB)

    Severijns, N.; Beck, M. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium); Naviliat-Cuncic, O. [Caen Univ., CNRS-ENSI, 14 (France). Lab. de Physique Corpusculaire

    2006-05-15

We review the current status of precision measurements in allowed nuclear beta decay, including neutron decay, with emphasis on their potential to look for new physics beyond the standard electroweak model. The experimental results are interpreted in the framework of phenomenological model-independent descriptions of nuclear beta decay as well as in some specific extensions of the standard model. The values of the standard couplings and the constraints on the exotic couplings of the general beta decay Hamiltonian are updated. For the ratio between the axial and the vector couplings we obtain C_A/C_V = -1.26992(69) under the standard model assumptions. Particular attention is devoted to the discussion of the sensitivity and complementarity of different precision experiments in direct beta decay. The prospects and the impact of recent developments of precision tools and of high intensity low energy beams are also addressed. (author)

  16. Tests of the standard electroweak model in beta decay

    International Nuclear Information System (INIS)

    Severijns, N.; Beck, M.; Naviliat-Cuncic, O.

    2006-05-01

We review the current status of precision measurements in allowed nuclear beta decay, including neutron decay, with emphasis on their potential to look for new physics beyond the standard electroweak model. The experimental results are interpreted in the framework of phenomenological model-independent descriptions of nuclear beta decay as well as in some specific extensions of the standard model. The values of the standard couplings and the constraints on the exotic couplings of the general beta decay Hamiltonian are updated. For the ratio between the axial and the vector couplings we obtain C_A/C_V = -1.26992(69) under the standard model assumptions. Particular attention is devoted to the discussion of the sensitivity and complementarity of different precision experiments in direct beta decay. The prospects and the impact of recent developments of precision tools and of high intensity low energy beams are also addressed. (author)

  17. Comparison of bi-exponential and mono-exponential models of diffusion-weighted imaging for detecting active sacroiliitis in ankylosing spondylitis.

    Science.gov (United States)

    Sun, Haitao; Liu, Kai; Liu, Hao; Ji, Zongfei; Yan, Yan; Jiang, Lindi; Zhou, Jianjun

    2018-04-01

Background There has been a growing need for a sensitive and effective imaging method for the differentiation of the activity of ankylosing spondylitis (AS). Purpose To compare the performances of intravoxel incoherent motion (IVIM)-derived parameters and the apparent diffusion coefficient (ADC) for distinguishing AS activity. Material and Methods One hundred patients with AS were divided into active (n = 51) and non-active (n = 49) groups, and 21 healthy volunteers were included as controls. The ADC, diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) were calculated for all groups. Kruskal-Wallis tests and receiver operator characteristic (ROC) curve analysis were performed for all parameters. Results There was good reproducibility of ADC/D and relatively poor reproducibility of D*/f. ADC, D, and f were significantly higher in the active group than in the non-active and control groups (all P < 0.050). In the ROC analysis, ADC had the largest AUC for distinguishing between the active group and the non-active group (0.988) and between the active and control groups (0.990). Multivariate logistic regression analysis models showed no diagnostic improvement. Conclusion ADC provided better diagnostic performance than IVIM-derived parameters in differentiating AS activity. Therefore, a straightforward and effective mono-exponential model of diffusion-weighted imaging may be sufficient for differentiating AS activity in the clinic.
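The two signal models being compared above can be written compactly: the IVIM bi-exponential model is S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), while the mono-exponential ADC model is S(b)/S0 = exp(-b·ADC). The sketch below evaluates both over a set of b-values; all parameter values are illustrative, not the study's measurements.

```python
import numpy as np

def ivim(b, f=0.10, D_star=10e-3, D=1.0e-3):
    """IVIM bi-exponential signal: fast pseudodiffusion (perfusion) fraction f
    plus slow true-diffusion fraction (1 - f)."""
    b = np.asarray(b, dtype=float)
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

def mono(b, adc=1.1e-3):
    """Mono-exponential ADC signal model."""
    return np.exp(-np.asarray(b, dtype=float) * adc)

b = np.array([0, 50, 100, 200, 400, 800], dtype=float)  # b-values in s/mm^2
s_biexp = ivim(b)
s_mono = mono(b)
```

The perfusion term decays almost completely by moderate b-values, which is why the bi-exponential fit needs several low b-values and why its D* and f estimates tend to be less reproducible, consistent with the results reported above.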

  18. The modelled raindrop size distribution of Skudai, Peninsular Malaysia, using exponential and lognormal distributions.

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

This paper presents the modelled raindrop size parameters in the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the DSD in Malaysia, and this has an underpinning implication for wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent occurrence.

  19. Stochastic Threshold Exponential (TE) Model for Hematopoietic Tissue Reconstitution Deficit after Radiation Damage.

    Science.gov (United States)

    Scott, B R; Potter, C A

    2014-07-01

Whole-body exposure to large radiation doses can cause severe loss of hematopoietic tissue cells and threaten life if the lost cells are not replaced in a timely manner through natural repopulation (a homeostatic mechanism). Repopulation to the baseline level N₀ is called reconstitution, and a reconstitution deficit (repopulation shortfall) can occur in a dose-related and organ-specific manner. Scott et al. (2013) previously introduced a deterministic version of a threshold exponential (TE) model of tissue-reconstitution deficit at a given follow-up time that was applied to bone marrow and spleen cellularity (number of constituent cells) data obtained 6 weeks after whole-body gamma-ray exposure of female C.B-17 mice. In this paper a more realistic, stochastic version of the TE model is provided that allows radiation response to vary between different individuals. The stochastic TE model is applied to post-gamma-ray-exposure cellularity data previously reported and also to more limited X-ray cellularity data for whole-body-irradiated female C.B-17 mice. Results indicate that the population average threshold for a tissue reconstitution deficit appears to be similar for bone marrow and spleen and for 320-kV-spectrum X-rays and Cs-137 gamma rays. This means that 320-kV-spectrum X-rays could successfully be used in conducting such studies.
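One way to sketch a stochastic threshold-exponential response is shown below: each individual has its own threshold dose T, with no reconstitution deficit below T and a deficit rising as 1 - exp(-k·(dose - T)) above it. The functional form, the normal threshold distribution, and all parameter values are illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

def deficit(dose, T, k=0.5):
    """Threshold-exponential reconstitution deficit: zero at or below the
    individual threshold T, saturating exponential rise above it."""
    return np.where(dose > T, 1.0 - np.exp(-k * (np.asarray(dose, float) - T)), 0.0)

rng = np.random.default_rng(42)
thresholds = rng.normal(loc=2.0, scale=0.3, size=1000)  # individual thresholds (Gy, hypothetical)
doses = np.full(1000, 4.0)                              # common whole-body dose (Gy)
population_mean_deficit = deficit(doses, thresholds, k=0.5).mean()
```

Drawing the threshold from a population distribution is what makes the model "stochastic": the same dose produces a spread of deficits across individuals rather than a single deterministic value.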

  20. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

This paper presents the modelled raindrop size parameters in the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the DSD in Malaysia, and this has an underpinning implication for wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent occurrence. PMID:25126597

  1. Global Exponential Stability of Positive Almost Periodic Solutions for a Fishing Model with a Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2014-01-01

    Full Text Available This paper is concerned with a nonautonomous fishing model with a time-varying delay. Under proper conditions, we employ a novel argument to establish a criterion on the global exponential stability of positive almost periodic solutions of the model with almost periodic coefficients and delays. Moreover, an example and its numerical simulation are given to illustrate the main results.

  2. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by the Kendall τ correlation test. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted using information from earlier days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation test. Then, in the belief that forecasting the seasonal item and the trend item separately would improve forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal-item and trend-item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve the accuracy, even though eleven different models are applied to the trend item forecasting. This superior performance of the separate forecasting technique is further confirmed by paired-sample T tests.
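The separate seasonal/trend strategy above can be sketched as follows: estimate a weekly seasonal index, remove it, fit a simple regression to the adjusted series, and recombine for a 1-week-ahead forecast. This illustrates the general strategy only; the paper's SEAM and its eleven trend-regression variants are more elaborate, and the synthetic load series is a stand-in for real demand data.

```python
import numpy as np

def forecast_next_week(load, period=7):
    """Seasonal-index adjustment + linear trend regression, 1 week ahead."""
    load = np.asarray(load, dtype=float)
    t = np.arange(len(load))
    # multiplicative seasonal index: mean of each weekday over the overall mean
    idx = np.array([load[d::period].mean() for d in range(period)]) / load.mean()
    deseason = load / idx[t % period]                 # seasonally adjusted series
    slope, intercept = np.polyfit(t, deseason, 1)     # linear regression on the trend
    t_future = np.arange(len(load), len(load) + period)
    return (intercept + slope * t_future) * idx[t_future % period]  # re-seasonalize

# synthetic 8-week daily load: upward trend plus a weekly pattern
t = np.arange(56)
load = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 7)
fc = forecast_next_week(load)
```

Swapping the `np.polyfit` line for any other trend regressor is the hybridization step the abstract describes: the seasonal adjustment stays fixed while the trend model varies.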

  3. Rare B-decays in the standard model

    International Nuclear Information System (INIS)

    Ali, A.; Greub, C.; Mannel, T.

    1993-02-01

We review theoretical work done in studies of Flavour Changing Neutral Current (FCNC) B-decays in the context of the Standard Model. Making use of the QCD-improved effective Hamiltonian describing the so-called |ΔB| = 1 and |ΔB| = 2, |ΔQ| = 0 transitions, we calculate the rates and differential distributions in a large number of B-decays. The FCNC processes discussed here include the radiative decays B → X_s + γ, B → X_d + γ, and the semileptonic decays B → X_s l⁺l⁻, B → X_d l⁺l⁻, B → X_s ν_l ν̄_l, and B → X_d ν_l ν̄_l. We also discuss the inclusive photon energy spectrum calculated from the Charged Current (CC) decays B → X_c + γ and B → X_u + γ and the mentioned FCNC radiative decays. The importance of carrying out measurements of the inclusive photon energy spectrum in B-decays is emphasized. Using phenomenological potential models and the Heavy Quark Effective Theory (HQET) we estimate decay branching ratios in a number of exclusive FCNC B-decays. Purely leptonic and photonic decays (B_d, B_s) → l⁺l⁻ and (B_d, B_s) → γγ are also estimated. The principal interest in the studies of FCNC B-decays lies in their use in determining the parameters of the Standard Model, in particular the CKM matrix elements and the top quark mass. The parametric dependence of the rates and distributions on these and other QCD-specific parameters is worked out numerically. (orig.)

  4. Truncated exponential-rigid-rotor model for strong electron and ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.

    1979-01-01

A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect ratios R/a ≲ 4. Such aspect ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)

  5. On Uniform Decay of the Entropy for Reaction–Diffusion Systems

    KAUST Repository

    Mielke, Alexander; Haskovec, Jan; Markowich, Peter A.

    2014-01-01

    This work provides entropy decay estimates for classes of nonlinear reaction–diffusion systems modeling reversible chemical reactions under the detailed-balance condition. We obtain explicit bounds for the exponential decay of the relative

  6. Exclusive semileptonic B decays in the bag model

    International Nuclear Information System (INIS)

    Lie-Svendsen, Oe.

    1989-09-01

Using a recoil-corrected version of the MIT bag model, the author has computed the relevant form factors and decay rates for the exclusive semileptonic decays B → D e ν̄_e and B → D* e ν̄_e. The results are similar to other quark model calculations, but it is shown that the D* mode is suppressed due to the influence of the spectator quark. The D*'s produced are almost unpolarized in this model.

  7. Non-relativistic model of two-particle decay

    International Nuclear Information System (INIS)

    Dittrich, J.; Exner, P.

    1986-01-01

    A simple non-relativistic model of a spinless particle decaying into two lighter particles is treated in detail. It is similar to the Lee-model description of V-particle decay. Galilean covariance is formulated properly, by means of a unitary projective representation acting on the state space of the model. After separating the centre-of-mass motion the meromorphic structure of the reduced resolvent is deduced

  8. ψ-Epistemic Models are Exponentially Bad at Explaining the Distinguishability of Quantum States

    Science.gov (United States)

    Leifer, M. S.

    2014-04-01

    The status of the quantum state is perhaps the most controversial issue in the foundations of quantum theory. Is it an epistemic state (state of knowledge) or an ontic state (state of reality)? In realist models of quantum theory, the epistemic view asserts that nonorthogonal quantum states correspond to overlapping probability measures over the true ontic states. This naturally accounts for a large number of otherwise puzzling quantum phenomena. For example, the indistinguishability of nonorthogonal states is explained by the fact that the ontic state sometimes lies in the overlap region, in which case there is nothing in reality that could distinguish the two states. For this to work, the amount of overlap of the probability measures should be comparable to the indistinguishability of the quantum states. In this Letter, I exhibit a family of states for which the ratio of these two quantities must be ≤ 2d·e^(−cd) in Hilbert spaces of dimension d that are divisible by 4. This implies that the epistemic explanation of indistinguishability becomes implausible at an exponential rate as the Hilbert space dimension increases.

  9. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    Science.gov (United States)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters, so that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method, but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
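
    Because a PHM is linear in its FIR coefficients once the branch nonlinearities are fixed, the LS route reduces to ordinary linear regression. Below is a minimal sketch: plain, unregularized least squares with monomial branch nonlinearities; all names, branch orders, and the simulated system are illustrative, not those of the article.

```python
import numpy as np

def estimate_phm(u, y, branch_orders=(1, 2, 3), fir_len=8):
    """Least-squares estimation of a Parallel Hammerstein Model.

    Each branch applies a monomial nonlinearity u**p followed by an FIR
    filter; the model is linear in the FIR coefficients, so they can be
    found with ordinary least squares on a matrix of delayed regressors.
    """
    n = len(u)
    cols = []
    for p in branch_orders:
        x = u ** p                      # static nonlinearity of the branch
        for d in range(fir_len):        # delayed copies for the FIR part
            col = np.zeros(n)
            col[d:] = x[:n - d]
            cols.append(col)
    phi = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return theta.reshape(len(branch_orders), fir_len)

# Simulated noise-free system: y = (2*u + 0.5*u**2) filtered by h = [1, 0.4].
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h = np.array([1.0, 0.4])
y = np.convolve(2 * u + 0.5 * u ** 2, h)[:500]
coeffs = estimate_phm(u, y, branch_orders=(1, 2), fir_len=2)
```

With noise-free data the branch filters are recovered exactly; the regularized variant discussed in the abstract would add a penalty term to this least-squares solve.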

  10. On exponential cosmological type solutions in the model with Gauss-Bonnet term and variation of gravitational constant

    International Nuclear Information System (INIS)

    Ivashchuk, V.D.; Kobtsev, A.A.

    2015-01-01

    A D-dimensional gravitational model with a Gauss-Bonnet term is considered. When an ansatz with diagonal cosmological type metrics is adopted, we find solutions with an exponential dependence of the scale factors (with respect to a "synchronous-like" variable) which describe an exponential expansion of "our" 3-dimensional factor space and obey the observational constraints on the temporal variation of the effective gravitational constant G. Among them there are two exact solutions in dimensions D = 22, 28 with constant G and also an infinite series of solutions in dimensions D ≥ 2690 with the variation of G obeying the observational data. (orig.)

  11. On exponential cosmological type solutions in the model with Gauss-Bonnet term and variation of gravitational constant

    Energy Technology Data Exchange (ETDEWEB)

    Ivashchuk, V.D. [VNIIMS, Center for Gravitation and Fundamental Metrology, Moscow (Russian Federation); Peoples' Friendship University of Russia, Institute of Gravitation and Cosmology, Moscow (Russian Federation); Kobtsev, A.A. [Peoples' Friendship University of Russia, Institute of Gravitation and Cosmology, Moscow (Russian Federation)

    2015-05-15

    A D-dimensional gravitational model with a Gauss-Bonnet term is considered. When an ansatz with diagonal cosmological type metrics is adopted, we find solutions with an exponential dependence of the scale factors (with respect to a "synchronous-like" variable) which describe an exponential expansion of "our" 3-dimensional factor space and obey the observational constraints on the temporal variation of the effective gravitational constant G. Among them there are two exact solutions in dimensions D = 22, 28 with constant G and also an infinite series of solutions in dimensions D ≥ 2690 with the variation of G obeying the observational data. (orig.)

  12. An exponential model equation for thiamin loss in irradiated ground pork as a function of dose and temperature of irradiation

    Science.gov (United States)

    Fox, J. B.; Thayer, D. W.; Phillips, J. G.

    The effect of low-dose γ-irradiation on the thiamin content of ground pork was studied in the range 0-14 kGy at 2°C, and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that the loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
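
    As an illustration of the dose dependence alone, a single-exponential retention model R(D) = exp(-k·D) can be fitted by log-linear regression through the origin. The data and rate constant below are synthetic placeholders, not the measured pork values:

```python
import numpy as np

# Hypothetical retention data: fraction of thiamin remaining vs dose (kGy).
# The exponential model R(D) = exp(-k * D) becomes linear after a log
# transform, so k can be estimated with a one-parameter linear fit.
dose = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.0, 14.0])
k_true = 0.08                      # assumed rate constant, per kGy
retention = np.exp(-k_true * dose)

# Log-linear estimate of k (slope of -log R vs dose, through the origin).
k_hat = -np.sum(dose * np.log(retention)) / np.sum(dose ** 2)
half_dose = np.log(2) / k_hat      # dose at which half the thiamin is lost
```

On noise-free synthetic data the estimate recovers k exactly; with real measurements one would weight the regression or fit the exponential form directly.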

  13. Decay constants and radiative decays of heavy mesons in light-front quark model

    International Nuclear Information System (INIS)

    Choi, Ho-Meoyng

    2007-01-01

    We investigate the magnetic dipole decays V → Pγ of various heavy-flavored mesons such as (D, D*, D_s, D_s*, η_c, J/ψ) and (B, B*, B_s, B_s*, η_b, Υ) using the light-front quark model constrained by the variational principle for the QCD-motivated effective Hamiltonian. The momentum-dependent form factors F_VP(q²) for V → Pγ* decays are obtained in the q⁺ = 0 frame and then analytically continued to the timelike region by changing q⊥ to iq⊥ in the form factors. The coupling constant g_VPγ for the real-photon case is then obtained in the limit q² → 0, i.e., g_VPγ = F_VP(q² = 0). The weak decay constants of heavy pseudoscalar and vector mesons are also calculated. Our numerical results for the decay constants and radiative decay widths of the heavy-flavored mesons are overall in good agreement with the available experimental data as well as with other theoretical model calculations

  14. Multimesonic decays of charmonium states in the statistical quark model

    International Nuclear Information System (INIS)

    Montvay, I.; Toth, J.D.

    1978-01-01

    The presently known data on multimesonic decays of χ and ψ states are fitted in a statistical quark model, in which the matrix elements are assumed to be constant and resonances as well as both strong and second-order electromagnetic processes are taken into account. The experimental data are well reproduced by the model. Unknown branching ratios for the rest of the multimesonic channels are predicted. The fit leaves about 40% for baryonic and radiative channels in the case of J/ψ(3095). The fitted parameters of the J/ψ decays are used to predict the mesonic decays of the pseudoscalar η_c. The statistical quark model seems to allow the calculation of competitive multiparticle processes for the studied decays. (D.P.)

  15. Prediction of Pig Trade Movements in Different European Production Systems Using Exponential Random Graph Models.

    Science.gov (United States)

    Relun, Anne; Grosbois, Vladimir; Alexandrov, Tsviatko; Sánchez-Vizcaíno, Jose M; Waret-Szkuta, Agnes; Molia, Sophie; Etter, Eric Marcel Charles; Martínez-López, Beatriz

    2017-01-01

    In most European countries, data regarding movements of live animals are routinely collected and can greatly aid predictive epidemic modeling. However, the use of complete movement datasets to conduct policy-relevant predictions has so far been limited by the massive amount of data that have to be processed (e.g., in intensive commercial systems) or by the restricted availability of timely and updated records on animal movements (e.g., in areas where small-scale or extensive production is predominant). The aim of this study was to use exponential random graph models (ERGMs) to reproduce, understand, and predict pig trade networks in different European production systems. Three trade networks were built by aggregating movements of pig batches among premises (farms and trade operators) over 2011 in Bulgaria, Extremadura (Spain), and Côtes-d'Armor (France), where small-scale, extensive, and intensive pig production are predominant, respectively. Three ERGMs were fitted to each network with various demographic and geographic attributes of the nodes as well as six internal network configurations. Several statistical and graphical diagnostic methods were applied to assess the goodness of fit of the models. For all systems, both exogenous (attribute-based) and endogenous (network-based) processes appeared to govern the structure of the pig trade network, and neither alone was capable of capturing all aspects of the network structure. Geographic mixing patterns strongly structured pig trade organization in the small-scale production system, whereas belonging to the same company or keeping pigs in the same housing system appeared to be key drivers of pig trade in the intensive and extensive production systems, respectively. Heterogeneous mixing between types of production also explained part of the network structure, whichever production system was considered. Limited information is thus needed to capture most of the global structure of pig trade networks. Such findings will be useful

  16. Modelling of Creep and Stress Relaxation Test of a Polypropylene Microfibre by Using Fraction-Exponential Kernel

    Directory of Open Access Journals (Sweden)

    Andrea Sorzia

    2016-01-01

    Full Text Available A tensile test until breakage and a creep and relaxation test on a polypropylene fibre are carried out, and the resulting creep and stress relaxation curves are fitted by a model adopting a fraction-exponential kernel in the viscoelastic operator. Models using fraction-exponential functions are simpler than the complex ones obtained from combinations of dashpots and springs and, furthermore, are suitable for fitting experimental data with good approximation while, at the same time, allowing the inverse Laplace transform to be obtained in closed form. Therefore, the viscoelastic response of polypropylene fibres can be modelled straightforwardly through analytical methods. The addition of polypropylene fibres greatly improves the tensile strength of composite materials with a concrete matrix. The proposed analytical model can be employed for simulating the mechanical behaviour of composite materials with embedded viscoelastic fibres.

  17. Non centered minor hysteresis loops evaluation based on exponential parameters transforms of the modified inverse Jiles–Atherton model

    International Nuclear Information System (INIS)

    Hamimid, M.; Mimoune, S.M.; Feliachi, M.; Atallah, K.

    2014-01-01

    In the present work, an evaluation of non-centered minor hysteresis loops is performed using exponential transforms (ET) of the modified inverse Jiles–Atherton model parameters. This model improves the representation of non-centered minor hysteresis loops. The parameters of the non-centered minor hysteresis loops are obtained from exponential expressions related to the major ones. The parameters of the minor loops are obtained by identification using the stochastic optimization method “simulated annealing”. The four parameters of the JA model (a, α, k, and c) obtained by this transformation are applied only in the ascending and descending branches of the non-centered minor hysteresis loops, while the major-loop parameters are applied to the rest of the cycle. This proposal greatly improves both branches and consequently the minor loops. To validate the model, calculated non-centered minor hysteresis loops are compared with measured ones, and good agreement is obtained

  18. Semileptonic Decays of Heavy Omega Baryons in a Quark Model

    International Nuclear Information System (INIS)

    Muslema Pervin; Winston Roberts; Simon Capstick

    2006-01-01

    The semileptonic decays of Ω_c and Ω_b are treated in the framework of a constituent quark model developed in a previous paper on the semileptonic decays of heavy Λ baryons. Analytic results for the form factors for the decays to ground states and a number of excited states are evaluated. For Ω_b → Ω_c, the form factors obtained are shown to satisfy the relations predicted at leading order in the heavy-quark effective theory at the non-recoil point. A modified fit of nonrelativistic and semirelativistic Hamiltonians generates configuration-mixed baryon wave functions from the known masses and the measured Λ_c⁺ → Λe⁺ν rate, with wave functions expanded in both harmonic oscillator and Sturmian bases. Decay rates of Ω_b to pairs of ground and excited Ω_c states related by heavy-quark symmetry, calculated using these configuration-mixed wave functions, are in the ratios expected from heavy-quark effective theory, to a good approximation. Our predictions for the semileptonic elastic branching fraction of Ω_Q vary minimally within the models we use. We obtain an average value of (84 ± 2)% for the fraction of Ω_c → Ξ^(*) decays to ground states, and 91% for the fraction of Ω_c → Ω^(*) decays to the ground state Ω. The elastic fraction of Ω_b → Ω_c ranges from about 50%, calculated with the two harmonic-oscillator models, to about 67%, calculated with the two Sturmian models

  19. A modified exponential behavioral economic demand model to better describe consumption data.

    Science.gov (United States)

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
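
    A sketch of the exponentiated demand equation as it is commonly written after Koffarnus et al. (2015): consumption Q is modeled directly rather than through its logarithm, so zero-consumption points are admissible. The parameter values below are illustrative only, not fitted estimates from the article:

```python
import numpy as np

def demand(price, q0, alpha, k):
    """Exponentiated demand equation (after Koffarnus et al., 2015).

    q0    : demand intensity (consumption at zero price)
    alpha : demand elasticity parameter
    k     : range of consumption in log10 units
    Modeling Q itself (not log Q) lets observed zeros enter the fit.
    """
    return q0 * 10.0 ** (k * (np.exp(-alpha * q0 * price) - 1.0))

prices = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
q = demand(prices, q0=20.0, alpha=0.01, k=2.0)
# At zero price the equation returns the demand intensity q0 exactly.
```

In practice these parameters would be estimated by nonlinear least squares against observed consumption, including any zeros.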

  20. Recoil corrected bag model calculations for semileptonic weak decays

    International Nuclear Information System (INIS)

    Lie-Svendsen, Oe.; Hoegaasen, H.

    1987-02-01

    Recoil corrections to various model results for strangeness-changing weak decay amplitudes have been developed. It is shown that the spurious reference-frame dependence of earlier calculations is reduced. The second class currents are generally less important than obtained by calculations in the static approximation. Theoretical results are compared to observations. The agreement is quite good, although the values for the Cabibbo angle obtained by fits to the decay rates are somewhat too large.

  1. The Standard Model and the neutron beta-decay

    CERN Document Server

    Abele, H

    2000-01-01

    This article reviews the relationship between the observables in neutron beta-decay and the accepted modern theory of particle physics known as the Standard Model. Recent neutron-decay measurements of various mixed American-British-French-German-Russian collaborations try to shed light on the following topics: the coupling strength of charged weak currents, the universality of the electroweak interaction and the origin of parity violation.

  2. Modeling decay rates of dead wood in a neotropical forest.

    Science.gov (United States)

    Hérault, Bruno; Beauchêne, Jacques; Muller, Félix; Wagner, Fabien; Baraloto, Christopher; Blanc, Lilian; Martin, Jean-Michel

    2010-09-01

    Variation of dead wood decay rates among tropical trees remains one source of uncertainty in global models of the carbon cycle. Taking advantage of a broad forest plot network surveyed for tree mortality over a 23-year period, we measured the remaining fraction of boles from 367 dead trees of 26 neotropical species widely varying in wood density (0.23-1.24 g cm⁻³) and tree circumference at time of death (31.5-272.0 cm). We modeled decay rates within a Bayesian framework, assuming a first-order differential equation for the decomposition process, and tested for the effects of forest management (selective logging vs. unexploited), mode of death (standing vs. downed), and topographical level (bottomlands vs. hillsides vs. hilltops) on wood decay rates. The general decay model predicts the observed remaining fraction of dead wood (R² = 60%) with only two biological predictors: tree circumference at time of death and wood specific density. Neither selective logging nor local topography had a differential effect on wood decay rates. Including the mode of death in the model revealed that standing dead trees decomposed faster than downed dead trees, but the gain in model accuracy remains rather marginal. Overall, these results suggest that the release of carbon from tropical dead trees to the atmosphere can be simply estimated using tree circumference at time of death and wood density.
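
    The first-order decomposition model referred to above can be sketched in a few lines. The decay constant below is a placeholder for illustration, not a fitted value; in the study, the rate would be predicted from wood density and circumference at death:

```python
import numpy as np

def remaining_fraction(t_years, k):
    """First-order decomposition: dm/dt = -k*m, so m(t)/m0 = exp(-k*t)."""
    return np.exp(-k * t_years)

# Hypothetical decay constant for illustration only.
k = 0.12                           # per year
t = np.arange(0, 24)               # the plot network spans 23 years
frac = remaining_fraction(t, k)
residence_time = 1.0 / k           # mean residence time of the bole carbon
```

Under this model the mean residence time of the carbon is simply 1/k, which is why a per-tree estimate of k translates directly into a carbon-release estimate.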

  3. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    Science.gov (United States)

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which allows for rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
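
    The covariance described above can be demonstrated numerically with a generic exponential law σ = a·(exp(b·ε) − 1) (an assumed form for illustration, not necessarily the one used in the article): increasing b by 20% and re-fitting a by least squares yields a curve close to the reference, even though the two parameter sets differ widely.

```python
import numpy as np

def stress(strain, a, b):
    """Generic exponential soft-tissue law: sigma = a * (exp(b*strain) - 1)."""
    return a * (np.exp(b * strain) - 1.0)

strain = np.linspace(0.0, 0.1, 50)
ref = stress(strain, a=1.0, b=10.0)          # reference parameter set

# Increase b by 20% and re-fit 'a' by one-parameter least squares: the
# re-fitted curve stays close to the reference, illustrating the line of
# high covariance in (a, b) space noted in the abstract.
g = np.exp(12.0 * strain) - 1.0
a_fit = np.dot(ref, g) / np.dot(g, g)        # closed-form LS solution
alt = a_fit * g
rel_rms = np.sqrt(np.mean((alt - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))
```

Here `a_fit` differs from the reference a = 1 by well over 10%, yet the relative RMS difference between the two stress-strain curves stays below 10%, which is exactly the ill-conditioning the proposed reparameterization targets.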

  4. Semileptonic Decays of Heavy Lambda Baryons in a Quark Model

    Energy Technology Data Exchange (ETDEWEB)

    Winston Roberts; Muslema Pervin; Simon Capstick

    2005-03-01

    The semileptonic decays of Λ_c and Λ_b are treated in the framework of a constituent quark model. Both nonrelativistic and semirelativistic Hamiltonians are used to obtain the baryon wave functions from a fit to the spectra, and the wave functions are expanded in both the harmonic oscillator and Sturmian bases. The latter basis leads to form factors in which the kinematic dependence on q² is in the form of multipoles, and the resulting form factors fall faster as a function of q² in the available kinematic ranges. As a result, decay rates obtained in the two models using the Sturmian basis are significantly smaller than those obtained using the harmonic oscillator basis. In the case of the Λ_c, decay rates calculated using the Sturmian basis are closer to the experimentally reported rates. However, we find a semileptonic branching fraction for the Λ_c to decay to excited Λ* states of 11% to 19%, in contradiction with what is assumed in available experimental analyses. Our prediction for the Λ_b semileptonic decays is that decays to the ground state Λ_c provide a little less than 70% of the total semileptonic decay rate. For the decays Λ_b → Λ_c, the analytic form factors we obtain satisfy the relations expected from heavy-quark effective theory at the non-recoil point, at leading and next-to-leading orders in the heavy-quark expansion. In addition, some features of the heavy-quark limit are shown to naturally persist as the mass of the heavy quark in the daughter baryon is decreased.

  5. An exponential material model for prediction of the flow curves of several AZ series magnesium alloys in tension and compression

    International Nuclear Information System (INIS)

    Fereshteh-Saniee, F.; Barati, F.; Badnava, H.; Fallah Nejad, Kh.

    2012-01-01

    Highlights: ► The exponential model can represent flow behaviors of AZ series Mg alloys very well. ► Strain rate sensitivities of AZ series Mg alloys in compression are nearly the same. ► Effect of zinc element on tensile activation energy is higher than on compressive one. ► Activation energies of AZ80 and AZ81 in tension were greater than in compression. ► Tensile and compressive rate sensitivities of AZ80 are not close to each other. -- Abstract: This paper is concerned with the flow behaviors of several magnesium alloys, such as AZ31, AZ80 and AZ81, in tension and compression. The experiments were performed at elevated temperatures and for various strain rates. In order to eliminate the effect of inhomogeneous deformation in the tensile and compression tests, the Bridgman and numerical correction factors were employed, respectively. A two-section exponential mathematical model was also utilized for the prediction of the flow stresses of the different magnesium alloys in tension and compression. Moreover, based on the proposed compressive flow model, the peak stress and the corresponding true strain could be estimated. The true stress and strain of the necking point can also be predicted using the corresponding relations. It was found that the flow behaviors estimated by the exponential flow model were in very good agreement with the experimental findings.

  6. Approximate models for the study of exponential changed quantities: Application on the plasma waves growth rate or damping

    International Nuclear Information System (INIS)

    Xaplanteris, C. L.; Xaplanteris, L. C.; Leousis, D. P.

    2014-01-01

    Many physical phenomena of current research interest are complicated because they are multi-parametric, so their study and understanding faces major, sometimes insurmountable, obstacles. The plasma state is one such complicated, multi-parametric system, in which the plasma and the physical quantities that accompany it behave chaotically. Many of those physical quantities change exponentially, and most of the time they are stabilized by exhibiting wavy behavior. Mostly in the transitive state rather than the steady state, the exponentially changing quantities (growth, damping, etc.) depend on each other in most cases, making it difficult to distinguish cause from effect. The present paper attempts to aid this difficult study and understanding by proposing mathematical exponential models that may be related to the study and understanding of wavy plasma instabilities. Such instabilities have already been detected, understood, and presented in previous publications of our laboratory. In other words, our new contribution is the study of already known plasma quantities by using mathematical models (modeling and simulation). These methods are both useful and applicable in chaos theory. In addition, our ambition is to compile a list of models useful for the study of chaotic problems, such as those that appear in plasmas, starting with this paper's examples

  7. Approximate models for the study of exponential changed quantities: Application on the plasma waves growth rate or damping

    Energy Technology Data Exchange (ETDEWEB)

    Xaplanteris, C. L., E-mail: cxaplanteris@yahoo.com [Plasma Physics Laboratory, IMS, NCSR “Demokritos”, Athens, Greece and Hellenic Army Academy, Vari Attica (Greece); Xaplanteris, L. C. [School of Physics, National and Kapodistrian University of Athens, Athens (Greece); Leousis, D. P. [Technical High School of Athens, Athens (Greece)

    2014-03-15

    Many physical phenomena of current research interest are complicated because they are multi-parametric, so their study and understanding faces major, sometimes insurmountable, obstacles. The plasma state is one such complicated, multi-parametric system, in which the plasma and the physical quantities that accompany it behave chaotically. Many of those physical quantities change exponentially, and most of the time they are stabilized by exhibiting wavy behavior. Mostly in the transitive state rather than the steady state, the exponentially changing quantities (growth, damping, etc.) depend on each other in most cases, making it difficult to distinguish cause from effect. The present paper attempts to aid this difficult study and understanding by proposing mathematical exponential models that may be related to the study and understanding of wavy plasma instabilities. Such instabilities have already been detected, understood, and presented in previous publications of our laboratory. In other words, our new contribution is the study of already known plasma quantities by using mathematical models (modeling and simulation). These methods are both useful and applicable in chaos theory. In addition, our ambition is to compile a list of models useful for the study of chaotic problems, such as those that appear in plasmas, starting with this paper's examples.

  8. Perturbation theory in angular quantization approach and the expectation values of exponential fields in sine-Gordon model

    International Nuclear Information System (INIS)

    Poghossian, R.H.

    2000-01-01

    In an angular quantization approach a perturbation theory for the Massive Thirring Model (MTM) is developed, which allows us to calculate vacuum expectation values of exponential fields in sine-Gordon theory near the free fermion point in first order of the MTM coupling constant g. The Hankel transforms play an important role when carrying out these calculations. The expression we have found coincides with that of the direct expansion over g of the exact formula conjectured by Lukyanov and Zamolodchikov

  9. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    Science.gov (United States)

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    The Markov Chain has been in use since 1913 for studying sequences of data over consecutive years and for forecasting. The key requirement in a Markov Chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, owing to the unavailability of data. This paper aims to enhance the classical Markov Chain by introducing an exponential smoothing technique in developing the appropriate TPM.
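
    The abstract does not spell out the combination rule, but one plausible reading is to blend each year's empirical TPM with a running exponentially smoothed estimate. A hedged sketch of that idea, with an illustrative smoothing constant and toy two-state matrices (not the paper's marriage/divorce data):

```python
import numpy as np

def smoothed_tpm(yearly_tpms, lam=0.6):
    """Exponentially smoothed transition probability matrix.

    Blends each year's empirical TPM with the running smoothed estimate:
    P_t = lam * P_hat_t + (1 - lam) * P_{t-1}.  This is one plausible
    scheme; the paper's exact smoothing recipe may differ.
    """
    smoothed = yearly_tpms[0].copy()
    for tpm in yearly_tpms[1:]:
        smoothed = lam * tpm + (1.0 - lam) * smoothed
    # Renormalize rows so each remains a probability distribution.
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Two years of toy two-state transition matrices.
years = [np.array([[0.9, 0.1], [0.2, 0.8]]),
         np.array([[0.8, 0.2], [0.3, 0.7]])]
P = smoothed_tpm(years, lam=0.6)
```

Because each row of a convex combination of stochastic matrices already sums to one, the renormalization is a safeguard against accumulated floating-point drift rather than a correction.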

  10. Continuous multivariate exponential extension

    International Nuclear Information System (INIS)

    Block, H.W.

    1975-01-01

    The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE
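
    The fatal shock construction behind the Marshall-Olkin bivariate exponential is easy to simulate: independent exponential shocks hit component 1, component 2, or both at once, and each lifetime is the first shock affecting that component. A small sketch with illustrative rates:

```python
import numpy as np

def marshall_olkin(n, lam1, lam2, lam12, seed=0):
    """Fatal shock construction of the Marshall-Olkin bivariate exponential.

    Component-specific shocks E1, E2 and a common shock E12 (which kills
    both components) give lifetimes X = min(E1, E12), Y = min(E2, E12),
    so P(X > x, Y > y) = exp(-lam1*x - lam2*y - lam12*max(x, y)).
    """
    rng = np.random.default_rng(seed)
    e1 = rng.exponential(1.0 / lam1, n)
    e2 = rng.exponential(1.0 / lam2, n)
    e12 = rng.exponential(1.0 / lam12, n)
    return np.minimum(e1, e12), np.minimum(e2, e12)

x, y = marshall_olkin(100_000, lam1=1.0, lam2=2.0, lam12=0.5)
# Marginally, X is exponential with rate lam1 + lam12 = 1.5.
```

The common shock also puts positive probability on X = Y (here λ12/(λ1+λ2+λ12) = 1/7), the singular component that distinguishes the MVE from absolutely continuous bivariate exponentials.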

  11. Modeling the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity values.

    Science.gov (United States)

    Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen

    2010-05-01

    The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a_w) values. To model the duration of the lag phase, the dependence of the parameter h_0, which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a_w were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
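
    The Baranyi-Roberts lag/growth dynamics referenced above can be sketched in explicit form for the log cell count, with the stationary phase omitted for brevity and illustrative parameter values. The work parameter h₀ = ln(1 + 1/q₀) sets the lag duration as h₀/μ:

```python
import numpy as np

def baranyi_log_count(t, y0, mu, h0):
    """Baranyi-Roberts lag + exponential growth (no stationary phase).

    h0 = ln(1 + 1/q0) is the 'work to be done' during the lag; the lag
    duration is h0/mu, after which the log count grows linearly at rate mu.
    """
    q0 = 1.0 / np.expm1(h0)                # invert h0 = ln(1 + 1/q0)
    a = t + np.log((np.exp(-mu * t) + q0) / (1.0 + q0)) / mu
    return y0 + mu * a                     # adjustment function A(t) = a

t = np.linspace(0.0, 20.0, 201)
y = baranyi_log_count(t, y0=2.0, mu=0.5, h0=2.0)
lag = 2.0 / 0.5                            # h0 / mu = 4 time units
# For t >> lag, the curve approaches the line y0 + mu * (t - lag).
```

A fluctuating environment, as in the abstract, would be handled by restarting this lag mechanism with a new h₀ equal to the new workload plus any unfinished workload.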

  12. Modeling the Lag Period and Exponential Growth of Listeria monocytogenes under Conditions of Fluctuating Temperature and Water Activity Values▿

    Science.gov (United States)

    Muñoz-Cuevas, Marina; Fernández, Pablo S.; George, Susan; Pin, Carmen

    2010-01-01

    The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (aw) values. To model the duration of the lag phase, the dependence of the parameter h0, which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or aw were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase. PMID:20208022

  13. Exponential H and T decay of the critical current density in YBa2Cu3O7-δ single crystals

    International Nuclear Information System (INIS)

    Senoussi, S.; Oussena, M.; Collin, G.; Campbell, I.A.

    1988-01-01

    We report magnetic measurements on single crystals of YBa2Cu3O7-δ. The magnetic critical current density in the Cu-O basal planes (1.5 × 10^6 A/cm^2 at 4.2 K) decreases exponentially with temperature as well as with field for T ≳ 50 K. This is ascribed to current tunneling through micro-Josephson junctions. The behavior is radically different from that associated with macrojunctions typical of ''granular'' samples. It is argued that the anisotropy and the anomalous T-H behavior of J_c are connected with the T dependence and the anisotropy of both the coherence length and the electron mean free path.
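    An exponential temperature decay of the form J_c(T) = J_0 exp(-T/T_0) is conveniently extracted from magnetization data by a log-linear least-squares fit. The sketch below uses synthetic numbers (J_0 and T_0 are invented for illustration), not the crystals' actual data.

```python
import math

# Synthetic critical-current data following an assumed law J = J0 * exp(-T/T0)
J0, T0 = 1.5e6, 20.0                       # A/cm^2, K (illustrative)
Ts = [50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0]
Js = [J0 * math.exp(-T / T0) for T in Ts]

# Log-linear least squares: ln J = ln J0 - T/T0, so the slope is -1/T0
n = len(Ts)
logJ = [math.log(J) for J in Js]
sx, sy = sum(Ts), sum(logJ)
sxx = sum(T * T for T in Ts)
sxy = sum(T * L for T, L in zip(Ts, logJ))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
T0_fit = -1.0 / slope                      # recovers the decay scale T0
```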

  14. Modified Exponential (MOE) Models: Statistical Models for Risk Estimation of Low Dose-Rate Radiation

    International Nuclear Information System (INIS)

    Ogata, H.; Furukawa, C.; Kawakami, Y.; Magae, J.

    2004-01-01

    Simultaneous inclusion of dose and dose-rate is required to evaluate the risk of long-term irradiation at low dose-rates, since biological responses to radiation are complex processes that depend both on irradiation time and total dose. Consequently, it is necessary to consider a model including cumulative dose, dose-rate and irradiation time to estimate the quantitative dose-response relationship for the biological response to radiation. In this study, we measured micronucleus formation and [3H]thymidine uptake in U2OS, a human osteosarcoma cell line, as indicators of biological response to gamma radiation. Cells were exposed to gamma rays in an irradiation room bearing 50,000 Ci of 60Co. After irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, and the cytoplasm and nucleus were stained with DAPI and propidium iodide. The number of binuclear cells bearing a micronucleus was counted under a fluorescence microscope. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [3H]thymidine was pulsed for 4 h before harvesting. We statistically analyzed the data for quantitative evaluation of radiation risk at low dose/dose-rate. (Author)

  15. Production, decay, and mixing models of the iota meson. II

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.

    1987-01-01

    A five-channel mixing model for the ground and radially excited isoscalar pseudoscalar states and a glueball is presented. The model extends previous work by including two-body unitary corrections, following the technique of Toernqvist. The unitary corrections include contributions from three classes of two-body intermediate states: pseudoscalar-vector, pseudoscalar-scalar, and vector-vector states. All necessary three-body couplings are extracted from decay data. The solution of the mixing model provides information about the bare mass of the glueball and the fundamental quark-glue coupling. The solution also gives the composition of the wave function of the physical states in terms of the bare quark and glue states. Finally, it is shown how the coupling constants extracted from decay data can be used to calculate the decay rates of the five physical states to all two-body channels

  16. Simple model for decay of laser generated shock waves

    International Nuclear Information System (INIS)

    Trainor, R.J.

    1980-01-01

    A simple model is derived to calculate the hydrodynamic decay of laser-generated shock waves. Comparison with detailed hydrocode simulations shows good agreement between calculated time evolution of shock pressure, position, and instantaneous pressure profile. Reliability of the model decreases in regions of the target where superthermal-electron preheat effects become comparable to shock effects

  17. Decaying and kicked turbulence in a shell model

    DEFF Research Database (Denmark)

    Hooghoudt, Jan Otto; Lohse, Detlef; Toschi, Federico

    2001-01-01

    Decaying and periodically kicked turbulence are analyzed within the Gledzer–Ohkitani–Yamada shell model, to allow for sufficiently large scaling regimes. Energy is transferred towards the small scales in intermittent bursts. Nevertheless, mean field arguments are sufficient to account for the ...

  18. Assessment of Ex-Vitro Anaerobic Digestion Kinetics of Crop Residues Through First Order Exponential Models: Effect of LAG Phase Period and Curve Factor

    Directory of Open Access Journals (Sweden)

    Abdul Razaque Sahito

    2013-04-01

    Kinetic studies of the AD (Anaerobic Digestion) process are useful to predict the performance of digesters and design appropriate digesters, and are also helpful in understanding inhibitory mechanisms of biodegradation. The aim of this study was to assess the anaerobic kinetics of crop residue digestion with buffalo dung. Seven crop residues, namely bagasse, banana plant waste, canola straw, cotton stalks, rice straw, sugarcane trash and wheat straw, were selected from the field and were analyzed for MC (Moisture Content), TS (Total Solids) and VS (Volatile Solids) with standard methods. In the present study, three first order exponential models, namely the exponential model, the exponential lag phase model and the exponential curve factor model, were used to assess the kinetics of the AD process of crop residues, and the effect of lag phase and curve factor was analyzed based on statistical hypothesis testing and on information theory. The AD of crop residues and buffalo dung follows first order kinetics. Of the three models, the simple exponential model was the poorest, while the first order exponential curve factor model is the best-fit model. In addition to statistical hypothesis testing, the exponential curve factor model has the least value of AIC (Akaike's Information Criterion) and can generate methane production data more accurately. Furthermore, there is an inverse linear relationship between the lag phase period and the curve factor.
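    The model comparison above can be illustrated with a toy AIC calculation. The functional forms below are assumed first-order parameterisations (the paper's exact equations may differ), and the "data" are synthetic, generated with a 5-day lag so that the lag-phase model is the true one.

```python
import math

def aic(rss, n, k):
    """Akaike's Information Criterion for a least-squares fit
    with k estimated parameters and residual sum of squares rss."""
    return n * math.log(rss / n) + 2 * k

# Assumed first-order forms (illustrative, not the paper's exact equations):
def exp_model(t, B0, kr):
    return B0 * (1.0 - math.exp(-kr * t))

def exp_lag_model(t, B0, kr, lam):
    return B0 * (1.0 - math.exp(-kr * max(t - lam, 0.0)))

# Synthetic cumulative-methane "data" generated with a 5-day lag phase
ts = list(range(0, 41, 2))
data = [exp_lag_model(t, 300.0, 0.1, 5.0) for t in ts]

def rss(pred):
    # guard against log(0) for a perfect fit
    return sum((d - p) ** 2 for d, p in zip(data, pred)) or 1e-12

rss_simple = rss([exp_model(t, 300.0, 0.1) for t in ts])
rss_lag = rss([exp_lag_model(t, 300.0, 0.1, 5.0) for t in ts])
aic_simple = aic(rss_simple, len(ts), 2)
aic_lag = aic(rss_lag, len(ts), 3)
```

    Because the lag model reproduces the delayed onset while the plain exponential misses it, the lag model's AIC is far lower despite its extra parameter, mirroring the paper's model-selection logic.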

  19. Assessment of ex-vitro anaerobic digestion kinetics of crop residues through first order exponential models: effect of lag phase period and curve factor

    International Nuclear Information System (INIS)

    Sahito, A.R.; Brohi, K.M.

    2013-01-01

    Kinetic studies of the AD (Anaerobic Digestion) process are useful to predict the performance of digesters and design appropriate digesters, and are also helpful in understanding inhibitory mechanisms of biodegradation. The aim of this study was to assess the anaerobic kinetics of crop residue digestion with buffalo dung. Seven crop residues, namely bagasse, banana plant waste, canola straw, cotton stalks, rice straw, sugarcane trash and wheat straw, were selected from the field and were analyzed for MC (Moisture Content), TS (Total Solids) and VS (Volatile Solids) with standard methods. In the present study, three first order exponential models, namely the exponential model, the exponential lag phase model and the exponential curve factor model, were used to assess the kinetics of the AD process of crop residues, and the effect of lag phase and curve factor was analyzed based on statistical hypothesis testing and on information theory. The AD of crop residues and buffalo dung follows first order kinetics. Of the three models, the simple exponential model was the poorest, while the first order exponential curve factor model is the best-fit model. In addition to statistical hypothesis testing, the exponential curve factor model has the least value of AIC (Akaike's Information Criterion) and can generate methane production data more accurately. Furthermore, there is an inverse linear relationship between the lag phase period and the curve factor. (author)

  20. Exponential Family Functional data analysis via a low-rank model.

    Science.gov (United States)

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count are observed over a continuous domain and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
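    A bare-bones sketch of the low-rank idea: fit a rank-1 matrix of canonical (logit) parameters to binary data by gradient descent on the Bernoulli negative log-likelihood. This is an illustrative toy, not the paper's EFPCA algorithm, which adds smoothness across the domain, higher ranks, and a cross-validated rank choice.

```python
import math, random

random.seed(0)
n, m = 20, 15                        # rows (e.g. subjects) x grid points

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Ground-truth rank-1 canonical (logit) parameters and Bernoulli data
u_true = [random.gauss(0, 1) for _ in range(n)]
v_true = [random.gauss(0, 1) for _ in range(m)]
X = [[1 if random.random() < sigmoid(u_true[i] * v_true[j]) else 0
      for j in range(m)] for i in range(n)]

# Randomly initialised rank-1 factors to be estimated
u = [random.gauss(0, 0.1) for _ in range(n)]
v = [random.gauss(0, 0.1) for _ in range(m)]

def neg_loglik():
    """Bernoulli negative log-likelihood of the rank-1 logit matrix u v^T."""
    return sum(math.log(1.0 + math.exp(u[i] * v[j])) - X[i][j] * u[i] * v[j]
               for i in range(n) for j in range(m))

nll_before = neg_loglik()
for _ in range(500):                 # plain gradient descent on both factors
    gu = [sum((sigmoid(u[i] * v[j]) - X[i][j]) * v[j] for j in range(m))
          for i in range(n)]
    gv = [sum((sigmoid(u[i] * v[j]) - X[i][j]) * u[i] for i in range(n))
          for j in range(m)]
    u = [ui - 0.01 * g for ui, g in zip(u, gu)]
    v = [vj - 0.01 * g for vj, g in zip(v, gv)]
nll_after = neg_loglik()
```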

  1. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    Science.gov (United States)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

    Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model using simulation data that contain patterns of trend, seasonality and calendar variation. The second examines the application of the hybrid model for forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first study's results indicate that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values ten times the standard deviation of the error. The second study's results indicate that the hybrid model can capture the trend, seasonal and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang and Jember, and the outflow of money currency in Surabaya and Kediri. The time series regression model yields better results for the other three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
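    The failure mode found in the first study, exponential smoothing missing a calendar effect, is easy to reproduce on toy data. The series below is invented: a deterministic spike every 12th observation stands in for a calendar effect, and a simple dummy-variable predictor stands in for the calendar variation component.

```python
# A toy series with a deterministic calendar spike every 12th observation
series = [60.0 if t % 12 == 0 else 10.0 for t in range(48)]

def ses_forecasts(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts."""
    level = y[0]
    preds = [level]
    for obs in y[:-1]:
        level = alpha * obs + (1 - alpha) * level
        preds.append(level)
    return preds

def rmse(y, preds):
    return (sum((a - b) ** 2 for a, b in zip(y, preds)) / len(y)) ** 0.5

# SES has no notion of the calendar: it smears each spike into later periods
rmse_ses = rmse(series, ses_forecasts(series))

# A calendar dummy predicts spike months separately and fits this toy exactly
preds_dummy = [60.0 if t % 12 == 0 else 10.0 for t in range(48)]
rmse_dummy = rmse(series, preds_dummy)
```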

  2. Cluster model calculations of alpha decays across the periodic table

    International Nuclear Information System (INIS)

    Merchant, A.C.; Buck, B.

    1988-10-01

    The cluster model of Buck, Dover and Vary has been used to calculate partial widths for alpha decay from the ground states of all nuclei for which experimental measurements exist. The cluster-core potential is represented by a simple three-parameter form having fixed diffuseness, a radius which scales as A^(1/3) and a depth which is adjusted to fit the Q-value of the particular decay. The calculations yield excellent agreement with the vast majority of the available data, and some typical examples are presented. (author)

  3. Spectra of Anderson type models with decaying randomness

    Indian Academy of Sciences (India)


    Our models include potentials decaying in all directions in which case ..... the free operators with some uniform bounds of low moments of the measure µ weighted ..... We have the following inequality coming out of Cauchy–Schwarz and Fubini, ... The required statement on the limit follows if we now show that the quantity in ...

  4. Ruling out exotic models of b quark decay

    International Nuclear Information System (INIS)

    Chen, A.; Goldberg, M.; Horwitz, N.; Jawahery, A.; Jibaly, M.; Kooy, H.; Lipari, P.; Moneti, G.C.; Van Hecke, H.; Alam, M.S.; Csorna, S.E.; Fridman, A.; Mestayer, M.; Panvini, R.S.; Andrews, D.; Avery, P.; Berkelman, K.; Cassel, D.G.; DeWire, J.W.; Ehrlich, R.; Ferguson, T.; Galik, R.; Gilchriese, M.G.D.; Gittelman, B.; Hartill, D.L.; Herrup, D.; Herzlinger, M.; Holner, S.; Ito, M.; Kandaswamy, J.; Kistiakowsky, V.; Kreinick, D.L.; Kubota, Y.; Mistry, N.B.; Morrow, F.; Nordberg, E.; Ogg, M.; Perchonok, R.; Plunckett, R.; Silverman, A.; Stein, P.C.; Stone, S.; Weber, D.; Wilcke, R.; Sadoff, A.J.; Bebek, C.; Haggerty, J.; Hassard, J.; Hempstead, M.; Izen, J.M.; MacKay, W.W.; Pipkin, F.M.; Rohlf, J.; Wilson, R.; Kagan, H.; Chadwick, K.; Chauveau, J.; Ganci, P.; Gentile, T.; Guida, J.A.; Kass, R.; Melissinos, A.C.; Olsen, S.L.; Parkhurst, G.; Poling, R.; Rosenfeld, C.; Rucinski, G.; Thorndike, E.H.; Green, J.; Hicks, R.G.; Sannes, F.; Skubic, P.; Snyder, A.; Stone, R.

    1983-01-01

    We consider three broad classes of nonstandard models for b quark decay: (1) b → llq, with charged or neutral leptons of arbitrary flavor; (2) b → l q̄ q̄; and (3) b → q a⁻, where a⁻ is a Higgs boson or hyperpion. For these classes of models we have calculated the charged energy fraction and the inclusive yields of electrons, muons, protons, and lambdas. We demonstrate that these model predictions are inconsistent with CLEO measurements at the Υ(4S). (orig.)

  5. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
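    A regression-based score test of the kind mentioned can be sketched as follows. The statistic below is one standard form, assumed here for illustration (the paper may use a different variant): it compares squared residuals against the Poisson mean-variance identity and is approximately standard normal under the equidispersion null. The fitted means and the gamma mixing used to create overdispersion are invented for the demonstration.

```python
import math, random

random.seed(1)

def rpois(lam):
    """Poisson sampler (Knuth's multiplication method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def score_statistic(y, mu):
    """Score statistic for overdispersion: sum((y - mu)^2 - y) scaled by
    sqrt(2 * sum(mu^2)); approximately N(0,1) under the Poisson null."""
    num = sum((yi - mi) ** 2 - yi for yi, mi in zip(y, mu))
    den = math.sqrt(2.0 * sum(mi * mi for mi in mu))
    return num / den

mu = [5.0] * 500                                  # "fitted" means (illustrative)
y_poisson = [rpois(m) for m in mu]                # equidispersed counts
# Gamma mixing with mean 1 inflates the variance above the mean
y_over = [rpois(m * random.gammavariate(2.0, 0.5)) for m in mu]

t_null = score_statistic(y_poisson, mu)           # should be near 0
t_over = score_statistic(y_over, mu)              # should be large and positive
```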

  6. Circumscribing late dark matter decays model-independently

    International Nuclear Information System (INIS)

    Yueksel, Hasan; Kistler, Matthew D.

    2008-01-01

    A number of theories, spanning a wide range of mass scales, predict dark matter candidates that have lifetimes much longer than the age of the Universe, yet may produce a significant flux of gamma rays in their decays today. We constrain such late-decaying dark matter scenarios model-independently by utilizing gamma-ray line emission limits from the Galactic Center region obtained with the SPI spectrometer on INTEGRAL, and the determination of the isotropic diffuse photon background by SPI, COMPTEL, and EGRET observations. We show that no more than ∼5% of the unexplained MeV background can be produced by late dark matter decays either in the Galactic halo or cosmological sources.

  7. EOQ Model for Deteriorating Items with exponential time dependent Demand Rate under inflation when Supplier Credit Linked to Order Quantity

    Directory of Open Access Journals (Sweden)

    Rakesh Prakash Tripathi

    2014-05-01

    In a 2004 paper, Chang studied an inventory model for a situation in which the supplier provides the purchaser with a permissible delay of payments if the purchaser orders a large quantity. Tripathi (2011) also studied an inventory model with a time dependent demand rate under which the supplier provides the purchaser with a permissible delay in payments. This paper is motivated by Chang (2004) and Tripathi (2011), extending their models to an exponential time dependent demand rate. This study develops an inventory model under which the vendor provides the purchaser with a credit period if the purchaser orders a large quantity. Here, the demand rate is taken as exponential time dependent. Shortages are not allowed and the effect of the inflation rate is discussed. We establish an inventory model for deteriorating items for the case where the order quantity is greater than or equal to a predetermined quantity. We then obtain the optimal solution for the optimal order quantity, optimal cycle time and optimal total relevant cost. Numerical examples are given for all the different cases. The sensitivity of the optimal solution to variation of the different parameters is also discussed. Mathematica 7 software is used for the numerical examples.
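    Underlying all such extensions is the classic EOQ trade-off between ordering and holding costs. The sketch below shows only that baseline (the paper's model adds deterioration, exponential demand, inflation, and credit terms on top of it), and the numbers are illustrative.

```python
import math

def eoq(D, S, H):
    """Classic EOQ: annual demand D, cost per order S,
    holding cost H per unit per year."""
    return math.sqrt(2.0 * D * S / H)

def annual_cost(Q, D, S, H):
    """Total of ordering cost and holding cost for order quantity Q."""
    return (D / Q) * S + (Q / 2.0) * H

D, S, H = 1200.0, 100.0, 6.0         # illustrative parameter values
Q_star = eoq(D, S, H)                # sqrt(2*1200*100/6) = 200 units
cost_star = annual_cost(Q_star, D, S, H)
```

    At Q* the ordering and holding components are equal, which is the standard first-order condition; any other order quantity raises the total cost.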

  8. Various decays of some hadronic systems in constituent quark models

    International Nuclear Information System (INIS)

    Bonnaz, R.

    2001-09-01

    The topic of this study is the decay of mesons in constituent quark models. Those models, as well as the various quark-antiquark interaction potentials, are presented. Strong decay of a meson into two or three mesons is studied in the second part. The original 3P0 model is presented, as well as the search for a momentum-dependent vertex function γ(p) for the created q q̄ pair. We show that a function γ(p) of constant-plus-Gaussian type is superior to the constant usually used. The third part is dedicated to electromagnetic transitions, studied through the emission of a real or a virtual photon. In the case of real photon emission, the different approximations found in the literature are reviewed and compared to the formalism going beyond the long-wavelength approximation. Mixing angles are tested for some mesons. In the case of a virtual photon, the expression for the decay width obtained by van Royen and Weisskopf is rederived and then improved by taking into account the quark momentum distribution inside the meson. An electromagnetic dressing of quarks is introduced that improves the results. Throughout this study, wave functions of various degrees of sophistication are used. The calculated decay widths are compared to a large body of experimental data. (author)

  9. The decay width of stringy hadrons

    Directory of Open Access Journals (Sweden)

    Jacob Sonnenschein

    2018-02-01

    We fit the theoretical decay width to experimental data for mesons on the trajectories of ρ, ω, π, η, K*, φ, D, and Ds*, and of the baryons N, Δ, Λ, and Σ. We examine both the linearity in L and the exponential suppression factor. The linearity was found to agree with the data well for mesons but less so for baryons. The extracted coefficient for mesons, A = 0.095 ± 0.015, is indeed quite universal. The exponential suppression was applied to both strong and radiative decays. We discuss the relation with string fragmentation and jet formation. We extract the quark-diquark structure of baryons from their decays. A stringy mechanism for Zweig-suppressed decays of quarkonia is proposed and is shown to reproduce the decay width of Υ states. The dependence of the width on spin and flavor symmetry is discussed. We further apply this model to the decays of glueballs and exotic hadrons.

  10. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    Science.gov (United States)

    Adame, J.; Warzel, S.

    2015-11-01

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.

  11. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    International Nuclear Information System (INIS)

    Adame, J.; Warzel, S.

    2015-01-01

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM

  12. B meson decays to baryons in the diquark model

    International Nuclear Information System (INIS)

    Chang, C.H.V.; Hou, W.S.

    2002-01-01

    We study B meson decays to two charmless baryons in the diquark model, including strong and electroweak penguins as well as the tree operators. It is shown that penguin operators can enhance B̄ → B_s B̄ considerably, but affect B̄ → B_1 B̄_2 only slightly, where B_1, B_2 and B_s are non-strange and strange baryons, respectively. The γ dependence of the decay rates due to tree-penguin interference is illustrated. In principle, some of the B_s B̄ modes could dominate over B_1 B̄_2 for γ > 90°, but in general the effect is milder than for their mesonic counterparts. This is because the O_6 operator can only produce vector but not scalar diquarks, while the opposite is true for O_1 and O_4. Predictions from the diquark model are compared to those from the sum rule calculation. The decays B̄ → B_s B̄_s and inclusive baryonic decays are also discussed. (orig.)

  13. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    Science.gov (United States)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
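    The appeal of the mixed exponential for rain-cell intensity is its extra dispersion relative to a single exponential. A small Monte Carlo sketch (the mixture parameters are invented for illustration):

```python
import math, random

random.seed(7)

def mixed_exponential(p, lam1, lam2):
    """Draw a rain-cell intensity from a two-component mixed exponential:
    rate lam1 with probability p, rate lam2 otherwise."""
    return random.expovariate(lam1 if random.random() < p else lam2)

p, lam1, lam2 = 0.7, 5.0, 0.5          # illustrative mixture parameters
sample = [mixed_exponential(p, lam1, lam2) for _ in range(20000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
cv = math.sqrt(var) / mean             # a single exponential has cv = 1
```

    A single exponential has coefficient of variation exactly 1; the mixture's cv here is well above 1, which is what lets the model reproduce the high variability of tropical rainfall intensities.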

  14. Modeling steady-state dynamics of macromolecules in exponential-stretching flow using multiscale molecular-dynamics-multiparticle-collision simulations.

    Science.gov (United States)

    Ghatage, Dhairyasheel; Chatterji, Apratim

    2013-10-01

    We introduce a method to obtain steady-state uniaxial exponential-stretching flow of a fluid (akin to extensional flow) in the incompressible limit, which enables us to study the response of suspended macromolecules to the flow by computer simulations. The flow field in this flow is defined by v(x) = εx, where v(x) is the velocity of the fluid and ε is the stretch flow gradient. To eliminate the effect of confining boundaries, we produce the flow in a channel of uniform square cross section with periodic boundary conditions in directions perpendicular to the flow, but simultaneously maintain uniform density of fluid along the length of the tube. In experiments a perfect elongational flow is obtained only along the axis of symmetry in a four-roll geometry or a filament-stretching rheometer. We can reproduce flow conditions very similar to extensional flow near the axis of symmetry by exponential-stretching flow; we do this by adding the right amounts of fluid along the length of the flow in our simulations. The fluid particles added along the length of the tube are the same fluid particles which exit the channel due to the flow; thus mass conservation is maintained in our model by default. We also suggest a scheme for possible realization of exponential-stretching flow in experiments. To establish our method as a useful tool to study various soft matter systems in extensional flow, we embed (i) spherical colloids with excluded volume interactions (modeled by the Weeks-Chandler potential) as well as (ii) a bead-spring model of star polymers in the fluid to study their responses to the exponentially stretched flow, and show that the responses of macromolecules in the two flows are very similar. We demonstrate that the variation of number density of the suspended colloids along the direction of flow is in tune with our expectations. We also conclude from our study of the deformation of star polymers with different numbers of arms f that the critical flow gradient ε
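    The kinematics of the flow field v(x) = εx can be sketched with a forward-Euler advection of tracer particles; this toy omits the multiparticle-collision fluid entirely and only checks the exponential stretching itself.

```python
import math

eps = 0.8                 # stretch flow gradient in v(x) = eps * x
dt, steps = 1e-4, 10000   # integrate to t = 1.0
x = [0.5, 1.0]            # two tracer particles on the stretch axis

for _ in range(steps):
    x = [xi + dt * eps * xi for xi in x]   # forward-Euler advection

# Analytic solution of dx/dt = eps * x is x(t) = x(0) * exp(eps * t)
expected = [0.5 * math.exp(eps), 1.0 * math.exp(eps)]
separation_ratio = x[1] / x[0]             # the linear flow preserves ratios
```

    Each particle follows x(0)·exp(εt), so material elements are stretched exponentially in time while the ratio of positions along the stretch axis is preserved.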

  15. THE EXPONENTIAL STABILIZATION FOR A SEMILINEAR WAVE EQUATION WITH LOCALLY DISTRIBUTED FEEDBACK

    Institute of Scientific and Technical Information of China (English)

    JIA CHAOHUA; FENG DEXING

    2005-01-01

    This paper considers the exponential decay of the solution to a damped semilinear wave equation with variable coefficients in the principal part by Riemannian multiplier method. A differential geometric condition that ensures the exponential decay is obtained.

  16. Additivity of statistical moments in the exponentially modified Gaussian model of chromatography

    International Nuclear Information System (INIS)

    Howerton, Samuel B.; Lee, Chomin; McGuffin, Victoria L.

    2002-01-01

    A homologous series of saturated fatty acids ranging from C10 to C22 was separated by reversed-phase capillary liquid chromatography. The resultant zone profiles were found to be fit best by an exponentially modified Gaussian (EMG) function. To compare the EMG function and statistical moments for the analysis of the experimental zone profiles, a series of simulated profiles was generated by using fixed values for the retention time and different values for the symmetrical (σ) and asymmetrical (τ) contributions to the variance. The simulated profiles were modified with respect to the integration limits, the number of points, and the signal-to-noise ratio. After modification, each profile was analyzed by using statistical moments and an iteratively fit EMG equation. These data indicate that the statistical moment method is much more susceptible to error when the degree of asymmetry is large, when the integration limits are inappropriately chosen, when the number of points is small, and when the signal-to-noise ratio is small. The experimental zone profiles were then analyzed by using the statistical moment and EMG methods. Although care was taken to minimize the sources of error discussed above, significant differences were found between the two methods. The differences in the second moment suggest that the symmetrical and asymmetrical contributions to broadening in the experimental zone profiles are not independent. As a consequence, the second moment is not equal to the sum of σ² and τ², as is commonly assumed. This observation has important implications for the elucidation of thermodynamic and kinetic information from chromatographic zone profiles.
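    The additivity assumption under discussion can be checked directly by Monte Carlo: an EMG variate is a Gaussian plus an independent exponential, so when the two contributions really are independent, the mean is μ + τ and the variance is σ² + τ². The parameter values below are illustrative.

```python
import random

random.seed(3)
mu, sigma, tau = 5.0, 0.4, 0.8        # Gaussian and exponential parameters
n = 200000

# An EMG variate = Gaussian(mu, sigma) + independent Exponential(mean tau)
draws = [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
         for _ in range(n)]

m1 = sum(draws) / n                            # expect mu + tau = 5.8
m2 = sum((x - m1) ** 2 for x in draws) / n     # expect sigma^2 + tau^2 = 0.80
```

    The paper's point is that for real chromatographic zones σ and τ need not be independent, in which case this clean additivity, reproduced here only because the simulation draws them independently, breaks down.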

  17. A quantification of the hazards of fitting sums of exponentials to noisy data

    International Nuclear Information System (INIS)

    Bromage, G.E.

    1983-06-01

    The ill-conditioned nature of sums-of-exponentials analyses is confirmed and quantified, using synthetic noisy data. In particular, the magnification of errors is plotted for various two-exponential models, to illustrate its dependence on the ratio of decay constants, and on the ratios of amplitudes of the contributing terms. On moving from two- to three-exponential models, the condition deteriorates badly. It is also shown that the use of 'direct' Prony-type analyses (rather than general iterative nonlinear optimisation) merely aggravates the condition. (author)
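    The ill-conditioning is easy to demonstrate without any fitting: two exponentials whose decay constants differ by a factor of 1.2 are almost indistinguishable from a single exponential at the intermediate rate, so noise at even the 1% level makes the parameter sets impossible to tell apart. The rates below are illustrative.

```python
import math

ts = [i * 0.05 for i in range(61)]        # sample times in [0, 3]

# Two genuinely different parameter sets...
two_exp = [math.exp(-1.0 * t) + math.exp(-1.2 * t) for t in ts]  # rates 1.0, 1.2
one_exp = [2.0 * math.exp(-1.1 * t) for t in ts]                 # one rate, 1.1

# ...produce nearly identical curves: the worst-case deviation,
# relative to the signal at t = 0, is a fraction of a percent.
max_rel_diff = max(abs(a - b) for a, b in zip(two_exp, one_exp)) / two_exp[0]
```

    Any fitting procedure, iterative or Prony-type, must resolve differences far smaller than this to recover the individual decay constants, which is why the error magnification grows so rapidly as the ratio of decay constants approaches 1.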

  18. Production, decay, and mixing models of the iota meson

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.; Bender, C.

    1984-01-01

    We solve a five-channel mixing problem involving eta, eta', zeta(1275), iota(1440), and a new hypothetical high-mass pseudoscalar state between 1600 and 1900 MeV. We obtain the quark and glue content of iota(1440). We compare two solutions to the mixing problem with iota(1440) production and decay data, and with quark-model predictions for bare masses. In one solution the iota(1440) is primarily a glueball. This solution is preferred by the production and decay data. In the other solution the iota(1440) is a radially excited (ss-bar) state. This solution is preferred by the quark-model picture for the bare masses. We judge the weight of the combined evidence to favor the glueball interpretation

  19. An Exponential Tilt Mixture Model for Time-to-Event Data to Evaluate Treatment Effect Heterogeneity in Randomized Clinical Trials.

    Science.gov (United States)

    Wang, Chi; Tan, Zhiqiang; Louis, Thomas A

    2014-01-01

    Evaluating the effect of a treatment on a time-to-event outcome is the focus of many randomized clinical trials. It is often observed that the treatment effect is heterogeneous, where only a subgroup of the patients may respond to the treatment due to some unknown mechanism such as genetic polymorphism. In this paper, we propose a semiparametric exponential tilt mixture model to estimate the proportion of patients who respond to the treatment and to assess the treatment effect. Our model is a natural extension of parametric mixture models to a semiparametric setting with a time-to-event outcome. We propose a nonparametric maximum likelihood estimation approach for inference and establish related asymptotic properties. Our method is illustrated by a randomized clinical trial on biodegradable polymer-delivered chemotherapy for patients with malignant gliomas.

  20. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.

  1. Improvement of mobility edge model by using new density of states with exponential tail for organic diode

    International Nuclear Information System (INIS)

    Muhammad Ammar Khan; Sun Jiu-Xun

    2015-01-01

    The mobility edge (ME) model with a single Gaussian density of states (DOS) is simplified based on recent experimental results about the Einstein relationship. The free holes are treated as being non-degenerate, and the trapped holes are dealt with as being degenerate. This enables the integral for the trapped holes to be easily realized in a program. The J–V curves are obtained through solving drift-diffusion equations. When this model is applied to four organic diodes, an obvious deviation between theoretical curves and experimental data is observed. In order to solve this problem, a new DOS with an exponential tail is proposed. The results show that the agreement between J–V curves and experimental data based on the new DOS is far better than that based on the Gaussian DOS. The variation of the extracted mobility with temperature can be well described by the Arrhenius relationship. (paper)

  2. Lepton-flavor violating B decays in generic Z' models

    Science.gov (United States)

    Crivellin, Andreas; Hofer, Lars; Matias, Joaquim; Nierste, Ulrich; Pokorski, Stefan; Rosiek, Janusz

    2015-09-01

    LHCb has reported deviations from the Standard Model in b → sμ⁺μ⁻ transitions for which a new neutral gauge boson is a prime candidate for an explanation. As this gauge boson has to couple in a flavor nonuniversal way to muons and electrons in order to explain R_K, it is interesting to examine the possibility that lepton flavor is also violated, especially in the light of the CMS excess in h → τ±μ∓. In this article, we investigate the prospects of discovering the lepton-flavor violating modes B → K^(*)τ±μ∓, B_s → τ±μ∓ and B → K^(*)μ±e∓, B_s → μ±e∓. For this purpose we consider a simplified model in which new-physics effects originate from an additional neutral gauge boson (Z') with generic couplings to quarks and leptons. The constraints from τ → 3μ, τ → μνν̄, μ → eγ, (g−2)_μ, semileptonic b → sμ⁺μ⁻ decays, B → K^(*)νν̄ and B_s–B̄_s mixing are examined. From these decays, we determine upper bounds on the decay rates of lepton-flavor violating B decays. Br(B → Kνν̄) limits the branching ratios of lepton-flavor violating B decays to be smaller than 8 × 10⁻⁵ (2 × 10⁻⁵) for vectorial (left-handed) lepton couplings. However, much stronger bounds can be obtained by a combined analysis of B_s–B̄_s mixing, τ → 3μ, τ → μνν̄ and other rare decays. The bounds depend on the amount of fine-tuning among the contributions to B_s–B̄_s mixing. Allowing for fine-tuning at the percent level, we find upper bounds of the order of 10⁻⁶ for branching ratios into τμ final states, while B_s → μ±e∓ is strongly suppressed and only B → K^(*)μ±e∓ can be experimentally accessible (with a branching ratio of order 10⁻⁷).

  3. A partial exponential lumped parameter model to evaluate groundwater age distributions and nitrate trends in long-screened wells

    Science.gov (United States)

    Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.

    2016-01-01

    A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters (the ratios of the depths of the top and bottom of the screen to the saturated thickness, and the mean age), but these can be reduced to one parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well, and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint-source contaminant trends, even in systems where aquifer heterogeneity and water use complicate distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
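
    The idea behind a screened-interval average of an exponential age-depth profile can be sketched in a few lines. The aquifer geometry, porosity, recharge rate, and screen depths below are hypothetical illustration values, not data from the study, and the well's mean age is taken as a simple depth average over the screen:

```python
import math

# Hypothetical aquifer and well geometry (not values from the study):
H, porosity, recharge = 100.0, 0.3, 0.5   # thickness (m), -, recharge (m/yr)
tau = porosity * H / recharge             # mean age of the full aquifer (yr)

def age(z):
    # Classic exponential-model groundwater age at depth z below the water table.
    return tau * math.log(H / (H - z))

# A production well screened from 40 m to 80 m samples only the ages found
# over that interval; here its mean age is taken as the depth average.
z_top, z_bot = 40.0, 80.0
n = 100_000
mean_age = sum(age(z_top + (z_bot - z_top) * (i + 0.5) / n)
               for i in range(n)) / n
print(round(tau, 1), round(mean_age, 1))
```

    Wells screened deeper than this would sample older water and a larger mean age, which is the behavior the PEM parameterizes through the screen-position ratios.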

  4. Two Higgs doublet models and b → s exclusive decays

    Energy Technology Data Exchange (ETDEWEB)

    Arnan, Pere; Mescia, Federico [Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica (FQA), Institut de Ciencies del Cosmos (ICCUB), Barcelona (Spain); Becirevic, Damir [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique, Orsay (France); Sumensari, Olcyr [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique, Orsay (France); Universidade de Sao Paulo, Instituto de Fisica, Sao Paulo (Brazil)

    2017-11-15

    We computed the leading order Wilson coefficients relevant to all the exclusive b → sl⁺l⁻ decays in the framework of the two Higgs doublet model (2HDM) with a softly broken Z₂ symmetry by including the O(m_b) corrections. We elucidate the issue of appropriate matching between the full and the effective theory when dealing with the (pseudo-)scalar operators, for which keeping the external momenta different from zero is necessary. We then make a phenomenological analysis by using the measured B(B_s → μ⁺μ⁻) and B(B → Kμ⁺μ⁻) at high q², for which the hadronic uncertainties are well controlled, and we discuss their impact on various types of 2HDM. A brief discussion of the decays with τ-leptons in the final state is provided too. (orig.)

  5. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  6. Statistical analysis of time-resolved emission from ensembles of semiconductor quantum dots: interpretation of exponential decay models

    NARCIS (Netherlands)

    van Driel, A.F.; Nikolaev, I.; Vergeer, P.; Lodahl, P.; Vanmaekelbergh, D.; Vos, Willem L.

    2007-01-01

    We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters and the

  7. Semileptonic B decays in the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Wick, Michael

    2010-09-15

    In this thesis we study several aspects of decays based on the quark level transitions b → sνν̄ and b → sμ⁺μ⁻ as well as transition form factors for radiative and rare semileptonic B meson decays. The quark level transition b → sνν̄ offers a transparent study of Z penguin and other electroweak penguin effects in New Physics (NP) scenarios in the absence of dipole operator contributions and Higgs penguin contributions. We present an analysis of B → K*νν̄ with improved form factors and of the decays B → Kνν̄ and B → X_s νν̄ in the Standard Model (SM) and in a number of NP scenarios like the general Minimal Supersymmetric Standard Model (MSSM), general scenarios with modified Z/Z' penguins and in a singlet scalar extension of the SM. The results for the SM and NP scenarios can be transparently visualized in an (ε; η) plane. The rare decay B → K*(→ Kπ)μ⁺μ⁻ is regarded as one of the crucial channels for B physics as it gives rise to a multitude of observables. We investigate systematically the often correlated effects in these observables in the context of the SM and various NP models, in particular the Littlest Higgs model with T-parity and various MSSM scenarios, and identify those observables with small to moderate dependence on hadronic quantities and large impact of NP. Furthermore, we study transition form factors for radiative and rare semileptonic B-meson decays into light pseudoscalar or vector mesons, combining theoretical and phenomenological constraints from Lattice QCD, light-cone sum rules, and dispersive bounds. We pay particular attention to form factor parameterizations which are based on the so-called series expansion, and study the related systematic uncertainties on a quantitative level. In this analysis as well as in the analysis of the b → s transitions, we use consistently a convenient form

  8. pETM: a penalized Exponential Tilt Model for analysis of correlated high-dimensional DNA methylation data.

    Science.gov (United States)

    Sun, Hokeun; Wang, Ya; Chen, Yong; Li, Yun; Wang, Shuang

    2017-06-15

    DNA methylation plays an important role in many biological processes and cancer progression. Recent studies have found that there are also differences in methylation variance between groups, in addition to differences in methylation means. Several methods have been developed that consider both mean and variance signals in order to improve the statistical power of detecting differentially methylated loci. Moreover, as methylation levels of neighboring CpG sites are known to be strongly correlated, methods that incorporate correlations have also been developed. We previously developed a network-based penalized logistic regression for correlated methylation data, but focusing only on mean signals. We have also developed a generalized exponential tilt model that captures both mean and variance signals but examines only one CpG site at a time. In this article, we proposed a penalized Exponential Tilt Model (pETM) using network-based regularization that captures both mean and variance signals in DNA methylation data and takes into account the correlations among nearby CpG sites. By combining the strengths of the two models we previously developed, we demonstrated the superior power and better performance of the pETM method through simulations and applications to the 450K DNA methylation array data of the four breast invasive carcinoma cancer subtypes from The Cancer Genome Atlas (TCGA) project. The developed pETM method identifies many cancer-related methylation loci that were missed by our previously developed method, which considers correlations among nearby methylation loci but not variance signals. The R package 'pETM' is publicly available through CRAN: http://cran.r-project.org . sw2206@columbia.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  9. A Computer-Assisted Learning Model Based on the Digital Game Exponential Reward System

    Science.gov (United States)

    Moon, Man-Ki; Jahng, Surng-Gahb; Kim, Tae-Yong

    2011-01-01

    The aim of this research was to construct a motivational model which would stimulate voluntary and proactive learning using digital game methods offering players more freedom and control. The theoretical framework of this research lays the foundation for a pedagogical learning model based on digital games. We analyzed the game reward system, which…

  10. Precompound decay models for medium energy nuclear reactions

    International Nuclear Information System (INIS)

    Blann, M.

    1989-11-01

    The formulations used for precompound decay models are presented and explained in terms of the physics of the intranuclear cascade model. Several features of spectra of medium energy (10--1000 MeV) reactions are summarized. Results of precompound plus evaporation calculations from the code ALICE are compared with a wide body of proton-, alpha-, and heavy-ion-induced reaction data to illustrate both the power and the deficiencies of predicting the yields of these reactions in the medium energy regime. 23 refs., 13 figs

  11. Searches for rare and non-Standard-Model decays of the Higgs boson

    CERN Document Server

    Sun, Xiaohu; The ATLAS collaboration

    2018-01-01

    Theories beyond the Standard Model predict Higgs boson decays at much enhanced rates compared to the Standard Model, e.g. for decays to a Z boson and a photon or to a meson and a photon, or decays that do not exist in the Standard Model, such as decays into two light bosons (a). This talk presents recent results based on 36 fb⁻¹ of pp collision data collected at 13 TeV.

  12. On stability of exponential cosmological solutions with non-static volume factor in the Einstein-Gauss-Bonnet model

    Energy Technology Data Exchange (ETDEWEB)

    Ivashchuk, V.D. [VNIIMS, Center for Gravitation and Fundamental Metrology, Moscow (Russian Federation); Peoples' Friendship University of Russia (RUDN University), Institute of Gravitation and Cosmology, Moscow (Russian Federation)

    2016-08-15

    An (n + 1)-dimensional gravitational model with a Gauss-Bonnet term and a cosmological constant term is considered. When an ansatz with diagonal cosmological metrics is adopted, the solutions with an exponential dependence of the scale factors, a_i ∝ exp(v^i t), i = 1, ..., n, are analyzed for n > 3. We study the stability of the solutions with non-static volume factor, i.e. K(v) = Σ_{k=1}^{n} v^k ≠ 0. We prove that under a certain restriction R imposed, solutions with K(v) > 0 are stable, while solutions with K(v) < 0 are unstable. Certain examples of stable solutions are presented. We show that the solutions with v¹ = v² = v³ = H > 0 and zero variation of the effective gravitational constant are stable if the restriction R is obeyed. (orig.)

  13. Quantitative Analysis of Memristance Defined Exponential Model for Multi-bits Titanium Dioxide Memristor Memory Cell

    Directory of Open Access Journals (Sweden)

    DAOUD, A. A. D.

    2016-05-01

    The ability to store multiple bits in a single memristor-based memory cell is a key feature for high-capacity memory packages. Studying multi-bit memristor circuits requires high accuracy in modelling the memristance change. A memristor model based on a novel definition of memristance is proposed. A design of a single-memristor memory cell using the proposed model for the platinum-electrode titanium dioxide memristor is illustrated. A specific voltage pulse is used, varying its parameters (amplitude or pulse width) to store different numbers of states in a single memristor. New state-variation parameters associated with the utilized model are provided and their effects on the write and read processes of memristive multi-states are analysed. PSPICE simulations are also held, and they show a good agreement with the data obtained from the analysis.

  14. Exponential sinusoidal model for predicting temperature inside underground wine cellars from a Spanish region

    Energy Technology Data Exchange (ETDEWEB)

    Mazarron, Fernando R.; Canas, Ignacio [Departamento de Construccion y Vias Rurales, Escuela Tecnica Superior de Ingenieros Agronomos, Universidad Politecnica de Madrid, Avda. Complutense s/n, 28040 Madrid (Spain)

    2008-07-01

    This article develops a mathematical model for determining the annual cycle of air temperature inside traditional underground wine cellars in the Spanish region of "Ribera del Duero", known for the quality of its wines. It modifies the sinusoidal analytical model for soil temperature calculation. Results obtained when contrasting the proposed model with experimental data from three subterranean wine cellars over 2 years are satisfactory. The RMSE is below 1 °C and the index of agreement is above 0.96 for the three cellars. When using the average of the experimental data for the 2 years, the results improve noticeably: the RMSE decreases by more than 30% and the mean index of agreement (d) reaches 0.99. This model should be a useful tool for designing underground wine cellars that make the most of the energy advantages of the soil. (author)
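
    The sinusoidal soil-temperature form that such models adapt can be sketched directly. All parameter values below (annual mean, surface amplitude, depth, damping depth, phase day) are invented for illustration, not the paper's fitted values:

```python
import math

# Illustrative parameters (not the paper's fits): annual mean temperature,
# surface amplitude, cellar depth, damping depth, and phase day.
T_mean, A, z, D, t0 = 12.0, 10.0, 8.0, 3.0, 105.0

def cellar_temp(day):
    # Sinusoidal soil-temperature form: the annual wave is exponentially
    # damped and phase-shifted with depth.
    return T_mean + A * math.exp(-z / D) * math.sin(
        2 * math.pi * (day - t0) / 365 - z / D)

# Underground, the annual swing is strongly damped relative to the surface:
swing = A * math.exp(-z / D)
print(round(swing, 2))   # amplitude at depth z, in the same units as A
```

    The strong damping at depth is what makes underground cellars thermally stable year-round, which is the behavior the paper's modified model captures.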

  15. Parton model (Moessbauer) sum rules for b → c decays

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1993-01-01

    The parton model is a starting point or zero-order approximation in many treatments. The author follows an approach previously used for the Moessbauer effect and shows how parton model sum rules derived for certain moments of the lepton energy spectrum in b → c semileptonic decays remain valid even when binding effects are included. The parton model appears as a 'semiclassical' model whose results for certain averages also hold (correspondence principle) in quantum mechanics. Algebraic techniques developed for the Moessbauer effect exploit simple features of the commutator between the weak current operator and the bound state Hamiltonian to find the appropriate sum rules and show the validity of the parton model in the classical limit, ℏ → 0, where all commutators vanish

  16. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  17. A note on exponential dispersion models which are invariant under length-biased sampling

    NARCIS (Netherlands)

    Bar-Lev, S.K.; van der Duyn Schouten, F.A.

    2003-01-01

    Length-biased sampling situations may occur in clinical trials, reliability, queueing models, survival analysis and population studies where a proper sampling frame is absent.In such situations items are sampled at rate proportional to their length so that larger values of the quantity being

  18. Inventory Model for Deteriorating Items Involving Fuzzy with Shortages and Exponential Demand

    Directory of Open Access Journals (Sweden)

    Sharmila Vijai Stanly

    2015-11-01

    This paper considers a fuzzy inventory model for deteriorating items with power demand under fully backlogged conditions. We define the various factors affecting the inventory cost by using the shortage costs. The intention of this paper is to study inventory modelling in a fuzzy environment. Inventory parameters, such as holding cost, shortage cost, purchasing cost and deterioration cost, are assumed to be trapezoidal fuzzy numbers. In addition, an efficient algorithm is developed to determine the optimal policy, and the computational effort and time are small for the proposed algorithm. It is simple to implement, and our approach is illustrated through some numerical examples that demonstrate the application and the performance of the proposed methodology.
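
    The abstract does not state which defuzzification the authors apply to the trapezoidal fuzzy costs; as a representative choice, the sketch below uses graded mean integration with invented cost figures:

```python
# Graded mean integration defuzzifies a trapezoidal fuzzy number (a, b, c, d)
# into the crisp value (a + 2b + 2c + d) / 6. The cost figures are invented.
def graded_mean(a, b, c, d):
    return (a + 2 * b + 2 * c + d) / 6.0

holding_cost = (2.0, 3.0, 4.0, 5.0)      # fuzzy holding cost per unit-period
shortage_cost = (8.0, 9.0, 11.0, 12.0)   # fuzzy shortage cost per unit-period

print(graded_mean(*holding_cost))    # -> 3.5
print(graded_mean(*shortage_cost))   # -> 10.0
```

    The crisp values can then be fed into an ordinary deterministic inventory optimization, which is the general pattern such fuzzy inventory models follow.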

  19. Bayesian exponential random graph modeling of whole-brain structural networks across lifespan

    OpenAIRE

    Sinke, Michel R T; Dijkhuizen, Rick M; Caimo, Alberto; Stam, Cornelis J; Otte, Wim

    2016-01-01

    Descriptive neural network analyses have provided important insights into the organization of structural and functional networks in the human brain. However, these analyses have limitations for inter-subject or between-group comparisons in which network sizes and edge densities may differ, such as in studies on neurodevelopment or brain diseases. Furthermore, descriptive neural network analyses lack an appropriate generic null model and a unifying framework. These issues may be solved with an...

  20. Schematic model studies of double beta decay processes

    International Nuclear Information System (INIS)

    Civitarese, O.

    1996-01-01

    Some features of the nuclear matrix elements, for double beta decay transitions to a final ground state and to a final excited one and two-quadrupole phonon states, are presented and discussed in the framework of a schematic model. The competition between spin-flip and non-spin-flip transitions on the relevant nuclear matrix elements, the effects due to proton-neutron pairing correlations and the effects due to the inclusion of exchange terms in the QRPA matrix are discussed. (Author)

  1. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen Hertäg

    2012-09-01

    For large-scale network simulations, it is often desirable to have computationally tractable neuron models that are still, in a defined sense, physiologically valid. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under the different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster than methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained with fluctuating, 'in-vivo-like' input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data that are widely and easily available.
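
    The authors' closed-form f-I expressions come from an approximation to the AdEx model itself; as a simpler stand-in, the sketch below uses the textbook closed-form firing rate of a non-adaptive leaky integrate-and-fire neuron, with hypothetical membrane parameters, to illustrate what fitting a closed-form f-I curve (rather than integrating the ODEs) looks like:

```python
import math

# Hypothetical membrane parameters: time constant (s), resistance (ohm),
# threshold (V), refractory period (s). These are not the paper's fits.
tau_m, R, V_th, t_ref = 0.02, 1e8, 0.015, 0.002

def firing_rate(I):
    """Steady-state firing rate (Hz) of a simple LIF neuron for current I (A)."""
    V_inf = R * I                     # asymptotic membrane potential
    if V_inf <= V_th:
        return 0.0                    # below rheobase: no spikes
    isi = t_ref + tau_m * math.log(V_inf / (V_inf - V_th))
    return 1.0 / isi

# f-I curve: zero below rheobase, then monotonically increasing.
print([round(firing_rate(i * 1e-10), 1) for i in range(6)])
```

    Because such an expression is evaluated instantly, a fitting loop over (tau_m, R, V_th, t_ref) against measured f-I points runs orders of magnitude faster than simulating the differential equations for every candidate parameter set, which is the speed-up the abstract describes.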

  2. Generalization of exponential based hyperelastic to hyper-viscoelastic model for investigation of mechanical behavior of rate dependent materials.

    Science.gov (United States)

    Narooei, K; Arman, M

    2018-03-01

    In this research, the exponential stretch-based hyperelastic strain energy was generalized to a hyper-viscoelastic model using the hereditary integral of the deformation history to take into account strain rate effects on the mechanical behavior of materials. The hereditary integral was approximated by the approach of Goh et al. to determine the model parameters, and the same estimation was used for constitutive modeling. To demonstrate the ability of the proposed hyper-viscoelastic model, the stress-strain response of thermoplastic elastomer gel tissue at strain rates from 0.001 to 100/s was studied. In addition to better agreement between the current model and experimental data in comparison to the extended Mooney-Rivlin hyper-viscoelastic model, a stable material behavior was predicted for the pure shear and balanced biaxial deformation modes. To present an engineering application of the current model, the Kolsky bar impact test of gel tissue was simulated and the effects of specimen size and inertia on the uniform deformation were investigated. As the mechanical response of polyurea is available over the wide strain rate range of 0.0016-6500/s, the current model was applied to fit these experimental data. The results showed that more accuracy could be expected from the current model than from the extended Ogden hyper-viscoelastic model. In the final verification example, pig skin experimental data were used to determine the parameters of the hyper-viscoelastic model. Subsequently, specimens of pig skin were loaded at different strain rates to a fixed strain and the change of stress with time (stress relaxation) was obtained. The stress relaxation results revealed that the peak stress increases with applied strain rate until the loading rate saturates, and an equilibrium stress of 0.281 MPa could be reached. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Lake Area Analysis Using Exponential Smoothing Model and Long Time-Series Landsat Images in Wuhan, China

    Directory of Open Access Journals (Sweden)

    Gonghao Duan

    2018-01-01

    The loss of lake area significantly influences climate change in a region, and this loss represents a serious and unavoidable challenge to maintaining ecological sustainability where lakes are being filled in. Therefore, mapping and forecasting changes in lakes is critical for protecting the environment and mitigating ecological problems in the urban district. We created an accessible map displaying area changes for 82 lakes in the city of Wuhan using remote sensing data in conjunction with visual interpretation, combining field data with Landsat 2/5/7/8 Thematic Mapper (TM) time-series images for the period 1987–2013. In addition, we applied a quadratic exponential smoothing model to forecast lake area changes in Wuhan. The map provides, for the first time, estimates of lake development in Wuhan using data required for local-scale studies. The model predicted a lake area reduction of 18.494 km² in 2015. The average error reached 0.23 with a correlation coefficient of 0.98, indicating that the model is reliable. The paper provides a numerical analysis and forecasting method for a better understanding of lake area changes. The modeling and mapping results can help assess aquatic habitat suitability and property planning for Wuhan's lakes.
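
    A quadratic exponential smoothing forecast of the kind named in the abstract can be sketched with Brown's one-parameter triple smoothing. The annual area series and smoothing constant below are invented for illustration, not the paper's data:

```python
# Brown's one-parameter quadratic (triple) exponential smoothing; the annual
# lake-area series and alpha below are invented for illustration.
def quadratic_smoothing_forecast(series, alpha, m):
    """Forecast m steps past the end of `series`."""
    s1 = s2 = s3 = series[0]                  # a common initialization choice
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1     # single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2    # double smoothing
        s3 = alpha * s2 + (1 - alpha) * s3    # triple smoothing
    one = 1.0 - alpha
    a = 3 * s1 - 3 * s2 + s3
    b = alpha / (2 * one ** 2) * ((6 - 5 * alpha) * s1
                                  - 2 * (5 - 4 * alpha) * s2
                                  + (4 - 3 * alpha) * s3)
    c = alpha ** 2 / one ** 2 * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m ** 2       # quadratic trend extrapolation

areas = [24.1, 23.5, 22.8, 22.0, 21.1, 20.3, 19.5]   # km^2, hypothetical
print(round(quadratic_smoothing_forecast(areas, 0.5, 2), 2))
```

    For a steadily shrinking series like this one, the extrapolated value two steps ahead falls below the last observation, which is the qualitative behavior a declining-lake-area forecast exhibits.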

  4. An exponential strain dependent Rusinek–Klepaczko model for flow stress prediction in austenitic stainless steel 304 at elevated temperatures

    Directory of Open Access Journals (Sweden)

    Amit Kumar Gupta

    2014-10-01

    In this paper, to predict the flow stress of Austenitic Stainless Steel (ASS) 304 at elevated temperatures, the extended Rusinek–Klepaczko (RK) model has been modified using an exponential strain-dependent term for the dynamic strain aging (DSA) region. Isothermal tensile tests were conducted on ASS 304 over a temperature range of 323–923 K with an interval of 50 K and at strain rates of 0.0001 s−1, 0.001 s−1, 0.01 s−1 and 0.1 s−1. The DSA phenomenon is observed from 623 to 923 K at 0.0001 s−1, 0.001 s−1 and 0.01 s−1. Material constants are calculated using data obtained from these tensile tests for the non-DSA and DSA regions separately. The predictions of the RK model are compared with the experimental data to check the accuracy of the constitutive relation. It is observed that finding the constants of this model requires some initial assumptions, and these initial values affect the predicted values. Hence, a Genetic Algorithm (GA) is used to optimize the constants of the RK model. Statistical measures such as the correlation coefficient, the average absolute error and the standard deviation are used to measure the accuracy of the model. The resulting values of the correlation coefficient for ASS 304 for the non-DSA and DSA regions using the modified extended RK model are 0.9828 and 0.9701. This modified extended RK model is compared with the Johnson–Cook (JC), Zerilli–Armstrong (ZA) and Arrhenius models, and it is observed that, specifically in the DSA region, the modified extended RK model gives highly accurate predictions.

  5. A Nonlinear Super-Exponential Rational Model of Speculative Financial Bubbles

    Science.gov (United States)

    Sornette, D.; Andersen, J. V.

    Keeping a basic tenet of economic theory, rational expectations, we model the nonlinear positive feedback between agents in the stock market as an interplay between nonlinearity and multiplicative noise. The derived hyperbolic stochastic finite-time singularity formula transforms a Gaussian white noise into a rich time series possessing all the stylized facts of empirical prices, as well as accelerated speculative bubbles preceding crashes. We use the formula to invert the two years of price history prior to the recent crash on the Nasdaq (April 2000) and prior to the crash in the Hong Kong market associated with the Asian crisis in early 1994. These complex price dynamics are captured using only one exponent controlling the explosion, plus the variance and mean of the underlying random walk. This offers a new and powerful detection tool for speculative bubbles and herding behavior.

  6. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.

  7. Moment formalisms applied to a solvable model with a quantum phase transition (I). Exponential moment methods

    International Nuclear Information System (INIS)

    Witte, N.S.; Shankar, R.

    1999-01-01

    We examine the Ising chain in a transverse field at zero temperature from the point of view of a family of moment formalisms based upon the cumulant generating function, where we find exact solutions for the generating functions and cumulants at arbitrary couplings and hence for both the ordered and disordered phases of the model. In a t-expansion analysis, the exact Horn-Weinstein function E(t) has cuts along an infinite set of curves in the complex Jt-plane which are confined to the left-hand half-plane Im Jt < −1/4 for the phase containing the trial state (disordered), but are not so confined for the other phase (ordered). For finite couplings the expansion has a finite radius of convergence. Asymptotic forms for this function exhibit a crossover at the critical point, giving the excited-state gap in the ground-state sector for the disordered phase, and the first excited-state gap in the ordered phase. Convergence of the t-expansion with respect to truncation order is found in the disordered phase right up to the critical point, for both the ground-state energy and the excited-state gap. However, convergence is found in only one of the connected moments expansions (CMX), the CMX-LT, and the ground-state energy shows convergence right to the critical point again, although to a limited accuracy.

  8. Artificial intelligence and exponential technologies business models evolution and new investment opportunities

    CERN Document Server

    Corea, Francesco

    2017-01-01

    Artificial Intelligence is a huge breakthrough technology that is changing our world. It requires some degree of technical skill to be developed and understood, so in this book we are going to first of all define AI and categorize it in non-technical language. We will explain how we reached this phase and what historically happened to artificial intelligence in the last century. Recent advancements in machine learning, neuroscience, and artificial intelligence technology will be addressed, and new business models introduced for and by artificial intelligence research will be analyzed. Finally, we will describe the investment landscape through a quite comprehensive study of almost 14,000 AI companies, and we will discuss important features and characteristics of both AI investors and investments. This is the “Internet of Thinks” era. AI is revolutionizing the world we live in. It is augmenting human experience, and it aims to amplify human intelligence in a future not so distant from...

  9. ARRHENIUS MODEL FOR HIGH-TEMPERATURE GLASS VISCOSITY WITH A CONSTANT PRE-EXPONENTIAL FACTOR

    International Nuclear Information System (INIS)

    Hrma, Pavel R.

    2008-01-01

    A simplified form of the Arrhenius equation, ln η = A + B(x)/T, where η is the viscosity, T the temperature, x the composition vector, and A and B the Arrhenius coefficients, was fitted to glass-viscosity data for the processing temperature range (the range at which the viscosity is within 1 to 10³ Pa·s) while setting A = constant and treating B(x) as a linear function of mass fractions of major components. Fitting the Arrhenius equation to over 550 viscosity data of commercial glasses and approximately 1000 viscosity data of nuclear-waste glasses resulted in A values of -11.35 and -11.48, respectively. The R² value ranged from 0.92 to 0.99 for commercial glasses and was 0.98 for waste glasses. The Arrhenius models estimate viscosities for melts of commercial glasses containing 42 to 84 mass% SiO2 within the temperature range of 1100 to 1550°C and viscosity range of 5 to 400 Pa·s, and for waste glasses containing 32 to 60 mass% SiO2 within the temperature range of 850 to 1450°C and viscosity range of 0.4 to 250 Pa·s.
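    With A held constant, fitting ln η = A + B(x)/T for a single glass reduces to one-parameter linear least squares in 1/T; a sketch under that reading (function names and the synthetic data are our own):

    ```python
    import math

    A = -11.35  # fixed pre-exponential coefficient reported for commercial glasses

    def fit_B(temperatures_K, viscosities_Pa_s, a=A):
        """Least-squares estimate of B in ln(eta) = A + B/T with A held constant."""
        x = [1.0 / t for t in temperatures_K]
        y = [math.log(eta) - a for eta in viscosities_Pa_s]
        # With A fixed, the model is y = B*x through the origin:
        return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

    def viscosity(T, B, a=A):
        """Arrhenius viscosity eta = exp(A + B/T)."""
        return math.exp(a + B / T)

    # Synthetic data generated with B = 30000 K are recovered by the fit
    T = [1400.0, 1500.0, 1600.0, 1700.0]
    eta = [viscosity(t, 30000.0) for t in T]
    print(round(fit_B(T, eta), 3))  # ≈ 30000.0
    ```
    
    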

  10. An exponential distribution

    International Nuclear Information System (INIS)

    Anon

    2009-01-01

    In this presentation the author deals with the probabilistic evaluation of product life using the example of the exponential distribution. The exponential distribution is a special one-parameter case of the Weibull distribution.
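    The claim that the exponential distribution is the shape-k = 1 special case of the Weibull distribution is easy to check numerically:

    ```python
    import math

    def weibull_pdf(t, k, lam):
        """Weibull density with shape k and scale lam."""
        return (k / lam) * (t / lam) ** (k - 1) * math.exp(-(t / lam) ** k)

    def exponential_pdf(t, lam):
        """Exponential density with mean lam (rate 1/lam)."""
        return (1.0 / lam) * math.exp(-t / lam)

    # For shape k = 1 the Weibull density reduces to the exponential density
    for t in (0.5, 1.0, 2.0, 5.0):
        assert math.isclose(weibull_pdf(t, k=1.0, lam=2.0), exponential_pdf(t, lam=2.0))
    print("Weibull(k=1) == Exponential")
    ```
    
    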

  11. Preequilibrium decay models and the quantum Green function method

    International Nuclear Information System (INIS)

    Zhivopistsev, F.A.; Rzhevskij, E.S.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow. Inst. Teoreticheskoj i Ehksperimental'noj Fiziki)

    1977-01-01

    The nuclear process mechanism and preequilibrium decay involving complex particles are expounded on the basis of the Green function formalism without the weak-interaction assumptions. The Green function method is generalized to a general nuclear reaction A + α → B + β + γ + ... + ρ, where A is the target nucleus, α is a complex particle in the initial state, B is the final nucleus, and β, γ, ..., ρ are nuclear fragments in the final state. The relationship between the generalized Green function and the S_fi-matrix is established. The resultant equations account for: 1) direct and quasi-direct processes responsible for the angular-distribution asymmetry of the preequilibrium component; 2) the appearance of terms corresponding to the excitation of complex states of the final nucleus; and 3) the relationship between the preequilibrium decay model and the general models of nuclear reaction theories (Lippman-Schwinger formalism). The formulation of preequilibrium emission via the S(T) matrix makes it possible to account in succession for all the differential terms important for investigating the angular-distribution asymmetry of emitted particles.

  12. General method of calculation of any hadronic decay in the 3P0 model

    International Nuclear Information System (INIS)

    Roberts, W.

    1992-01-01

    The 3P0 pair-creation model of hadron decays is generalized to be applicable to the decay of any hadron. The wave function of the decaying hadron is expanded in terms of two clusters. The transition amplitude is derived for any combination of angular momenta, and for general wave functions in momentum space, expanded in terms of Gaussians times polynomials. (authors)

  13. An exponential growth model with decreasing r captures bottom-up effects on the population growth of Aphis glycines Matsumura (Hemiptera: Aphididae)

    NARCIS (Netherlands)

    Costamagna, A.C.; Werf, van der W.; Bianchi, F.J.J.A.; Landis, D.A.

    2007-01-01

    There is ample evidence that the life history and population dynamics of aphids are closely linked to plant phenology. Based on life table studies, it has been proposed that the growth of aphid populations could be modeled with an exponential growth model, with r decreasing linearly with time.
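    An exponential growth model with r decreasing linearly in time, dN/dt = (r0 − b·t)·N, integrates in closed form to N(t) = N0·exp(r0·t − b·t²/2); a sketch cross-checking this against a numerical solution, with illustrative parameter values rather than aphid field estimates:

    ```python
    import math

    def closed_form(N0, r0, b, t):
        """N(t) for dN/dt = (r0 - b*t) * N."""
        return N0 * math.exp(r0 * t - 0.5 * b * t * t)

    def euler(N0, r0, b, t_end, dt=1e-4):
        """Simple Euler integration of the same ODE, for cross-checking."""
        N, t = N0, 0.0
        while t < t_end - 1e-12:
            N += (r0 - b * t) * N * dt
            t += dt
        return N

    N0, r0, b = 10.0, 0.3, 0.01   # illustrative values only
    exact = closed_form(N0, r0, b, 20.0)
    approx = euler(N0, r0, b, 20.0)
    print(round(exact, 2), round(approx, 2))
    ```
    
    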

  14. Exponential Stability of Switched Positive Homogeneous Systems

    Directory of Open Access Journals (Sweden)

    Dadong Tian

    2017-01-01

    Full Text Available This paper studies the exponential stability of switched positive nonlinear systems defined by cooperative and homogeneous vector fields. In order to capture the decay rate of such systems, we first consider the subsystems. A sufficient condition for exponential stability of subsystems with time-varying delays is derived. In particular, for the corresponding delay-free systems, we prove that this sufficient condition is also necessary. Then, we present a sufficient condition of exponential stability under minimum dwell time switching for the switched positive nonlinear systems. Some results in the previous literature are extended. Finally, a numerical example is given to demonstrate the effectiveness of the obtained results.

  15. Angular correlations in top quark decays in standard model extensions

    International Nuclear Information System (INIS)

    Batebi, S.; Etesami, S. M.; Mohammadi-Najafabadi, M.

    2011-01-01

    The CMS Collaboration at the CERN LHC has searched for t-channel single top quark production using the spin correlation of the t-channel. The signal extraction and cross-section measurement rely on the angular distribution of the charged lepton in top quark decays, i.e. the angle between the charged lepton momentum and the top spin in the top rest frame. The angular distribution has a distinct slope for the t-channel single top (signal) while it is flat for the backgrounds. In this Brief Report, we investigate the contributions which this spin correlation may receive from a two-Higgs-doublet model, top-color-assisted technicolor (TC2) and the noncommutative extension of the standard model.

  16. Decay of the standard model Higgs field after inflation

    CERN Document Server

    Figueroa, Daniel G; Torrenti, Francisco

    2015-01-01

    We study the nonperturbative dynamics of the Standard Model (SM) after inflation, in the regime where the SM is decoupled from (or weakly coupled to) the inflationary sector. We use classical lattice simulations in an expanding box in (3+1) dimensions, modeling the SM gauge interactions with both global and Abelian-Higgs analogue scenarios. We consider different post-inflationary expansion rates. During inflation, the Higgs forms a condensate, which starts oscillating soon after inflation ends. Via nonperturbative effects, the oscillations lead to a fast decay of the Higgs into the SM species, transferring most of the energy into $Z$ and $W^\pm$ bosons. All species are initially excited far away from equilibrium, but their interactions lead them into a stationary stage, with exact equipartition among the different energy components. From there on the system eventually reaches equilibrium. We have characterized in detail, in the different expansion histories considered, the evolution of the Higgs and of its ...

  17. Modeling Exponential Population Growth

    Science.gov (United States)

    McCormick, Bonnie

    2009-01-01

    The concept of population growth patterns is a key component of understanding evolution by natural selection and population dynamics in ecosystems. The National Science Education Standards (NSES) include standards related to population growth in sections on biological evolution, interdependence of organisms, and science in personal and social…

  18. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    Science.gov (United States)

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R²) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated using the thermodynamic equilibrium coefficients.
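    A double exponential model of the kind referenced writes uptake as the sum of a rapid and a slow first-order phase approaching the equilibrium capacity; a sketch with illustrative (not fitted) parameters:

    ```python
    import math

    def dem(t, q_e, A1, k1, A2, k2):
        """Double exponential model: q(t) = q_e - A1*exp(-k1*t) - A2*exp(-k2*t).

        The fast phase (large k1) dominates early uptake; the slow phase
        (small k2) controls the approach to the equilibrium capacity q_e."""
        return q_e - A1 * math.exp(-k1 * t) - A2 * math.exp(-k2 * t)

    # Illustrative parameters only (not values from the paper)
    q_e, A1, k1, A2, k2 = 10.0, 6.0, 1.0, 4.0, 0.05

    assert dem(0.0, q_e, A1, k1, A2, k2) == 0.0       # nothing adsorbed at t = 0
    assert abs(dem(500.0, q_e, A1, k1, A2, k2) - q_e) < 1e-9  # approaches q_e
    print(round(dem(5.0, q_e, A1, k1, A2, k2), 3))    # fast phase essentially done by t = 5
    ```
    
    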

  19. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    Directory of Open Access Journals (Sweden)

    İsmail Tosun

    2012-03-01

    Full Text Available The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R²) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated using the thermodynamic equilibrium coefficients.

  20. An Exponential Regression Model Reveals the Continuous Development of B Cell Subpopulations Used as Reference Values in Children

    Directory of Open Access Journals (Sweden)

    Christoph Königs

    2018-05-01

    Full Text Available B lymphocytes are key players in humoral immunity, expressing diverse surface immunoglobulin receptors directed against specific antigenic epitopes. The development and profile of distinct subpopulations have gained awareness in the setting of primary immunodeficiency disorders, primary or secondary autoimmunity and as therapeutic targets of specific antibodies in various diseases. The major B cell subpopulations in peripheral blood include naïve (CD19+ or CD20+IgD+CD27−), non-switched memory (CD19+ or CD20+IgD+CD27+) and switched memory B cells (CD19+ or CD20+IgD−CD27+). Furthermore, less common B cell subpopulations have also been described as having a role in the suppressive capacity of B cells to maintain self-tolerance. Data on reference values for B cell subpopulations are limited and only available for older age groups, neglecting the continuous process of human B cell development in children and adolescents. This study was designed to establish an exponential regression model to produce continuous reference values for main B cell subpopulations to reflect the dynamic maturation of the human immune system in healthy children.

  1. Interspike interval correlation in a stochastic exponential integrate-and-fire model with subthreshold and spike-triggered adaptation.

    Science.gov (United States)

    Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin

    2015-06-01

    We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
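    The adaptive exponential integrate-and-fire model referenced combines an exponential spike-generating current with subthreshold (parameter a) and spike-triggered (parameter b) adaptation. A minimal noise-free Euler sketch of the mean-driven tonic regime, with illustrative parameters of our own choosing (the paper adds white Gaussian current noise on top of this):

    ```python
    import math

    def adex_spike_times(I=500.0, t_end=500.0, dt=0.01):
        """Euler integration of a deterministic adaptive exponential
        integrate-and-fire (AdEx) neuron:

            C dV/dt = -gL*(V - EL) + gL*dT*exp((V - VT)/dT) - w + I
            tau_w dw/dt = a*(V - EL) - w
            on spike (V >= Vpeak): V -> Vr, w -> w + b

        Parameter values below are illustrative textbook-scale numbers
        (pF, nS, mV, ms, pA), not fitted values from the paper."""
        C, gL, EL, VT, dT = 200.0, 10.0, -70.0, -50.0, 2.0
        a, tau_w, b, Vr, Vpeak = 2.0, 30.0, 60.0, -58.0, 0.0
        V, w, t, spikes = EL, 0.0, 0.0, []
        while t < t_end:
            dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT) - w + I) / C
            dw = (a * (V - EL) - w) / tau_w
            V += dt * dV
            w += dt * dw
            if V >= Vpeak:          # spike: reset plus spike-triggered adaptation
                spikes.append(t)
                V, w = Vr, w + b
            t += dt
        return spikes

    spikes = adex_spike_times()
    isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
    print(len(spikes), "spikes; first ISIs (ms):", [round(i, 2) for i in isis[:3]])
    ```

    With the adaptation variable w accumulating after each spike, the early interspike intervals lengthen until the neuron settles into steady tonic firing.
    
    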

  2. Foundations and models of pre-equilibrium decay

    International Nuclear Information System (INIS)

    Bunakov, V.E.

    1980-01-01

    A review is given of the presently existing microscopic, semi-phenomenological and phenomenological models used for the description of nuclear reactions. Their advantages and drawbacks are analyzed. Special attention is given to the analysis of phenomenological models of pre-equilibrium decay based on the use of master equations (time-dependent versions of exciton models, intranuclear cascade, etc.). A version of the unified theory of nuclear reactions is discussed which makes use of quantum master equations for finite open systems. The conditions are formulated for the derivation of these equations from the time-dependent Schroedinger equation for the many-body problem. The various models of nuclear reactions used in practice are shown to be approximate solutions of master equations for finite open systems. From this point of view an analysis is carried out of these models' reliability in describing experimental data. Possible modifications are considered which provide for better agreement between the different models and for a more exact description of experimental data. (author)

  3. Semileptonic Bc decays in the light-front quark model

    International Nuclear Information System (INIS)

    Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2010-01-01

    We investigate the exclusive semileptonic B_c → (D, η_c, B, B_s)ℓν_ℓ and η_b → B_c ℓν_ℓ (ℓ = e, μ, τ) decays using the light-front quark model constrained by the variational principle for the QCD-motivated effective Hamiltonian. The form factors f_+(q²) and f_−(q²) are obtained from the analytic continuation method in the q⁺ = 0 frame. While the form factor f_+(q²) is free from the zero mode, the form factor f_−(q²) is not free from the zero mode in the q⁺ = 0 frame. Using our effective method to relate the non-wave-function vertex to the light-front valence wave function, we incorporate the zero-mode contribution as a convolution of the zero-mode operator with the initial- and final-state wave functions.

  4. Top quark decays with flavor violation in extended models

    International Nuclear Information System (INIS)

    Aranda, J I; Gómez, D E; Ramírez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I

    2016-01-01

    We analyze the top quark decays t → cg and t → cγ mediated by a new neutral gauge boson, identified as Z', in the context of the sequential Z model. We focus our attention on the corresponding branching ratios, which are a function of the Z' boson mass. The study range is taken from 2 TeV to 6 TeV, which is compatible with the resonant region of the dileptonic channel reported by ATLAS and CMS Collaborations. Finally, our preliminary results tell us that the branching ratios of t → cg and t → cγ processes can be of the order of 10⁻¹¹ and 10⁻¹³, respectively. (paper)

  5. α-ternary decay of Cf isotopes, statistical model

    International Nuclear Information System (INIS)

    Joseph, Jayesh George; Santhosh, K.P.

    2017-01-01

    The process of splitting a heavy nucleus into three simultaneous fragments is termed ternary fission; compared to the usual binary fission, it is a rare process. Depending on the nature of the third particle, the process is called light-charged-particle (LCP) accompanied fission if the third particle is light, or true ternary fission if all three fragments have nearly the same mass distribution. Since the experimental observations of the early seventies, theoretical studies of ternary fission, at first proceeding at a slow pace, have become a hot topic in nuclear decay studies, especially in the past decade. Meanwhile, various models have been developed, existing ones modified, and new ones sought, in the hope of shedding a little more light on the profound nature of the nuclear interaction. In this study a statistical method, the level density formulation, has been employed.

  6. Exponential Expansion in Evolutionary Economics

    DEFF Research Database (Denmark)

    Frederiksen, Peter; Jagtfelt, Tue

    2013-01-01

    This article attempts to solve current problems of conceptual fragmentation within the field of evolutionary economics. One of the problems, as noted by a number of observers, is that the field suffers from an assemblage of fragmented and scattered concepts (Boschma and Martin 2010). A solution to this problem is proposed in the form of a model of exponential expansion. The model outlines the overall structure and function of the economy as exponential expansion. The pictographic model describes four axiomatic concepts and their exponential nature. The interactive, directional, emerging and expanding concepts are described in detail. Taken together it provides the rudimentary aspects of an economic system within an analytical perspective. It is argued that the main dynamic processes of the evolutionary perspective can be reduced to these four concepts. The model and concepts are evaluated in the light...

  7. Estimating exponential scheduling preferences

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid

    2015-01-01

    Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily... of car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: One issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal utility of being at the origin. Another issue is that models with the exponential marginal utility formulation suffer from empirical identification problems. Though our results are not decisive, they partly support the constant-affine specification, in which the value of travel time variability...

  8. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    Science.gov (United States)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

    An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used for describing the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power-series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all possible time domains.

  9. Universality in stochastic exponential growth.

    Science.gov (United States)

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
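    The deterministic skeleton of the Hinshelwood cycle already shows why such autocatalytic cycles give exponential growth: for two species catalyzing each other's production, dX1/dt = k1·X2 and dX2/dt = k2·X1, the asymptotic growth rate is sqrt(k1·k2), the geometric mean of the rates. A sketch, leaving out the stochasticity of the full master-equation treatment:

    ```python
    import math

    def hinshelwood_growth_rate(k1, k2, t_end=50.0, dt=1e-4):
        """Euler-integrate the deterministic two-species Hinshelwood cycle
            dX1/dt = k1*X2,  dX2/dt = k2*X1
        and estimate the late-time exponential growth rate from the second
        half of the run, where the dominant eigenvalue sqrt(k1*k2) dominates."""
        x1, x2 = 1.0, 1.0
        n = int(round(t_end / dt))
        half = n // 2
        x_half = None
        for i in range(1, n + 1):
            x1, x2 = x1 + dt * k1 * x2, x2 + dt * k2 * x1
            if i == half:
                x_half = x1
        return math.log(x1 / x_half) / ((n - half) * dt)

    k1, k2 = 0.2, 0.8
    rate = hinshelwood_growth_rate(k1, k2)
    print(round(rate, 4), round(math.sqrt(k1 * k2), 4))  # rate ≈ sqrt(k1*k2) = 0.4
    ```
    
    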

  10. Alpha decay and cluster decay of some neutron-rich actinide nuclei

    Indian Academy of Sciences (India)

    2017-02-09

    Nuclei in the actinide region are good candidates for exhibiting cluster radioactivity. In the present work, the half-lives of α-decay and heavy cluster emission from certain actinide nuclei have been calculated using the cubic plus Yukawa plus exponential model (CYEM). Our model has a cubic potential for the ...

  11. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

    The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been reviewing decay heat removal systems. The reference system for the calculations is Hitachi's SBWR concept. The calculations of energy transfer to the suppression pool were made using two different fluid mechanics codes, namely FIDAP and PHOENICS. FIDAP is based on finite element methodology and PHOENICS uses finite differences. The reason for choosing these codes was to compare their modelling and calculating abilities. The thermal stratification behaviour and the natural circulation were modelled with several turbulent flow models. Also, energy transport to the suppression pool was calculated for laminar flow conditions. These calculations required a large amount of computer resources, so the CRAY supercomputer of the state computing centre was used. The results of the calculations indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully, and whenever possible, experimentally determined parameters should be used as input to enhance the code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  12. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    Full Text Available A new mixture of the Modified Exponential (ME) and Poisson distributions has been introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we have derived the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We have also investigated some mathematical properties of the distribution along with information entropies and order statistics of the distribution. The estimation of parameters has been obtained using the Maximum Likelihood Estimation procedure. Finally, we have illustrated a real data application of our distribution.
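    For any baseline CDF F, the maximum of N i.i.d. draws with N following a zero-truncated Poisson(λ) has CDF G(x) = (e^{λF(x)} − 1)/(e^λ − 1), which is the compounding behind this construction; a sketch using a plain exponential baseline for illustration (the paper's baseline is the Modified Exponential, whose CDF would be substituted):

    ```python
    import math

    def baseline_cdf(x, theta):
        """Illustrative baseline: plain exponential CDF (substitute the
        Modified Exponential CDF here for the paper's construction)."""
        return 1.0 - math.exp(-theta * x) if x > 0 else 0.0

    def compound_max_cdf(x, lam, theta):
        """CDF of max(X1..XN) with N ~ zero-truncated Poisson(lam):
        G(x) = (exp(lam*F(x)) - 1) / (exp(lam) - 1)."""
        return (math.exp(lam * baseline_cdf(x, theta)) - 1.0) / (math.exp(lam) - 1.0)

    lam, theta = 2.0, 0.5
    assert compound_max_cdf(0.0, lam, theta) == 0.0              # G(0) = 0
    assert abs(compound_max_cdf(1e6, lam, theta) - 1.0) < 1e-12  # G(inf) = 1
    # G is a proper, strictly increasing CDF on the positive axis
    xs = [0.1 * i for i in range(1, 100)]
    vals = [compound_max_cdf(x, lam, theta) for x in xs]
    assert all(a < b for a, b in zip(vals, vals[1:]))
    print("valid CDF")
    ```
    
    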

  13. Dynamics of exponential maps

    OpenAIRE

    Rempe, Lasse

    2003-01-01

    This thesis contains several new results about the dynamics of exponential maps $z \mapsto \exp(z)+\kappa$. In particular, we prove that periodic external rays of exponential maps with nonescaping singular value always land. This is an analog of a theorem of Douady and Hubbard for polynomials. We also answer a question of Herman, Baker and Rippon by showing that the boundary of an unbounded exponential Siegel disk always contains the singular value. In addition to the presentation of new resul...

  14. Non-leptonic decays in an extended chiral quark model

    Energy Technology Data Exchange (ETDEWEB)

    Eeg, J. O. [Dept. of Physics, Univ. of Oslo, P.O. Box 1048 Blindern, N-0316 Oslo (Norway)

    2012-10-23

    We consider the color-suppressed (nonfactorizable) amplitude for the decay mode B_d⁰ → π⁰π⁰. We treat the b-quark in the heavy-quark limit and the energetic light (u, d, s) quarks within a variant of Large Energy Effective Theory combined with an extension of chiral quark models. Our calculated amplitude for B_d⁰ → π⁰π⁰ is suppressed by a factor of order Λ_QCD/m_b with respect to the factorized amplitude, as it should be according to QCD factorization. Further, for reasonable values of the (model-dependent) gluon condensate and the constituent quark mass, the calculated nonfactorizable amplitude for B_d⁰ → π⁰π⁰ can easily accommodate the experimental value. Unfortunately, the color-suppressed amplitude is very sensitive to the values of these model-dependent parameters. Therefore fine-tuning is necessary in order to obtain an amplitude compatible with the experimental result for B_d⁰ → π⁰π⁰.

  15. Decay modes of two repulsively interacting bosons

    International Nuclear Information System (INIS)

    Kim, Sungyun; Brand, Joachim

    2011-01-01

    We study the decay of two repulsively interacting bosons tunnelling through a delta potential barrier by a direct numerical solution of the time-dependent Schroedinger equation. The solutions are analysed according to the regions of particle presence: both particles inside the trap (in-in), one particle in and one particle out (in-out) and both particles outside (out-out). It is shown that the in-in probability is dominated by the exponential decay, and its decay rate is predicted very well from outgoing boundary conditions. Up to a certain range of interaction strength, the decay of in-out probability is dominated by the single-particle decay mode. The decay mechanisms are adequately described by simple models.
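
    The exponential decay of the in-in probability described above can be illustrated with a small, self-contained sketch (our construction, not the paper's code; the decay rate and time grid are assumed values):

```python
import numpy as np

# Hypothetical illustration: recover a decay rate Gamma from an
# exponentially decaying "in-in" survival probability P(t) = exp(-Gamma*t),
# standing in for the numerically computed tunnelling probability.
gamma_true = 0.35                      # assumed decay rate (arbitrary units)
t = np.linspace(0.0, 10.0, 200)
p_in_in = np.exp(-gamma_true * t)      # stand-in for the computed probability

# A log-linear least-squares fit recovers the rate.
slope, _ = np.polyfit(t, np.log(p_in_in), 1)
gamma_fit = -slope
print(round(gamma_fit, 3))             # → 0.35
```

    In the paper the rate is instead predicted from outgoing boundary conditions; the fit above only shows how a rate would be read off a computed survival probability.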

  16. Life prediction of OLED for constant-stress accelerated degradation tests using luminance decaying model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jianping, E-mail: jpzhanglzu@163.com [College of Energy and Mechanical Engineering, Shanghai University of Electric Power, Shanghai 200090 (China); Li, Wenbin [College of Energy and Mechanical Engineering, Shanghai University of Electric Power, Shanghai 200090 (China); Cheng, Guoliang; Chen, Xiao [Shanghai Tianyi Electric Co., Ltd., Shanghai 201611 (China); Wu, Helen [School of Computing, Engineering and Mathematics, University of Western Sydney, Sydney 2751 (Australia); Herman Shen, M.-H. [Department of Mechanical and Aerospace Engineering, The Ohio State University, OH 43210 (United States)

    2014-10-15

    In order to acquire the life information of organic light emitting diodes (OLED), three groups of constant-stress accelerated degradation tests are performed to obtain the luminance decaying data of samples, with the luminance selected as the indicator of performance degradation and the current as the test stress. A Weibull function is applied to describe the relationship between luminance decay and time, the least squares method (LSM) is employed to calculate the shape and scale parameters, and the life prediction of OLED is achieved. The numerical results indicate that the accelerated degradation test and the luminance decaying model reveal the luminance decay law of OLED. The luminance decaying formula fits the test data very well, and the average error of the fitted values compared with the test data is small. Furthermore, the accuracy of the OLED life predicted by the luminance decaying model is high, which enables rapid estimation of OLED life and provides significant guidelines to help engineers make design and manufacturing decisions from the standpoint of reliability. - Highlights: • We gain luminance decaying data by accelerated degradation tests on OLED. • The luminance decaying model objectively reveals the decaying law of OLED luminance. • The least squares method (LSM) is employed to calculate Weibull parameters. • The plan designed for accelerated degradation tests proves to be feasible. • The accuracy of the OLED life and the luminance decaying fitting formula is high.
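
    As a rough sketch of the fitting procedure described above (not the authors' code, and with made-up parameter values), the Weibull decay model can be linearized and fitted by least squares:

```python
import numpy as np

# Weibull luminance decay L(t)/L0 = exp(-(t/eta)**beta); beta is the
# shape parameter and eta the scale parameter. Values are illustrative.
beta_true, eta_true = 1.4, 5000.0            # hypothetical shape, scale (hours)
t = np.linspace(100.0, 8000.0, 50)
lum = np.exp(-(t / eta_true) ** beta_true)   # simulated relative luminance

# The double-log transform ln(-ln(L/L0)) = beta*ln(t) - beta*ln(eta)
# turns the model into a straight line, so LSM reduces to a 1-D polyfit.
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(lum)), 1)
beta_fit = slope
eta_fit = np.exp(-intercept / slope)
print(round(beta_fit, 2), round(eta_fit, 0))   # → 1.4 5000.0
```

    With real test data the same transform is applied to the measured relative luminance, and the fitted (beta, eta) then feed the life prediction.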

  17. Life prediction of OLED for constant-stress accelerated degradation tests using luminance decaying model

    International Nuclear Information System (INIS)

    Zhang, Jianping; Li, Wenbin; Cheng, Guoliang; Chen, Xiao; Wu, Helen; Herman Shen, M.-H.

    2014-01-01

    In order to acquire the life information of organic light emitting diodes (OLED), three groups of constant-stress accelerated degradation tests are performed to obtain the luminance decaying data of samples, with the luminance selected as the indicator of performance degradation and the current as the test stress. A Weibull function is applied to describe the relationship between luminance decay and time, the least squares method (LSM) is employed to calculate the shape and scale parameters, and the life prediction of OLED is achieved. The numerical results indicate that the accelerated degradation test and the luminance decaying model reveal the luminance decay law of OLED. The luminance decaying formula fits the test data very well, and the average error of the fitted values compared with the test data is small. Furthermore, the accuracy of the OLED life predicted by the luminance decaying model is high, which enables rapid estimation of OLED life and provides significant guidelines to help engineers make design and manufacturing decisions from the standpoint of reliability. - Highlights: • We gain luminance decaying data by accelerated degradation tests on OLED. • The luminance decaying model objectively reveals the decaying law of OLED luminance. • The least squares method (LSM) is employed to calculate Weibull parameters. • The plan designed for accelerated degradation tests proves to be feasible. • The accuracy of the OLED life and the luminance decaying fitting formula is high.

  18. Phenomenology of stochastic exponential growth

    Science.gov (United States)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet the literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead, it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
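
    The abstract's argument against GBM can be checked numerically. The sketch below (ours, with arbitrary drift and noise values) samples exact GBM solutions and shows that the spread of the mean-rescaled variable keeps growing with time, so its distribution cannot become stationary:

```python
import numpy as np

# GBM: X_t = X_0 * exp((mu - sigma^2/2) t + sigma W_t). For GBM the
# variance of X_t / E[X_t] is exp(sigma^2 t) - 1, which grows without
# bound, contradicting a stationary mean-rescaled distribution.
rng = np.random.default_rng(0)
mu, sigma, n = 0.5, 0.3, 200_000       # assumed drift, noise, sample count
spreads = []
for t in (1.0, 4.0):
    w = rng.normal(0.0, np.sqrt(t), n)                  # W_t ~ N(0, t)
    x = np.exp((mu - 0.5 * sigma**2) * t + sigma * w)   # X_0 = 1
    spreads.append((x / x.mean()).std())
print(spreads[1] > spreads[0])         # → True: the spread keeps growing
```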

  19. Advantage of make-to-stock strategy based on linear mixed-effect model: a comparison with regression, autoregressive, time series, and exponential smoothing models

    Directory of Open Access Journals (Sweden)

    Yu-Pin Liao

    2017-11-01

    Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting a MTS strategy. The forecasting result from the LME model is compared to commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance of the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence the demands of all types can be predicted simultaneously using a single LME model. In contrast, some approaches require splitting the data into different type categories and then predicting the type demand by establishing a model for each type. This feature also demonstrates the practicability of the LME model in real business operations.
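
    A minimal sketch of why a random effect over product types helps (our toy numbers, not the company's data): the mixed-model prediction shrinks each type's mean toward the grand mean, with stronger shrinkage for sparsely observed types.

```python
import numpy as np

# Classic random-intercept shrinkage (BLUP) with assumed variance
# components; all numbers here are illustrative.
sigma2_e, sigma2_u = 4.0, 1.0              # residual and between-type variances
type_counts = np.array([3, 30, 300])       # observations per product type
type_means = np.array([10.0, 12.0, 11.0])  # observed mean demand per type
grand_mean = 11.0

# The shrinkage weight grows with the amount of data for each type.
weight = type_counts * sigma2_u / (type_counts * sigma2_u + sigma2_e)
blup = grand_mean + weight * (type_means - grand_mean)
print(np.round(blup, 2))   # sparse types are pulled hardest toward 11.0
```

    A full LME fit would estimate sigma2_e and sigma2_u from the data and predict all types within one model, which is the practical advantage the abstract highlights.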

  20. Pionic and radiative decays of vector mesons in the chiral bag model

    International Nuclear Information System (INIS)

    Araki, M.; Osaka Univ.; Council for Scientific and Industrial Research, Pretoria; Flinders Univ. of South Australia, Bedford Park. School of Physical Sciences)

    1986-01-01

    It is shown that a mechanism, within the framework of the cloudy bag model, analogous to that for e+e- → 2γ in QED accounts qualitatively for the decays ρ → 2π, ω → πγ and ρ → πγ with bag radii of 0.8-1.0 fm and averaged momenta for the decay particles. For the radiative decays, the process identical to that in the vector-dominance model gives about 60% of the total calculated width. It also explains the small decay widths previously calculated using the single-quark-transition process. (orig.)

  1. Search for the Standard Model Higgs boson produced in the decay ...

    Indian Academy of Sciences (India)

    2012-10-06

    Oct 6, 2012 ... s = 7 TeV. No evidence is found for a significant deviation from Standard Model expectations anywhere in the ZZ mass range considered in this analysis. An upper limit at 95% CL is placed on the product of the cross-section and decay branching ratio for the Higgs boson decaying with Standard Model-like ...

  2. An exponential chemorheological model for viscosity dependence on degree-of-cure of a polyfurfuryl alcohol resin during the post-gel curing stage

    DEFF Research Database (Denmark)

    Dominguez, J.C.; Oliet, M.; Alonso, María Virginia

    2016-01-01

    of modeling the evolution of the complex viscosity using a widely used chemorheological model such as the Arrhenius model for each tested temperature, the change of the complex viscosity as a function of the degree-of-cure was predicted using a new exponential type model. In this model, the logarithm...... of the normalized degree-of-cure is used to predict the behavior of the logarithm of the normalized complex viscosity. The model shows good quality of fitting with the experimental data for 4 and 6 wt % amounts of catalyst. For the 2 wt % amount of catalyst, scattered data leads to a slightly lower quality...

  3. Optimization of the output of a solar cell through theoretical and experimental study of the one- and two-exponential models

    Directory of Open Access Journals (Sweden)

    Benyoucef B.

    2012-06-01

    Full Text Available The production of electricity based on the conversion of sunlight by photovoltaic cells containing crystalline silicon is the most widely used approach at the technological and industrial level. Consequently, the development of terrestrial applications for energy production requires high-efficiency, low-cost cells. The aim of our work is to present a comparative study between theoretical and experimental models of a silicon solar cell of the PHYWE type (four cells of 80 mm diameter connected in series) in order to improve photovoltaic performance. This study led to the determination of the parameters of the cell from the current-voltage characteristic, the influence of the luminous flux on this characteristic, as well as the effect of the incident photons on the solar cell. We justify the interest of using the two-exponential model for output optimization by underlining the insufficiency of the one-exponential model.
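
    The difference between the two models can be sketched as follows (our illustration with assumed parameter values; series and shunt resistances are neglected so the current stays explicit in the voltage):

```python
import numpy as np

# One-exponential (single-diode) vs two-exponential (double-diode) cell:
# I(V) = Iph - I01*(exp(V/(n1*Vt)) - 1) - I02*(exp(V/(n2*Vt)) - 1)
VT = 0.02585                    # thermal voltage at ~300 K, in volts

def cell_current(v, i_ph, i01, n1=1.0, i02=0.0, n2=2.0):
    i = i_ph - i01 * (np.exp(v / (n1 * VT)) - 1.0)
    return i - i02 * (np.exp(v / (n2 * VT)) - 1.0)

v = np.linspace(0.0, 0.6, 601)
one_exp = cell_current(v, i_ph=3.0, i01=1e-9)             # amps, assumed
two_exp = cell_current(v, i_ph=3.0, i01=1e-9, i02=1e-6)   # extra diode term

# The second (recombination) diode lowers the curve near the maximum
# power point, which is why the two-exponential model fits real cells better.
print((v * one_exp).max() > (v * two_exp).max())          # → True
```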

  4. Lepton-flavour violating $B$ decays in generic $Z'$ models

    CERN Document Server

    Crivellin, Andreas; Matias, Joaquim; Nierste, Ulrich; Pokorski, Stefan; Rosiek, Janusz

    2015-01-01

    LHCb has reported deviations from the SM expectations in $B\\to K^* \\mu^+\\mu^-$ angular observables, in $B_s\\to\\phi\\mu^+\\mu^-$ and in ratio $R(K)=Br[B\\to K \\mu^+\\mu^-]/Br[B\\to K e^+e^-]$. For all three decays, a heavy neutral gauge boson mediating $b\\to s\\mu^+\\mu^-$ transitions is a prime candidate for an explanation. As $R(K)$ measures violation of lepton-flavour universality, it is interesting to examine the possibility that also lepton flavour is violated. In this article, we investigate the perspectives to discover the lepton-flavour violating modes $B\\to K^{(*)}\\tau^\\pm\\mu^\\mp$, $B_s\\to \\tau^\\pm\\mu^\\mp$ and $B\\to K^{(*)} \\mu^\\pm e^\\mp$, $B_s\\to \\mu^\\pm e^\\mp$. For this purpose we consider a simplified model in which new-physics effects originate from an additional neutral gauge boson ($Z^\\prime$) with generic couplings to quarks and leptons. The constraints from $\\tau\\to3\\mu$, $\\tau\\to\\mu\

  5. Nucleon decay in supersymmetric models via gluino dressed graphs

    International Nuclear Information System (INIS)

    Chadha, S.; Daniel, M.; Coughlan, G.D.; Ross, G.G.

    1984-06-01

    Dimension-five baryon-number-violating operators may contribute to proton decay via gaugino exchange, which converts them to the usual dimension-six operators. The authors argue that gluino-exchange contributions may be expected to dominate for a large top quark mass (msub(t) >approx. 40 GeV). In this case the dominant decay modes are p → K 0 μ + , K + ν and n → ν-barK 0 , μ + π - , K 0 π - μ + . (author)

  6. Decay properties of heavy leptons in the supersymmetric model of weak and electromagnetic interactions

    International Nuclear Information System (INIS)

    Egorian, Ed.

    1979-01-01

    Decay properties of heavy leptons in the SU(2)xSU(2)xU(1) supersymmetric model of weak and electromagnetic interactions are studied. The leptonic decays l anti-νsub(e)ν and semihadronic decays ν(νsup(c))h, where l are leptons and h are hadrons, are considered. The partial and total decay rates, as well as the production of one of the heavy leptons in p anti-p collisions, are estimated for various values of its mass

  7. Exponential Potential versus Dark Matter

    Science.gov (United States)

    1993-10-15

    A two parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.

  8. Jellium-model calculation for monomer and dimer decays of some potassium clusters

    International Nuclear Information System (INIS)

    Saito, Susumu; Cohen, M.L.; Lawrence Berkeley Lab., CA

    1989-01-01

    We have studied several decay processes of potassium clusters and found that a dimer-decay mechanism can explain the observed lowest abundance of K 10 in the K n mass spectra. Total-energy curves for decay processes are calculated using a jellium-background model for positive-ion cores and the local-spin-density-functional approximation for valence electrons. The energy-barrier height for a dimer decay of K 10 from the energy-minimum point is found to be 0.18 eV, which is a reasonable magnitude for the decay to take place thermally in the experiment. The monomer decay of K 9 and the dimer decay of K 11 , which are expected to be the most favorable decays of K 9 and K 11 , are found to have high barriers. Monomer and dimer decays of K 8 are also studied and the monomer decay is found to be more favorable, in accord with the high-nozzle-temperature mass spectrum. (orig.)

  9. Decaying dark matter in supersymmetric SU(5) models

    International Nuclear Information System (INIS)

    Luo Mingxing; Wang Liucheng; Wu Wei; Zhu Guohuai

    2010-01-01

    Motivated by recent observations from PAMELA, Fermi and H.E.S.S., we consider dark matter decays in the framework of supersymmetric SU(5) grand unification theories. An SU(5) singlet S is assumed to be the main component of dark matter, which decays into visible particles through dimension six operators suppressed by the grand unification scale. Under certain conditions, S decays dominantly into a pair of sleptons with universal coupling for all generations. Subsequently, electrons and positrons are produced from cascade decays of these sleptons. These cascade decay chains smooth the e + +e - spectrum, which naturally permits a good fit to the Fermi-LAT data. The observed positron fraction upturn by PAMELA can be reproduced simultaneously. We have also calculated diffuse gamma-ray spectra due to the e ± excesses and compared them with the preliminary Fermi-LAT data from 0.1 GeV to 10 GeV in the region 0 deg. ≤l≤ 360 deg., 10 deg. ≤|b|≤20 deg. The photon spectrum of energy above 100 GeV, mainly from final state radiations, may be checked in the near future.

  10. Exotic Higgs decays in a neutrino mass model with discrete S3 symmetry

    CERN Document Server

    Bhattacharyya, G; Päs, H

    2010-01-01

    Exotic Higgs decays can arise in lepton flavor models with horizontal symmetries. We investigate the scalar sector of a neutrino mass model using an S3 family symmetry as an example. The model's symmetry leads to an enlarged scalar sector with features that might be used to test the model experimentally, such as scalar particles with masses below 1 TeV and manifestly non-zero matrix elements for lepton flavor violating decays. We compare different decay channels of the scalars as well as leptonic processes that violate lepton flavor, in order to compare model predictions with experimental bounds.

  11. An Unusual Exponential Graph

    Science.gov (United States)

    Syed, M. Qasim; Lovatt, Ian

    2014-01-01

    This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e[superscript -t/τ] would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…

  12. Exponential and Logarithmic Functions

    OpenAIRE

    Todorova, Tamara

    2010-01-01

    Exponential functions find applications in economics in relation to growth and economic dynamics. In these fields, quite often the choice variable is time and economists are trying to determine the best timing for certain economic activities to take place. An exponential function is one in which the independent variable appears in the exponent. Very often that exponent is time. In highly mathematical courses, it is a truism that students learn by doing, not by reading. Tamara Todorova’s Pr...

  13. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to erroneous estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
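
    The convolution model at the heart of this comparison can be sketched in a few lines (our simulation with invented parameters, not Illumina data):

```python
import numpy as np

# Observed intensity X = S + B with gamma-distributed signal S and
# normally distributed background B, as in the normal-gamma model.
rng = np.random.default_rng(1)
shape, scale = 2.0, 300.0          # assumed gamma signal parameters
mu, sd = 100.0, 15.0               # assumed background mean and sd
n = 100_000
signal = rng.gamma(shape, scale, n)
observed = signal + rng.normal(mu, sd, n)

# Moment-based background correction: subtracting the background mean
# recovers the signal mean (true value: shape*scale = 600).
corrected_mean = observed.mean() - mu
print(abs(corrected_mean - shape * scale) < 10.0)   # → True
```

    Model-based corrections such as those discussed in the abstract go further, estimating the signal for each probe under the fitted convolution rather than applying a global mean shift.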

  14. Radiative corrections for semileptonic decays of hyperons: the 'model independent' part

    International Nuclear Information System (INIS)

    Toth, K.; Szegoe, K.; Margaritis, T.

    1984-04-01

    The 'model independent' part of the order α radiative correction due to virtual photon exchanges and inner bremsstrahlung is studied for semileptonic decays of hyperons. Numerical results of high accuracy are given for the relative correction to the branching ratio, the electron energy spectrum and the (Esub(e),Esub(f)) Dalitz distribution in the case of four different decays. (author)

  15. Model for decays of boson resonances with arbitrary spins

    International Nuclear Information System (INIS)

    Grigoryan, A.A.; Ivanov, N.Ya.

    1985-01-01

    A formula for the width of the decay of a resonance with spin J into hadrons with arbitrary spins is derived. This width is expressed via s-channel helicity residues of the Regge trajectory α J on which the resonance J lies. Using quark-gluon picture predictions for the coupling of quarks with Regge trajectories and the SU(6) classification of hadrons, this formula is applied to calculate the widths of decays of resonances lying on the vector and tensor trajectories into pseudoscalar and vector mesons, two vector mesons, and an NN-bar pair

  16. Gaugino radiative decay in an anomalous U(1)' model

    International Nuclear Information System (INIS)

    Lionetto, Andrea; Racioppi, Antonio

    2010-01-01

    We study the neutralino radiative decay into the lightest supersymmetric particle (LSP) in the framework of a minimal anomalous U(1) ' extension of the MSSM. It turns out that in a suitable decoupling limit the axino, which is present in the Stueckelberg multiplet, is the LSP. We compute the branching ratio (BR) for the decay of a neutralino into an axino and a photon. We find that in a wide region of the parameter space, the BR is higher than 93% in contrast with the typical value (≤1%) in the CMSSM.

  17. Stable exponential cosmological solutions with zero variation of G and three different Hubble-like parameters in the Einstein-Gauss-Bonnet model with a Λ-term

    Energy Technology Data Exchange (ETDEWEB)

    Ernazarov, K.K. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); Ivashchuk, V.D. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); VNIIMS, Center for Gravitation and Fundamental Metrology, Moscow (Russian Federation)

    2017-06-15

    We consider a D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ. We restrict the metrics to diagonal cosmological ones and find for certain Λ a class of solutions with exponential time dependence of three scale factors, governed by three non-coinciding Hubble-like parameters H > 0, h{sub 1} and h{sub 2}, corresponding to factor spaces of dimensions m > 2, k{sub 1} > 1 and k{sub 2} > 1, respectively, with k{sub 1} ≠ k{sub 2} and D = 1 + m + k{sub 1} + k{sub 2}. Any of these solutions describes an exponential expansion of 3d subspace with Hubble parameter H and zero variation of the effective gravitational constant G. We prove the stability of these solutions in a class of cosmological solutions with diagonal metrics. (orig.)

  18. Stable exponential cosmological solutions with zero variation of G in the Einstein-Gauss-Bonnet model with a Λ-term

    Energy Technology Data Exchange (ETDEWEB)

    Ernazarov, K.K. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); Ivashchuk, V.D. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); Center for Gravitation and Fundamental Metrology, VNIIMS, Moscow (Russian Federation)

    2017-02-15

    A D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ is considered. By assuming diagonal cosmological metrics, we find, for a certain fine-tuned Λ, a class of solutions with exponential time dependence of two scale factors, governed by two Hubble-like parameters H > 0 and h < 0, corresponding to factor spaces of dimensions m > 3 and l > 1, respectively, with (m,l) ≠ (6,6), (7,4), (9,3) and D = 1+m+l. Any of these solutions describes an exponential expansion of three-dimensional subspace with Hubble parameter H and zero variation of the effective gravitational constant G. We prove the stability of these solutions in a class of cosmological solutions with diagonal metrics. (orig.)

  19. Estimation of the systemic burden of plutonium from urinary excretion data and a multi-exponential model for excretion in comparison with autopsy data

    International Nuclear Information System (INIS)

    Bernard, S.R.; Nestor, C.W.

    1985-01-01

    The authors have adapted Snyder's method for computing the systemic burden from urinary excretion data to use a multi-exponential model (2) for excretion, rather than Langham's power function. The mathematical basis of Snyder's method is the representation of the systemic burden as the convolution integral of the observed urinary excretion data with the inverse Laplace transform of the excretion function; in the case of urinary excretion of plutonium, the power function has a Laplace transform, but for other elements (notably uranium) it does not. If the method is to be used for other radioisotopes, the excretion function must have a Laplace transform, and for this reason we have used a multi-exponential form of the excretion function. The authors have written a computer program to calculate estimates of the systemic burden and the integrated intake from urinary excretion data, and have compared the results with two cases for which autopsy data are available, as presented in this paper
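
    As a numerical sketch of the multi-exponential approach (the amplitudes and rate constants below are invented, not the report's fitted values), the predicted urinary data for a given intake history is a discrete convolution with the excretion function:

```python
import numpy as np

a = np.array([0.4, 0.1, 0.01])          # assumed amplitudes
lam = np.array([0.5, 0.05, 0.002])      # assumed rate constants (1/day)

def excretion(t):
    """Multi-exponential excretion function: sum_i a_i * exp(-lam_i * t)."""
    return np.sum(a[:, None] * np.exp(-np.outer(lam, t)), axis=0)

days = np.arange(0, 365, dtype=float)
intake = np.ones_like(days)             # constant intake of 1 unit/day
# Predicted urinary output = intake history convolved with the excretion
# function; the inverse problem (burden from observed urine) inverts this.
predicted = np.convolve(intake, excretion(days))[: days.size]
print(bool(np.all(np.diff(predicted) > 0)))   # → True: builds toward equilibrium
```

    Unlike the power function for some elements, every exponential term here has a Laplace transform, which is the property the method requires.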

  20. The constraints on the OGTM model and MTC model from flavor-changing Z decay

    International Nuclear Information System (INIS)

    Wang Xuelei; Yang Hua; Lu Gongru

    1996-01-01

    The one-loop contributions of pseudo Goldstone bosons to the flavor-changing decay Z→bs-bar (b-bars) in the one generation technicolor models (OGTM) and the technicolor model with a massless scalar doublet are calculated. We find that the contributions can strongly enhance the branching ratio B(Z→bs-bar + b-bars). Thus, a more stringent limit on the parameters h and λ can be obtained

  1. Weak decays

    International Nuclear Information System (INIS)

    Wojcicki, S.

    1978-11-01

    Lectures are given on weak decays from a phenomenological point of view, emphasizing new results and ideas and the relation of recent results to the new standard theoretical model. The general framework within which the weak decay is viewed and relevant fundamental questions, weak decays of noncharmed hadrons, decays of muons and the tau, and the decays of charmed particles are covered. Limitation is made to the discussion of those topics that either have received recent experimental attention or are relevant to the new physics. (JFP) 178 references

  2. Charmonium decays into proton-antiproton and a quark-diquark model for the nucleon

    International Nuclear Information System (INIS)

    Anselmino, M.; Forte, S

    1990-01-01

    A quark-diquark model of the nucleon is applied to a perturbative QCD description of several decays of the charmonium family: η sub(c), χ sub(c0,c1,c2) → p anti-p. Both experimental data and theoretical considerations are used to fix the parameters of the model. Decay rates for the χ's in good agreement with the existing experimental results may be obtained. The values for the decay of the η sub(c) are found instead to be much smaller than the data. Our formalism provides a general framework for the computation of the decay amplitudes of any sup(2S+1)L sub(J), C = +1, heavy quarkonium state into hadron-antihadron. The explicit expression for the decay into two photons is also given. (author)

  3. A generalized voter model with time-decaying memory on a multilayer network

    Science.gov (United States)

    Zhong, Li-Xin; Xu, Wen-Juan; Chen, Rong-Da; Zhong, Chen-Yang; Qiu, Tian; Shi, Yong-Dong; Wang, Li-Liang

    2016-09-01

    By incorporating a multilayer network and time-decaying memory into the original voter model, we investigate the coupled effects of spatial and temporal accumulation of peer pressure on the consensus. Heterogeneity in peer pressure and the time-decaying mechanism are both shown to be detrimental to the consensus. We find the transition points below which a consensus can always be reached and above which two opposed opinions are more likely to coexist. Our mean-field analysis indicates that the phase transitions in the present model are governed by the cumulative influence of peer pressure and the updating threshold. A functional relation between the consensus threshold and the decay rate of the influence of peer pressure is found. The time required to reach a consensus is governed by the coupling of the memory length and the decay rate. An intermediate decay rate may greatly reduce the time required to reach a consensus.

  4. Method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1972-01-01

    Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
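
    A minimal modern sketch of the same idea, in Python rather than FORTRAN (synthetic noise-free data and assumed starting values):

```python
import numpy as np

# Fit y = a*exp(b*t) by Gauss-Newton: expand the model in a Taylor
# series around the current (a, b) and solve the linearized
# least-squares problem for the parameter update.
t = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-0.7 * t)              # synthetic data, true (a, b) = (2.5, -0.7)

a, b = 2.0, -0.5                        # rough starting values
for _ in range(100):
    f = a * np.exp(b * t)
    jac = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])  # df/da, df/db
    delta, *_ = np.linalg.lstsq(jac, y - f, rcond=None)
    a, b = a + delta[0], b + delta[1]
print(round(a, 3), round(b, 3))         # → 2.5 -0.7
```

    With noisy data one would add damping (Levenberg-Marquardt) to keep the linearized steps from overshooting.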

  5. The decay width of stringy hadrons

    Science.gov (United States)

    Sonnenschein, Jacob; Weissman, Dorin

    2018-02-01

    In this paper we further develop a string model of hadrons by computing their strong decay widths and comparing them to experiment. The main decay mechanism is that of a string splitting into two strings. The corresponding total decay width behaves as Γ = (π/2)ATL, where T and L are the tension and length of the string and A is a dimensionless universal constant. We show that this result holds for a bosonic string not only in the critical dimension. The partial width of a given decay mode is given by Γ_i/Γ = Φ_i exp(−2πC m_sep²/T), where Φ_i is a phase space factor, m_sep is the mass of the "quark" and "antiquark" created at the splitting point, and C is a dimensionless coefficient close to unity. Based on the spectra of hadrons we observe that their (modified) Regge trajectories are characterized by a negative intercept. This implies a repulsive Casimir force that gives the string a "zero point length". We fit the theoretical decay width to experimental data for mesons on the trajectories of ρ, ω, π, η, K*, ϕ, D, and Ds*, and of the baryons N, Δ, Λ, and Σ. We examine both the linearity in L and the exponential suppression factor. The linearity was found to agree with the data well for mesons but less so for baryons. The extracted coefficient for mesons, A = 0.095 ± 0.015, is indeed quite universal. The exponential suppression was applied to both strong and radiative decays. We discuss the relation with string fragmentation and jet formation. We extract the quark-diquark structure of baryons from their decays. A stringy mechanism for Zweig-suppressed decays of quarkonia is proposed and is shown to reproduce the decay width of ϒ states. The dependence of the width on spin and flavor symmetry is discussed. We further apply this model to the decays of glueballs and exotic hadrons.
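
    Plugging illustrative numbers into the two width formulas (the tension, length, mass, and phase-space values below are made up; only A = 0.095 comes from the abstract's mesonic fit):

```python
import math

A = 0.095                  # dimensionless universal constant (mesonic fit)
T = 0.18                   # string tension in GeV^2 (assumed)
L = 6.0                    # string length in GeV^-1 (assumed)
C = 1.0                    # dimensionless coefficient close to unity

total_width = 0.5 * math.pi * A * T * L          # Gamma = (pi/2)*A*T*L, in GeV
for m_sep, phi in ((0.0, 0.5), (0.1, 0.5)):      # massless vs light pair creation
    # Gamma_i = Gamma * Phi_i * exp(-2*pi*C*m_sep**2 / T)
    partial = total_width * phi * math.exp(-2.0 * math.pi * C * m_sep**2 / T)
    print(round(partial, 4))                     # → 0.0806 then 0.0568
```

    The exponential factor is the string-splitting analogue of pair-creation suppression: heavier quark pairs at the split point are exponentially harder to produce.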

  6. Fast quantum modular exponentiation

    International Nuclear Information System (INIS)

    Meter, Rodney van; Itoh, Kohei M.

    2005-01-01

    We present a detailed analysis of the impact on quantum modular exponentiation of architectural features and possible concurrent gate execution. Various arithmetic algorithms are evaluated for execution time, potential concurrency, and space trade-offs. We find that to exponentiate an n-bit number, for storage space 100n (20 times the minimum 5n), we can execute modular exponentiation 200-700 times faster than optimized versions of the basic algorithms, depending on architecture, for n=128. Addition on a neighbor-only architecture is limited to O(n) time, whereas non-neighbor architectures can reach O(log n), demonstrating that physical characteristics of a computing device have an important impact on both real-world running time and asymptotic behavior. Our results will help guide experimental implementations of quantum algorithms and devices
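
    For context, the classical baseline that quantum circuits must implement reversibly is binary (square-and-multiply) modular exponentiation, which uses O(n) modular multiplications on n-bit residues. A minimal classical sketch:

```python
def modexp(base, exponent, modulus):
    """Binary (square-and-multiply) modular exponentiation:
    O(n) squarings for an n-bit exponent, each on residues mod `modulus`."""
    result = 1
    base %= modulus
    while exponent:
        if exponent & 1:                # multiply in when the current bit is set
            result = (result * base) % modulus
        base = (base * base) % modulus  # square for the next bit
        exponent >>= 1
    return result

print(modexp(7, 560, 561))  # matches Python's built-in pow(7, 560, 561)
```

    The quantum versions analysed in the record trade depth for width around exactly this bit-by-bit structure.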

  7. Higgs-boson and Z-boson flavor-changing neutral-current decays correlated with B-meson decays in the littlest Higgs model with T parity

    International Nuclear Information System (INIS)

    Han Xiaofang; Wang Lei; Yang Jinmin

    2008-01-01

    In the littlest Higgs model with T-parity, new flavor-changing interactions between mirror fermions and the standard model (SM) fermions can induce various flavor-changing neutral-current decays of B-mesons, the Z-boson, and the Higgs boson. Since all these decays induced in the littlest Higgs model with T-parity are correlated, in this work we perform a collective study of these decays, namely, the Z-boson decay Z→bs, the Higgs-boson decay h→bs, and the B-meson decays B→X_sγ, B_s→μ⁺μ⁻, and B→X_sμ⁺μ⁻. We find that under the current experimental constraints from the B-decays, the branching ratios of both Z→bs and h→bs can still deviate significantly from the SM predictions. In the parameter space allowed by the B-decays, the branching ratio of Z→bs can be enhanced up to 10⁻⁷ (about one order of magnitude above the SM prediction) while h→bs can be much suppressed (about one order of magnitude below the SM prediction).

  8. Penguin effects induced by the two-Higgs-doublet model and charmless B-meson decays

    International Nuclear Information System (INIS)

    Davies, A.J.; Joshi, G.C.; Matsuda, M.

    1991-01-01

    Nonstandard physical effects arising through the penguin diagram induced by the charged-Higgs-scalar contribution in the two-Higgs-doublet model are analysed. Since non-leptonic B-decay processes to final states consisting of s+s+anti s are induced only through the penguin diagram, they are important tests of such contributions. We compare these decays, including the non-standard two-Higgs-doublet contribution, with the standard-model results, which arise from the magnetic gluon transition term. The charged-Higgs contribution can give a sizable enhancement to the branching fraction of charmless B-meson decay. (orig.)

  9. Nuclear fragmentation with secondary decay in the context of conventional percolation model

    International Nuclear Information System (INIS)

    Santiago, A.J.

    1989-09-01

    Mass and energy spectra arising from proton-nucleus collisions at energies between 80 and 350 GeV were studied, using the conventional percolation model coupled with secondary decay of the clusters. (L.C.J.A.)

  10. Decay constants of heavy mesons in the relativistic potential model with velocity dependent corrections

    International Nuclear Information System (INIS)

    Avaliani, I.S.; Sisakyan, A.N.; Slepchenko, L.A.

    1992-01-01

    In the relativistic model with the velocity dependent potential the masses and leptonic decay constants of heavy pseudoscalar and vector mesons are computed. The possibility of using this potential is discussed. 11 refs.; 4 tabs

  11. Calculations of Inflaton Decays and Reheating: with Applications to No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, $w$, during the epoch of inflaton decay, the reheating temperature, $T_{\rm reh}$, and the number of inflationary e-folds, $N_*$, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index $n_s$ and the tensor-to-scalar perturbation ratio $r$, converting them into constraints on $N_*$, the inflaton decay rate, and other parameters of specific no-scale inflationary models.

  12. B decays, flavour mixings and CP violation in the Standard Model

    International Nuclear Information System (INIS)

    Ali, A.

    1996-06-01

    These lectures review the progress made in our present understanding of B decays. The emphasis here is on applications of QCD to B decays and the attendant perturbative and non-perturbative uncertainties, which limit present theoretical precision in some cases, but the overall picture that emerges is consistent with the standard model (SM). This is illustrated by quantitatively analyzing some of the key measurements in B physics. These lectures are divided into five parts. In the first part, the Kobayashi-Maskawa generalization of the Cabibbo-GIM matrix for quark flavour mixing is discussed. In the second part, the bulk properties of B decays, such as the inclusive decay rates, semileptonic branching ratios, B-hadron lifetimes, and the so-called charm counting in B decays, are taken up. The third part is devoted to theoretical studies of rare B decays; in particular, the electromagnetic penguin decays and their branching ratios in the SM are discussed and compared with data, enabling a determination of the CKM matrix element |V_ts|, the b-quark mass, and the kinetic energy of the b-quark in the B meson. The CKM-suppressed inclusive decay B→X_d+γ and the exclusive decays B→(ρ,ω)+γ are discussed in the SM using QCD sum rules for the latter

  13. Radiative decay of mesons in an independent-quark potential model

    International Nuclear Information System (INIS)

    Barik, N.; Dash, P.C.; Panda, A.R.

    1992-01-01

    We investigate in a potential model of independent quarks the M1 transitions among the low-lying vector (V) and pseudoscalar (P) mesons. We perform a ''static'' calculation of the partial decay widths of twelve possible M1 transitions such as V→Pγ and P→Vγ within the traditional picture of photon emission by a confined quark and/or antiquark. The model accounts well for the observed decay widths

  14. Penguin effects induced by the two-Higgs-doublet model and charmless B-meson decays

    International Nuclear Information System (INIS)

    Davies, A.J.; Joshi, G.C.; Matsuda, M.

    1991-03-01

    Nonstandard physical effects arising through the penguin diagram induced by the charged-Higgs-scalar contribution in the two-Higgs-doublet model are analysed. The non-leptonic B-decay processes including the non-standard two-Higgs-doublet contribution are compared with the standard-model results, which arise from the magnetic gluon transition term. The charged-Higgs contribution gives a sizable enhancement to the branching fractions of charmless B-meson decay. 13 refs., 4 figs

  15. Radiative Corrections for W → e ν̄ Decay in the Weinberg-Salam Model

    Science.gov (United States)

    Inoue, K.; Kakuto, A.; Komatsu, H.; Takeshita, S.

    1980-09-01

    The one-loop corrections to the W → e ν̄ decay rate are calculated in the Weinberg-Salam model with an arbitrary number of generations. The on-shell renormalization prescription and the 't Hooft-Feynman gauge are employed. Divergences are treated by the dimensional regularization method. Some numerical estimates for the decay rate are given in the three-generation model. It is found that there are significant corrections, mainly owing to fermion-mass singularities.

  16. Weak leptonic decay of light and heavy pseudoscalar mesons in an independent quark model

    International Nuclear Information System (INIS)

    Barik, N.; Dash, P.C.

    1993-01-01

    Weak leptonic decays of light and heavy pseudoscalar mesons are studied in a field-theoretic framework based on the independent quark model with a scalar-vector harmonic potential. Defining the quark-antiquark momentum distribution amplitude, obtainable from the bound quark eigenmodes of the model under the assumption of a strong correlation between quark and antiquark momenta inside the decaying meson in its rest frame, we derive the partial decay width with the correct kinematical factors, from which we extract an expression for the pseudoscalar decay constants f_M. Using the model parameters determined from earlier studies in the light-flavor sector and heavy-quark masses m_c and m_b from the hyperfine splitting of (D*, D) and (B*, B), we calculate the pseudoscalar decay constants. We find (f_π, f_K) ≡ (138, 157) MeV, (f_D, f_Ds) ≡ (161, 205) MeV, (f_B, f_Bs) ≡ (122, 154) MeV, and f_Bc = 221 MeV. We also obtain the partial decay widths and branching ratios for some kinematically allowed weak leptonic decay processes.

  17. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous modal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin [Mechanical Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia); Rahman, Abdul Ghaffar Abdul [Faculty of Mechanical Engineering, University Malaysia Pahang, Pekan (Malaysia); Ismail, Zubaidah [Civil Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia)

    2016-08-15

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide a cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified, and their effects on the effectiveness of this technique were experimentally investigated. When performing modal testing under running conditions, the cyclic load signals are dominant in the measured response for the entire time history. The exponential window is effective in minimizing leakage and attenuating signals of the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the information of the calculated cyclic force, a suitable amount of impact force to be applied on the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, harmonic reduction is significantly achieved in FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).

  18. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous modal analysis

    International Nuclear Information System (INIS)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin; Rahman, Abdul Ghaffar Abdul; Ismail, Zubaidah

    2016-01-01

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide a cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified, and their effects on the effectiveness of this technique were experimentally investigated. When performing modal testing under running conditions, the cyclic load signals are dominant in the measured response for the entire time history. The exponential window is effective in minimizing leakage and attenuating signals of the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the information of the calculated cyclic force, a suitable amount of impact force to be applied on the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, harmonic reduction is significantly achieved in FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).

  19. Electroweak penguin decays as probes of physics beyond the Standard Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Electroweak penguin decays are sensitive to new, virtual particles and therefore offer a unique window on any physics beyond the Standard Model. In the B sector, penguin decays such as B0->K*0mu+mu- give a number of measurable quantities which can be precisely predicted by theory. The LHCb experiment has made the world's most precise measurements of this and several other related decays. These measurements give constraints on any new physics phenomena contributing to the relevant loop processes at mass scales well in excess of those that can be accessed by direct searches. The recent experimental progress of such measurements will be presented.

  20. Radiative decays of eta-eta'-mesons in quark nonlocal model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Nogovitsyn, E.A.

    1980-01-01

    The leading radiative decays of η and η′ mesons (P→γγ with P=π⁰, η, η′; η→π⁺π⁻γ; η→π⁰γγ; η′→Vγ with V=ρ⁰, ω) are described within a quark nonlocal model. Decay widths and electromagnetic form factors for the P→γl⁺l⁻ decay are calculated. Calculations are performed for two mixing angles (Θ = -11° and Θ = -18°). For the case Θ = -11°, good agreement with experiment is achieved.

  1. Rare B_s→γνν̄ Decay in Family Nonuniversal Z′ Model

    International Nuclear Information System (INIS)

    Şirvanlı, Berin Belma

    2015-01-01

    The rare B_s→γνν̄ decay with a polarized photon is studied in the framework of a family nonuniversal Z′ model. The dependence of the branching ratio and the photon polarization asymmetry on the model parameters is calculated and compared with the Standard Model. Deviations from the Standard Model would indicate the presence of new physics.

  2. Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models

    International Nuclear Information System (INIS)

    Kirchner, G.

    1990-01-01

    Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account the build-up of radioactive decay products during environmental transport, which may be a potential source of underprediction. To estimate the consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors, as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to the calculated contamination of human food and the resulting radiation dose rates. (author)
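
    The build-up of decay products along a chain that such an extension captures is described by the Bateman equations. Below is a minimal sketch for a linear chain with distinct decay constants, starting from a pure parent inventory; it illustrates the general mechanism only, not the food chain model itself.

```python
import math

def bateman(n1_0, lambdas, t):
    """Number of atoms of each chain member at time t (Bateman solution),
    for a linear chain with distinct decay constants `lambdas` (per unit
    time) and n1_0 atoms of the first member at t = 0."""
    amounts = []
    for i in range(len(lambdas)):
        coeff = 1.0
        for k in range(i):                  # product of preceding decay constants
            coeff *= lambdas[k]
        total = 0.0
        for j in range(i + 1):
            denom = 1.0
            for k in range(i + 1):
                if k != j:
                    denom *= lambdas[k] - lambdas[j]
            total += math.exp(-lambdas[j] * t) / denom
        amounts.append(n1_0 * coeff * total)
    return amounts

# Parent with lambda = 0.1, daughter with lambda = 0.02 (arbitrary units)
print(bateman(100.0, [0.1, 0.02], 5.0))
```

    For a short-lived parent feeding a longer-lived daughter, the daughter inventory grows before decaying away, which is exactly the build-up that equilibrium-type models without chains miss.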

  3. Unbinned model-independent measurements with coherent admixtures of multibody neutral D meson decays

    Energy Technology Data Exchange (ETDEWEB)

    Poluektov, Anton [University of Warwick, Department of Physics, Coventry (United Kingdom)

    2018-02-15

    Various studies of Standard Model parameters involve measuring the properties of a coherent admixture of D⁰ and D̄⁰ states. A typical example is the determination of the Unitarity Triangle angle γ in the decays B → DK, D → K⁰_S π⁺π⁻. A model-independent approach to perform this measurement is proposed that has superior statistical sensitivity compared to the well-established method involving binning of the D → K⁰_S π⁺π⁻ decay phase space. The technique employs Fourier analysis of the complex phase difference between D⁰ and D̄⁰ decay amplitudes and can easily be generalised to other similar measurements, such as studies of charm mixing or determination of the angle β from B⁰ → Dh⁰ decays. (orig.)

  4. Semileptonic decays of Λ_c baryons in the relativistic quark model

    Energy Technology Data Exchange (ETDEWEB)

    Faustov, R.N.; Galkin, V.O. [Institute of Informatics in Education, FRC CSC RAS, Moscow (Russian Federation)

    2016-11-15

    Motivated by recent experimental progress in studying weak decays of the Λ_c baryon, we investigate its semileptonic decays in the framework of the relativistic quark model based on the quasipotential approach with the QCD-motivated potential. The form factors of the Λ_c → Λ l ν_l and Λ_c → n l ν_l decays are calculated in the whole accessible kinematical region without extrapolations and additional model assumptions. Relativistic effects are systematically taken into account, including transformations of baryon wave functions from the rest to the moving reference frame and contributions of the intermediate negative-energy states. Baryon wave functions found in the previous mass spectrum calculations are used for the numerical evaluation. Comprehensive predictions for decay rates, asymmetries and polarization parameters are given. They agree well with available experimental data. (orig.)

  5. Unbinned model-independent measurements with coherent admixtures of multibody neutral D meson decays

    Science.gov (United States)

    Poluektov, Anton

    2018-02-01

    Various studies of Standard Model parameters involve measuring the properties of a coherent admixture of D⁰ and D̄⁰ states. A typical example is the determination of the Unitarity Triangle angle γ in the decays B → DK, D → K⁰_S π⁺π⁻. A model-independent approach to perform this measurement is proposed that has superior statistical sensitivity compared to the well-established method involving binning of the D → K⁰_S π⁺π⁻ decay phase space. The technique employs Fourier analysis of the complex phase difference between D⁰ and D̄⁰ decay amplitudes and can easily be generalised to other similar measurements, such as studies of charm mixing or determination of the angle β from B⁰ → Dh⁰ decays.

  6. From near-surface to root-zone soil moisture using an exponential filter: an assessment of the method based on in-situ observations and model simulations

    Directory of Open Access Journals (Sweden)

    C. Albergel

    2008-12-01

    A long-term data acquisition effort for profile soil moisture is under way in southwestern France at 13 automated weather stations. This ground network was developed in order to validate remote sensing and model soil moisture estimates. In this paper, both those in situ observations and a synthetic data set covering continental France are used to test a simple method to retrieve root-zone soil moisture from a time series of surface soil moisture information. A recursive exponential filter equation using a time constant, T, is used to compute a soil water index. The Nash and Sutcliffe coefficient is used as a criterion to optimise the T parameter for each ground station and for each model pixel of the synthetic data set. In general, the soil water indices derived from the surface soil moisture observations and simulations agree well with the reference root-zone soil moisture. Overall, the results show the potential of the exponential filter equation and of its recursive formulation to derive a soil water index from surface soil moisture estimates. This paper further investigates the correlation of the time scale parameter T with soil properties and climate conditions. While no significant relationship could be determined between T and the main soil properties (clay and sand fractions, bulk density and organic matter content), the modelled spatial variability and the observed inter-annual variability of T suggest that a weak climate effect may exist.
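
    The recursive exponential filter referred to in this abstract is commonly written with a time-varying gain K. The sketch below follows that standard recursive formulation (gain initialised to 1, time constant T in the same units as the timestamps); it illustrates the filter class, not the authors' exact implementation.

```python
import math

def soil_water_index(times, surface_sm, T):
    """Recursive exponential filter: turns a (possibly irregular) surface
    soil moisture series into a soil water index with time constant T.
    Recursion: K_n = K_{n-1} / (K_{n-1} + exp(-dt/T)),
               SWI_n = SWI_{n-1} + K_n * (ms_n - SWI_{n-1})."""
    swi = [surface_sm[0]]   # initialise the index at the first observation
    gain = 1.0
    for n in range(1, len(times)):
        dt = times[n] - times[n - 1]
        gain = gain / (gain + math.exp(-dt / T))
        swi.append(swi[-1] + gain * (surface_sm[n] - swi[-1]))
    return swi

# A step in surface moisture is smoothed toward the new level over ~T
print(soil_water_index([0, 1, 2, 3, 4], [0.2, 0.4, 0.4, 0.4, 0.4], T=2.0))
```

    Larger T makes the index respond more slowly, mimicking a deeper root-zone layer; this is the parameter the paper optimises per station against the Nash and Sutcliffe criterion.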

  7. Ratios of Vector and Pseudoscalar B Meson Decay Constants in the Light-Cone Quark Model

    Science.gov (United States)

    Dhiman, Nisha; Dahiya, Harleen

    2018-05-01

    We study the decay constants of pseudoscalar and vector B mesons in the framework of the light-cone quark model. We apply the variational method to the relativistic Hamiltonian with a Gaussian-type trial wave function to obtain the values of β (the scale parameter). Then, with the help of the known values of the constituent quark masses, we obtain numerical results for the decay constants f_P and f_V, respectively. We compare our numerical results with the existing experimental data.

  8. Lattice Boltzmann model for three-dimensional decaying homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Xu Hui; Tao Wenquan; Zhang Yan

    2009-01-01

    We implement a lattice Boltzmann method (LBM) for decaying homogeneous isotropic turbulence based on an analogous Galerkin filter and focus on the fundamental statistical isotropic property. This regularized method is constructed based on orthogonal Hermite polynomial space. For decaying homogeneous isotropic turbulence, this regularized method can simulate the isotropic property very well. Numerical studies demonstrate that the novel regularized LBM is a promising approximation of turbulent fluid flows, which paves the way for coupling various turbulent models with LBM

  9. Decay properties of certain odd-Z SHE

    International Nuclear Information System (INIS)

    Carmel Vigila Bai, G.M.; Santhosh Kumar, S.

    2004-01-01

    In this work, the well-known Cubic plus Yukawa plus Exponential model (CYEM), in the two-sphere approximation and incorporating deformation effects for the parent and daughter nuclei, was used to study the alpha-decay properties of certain odd-Z superheavy elements.

  10. Distributed decay kinetics of charge separated state in solid film

    NARCIS (Netherlands)

    Lehtivuori, Heli; Efimov, Alexander; Lemmetyinen, Helge; Tkachenko, Nikolai V.

    2007-01-01

    Photoinduced electron transfer in solid films of porphyrin-fullerene dyads was studied using the femtosecond pump-probe method. The relaxation of the main photo-product, an intramolecular exciplex, was found to be essentially non-exponential. To analyze the decays, a model accounting for a distribution of

  11. Including Effects of Water Stress on Dead Organic Matter Decay to a Forest Carbon Model

    Science.gov (United States)

    Kim, H.; Lee, J.; Han, S. H.; Kim, S.; Son, Y.

    2017-12-01

    Decay of dead organic matter is a key process of carbon (C) cycling in forest ecosystems. The change in decay rate depends on temperature sensitivity and moisture conditions. The Forest Biomass and Dead organic matter Carbon (FBDC) model includes a decay sub-model considering temperature sensitivity, yet does not consider moisture conditions as a driver of decay rate change. This study aimed to improve the FBDC model by including a water stress function in the decay sub-model. Also, soil C sequestration under climate change was simulated with the FBDC model including the water stress function. The water stress functions were determined with data from a decomposition study on Quercus variabilis forests and Pinus densiflora forests of Korea, and adjustment parameters of the functions were determined for both species. The water stress functions were based on the ratio of precipitation to potential evapotranspiration. Including the water stress function increased the explained variance of the decay rate by 19% for the Q. variabilis forests and 7% for the P. densiflora forests, respectively. The increase in explained variance resulted from the large difference in temperature range and precipitation range across the decomposition study plots. During the period of the experiment, the mean annual temperature range was less than 3°C, while the annual precipitation ranged from 720 mm to 1466 mm. Application of the water stress functions to the FBDC model constrained the increasing trend of temperature sensitivity under climate change, and thus increased the model-estimated soil C sequestration (Mg C ha⁻¹) by 6.6 for the Q. variabilis forests and by 3.1 for the P. densiflora forests, respectively. The addition of the water stress functions increased the reliability of the decay rate estimation and could contribute to reducing the bias in estimating soil C sequestration under varying moisture conditions.
Acknowledgement: This study was supported by Korea Forest Service (2017044B10-1719-BB01)
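
    The structure described, a temperature-sensitivity term multiplied by a moisture scalar driven by the precipitation-to-PET ratio, can be sketched as below. The Q10 form and the saturating water stress function are illustrative assumptions, not the FBDC model's calibrated equations.

```python
import math

def decay_rate(k_ref, temp_c, p_over_pet, q10=2.0, t_ref=10.0, a=1.5):
    """Hypothetical decay-rate model: reference rate k_ref modified by a Q10
    temperature response and a saturating water stress factor based on the
    precipitation / potential-evapotranspiration ratio (forms illustrative)."""
    f_temp = q10 ** ((temp_c - t_ref) / 10.0)
    f_water = 1.0 - math.exp(-a * p_over_pet)   # approaches 1 when water is ample
    return k_ref * f_temp * f_water

print(decay_rate(0.1, temp_c=12.0, p_over_pet=0.8))  # moderately water-limited
print(decay_rate(0.1, temp_c=12.0, p_over_pet=2.0))  # nearly unlimited by water
```

    In such a formulation the water term caps the apparent temperature sensitivity under drought, which is the qualitative effect the study reports.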

  12. It's a dark, dark world: background evolution of interacting φCDM models beyond simple exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Suprit; Singh, Parminder, E-mail: ssingh2@physics.du.ac.in, E-mail: psingh@physics.du.ac.in [Department of Physics and Astrophysics, University of Delhi, University Road, University Enclave, New Delhi 110 007 (India)

    2016-05-01

    We study the background cosmological dynamics with a three-component source content: a radiation fluid, a barotropic fluid to mimic the matter sector, and a single scalar field which can act as dark energy giving rise to the late-time accelerated phase. Using the well-known dimensionless variables, we cast the dynamical equations into an autonomous system of ordinary differential equations (ASODE), which are studied by computing the fixed points and the conditions for their stability. The matter fluid and the scalar field are taken to be uncoupled at first and later, we consider a coupling between the two of the form Q = √(2/3) κ β ρ_m φ̇, where ρ_m is the barotropic fluid density. The key point of our analysis is that for the closure of the ASODE, we only demand that the jerk, Γ = V V″/V′², is a function of the acceleration, z = −M_p V′/V, that is, Γ = 1 + f(z). In this way, we are able to accommodate a large class of potentials that goes beyond the simple exponential potentials. The analysis is completely generic and independent of the form of the potential for the scalar field. As an illustration and confirmation of the analysis, we consider f(z) of the forms μ/z², μ/z, (μ−z)/z² and (μ−z) to numerically compute the evolution of cosmological parameters with and without coupling. Implications of the approach and the results are discussed.

  13. Beyond Standard Model searches in B decays with ATLAS

    CERN Document Server

    Turchikhin, Semen; The ATLAS collaboration

    2018-01-01

    This proceedings contribution presents recent results of the ATLAS experiment at the LHC on heavy-flavour measurements sensitive to possible contributions of new physics. Two measurements are reviewed: the angular analysis of the $B^0\to\mu^+\mu^- K^{*0}$ decay and the measurement of the relative width difference of the $B^0$-$\bar{B}^0$ system. The first uses a data sample with an integrated luminosity of 20.3 fb$^{-1}$ collected by ATLAS at a centre-of-mass energy $\sqrt{s} = 8$ TeV, and the second benefits from the full ATLAS Run-1 dataset with an additional 4.9 fb$^{-1}$ collected at $\sqrt{s} = 7$ TeV.

  14. Using Exponential Random Graph Models to Analyze the Character of Peer Relationship Networks and Their Effects on the Subjective Well-being of Adolescents.

    Science.gov (United States)

    Jiao, Can; Wang, Ting; Liu, Jianxin; Wu, Huanjie; Cui, Fang; Peng, Xiaozhe

    2017-01-01

    The influences of peer relationships on adolescent subjective well-being were investigated within the framework of social network analysis, using exponential random graph models as a methodological tool. The participants in the study were 1,279 students (678 boys and 601 girls) from nine junior middle schools in Shenzhen, China. The initial stage of the research used a peer nomination questionnaire and a subjective well-being scale (used in previous studies) to collect data on the peer relationship networks and the subjective well-being of the students. Exponential random graph models were then used to explore the relationships between students with the aim of clarifying the character of the peer relationship networks and the influence of peer relationships on subjective well-being. The results showed that all the adolescent peer relationship networks in our investigation had positive reciprocal effects, positive transitivity effects and negative expansiveness effects. However, none of the relationship networks had obvious receiver effects or leaders. The adolescents in partial peer relationship networks presented similar levels of subjective well-being on three dimensions (satisfaction with life, positive affect and negative affect), though not all network friends presented these similarities. The study shows that peer networks can affect an individual's subjective well-being. However, whether similarities among adolescents are the result of social influences or social choices needs further exploration, including longitudinal studies that investigate the potential processes of subjective well-being similarities among adolescents.

  15. Rank-shaping regularization of exponential spectral analysis for application to functional parametric mapping

    International Nuclear Information System (INIS)

    Turkheimer, Federico E; Hinz, Rainer; Gunn, Roger N; Aston, John A D; Gunn, Steve R; Cunningham, Vincent J

    2003-01-01

    Compartmental models are widely used for the mathematical modelling of dynamic studies acquired with positron emission tomography (PET). The numerical problem involves the estimation of a sum of decaying real exponentials convolved with an input function. In exponential spectral analysis (SA), the nonlinear estimation of the exponential functions is replaced by the linear estimation of the coefficients of a predefined set of exponential basis functions. This set-up guarantees fast estimation and attainment of the global optimum. SA, however, is hampered by high sensitivity to noise and, because of the positivity constraints implemented in the algorithm, cannot be extended to reference region modelling. In this paper, SA limitations are addressed by a new rank-shaping (RS) estimator that defines an appropriate regularization over an unconstrained least-squares solution obtained through singular value decomposition of the exponential base. Shrinkage parameters are conditioned on the expected signal-to-noise ratio. Through application to simulated and real datasets, it is shown that RS ameliorates and extends SA properties in the case of the production of functional parametric maps from PET studies
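
    The basis-function formulation and the SVD route described here can be sketched as follows: the basis is the input function convolved with a fixed grid of decaying exponentials, and the fit is an unconstrained least squares via SVD in which each singular-value contribution is weighted by a shrink factor. The shape of the shrink weights is a placeholder; the abstract's actual conditioning on the expected signal-to-noise ratio is not reproduced here.

```python
import numpy as np

def rank_shaped_fit(t, input_fn, tissue, thetas, shrink):
    """Fit a tissue time-activity curve over an exponential basis
    (input convolved with exp(-theta_j * t)) by SVD least squares,
    weighting each singular-value component by `shrink` in [0, 1]."""
    dt = t[1] - t[0]                       # assumes uniform sampling
    basis = np.column_stack([
        np.convolve(input_fn, np.exp(-th * t))[:len(t)] * dt for th in thetas
    ])
    U, s, Vt = np.linalg.svd(basis, full_matrices=False)
    coeffs = Vt.T @ (shrink * (U.T @ tissue) / s)
    return coeffs, basis

# Noiseless check: with all shrink weights equal to 1 this reduces to an
# ordinary least-squares fit, and a curve built from one basis member is
# recovered exactly.
t = np.linspace(0.0, 60.0, 121)
inp = np.exp(-0.5 * t)
dt = t[1] - t[0]
tissue = 2.0 * np.convolve(inp, np.exp(-0.1 * t))[:len(t)] * dt
coeffs, basis = rank_shaped_fit(t, inp, tissue, [0.05, 0.1, 0.2], np.ones(3))
print(np.max(np.abs(basis @ coeffs - tissue)))  # ~0: exact reconstruction
```

    Setting some shrink weights below 1 damps the poorly determined directions of the exponential basis, which is the regularization role the rank-shaping estimator plays in place of the positivity constraint of classical spectral analysis.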

  16. DecAID: a decaying wood advisory model for Oregon and Washington.

    Science.gov (United States)

    Kim Mellen; Bruce G. Marcot; Janet L. Ohmann; Karen L. Waddell; Elizabeth A. Willhite; Bruce B. Hostetler; Susan A. Livingston; Cay. Ogden

    2002-01-01

    DecAID is a knowledge-based advisory model that provides guidance to managers in determining the size, amount, and distribution of dead and decaying wood (dead and partially dead trees and down wood) necessary to maintain wildlife habitat and ecosystem functions. The intent of the model is to update and replace existing snag-wildlife models in Washington and Oregon....

  17. J/ψ→γB anti B decays and the quark-pair creation model

    International Nuclear Information System (INIS)

    Ping Ronggang; Jiang Huanqing; Shen Pengnian; Zou Bingsong

    2002-01-01

    The authors generalize the quark-pair creation model to a study of the radiative decays J/ψ→γB anti B by assuming that the u, d or s quark pairs are created with the same interaction strength. From the calculation of the ratio of the decay widths Γ(J/ψ→γp anti p)/Γ(J/ψ→p anti p), the authors extract the quark-pair creation strength gI=15.40 GeV. Based on the SU(6) spin-flavour basis and the 'uds' basis, the radiative decay branching ratios containing strange baryons are evaluated. Measurements of these decay widths from the BESII data are suggested

  18. J/psi-> gamma B anti B decays and the quark-pair creation model

    CERN Document Server

    Ping Rong Gang; Shen Peng Nian; Zou Bing Song

    2002-01-01

    The authors generalize the quark-pair creation model to a study of the radiative decays J/psi-> gamma B anti B by assuming that the u, d or s quark pairs are created with the same interaction strength. From the calculation of the ratio of the decay widths GAMMA(J/psi-> gamma p anti p)/GAMMA(J/psi-> p anti p), the authors extract the quark-pair creation strength gI=15.40 GeV. Based on the SU(6) spin-flavour basis and the 'uds' basis, the radiative decay branching ratios containing strange baryons are evaluated. Measurements of these decay widths from the BESII data are suggested

  19. TESTING MODELS FOR THE SHALLOW DECAY PHASE OF GAMMA-RAY BURST AFTERGLOWS WITH POLARIZATION OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Lan, Mi-Xiang; Dai, Zi-Gao [School of Astronomy and Space Science, Nanjing University, Nanjing 210093 (China); Wu, Xue-Feng, E-mail: dzg@nju.edu.cn [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China)

    2016-08-01

    The X-ray afterglows of almost one-half of gamma-ray bursts have been discovered by the Swift satellite to have a shallow decay phase whose origin remains mysterious. Two main models have been proposed to explain this phase: relativistic wind bubbles (RWBs) and structured ejecta, which could originate from millisecond magnetars and rapidly rotating black holes, respectively. Based on these models, we investigate polarization evolution in the shallow decay phase of X-ray and optical afterglows. We find that in the RWB model, a significant bump of the polarization degree evolution curve appears during the shallow decay phase of both optical and X-ray afterglows, while the polarization position angle abruptly changes its direction by 90°. In the structured ejecta model, however, the polarization degree does not evolve significantly during the shallow decay phase of afterglows whether the magnetic field configuration in the ejecta is random or globally large-scale. Therefore, we conclude that these two models for the shallow decay phase and relevant central engines would be testable with future polarization observations.

  20. Flavor-changing Z decay in the one generation technicolor models

    International Nuclear Information System (INIS)

    Wang, X.; Lu, G.; Xiao, Z.

    1995-01-01

    The flavor-changing decay Z→b bar s (bar b s) induced through pseudo Goldstone bosons (PGBs) is calculated in two kinds of one-generation technicolor models (OGTMs). (a) For model I, we find that an interesting branching ratio B(Z→b bar s + bar b s)∼10⁻⁶ can be obtained for particular choices of the parameters; a branching ratio of this order is at the border of being detectable. (b) For model II, the PGB contributions can strongly enhance the branching ratio B(Z→b bar s + bar b s). With the current experimental limit on the branching ratios of rare Z decays, model-dependent bounds on the mass of the color-octet pseudo Goldstone bosons can be derived. As will be seen, the decay Z→b bar s (bar b s) may provide a unique window to study the TC theory

  1. Radiative decay of the eta-, eta'-mesons in the nonlocal quark model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Nogovitsyn, E.A.

    1981-01-01

    The radiative decays P→γγ (P=π⁰, η, η′), η→π⁺π⁻γ, η→π⁰γγ, η′→Vγ (V=ρ⁰, ω) and P→γl⁺l⁻ (P=π⁰, η, η′) are studied to test the applicability of the nonlocal quark model to the description of the experimental data. The Feynman diagrams of these decays are presented, and the widths of the η→γγ, η→π⁺π⁻γ, η→π⁰γγ, η′→γγ, η′→ρ⁰γ and η′→ωγ decays are calculated and given in the form of a table. Calculations are carried out for two values of the η-η′ mixing angle: θ=-11 deg and -18 deg. Values of the invariant amplitudes of the π⁰→γe⁺e⁻, η→γμ⁺μ⁻ and η′→γμ⁺μ⁻ decays are determined at θ=-11 deg and -18 deg. The best agreement with the experimental data is noted to take place at θ=-11 deg; the determined width of the η→π⁰γγ decay is underestimated as compared with the experimental one [ru]

  2. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    Science.gov (United States)

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
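The "mixed-order" idea, a BOD decay whose reaction order n is itself a free parameter, can be sketched with the closed-form solution of dL/dt = -k·Lⁿ (n = 1 recovers the classic first-order model). The rate constant and order used here are illustrative only, not values from the study.

```python
# Toy mixed-order BOD decay: dL/dt = -k * L**n, closed-form solution.
import math

def remaining_bod(L0, k, n, t):
    """BOD remaining at time t under order-n decay."""
    if abs(n - 1.0) < 1e-12:
        return L0 * math.exp(-k * t)        # first-order special case
    # General case: L^(1-n) = L0^(1-n) - (1-n)*k*t
    base = L0 ** (1.0 - n) - (1.0 - n) * k * t
    return max(base, 0.0) ** (1.0 / (1.0 - n))   # clamp only matters for n < 1

L0 = 10.0          # ultimate BOD (mg/L), illustrative
for n in (1.0, 1.5):
    exerted = [round(L0 - remaining_bod(L0, 0.23, n, t), 2) for t in (0, 1, 5)]
    print(f"n={n}: BOD exerted at t=0,1,5 days -> {exerted}")
```

Making n a fitted parameter, as the abstract describes, amounts to estimating (k, n) jointly from observed BOD-exerted curves.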

  3. Assessment of early treatment response to neoadjuvant chemotherapy in breast cancer using non-mono-exponential diffusion models: a feasibility study comparing the baseline and mid-treatment MRI examinations

    Energy Technology Data Exchange (ETDEWEB)

    Bedair, Reem; Manavaki, Roido; Gill, Andrew B.; Abeyakoon, Oshaani; Gilbert, Fiona J. [University of Cambridge, Department of Radiology, School of Clinical Medicine, Cambridge (United Kingdom); Priest, Andrew N.; Patterson, Andrew J. [Cambridge University Hospitals NHS Foundation Trust, Department of Radiology, Addenbrookes Hospital, Cambridge (United Kingdom); McLean, Mary A. [Cambridge University Hospitals NHS Foundation Trust, Department of Radiology, Addenbrookes Hospital, Cambridge (United Kingdom); University of Cambridge, Li Ka Shing Centre, Cancer Research UK Cambridge Institute, Cambridge (United Kingdom); Graves, Martin J. [University of Cambridge, Department of Radiology, School of Clinical Medicine, Cambridge (United Kingdom); Cambridge University Hospitals NHS Foundation Trust, Department of Radiology, Addenbrookes Hospital, Cambridge (United Kingdom); Griffiths, John R. [University of Cambridge, Li Ka Shing Centre, Cancer Research UK Cambridge Institute, Cambridge (United Kingdom)

    2017-07-15

    To assess the feasibility of the mono-exponential, bi-exponential and stretched-exponential models in evaluating response of breast tumours to neoadjuvant chemotherapy (NACT) at 3 T. Thirty-six female patients (median age 53, range 32-75 years) with invasive breast cancer undergoing NACT were enrolled for diffusion-weighted MRI (DW-MRI) prior to the start of treatment. For assessment of early response, changes in parameters were evaluated on mid-treatment MRI in 22 patients. DW-MRI was performed using eight b values (0, 30, 60, 90, 120, 300, 600, 900 s/mm²). Apparent diffusion coefficient (ADC), tissue diffusion coefficient (D_t), vascular fraction (f), distributed diffusion coefficient (DDC) and alpha (α) parameters were derived. t tests then compared the baseline values and changes in parameters between response groups. Repeatability was assessed at inter- and intraobserver levels. All patients underwent baseline MRI whereas 22 lesions were available at mid-treatment. At pretreatment, mean diffusion coefficients demonstrated significant differences between groups (p < 0.05). At mid-treatment, percentage increase in ADC and DDC showed significant differences between responders (49 % and 43 %) and non-responders (21 % and 32 %) (p = 0.03, p = 0.04). Overall, stretched-exponential parameters showed excellent repeatability. DW-MRI is sensitive to baseline and early treatment changes in breast cancer using non-mono-exponential models, and the stretched-exponential model can potentially monitor such changes. (orig.)
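For reference, the three diffusion signal models being compared have simple closed forms in the b-value. This sketch evaluates them at the study's b-values with illustrative parameters, not fitted values from the paper.

```python
# The three DW-MRI signal models: mono-exponential, bi-exponential
# (IVIM-type), and stretched-exponential.  Parameter values illustrative.
import math

def mono_exp(b, S0, adc):
    return S0 * math.exp(-b * adc)

def bi_exp(b, S0, f, D_star, D_t):
    # f = vascular (pseudo-diffusion) fraction, D_t = tissue diffusion
    return S0 * (f * math.exp(-b * D_star) + (1 - f) * math.exp(-b * D_t))

def stretched_exp(b, S0, ddc, alpha):
    # DDC = distributed diffusion coefficient, 0 < alpha <= 1
    return S0 * math.exp(-((b * ddc) ** alpha))

b_values = [0, 30, 60, 90, 120, 300, 600, 900]   # s/mm^2, as in the study
for b in b_values:
    print(b,
          round(mono_exp(b, 1.0, 1.2e-3), 4),
          round(bi_exp(b, 1.0, 0.1, 2.0e-2, 1.0e-3), 4),
          round(stretched_exp(b, 1.0, 1.2e-3, 0.8), 4))
```

Fitting these curves per voxel to the measured signal yields the ADC, (f, D_t), and (DDC, α) maps discussed in the abstract.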

  4. Assessment of early treatment response to neoadjuvant chemotherapy in breast cancer using non-mono-exponential diffusion models: a feasibility study comparing the baseline and mid-treatment MRI examinations

    International Nuclear Information System (INIS)

    Bedair, Reem; Manavaki, Roido; Gill, Andrew B.; Abeyakoon, Oshaani; Gilbert, Fiona J.; Priest, Andrew N.; Patterson, Andrew J.; McLean, Mary A.; Graves, Martin J.; Griffiths, John R.

    2017-01-01

    To assess the feasibility of the mono-exponential, bi-exponential and stretched-exponential models in evaluating response of breast tumours to neoadjuvant chemotherapy (NACT) at 3 T. Thirty-six female patients (median age 53, range 32-75 years) with invasive breast cancer undergoing NACT were enrolled for diffusion-weighted MRI (DW-MRI) prior to the start of treatment. For assessment of early response, changes in parameters were evaluated on mid-treatment MRI in 22 patients. DW-MRI was performed using eight b values (0, 30, 60, 90, 120, 300, 600, 900 s/mm²). Apparent diffusion coefficient (ADC), tissue diffusion coefficient (D_t), vascular fraction (f), distributed diffusion coefficient (DDC) and alpha (α) parameters were derived. t tests then compared the baseline values and changes in parameters between response groups. Repeatability was assessed at inter- and intraobserver levels. All patients underwent baseline MRI whereas 22 lesions were available at mid-treatment. At pretreatment, mean diffusion coefficients demonstrated significant differences between groups (p < 0.05). At mid-treatment, percentage increase in ADC and DDC showed significant differences between responders (49 % and 43 %) and non-responders (21 % and 32 %) (p = 0.03, p = 0.04). Overall, stretched-exponential parameters showed excellent repeatability. DW-MRI is sensitive to baseline and early treatment changes in breast cancer using non-mono-exponential models, and the stretched-exponential model can potentially monitor such changes. (orig.)

  5. Continuous exponential martingales and BMO

    CERN Document Server

    Kazamaki, Norihiko

    1994-01-01

    In three chapters on Exponential Martingales, BMO-martingales, and Exponential of BMO, this book explains in detail the beautiful properties of continuous exponential martingales that play an essential role in various questions concerning the absolute continuity of probability laws of stochastic processes. The second and principal aim is to provide a full report on the exciting results on BMO in the theory of exponential martingales. The reader is assumed to be familiar with the general theory of continuous martingales.
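For orientation, the central object of the book is the stochastic (Doléans-Dade) exponential of a continuous local martingale M:

```latex
% Stochastic exponential of a continuous local martingale M:
\mathcal{E}(M)_t \;=\; \exp\!\Bigl( M_t - \tfrac{1}{2}\langle M \rangle_t \Bigr),
% which solves the SDE
d\,\mathcal{E}(M)_t \;=\; \mathcal{E}(M)_t \, dM_t, \qquad \mathcal{E}(M)_0 = 1 .
```

A classical sufficient condition for E(M) to be a uniformly integrable martingale, one of the themes the abstract alludes to, is that M be a BMO-martingale (Kazamaki's criterion).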

  6. Exponential smoothing weighted correlations

    Science.gov (United States)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitivity to outliers and remote observations. These shortcomings can cause problems of statistical robustness that are especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures, and we discuss how a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping the significance and robustness of the measure. Weighted correlations are found to be smoother and to recover faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
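The core computation, a Pearson correlation with exponentially decaying observation weights, can be sketched as follows. The weight normalization and smoothing length used here are illustrative choices, not necessarily the authors' exact scheme.

```python
# Exponentially smoothed weighted Pearson correlation: the most recent
# observation in the window gets the largest weight.
import math

def weighted_pearson(x, y, theta):
    n = len(x)
    w = [math.exp((i - n + 1) / theta) for i in range(n)]   # decaying weights
    sw = sum(w)
    w = [wi / sw for wi in w]                               # normalize
    mx = sum(wi * xi for wi, xi in zip(w, x))               # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y))
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1]
print(weighted_pearson(x, y, theta=2.0))   # close to +1 for this series
```

Sliding this computation over a running window of returns gives the dynamic weighted correlation matrices the paper studies.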

  7. Decay constants in the heavy quark limit in models a la Bakamjian and Thomas

    International Nuclear Information System (INIS)

    Morenas, V.; Le Yaouanc, A.; Oliver, L.; Pene, O.; Raynal, J.C.

    1997-07-01

    In quark models à la Bakamjian and Thomas, which yield covariance and Isgur-Wise scaling of form factors in the heavy-quark limit, the decay constants f^(n) and f^(n)_1/2 of S-wave and P-wave mesons composed of heavy and light quarks are computed. Different Ansätze for the dynamics of the mass operator at rest are discussed. Using phenomenological models of the spectrum with relativistic kinetic energy and a regularized short-distance part, the decay constants in the heavy-quark limit are calculated. The convergence of the heavy-quark-limit sum rules is also studied. (author)

  8. Role of higher-multipole deformations and noncoplanarity in the decay of the compound nucleus ²²⁰Th* within the dynamical cluster-decay model

    Science.gov (United States)

    Hemdeep; Chopra, Sahila; Kaur, Arshdeep; Kaushal, Pooja; Gupta, Raj K.

    2018-04-01

    Background: The formation and decay of the ²²⁰Th* compound nucleus (CN), formed via several entrance channels (¹⁶O+²⁰⁴Pb, ⁴⁰Ar+¹⁸⁰Hf, ⁴⁸Ca+¹⁷²Yb, ⁸²Se+¹³⁸Ba) at near-barrier energies, has been studied within the dynamical cluster-decay model (DCM) [Hemdeep et al., Phys. Rev. C 95, 014609 (2017), 10.1103/PhysRevC.95.044603], for quadrupole deformations (β_2i) and "optimum" orientations (θ_opt) of the two nuclei or decay fragments lying in the same plane (coplanar nuclei, Φ = 0°). Purpose: We aim to investigate the role of the higher-multipole deformations, octupole (β_3i) and hexadecupole (β_4i), and "compact" orientations (θ_ci), together with the noncoplanarity degree of freedom (Φ_c), in the noncompound-nucleus (nCN) cross section already observed in the above-mentioned study with quadrupole deformations (β_2i) alone, the Φ = 0° case. Methods: The dynamical cluster-decay model (DCM), based on the quantum mechanical fragmentation theory (QMFT), is used to analyze the decay-channel cross sections σ_xn for various experimentally studied entrance channels. The parameter R_a (equivalently, the neck length ΔR in R_a = R_1 + R_2 + ΔR), which fixes both the preformation and penetration paths, is used to best fit both the unobserved (1n, 2n) and observed (3n-5n) decay-channel cross sections, keeping the root-mean-square (r.m.s.) deviation to a minimum, which allows us to predict the nCN effects, if any, and the fusion-fission (ff) cross sections in various reactions at different CN excitation energies E*. Results: For the decay of the CN ²²⁰Th*, the mass fragmentation potential V(A_i) and preformation yields P_0(A_i) show an asymmetric fission mass distribution, in agreement with the one observed in experiments, independent of adding or not adding (β_3i, β_4i), and irrespective of large changes (by 36° and 34°, respectively) in the "compact" orientations θ_ci and noncoplanarity Φ_c, and also in the potential energy surface V(A_i) in light-mass (1n-5n) decays. Whereas the 3n

  9. Hadronic decays of tau-leptons in the extended Nambu-Jona-Lasinio model

    Energy Technology Data Exchange (ETDEWEB)

    Kostunin, Dmitriy [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Inst. fuer Kernphysik (IKP); Volkov, Mikhail; Arbuzov, Andrey [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research (JINR), Dubna (Russian Federation)

    2013-07-01

    Modern experiments have collected large statistics on tau-lepton decays and on electron-positron annihilation into light hadrons. Therefore it is worthwhile to confront the experimental results with the corresponding theoretical predictions. The extended Nambu-Jona-Lasinio model is a good candidate for the theoretical description of these processes. Excited states of mesons in this version of the NJL model are described with the help of polynomial form factors with a minimal number of parameters. We worked out decays and cross sections with ππ, ππ(1300), ωπ, ηπ, η'π, ηππ, η'ππ final states. Our calculations are in satisfactory agreement with the existing experimental results. Predictions for branching ratios of suppressed decays were obtained and compared with previous theoretical estimates.

  10. Precision tests of the Standard Model with Kaon decays at CERN

    International Nuclear Information System (INIS)

    Massri, Karim

    2015-01-01

    Recent results and prospects for precision tests of the Standard Model in kaon decay-in-flight experiments at CERN are presented. A measurement of the ratio of leptonic decay rates of the charged kaon at the level of 0.4% precision constrains the parameter space of new physics models with an extended Higgs sector, a fourth generation of quarks and leptons, or sterile neutrinos. Searches for heavy neutrino mass states and the dark photon in the ∼100 MeV/c² mass range, based on samples collected in 2003-2007, are in progress and prospects will be discussed. The NA62 experiment, starting in 2014, will search for a range of lepton number and lepton flavour violating decays of the charged kaon and the neutral pion at improved sensitivities down to ∼10⁻¹², which will probe new physics scenarios involving heavy Majorana neutrinos or R-parity violating SUSY. (paper)

  11. Flavour changing decays of Z0 in supersymmetric models

    International Nuclear Information System (INIS)

    Gamberini, G.; Ridolfi, G.

    1987-01-01

    The possible existence of detectable flavour-changing branching modes of the Z⁰ boson is examined in the context of supersymmetric models of current interest. An explicit calculation shows that in the so-called minimal version of the supersymmetric standard model the branching ratios for Z⁰→b anti-s or t anti-c are not larger than in the standard model itself and are as such unobservable. On the contrary, we find that in a recently proposed extension of the supersymmetric standard model the mode Z⁰→t anti-c may be close to being detectable. (orig.)

  12. Scalar boson decays to tau leptons: in the standard model and beyond

    CERN Document Server

    Caillol, Cecile; Mohammadi, Abdollah

    2016-01-01

    This thesis presents a study of the scalar sector in the standard model (SM), as well as different searches for an extended scalar sector in theories beyond the standard model (BSM). All analyses have in common the fact that at least one scalar boson decays to a pair of tau leptons. The results exploit the data collected by the CMS detector during LHC Run-1, in proton-proton collisions with a center-of-mass energy of 7 or 8 TeV. The particle discovered in 2012, H, looks compatible with a SM Brout-Englert-Higgs boson, but this statement is driven by the H → γγ and H → ZZ decay modes. The H → τ⁺τ⁻ decay mode is the most sensitive fermionic decay channel, and allows testing of the Yukawa couplings of the new particle. The search for the SM scalar boson decaying to tau leptons, and produced in association with a massive vector boson W or Z, is described in this thesis. Even though a good background rejection can be achieved by selecting the leptons originating from the vector boson, Run-1 data are not sensitive...

  13. Network clustering analysis using mixture exponential-family random graph models and its application in genetic interaction data.

    Science.gov (United States)

    Wang, Yishu; Zhao, Hongyu; Deng, Minghua; Fang, Huaying; Yang, Dejie

    2017-08-24

    Epistatic miniarray profile (EMAP) studies have enabled the mapping of large-scale genetic interaction networks and have generated large amounts of data in model organisms, providing a set of molecular tools and advanced technologies that should help to efficiently understand the relationship between the genotypes and phenotypes of individuals. However, the network information gained from EMAP cannot be fully exploited using traditional statistical network models, because a genetic network is typically heterogeneous: the network structure features for one subset of nodes differ from those of the remaining nodes. Exponential-family random graph models (ERGMs) are a family of statistical models that provide a principled and flexible way to describe the structural features (e.g. the density, centrality and assortativity) of an observed network. However, a single ERGM is not enough to capture this heterogeneity of networks. In this paper, we consider a mixture ERGM (MixtureERGM), which models a network with several communities, where each community is described by a single ERGM.

  14. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables
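Two of the model's ingredients, a ground-reflected Gaussian puff and first-order parent/daughter decay, can be sketched directly. Symbols and values below are illustrative and do not reproduce MESOI's actual parameterization (no mixing-lid reflection, deposition, or wind-field advection here).

```python
# Gaussian puff with ground reflection, plus Bateman parent/daughter decay.
import math

def puff_concentration(Q, x, y, z, sy, sz, H):
    """Puff of mass Q centered at height H, with perfect ground reflection."""
    horiz = math.exp(-(x * x + y * y) / (2 * sy * sy)) / (2 * math.pi * sy * sy)
    vert = (math.exp(-((z - H) ** 2) / (2 * sz * sz)) +
            math.exp(-((z + H) ** 2) / (2 * sz * sz))) / (math.sqrt(2 * math.pi) * sz)
    return Q * horiz * vert

def parent_daughter(A0, lam_p, lam_d, t):
    """Activities of a parent and its single daughter (zero initial daughter)."""
    parent = A0 * math.exp(-lam_p * t)
    daughter = (A0 * lam_d / (lam_d - lam_p)
                * (math.exp(-lam_p * t) - math.exp(-lam_d * t)))
    return parent, daughter

print(puff_concentration(1.0, 0, 0, 0, sy=50.0, sz=20.0, H=30.0))
print(parent_daughter(1.0, lam_p=0.01, lam_d=0.1, t=10.0))
```

In a puff model like the one described, these pieces are evaluated for every puff at every time step as the puffs are advected by the wind field.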

  15. Aspects of hadronic B decays in and beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Vernazza, Leonardo

    2009-10-16

    In this thesis we address various issues of hadronic B decays, in the Standard Model and beyond. Concerning the first aspect, we focus on the problem of better understanding low-energy strong interactions in these decays. We consider in particular B decays into a charmonium state and a light meson. We develop a complete treatment of low-energy QCD interactions in the context of QCD factorization, treating the charmonia as nonrelativistic bound states. This allows us to demonstrate that, in the heavy-quark limit, a perturbative treatment of these decays is possible, even in the case of decays into P-waves, which were found to be non-factorizing in previous studies. We achieve this by including in the analysis the bound-state scales of charmonium, which in turn requires considering charmonium production through colour-octet operators. Although there are very large uncertainties, we find reasonable parameter choices where the main features of the data, namely large corrections to (naive) factorization and suppression of the χ_c2 and h_c final states, are reproduced, though the suppression of χ_c2 is not as strong as seen in the data. Our results also provide an example where an endpoint divergence in hard spectator-scattering factorizes and is absorbed into colour-octet operator matrix elements. The second part of the thesis is devoted to a series of analyses of non-leptonic B decays in extensions of the Standard Model. The aim of these studies is twofold: on the one hand, we are interested in testing the sensitivity of these decays to new physics; on the other hand, we look for actual discrepancies between theory predictions and experimental results, trying to explain them in the context of a new physics model. Concerning the first aspect, we consider two well-motivated new physics scenarios in which large deviations from the Standard Model are expected, i.e. the MSSM with large tan β, and a supersymmetric GUT in which the large neutrino mixing angles

  16. Offline estimation of decay time for an optical cavity with a low pass filter cavity model.

    Science.gov (United States)

    Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C

    2012-08-01

    This Letter presents offline estimation results for the decay-time constant for an experimental Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The cavity dynamics are modeled in terms of a low pass filter (LPF) with unity DC gain. This model is used by an extended Kalman filter (EKF) along with the recorded light intensity at the output of the cavity in order to estimate the decay-time constant. The estimation results using the LPF cavity model are compared to those obtained using the quadrature model for the cavity presented in previous work by Kallapur et al. The estimation process derived using the LPF model comprises two states as opposed to three states in the quadrature model. When considering the EKF, this means propagating two states and a (2×2) covariance matrix using the LPF model, as opposed to propagating three states and a (3×3) covariance matrix using the quadrature model. This gives the former model a computational advantage over the latter and leads to faster execution times for the corresponding EKF. It is shown in this Letter that the LPF model for the cavity with two filter states is computationally more efficient, converges faster, and is hence a more suitable method than the three-state quadrature model presented in previous work for real-time estimation of the decay-time constant for the cavity.
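The quantity being estimated is the time constant of the ring-down signal I(t) = I₀·exp(-t/τ). As a much simpler offline stand-in for the EKF described in the abstract, a log-linear least-squares fit recovers τ from intensity samples (here noise-free, for illustration):

```python
# Recover the ring-down decay time tau from I(t) = I0 * exp(-t / tau)
# by fitting ln(I) = ln(I0) - t/tau with ordinary least squares.
import math

def fit_decay_time(ts, ys):
    n = len(ts)
    logs = [math.log(y) for y in ys]
    mt = sum(ts) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, logs))
             / sum((t - mt) ** 2 for t in ts))
    return -1.0 / slope            # slope = -1/tau

tau_true = 12.5e-6                 # seconds, illustrative
ts = [i * 1e-6 for i in range(50)]
ys = [math.exp(-t / tau_true) for t in ts]
print(fit_decay_time(ts, ys))      # recovers ~12.5e-6
```

The EKF formulation in the Letter does this recursively from noisy data, which is what makes the two-state LPF model's smaller covariance propagation (2×2 versus 3×3) matter for real-time use.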

  17. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    Science.gov (United States)

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
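The negative exponential model named in the title has a one-line form: the excess risk t years after quitting is a fraction exp(-ln 2 · t/H) of a continuing smoker's excess risk, so H is the half-life of the excess. A sketch using the abstract's overall estimate H = 9.93 years:

```python
# Negative exponential model for the decline of excess lung cancer risk
# after quitting smoking; H is the half-life of the excess risk.
import math

def excess_risk_fraction(t_quit, half_life):
    return math.exp(-math.log(2.0) * t_quit / half_life)

H = 9.93   # overall estimate from the abstract (years)
for t in (0, 9.93, 20, 40):
    print(t, round(excess_risk_fraction(t, H), 3))
```

By construction the fraction is exactly 0.5 at t = H, which is how the half-life parameter is read off from fitted blocks of relative risks.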

  18. Predictive Abuse Detection for a PLC Smart Lighting Network Based on Automatically Created Models of Exponential Smoothing

    Directory of Open Access Journals (Sweden)

    Tomasz Andrysiak

    2017-01-01

    One of the basic elements of a Smart City is the urban infrastructure management system, in particular, systems of intelligent street lighting control. However, for their reliable operation, they require special care for the safety of their critical communication infrastructure. This article presents solutions for the detection of different kinds of abuses in the network traffic of a Smart Lighting infrastructure realized with Power Line Communication technology. Both the structure of the examined Smart Lighting network and its elements are described. The article discusses the key security problems which have a direct impact on the correct performance of the Smart Lighting critical infrastructure. In order to detect an anomaly/attack, we proposed the usage of a statistical model to obtain forecasting intervals. Then, we calculated the differences between the forecast of the estimated traffic model and the real variability so as to detect abnormal behavior (which may be symptomatic of an abuse attempt). Due to the possibility of significant fluctuations appearing in the real network traffic, we proposed a procedure for updating the statistical models, based on the criterion of interquartile spacing. The results obtained during the experiments confirmed the effectiveness of the presented misuse detection method.
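The detection scheme described, forecast with exponential smoothing and flag observations falling outside a band around the forecast, can be sketched as below. The smoothing constant, warm-up length, and the k·IQR bandwidth rule are an illustrative reading of the interquartile criterion, not the authors' exact procedure.

```python
# Simple-exponential-smoothing forecaster with an IQR-based anomaly band.
import statistics

def detect_anomalies(series, alpha=0.3, k=3.0, warmup=10):
    level = series[0]                       # smoothed level = one-step forecast
    residuals, anomalies = [], []
    for i, y in enumerate(series[1:], start=1):
        resid = y - level                   # forecast error for this step
        is_anomaly = False
        if len(residuals) >= warmup:
            q1, _, q3 = statistics.quantiles(residuals, n=4)
            is_anomaly = abs(resid) > k * max(q3 - q1, 1e-9)
        residuals.append(resid)
        if is_anomaly:
            anomalies.append(i)             # don't let the outlier move the level
        else:
            level = alpha * y + (1 - alpha) * level
    return anomalies

traffic = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3, 10.0, 10.4, 9.7,
           10.1, 10.2, 60.0, 10.0, 9.9]    # injected spike at index 12
print(detect_anomalies(traffic))
```

Skipping the level update on flagged points keeps one spike from widening the band and masking (or falsely flagging) the observations that follow.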

  19. Decay patterns of multi-quasiparticle bands—a model independent test of chiral symmetry

    International Nuclear Information System (INIS)

    Lawrie, E A

    2017-01-01

    Nuclear chiral systems exhibit chiral symmetry bands, built on left-handed and right-handed angular momentum nucleon configurations. The experimental search for such chiral systems has revealed a number of suitable candidates; however, an unambiguous identification of nuclear chiral symmetry is still outstanding. In this work it is shown that the decay patterns of chiral bands built on multi-quasiparticle configurations are different from those of bands involving different single-particle configurations. It is suggested to use the observed decay patterns of chiral candidates as a new model-independent test of chiral symmetry. (paper)

  20. CP asymmetry in tau slepton decay in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Yang Weimin; Du Dongsheng

    2002-01-01

    We investigate CP violation asymmetry in the decay of a tau slepton into a tau neutrino and a chargino in the minimal supersymmetric standard model. The new source of CP violation is the complex mixing in the tau slepton sector. The rate asymmetry between the decays of the tau slepton and its CP conjugate process can be of the order of 10⁻³ in some regions of the parameter space of the minimal supergravity scenario, which will possibly be detectable in near-future collider experiments

  1. Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses

    International Nuclear Information System (INIS)

    Takahashi, K.; Mathews, G.J.; Bloom, S.D.

    1985-01-01

    Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r-processes are presented. Numerical results are given for: (1) the GT matrix elements for the excited-state decays of the unstable s-process nucleus ⁹⁹Tc; and (2) the GT strength function for the neutron-rich nucleus ¹³⁰Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysical problems. 23 refs., 3 figs

  2. Configuration splitting and gamma-decay transition rates in the two-group shell model

    International Nuclear Information System (INIS)

    Isakov, V. I.

    2015-01-01

    Expressions for reduced gamma-decay transition rates were obtained on the basis of the two-group configuration model for the case of transitions between particles belonging to identical groups of nucleons. In practical applications, the present treatment is most appropriate for describing decays of odd-odd nuclei in the vicinity of magic nuclei, or of nuclei where the corresponding subshells stand out in energy. A simple approximation is also applicable to describing configuration splitting in those cases. The present calculations were performed for nuclei whose mass numbers are close to A ∼ 90, including N = 51 odd-odd isotones

  3. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    Science.gov (United States)

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision-makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
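The exponential dose-response model used above, P(d) = 1 - exp(-r·d), can be fitted with a simple random-walk Metropolis sampler instead of OpenBUGS. The sketch below is a minimal illustration with entirely hypothetical dose-lethality data and a flat prior on log r; it is not the study's data or its exact procedure.

```python
import numpy as np

# Hypothetical dose-lethality data: dose, number exposed, number infected.
doses = np.array([30.0, 100.0, 300.0, 1000.0])
exposed = np.array([20, 20, 20, 20])
infected = np.array([2, 7, 15, 20])

def log_likelihood(log_r):
    """Binomial log-likelihood under the exponential model P(d) = 1 - exp(-r d)."""
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * doses), 1e-12, 1 - 1e-12)
    return np.sum(infected * np.log(p) + (exposed - infected) * np.log1p(-p))

def metropolis(n_iter=20000, step=0.3, seed=0):
    """Random-walk Metropolis on log r (improper flat prior on log r)."""
    rng = np.random.default_rng(seed)
    log_r = np.log(0.01)                     # arbitrary starting value
    current_ll = log_likelihood(log_r)
    samples = []
    for _ in range(n_iter):
        proposal = log_r + step * rng.normal()
        prop_ll = log_likelihood(proposal)
        if np.log(rng.uniform()) < prop_ll - current_ll:
            log_r, current_ll = proposal, prop_ll
        samples.append(log_r)
    # Discard the first half as burn-in, return samples of r itself.
    return np.exp(np.array(samples[n_iter // 2:]))

posterior_r = metropolis()
print("posterior median r:", np.median(posterior_r))
```

With these made-up counts the posterior concentrates near r ≈ 0.004; the same skeleton extends to the decomposed beta-Poisson model by adding a second parameter and a bivariate proposal.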

  4. On the Decay of Correlations in Non-Analytic SO(n)-Symmetric Models

    Science.gov (United States)

    Naddaf, Ali

    We extend the method of complex translations which was originally employed by McBryan-Spencer [2] to obtain a decay rate for the two point function in two-dimensional SO(n)-symmetric models with non-analytic Hamiltonians for $.

  5. The analysis of B_d → (η, η')l⁺l⁻ decays in the standard model

    NARCIS (Netherlands)

    Erkol, G; Turan, G

    We study the differential branching ratio, the branching ratio and the CP-violating asymmetry for the exclusive B_d → (η, η')l⁺l⁻ decays in the standard model. We deduce the B_d → (η, η') form factors from the form factors of B → π available in the literature, by using the SU(3)_F

  6. Production and decay of neutralinos in the nonminimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Franke, F.

    1995-01-01

    In this thesis after a presentation of the nonminimal supersymmetric standard model the lower mass limits for neutralinos and Higgs bosons are calculated. Then some typical scenarios for the study of the neutralino production and decay at LEP2 are constructed, for which the cross sections are calculated. (HSI)

  7. An evaluation of nodalization/decay heat/ volatile fission product release models in ISAAC code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Yong Mann; Park, Soo Yong; Kim, Dong Ha

    2003-03-01

    The ISAAC computer code, developed for Level-2 PSA work in 1995, contains mainly fundamental models for CANDU-specific severe accident progression, and accident-analysis experience with it is limited to Level-2 PSA purposes. Hence the system nodalization model, decay heat model and volatile fission product release model, which are known to affect fission product behavior directly or indirectly, are evaluated, both to enhance understanding of the basic models and to accumulate accident-analysis experience. As a research strategy, sensitivity studies of model parameters and sensitivity coefficients are performed. According to the results of the core nodalization sensitivity study, the original 3x3 nodalization (per loop), which groups horizontal fuel channels into 12 representative channels, is sufficient as an optimal scheme, because more detailed nodalization has no large effect on fuel thermal-hydraulic behavior, overall accident progression or fission product behavior. Since the ANSI/ANS standard model for decay heat prediction after reactor trip needs no further evaluation, given both its wide application in accident analysis codes and its good agreement with the ORIGEN code, the ISAAC decay heat results are used as they are. In addition, fission product revaporization in the containment, driven by the embedded decay heat, is demonstrated. The results for the volatile fission product release model are analyzed: for early release, the IDCOR model with an in-vessel Te release option gives the most conservative results, while for late release the NUREG-0772 model is the most conservative. Considering both early and late release, the IDCOR model with an in-vessel Te bound option gives moderately conservative results.

  8. CP violation in beauty decays the standard model paradigm of large effects

    CERN Document Server

    Bigi, Ikaros I.Y.

    1994-01-01

    The Standard Model contains a natural source for CP asymmetries in weak decays, described by the KM mechanism. Beyond ε_K it generates only elusive manifestations of CP violation in light-quark systems. On the other hand it naturally leads to large asymmetries in certain non-leptonic beauty decays. In particular, when B⁰-B̄⁰ oscillations are involved, theoretical uncertainties in the hadronic matrix elements either drop out or can be controlled, and one predicts asymmetries well in excess of 10% with high parametric reliability. It is briefly described how the KM triangle can be determined experimentally and then subjected to sensitive consistency tests. Any failure would constitute indirect, but unequivocal, evidence for the intervention of New Physics; some examples are sketched. Any outcome of a comprehensive program of CP studies in B decays -- short of technical failure -- will provide us with fundamental and unique insights into nature's design.

  9. Rare decays B → Xd+γ in the standard model

    International Nuclear Information System (INIS)

    Ali, A.; Greub, C.

    1992-03-01

    We present an estimate of the inclusive decay rate, photon energy spectrum and hadron mass spectrum for the CKM-suppressed radiative rare decays B → X_d + γ, based on perturbative QCD and a phenomenological model for the B-meson wave function (here X_d denotes non-strange hadrons). Present constraints on |V_td| are used to predict BR(B → X_d + γ) = (0.6-3) x 10^-5 for top quark masses of 100 GeV and above. The importance of measuring these decays in determining the CKM matrix element |V_td| is emphasized. (orig.)

  10. Study of the Standard Model Higgs boson decaying to taus at CMS

    CERN Document Server

    Botta, Valeria

    2017-01-01

    The most recent search for the Standard Model Higgs boson decaying to a pair of τ leptons is performed using proton-proton collision events at a centre-of-mass energy of 13 TeV, recorded by the CMS experiment at the LHC. The full 2016 dataset, corresponding to an integrated luminosity of 35.9 fb⁻¹, has been analysed. The Higgs boson signal in the τ⁺τ⁻ decay mode is observed with a significance of 4.9 standard deviations, to be compared to an expected significance of 4.7 standard deviations. This measurement is the first observation of the Higgs boson decay into fermions by a single experiment.

  11. On the gluonic correction to lepton-pair decays in a relativistic quarkonium model

    International Nuclear Information System (INIS)

    Ito, Hitoshi

    1987-01-01

    The gluonic correction to the leptonic decay of a heavy vector meson is investigated using perturbation theory to order α_s. The on-mass-shell approximation is assumed for the constituent quarks, which ensures gauge independence of the correction. The decay rates in the model based on the Bethe-Salpeter equation are also shown, in which the gluonic correction with a high-momentum cutoff is calculated for the off-shell quarks. It is shown that the static approximation to the correction factor (1 - 16α_s/3π) is not adequate, and that the gluonic correction does not suppress but rather enhances the decay rates of the ground states of the c anti-c and b anti-b systems. (author)

  12. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    Science.gov (United States)

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block, except for one where no decline with quitting was evident, and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period little affected the results. The model summarizes quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
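The negative exponential model with half-life H used above can be written as ER(t) = ER₀ · exp(-ln 2 · t / H): the excess risk relative to a continuing smoker halves every H years after quitting. A minimal sketch, using only the half-life value quoted in the abstract (13.32 years for COPD):

```python
import math

def excess_risk_fraction(t, half_life):
    """Fraction of a continuing smoker's excess risk remaining t years after
    quitting, under the negative exponential model with half-life H."""
    return math.exp(-math.log(2) * t / half_life)

def relative_risk(t, rr_current, half_life):
    """RR of a former smoker vs a never smoker, t years after quitting,
    given the continuing smoker's RR (rr_current is a hypothetical input)."""
    return 1.0 + (rr_current - 1.0) * excess_risk_fraction(t, half_life)

H_COPD = 13.32  # combined half-life estimate from the review, in years
print(excess_risk_fraction(H_COPD, H_COPD))      # 0.5 by construction
print(excess_risk_fraction(2 * H_COPD, H_COPD))  # 0.25 after two half-lives
```

The same two functions apply to the IHD, stroke and lung cancer half-lives cited in the abstract by swapping in the corresponding H.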

  13. Estimating the decline in excess risk of cerebrovascular disease following quitting smoking--a systematic review based on the negative exponential model.

    Science.gov (United States)

    Lee, Peter N; Fry, John S; Thornton, Alison J

    2014-02-01

    We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78 (95% CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08 (1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Minimal variance hedging of natural gas derivatives in exponential Lévy models: Theory and empirical performance

    International Nuclear Information System (INIS)

    Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen

    2013-01-01

    We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As a benchmark we take the standard Black-Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with the underlying consistently outperforms the benchmarks by quite a margin. - Highlights: ► We derive hedging strategies for European type options written on natural gas futures. ► These are tested empirically using Henry Hub natural gas futures and options data. ► We find that our hedges systematically outperform classical benchmarks

  15. Deformed exponentials and portfolio selection

    Science.gov (United States)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that assume Gaussianity of the portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows a more accurate description of situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio in cumulated returns, with a good convergence rate of the asset weights, which are found by means of a natural gradient algorithm.

  16. Coarse Grained Exponential Variational Autoencoders

    KAUST Repository

    Sun, Ke

    2017-02-25

    Variational autoencoders (VAE) often use a Gaussian or categorical distribution to model the inference process. This limits variational learning because the simplified assumption does not match the true posterior distribution, which is usually much more sophisticated. To break this limitation and allow arbitrary parametric distributions during inference, this paper derives a semi-continuous latent representation, which approximates a continuous density up to a prescribed precision and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions as the posterior, which are universal probability density function generators. Our experimental results show consistent improvements over commonly used VAE models.

  17. Test Exponential Pile

    Science.gov (United States)

    Fermi, Enrico

    The Patent contains an extremely detailed description of an atomic pile employing natural uranium as fissile material and graphite as moderator. It starts with the discussion of the theory of the intervening phenomena, in particular the evaluation of the reproduction or multiplication factor, K, that is the ratio of the number of fast neutrons produced in one generation by the fissions to the original number of fast neutrons, in a system of infinite size. The possibility of having a self-maintaining chain reaction in a system of finite size depends both on the facts that K is greater than unity and the overall size of the system is sufficiently large to minimize the percentage of neutrons escaping from the system. After the description of a possible realization of such a pile (with many detailed drawings), the various kinds of neutron losses in a pile are depicted. Particularly relevant is the reported "invention" of the exponential experiment: since theoretical calculations can determine whether or not a chain reaction will occur in a given system, but can be invalidated by uncertainties in the parameters of the problem, an experimental test of the pile is proposed, aimed at ascertaining whether the pile under construction would be divergent (i.e. with a neutron multiplication factor K greater than 1) by making measurements on a smaller pile. The idea is to measure, by a detector containing an indium foil, the exponential decrease of the neutron density along the length of a column of uranium-graphite lattice, where a neutron source is placed near its base. Such an exponential decrease is greater or less than that expected due to leakage, according to whether the K factor is less or greater than 1, so that this experiment is able to test the criticality of the pile, its accuracy increasing with the size of the column.
In order to perform this measurement, a mathematical description of the effect of neutron production, diffusion, and absorption on the neutron density in the

  18. Neutrinoless double beta decay in type I+II seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Borah, Debasish [Department of Physics, Tezpur University,Tezpur-784028 (India); Dasgupta, Arnab [Institute of Physics, Sachivalaya Marg,Bhubaneshwar-751005 (India)

    2015-11-30

    We study neutrinoless double beta decay in left-right symmetric extension of the standard model with type I and type II seesaw origin of neutrino masses. Due to the enhanced gauge symmetry as well as extended scalar sector, there are several new physics sources of neutrinoless double beta decay in this model. Ignoring the left-right gauge boson mixing and heavy-light neutrino mixing, we first compute the contributions to neutrinoless double beta decay for type I and type II dominant seesaw separately and compare with the standard light neutrino contributions. We then repeat the exercise by considering the presence of both type I and type II seesaw, having non-negligible contributions to light neutrino masses and show the difference in results from individual seesaw cases. Assuming the new gauge bosons and scalars to be around a TeV, we constrain different parameters of the model including both heavy and light neutrino masses from the requirement of keeping the new physics contribution to neutrinoless double beta decay amplitude below the upper limit set by the GERDA experiment and also satisfying bounds from lepton flavor violation, cosmology and colliders.

  19. Modeling effects of DO and SRT on activated sludge decay and production.

    Science.gov (United States)

    Liu, Guoqiang; Wang, Jianmin

    2015-09-01

    The effect of dissolved oxygen (DO) on the endogenous decay of active heterotrophic biomass and the hydrolysis of cell debris were studied. With the inclusion of a hydrolysis process for the cell debris, mathematical models that are capable of quantifying the effects of DO and sludge retention time (SRT) on concentrations of active biomass and cell debris in activated sludge are presented. By modeling the biomass cultivated with unlimited DO, the values of endogenous decay coefficient for heterotrophic biomass, the hydrolysis constant of cell debris, and the fraction of decayed biomass that became cell debris were determined to be 0.38 d(-1), 0.013 d(-1), and 0.28, respectively. Results from modeling the biomass cultivated under different DO conditions suggested that the experimental low DO (∼0.2 mg/L) did not inhibit the endogenous decay of heterotrophic biomass, but significantly inhibited the hydrolysis of cell debris with a half-velocity constant value of 2.1 mg/L. Therefore, the increase in sludge production with low DO was mainly contributed by cell debris rather than the active heterotrophic biomass. Maximizing sludge production during aerobic wastewater treatment could reduce aeration energy consumption and improve biogas energy recovery potential. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Analysis and Modeling for Short- to Medium-Term Load Forecasting Using a Hybrid Manifold Learning Principal Component Model and Comparison with Classical Statistical Models (SARIMAX, Exponential Smoothing) and Artificial Intelligence Models (ANN, SVM): The Case of the Greek Electricity Market

    Directory of Open Access Journals (Sweden)

    George P. Papaioannou

    2016-08-01

    In this work we propose a new hybrid model, a combination of the manifold-learning Principal Components (PC) technique and traditional multiple regression (PC-regression), for short- and medium-term forecasting of the daily, aggregated, day-ahead, system-wide electricity load in the Greek Electricity Market for the period 2004-2014. PC-regression is shown to effectively capture the intraday, intraweek and annual patterns of load. We compare our model with a number of classical statistical approaches (Holt-Winters exponential smoothing and its generalizations, the Error-Trend-Seasonal (ETS) models, and the Seasonal Autoregressive Integrated Moving Average with eXogenous variables (SARIMAX) model), as well as with the more sophisticated artificial intelligence models, Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Using a number of criteria for measuring the quality of the generated in- and out-of-sample forecasts, we conclude that the forecasts of our hybrid model outperform those generated by the other models, with the SARIMAX model being the next best performing approach, giving comparable results. Our approach contributes to studies aimed at providing more accurate and reliable load forecasting, a prerequisite for efficient management of modern power systems.

  1. Evaluation of Inhaled Versus Deposited Dose Using the Exponential Dose-Response Model for Inhalational Anthrax in Nonhuman Primate, Rabbit, and Guinea Pig.

    Science.gov (United States)

    Gutting, Bradford W; Rukhin, Andrey; Mackie, Ryan S; Marchette, David; Thran, Brandolyn

    2015-05-01

    The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose-lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross-species comparison. For this reason, species-specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species-specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set, which was also true for the rabbit and guinea pig data sets. The three species-specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive species (lowest LD50) and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose was adjusted to reflect deposited dose, the rabbit animal model appears to be the most sensitive and the guinea pig the least sensitive species. © 2014 Society for Risk Analysis.
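The inhaled-to-deposited translation above is a simple rescaling: under the exponential model P(d) = 1 - exp(-k·d), the parameter follows from the LD50 as k = ln 2 / LD50, and the deposited-dose LD50 is the inhaled LD50 times the deposition fraction. A sketch using only the values quoted in the abstract (the 10,000-spore evaluation dose is an arbitrary illustration):

```python
import math

# Inhaled-spore LD50s and lung deposition fractions reported in the abstract.
ld50_inhaled = {"NHP": 50600, "rabbit": 102600, "guinea pig": 70800}
deposition = {"NHP": 0.22, "rabbit": 0.09, "guinea pig": 0.30}

def k_from_ld50(ld50):
    # Exponential dose-response: P(d) = 1 - exp(-k d), so P(LD50) = 0.5
    # gives k = ln 2 / LD50.
    return math.log(2) / ld50

# Translate each inhaled LD50 to a deposited-dose LD50.
ld50_deposited = {s: ld50_inhaled[s] * deposition[s] for s in ld50_inhaled}

for species in ld50_inhaled:
    p = 1 - math.exp(-k_from_ld50(ld50_inhaled[species]) * 10_000)
    print(f"{species}: deposited LD50 ≈ {ld50_deposited[species]:.0f} spores, "
          f"P(lethality | 10,000 inhaled spores) = {p:.3f}")
```

On the deposited basis the rabbit's LD50 (≈9,200 spores) is the lowest of the three, reproducing the abstract's reversal of the species sensitivity ordering.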

  2. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    Science.gov (United States)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of the threshold voltage shift (ΔV_TH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
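The traditional stretched-exponential model referred to above has the form ΔV_TH(t) = ΔV₀·{1 - exp[-(t/τ)^β]}. A minimal sketch, modelling the two-stage behaviour as a sum of two such terms, one fast (electron trapping) and one slow (trap generation); this additive form and all parameter values are illustrative assumptions, not the paper's exact parameterization:

```python
import math

def stretched_exponential(t, dV0, tau, beta):
    """Stretched-exponential threshold-voltage shift:
    dV_TH(t) = dV0 * (1 - exp(-(t / tau)**beta))."""
    return dV0 * (1.0 - math.exp(-((t / tau) ** beta)))

def two_stage(t, dV1, tau1, beta1, dV2, tau2, beta2):
    """Assumed two-stage form: a fast electron-trapping stage plus a slower
    trap-generation stage (illustrative, hypothetical parameterization)."""
    return (stretched_exponential(t, dV1, tau1, beta1)
            + stretched_exponential(t, dV2, tau2, beta2))

# Hypothetical parameters: trapping dominates early, trap generation late.
for t in (1e1, 1e3, 1e5):  # stress time in seconds
    dv = two_stage(t, dV1=1.5, tau1=2e2, beta1=0.45,
                      dV2=3.0, tau2=5e4, beta2=0.6)
    print(f"t = {t:.0e} s: dV_TH = {dv:.3f} V")
```

At short times the first term carries almost all of the shift, while at long times the second term takes over, mirroring the trapping-then-generation behaviour the abstract describes.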

  3. Cosmology with exponential potentials

    International Nuclear Information System (INIS)

    Kehagias, Alex; Kofinas, Georgios

    2004-01-01

    We examine, in the context of general relativity, the dynamics of a spatially flat Robertson-Walker universe filled with a classical minimally coupled scalar field φ of exponential potential V(φ) ∼ exp(-μφ) plus pressureless baryonic matter. This system is reduced to a first-order ordinary differential equation for Ω_φ(w_φ) or q(w_φ), providing direct evidence on the acceleration/deceleration properties of the system. As a consequence, for positive potentials, passage into acceleration, not only at late times, is generically a feature of the system for any value of μ, even when the late-time attractors are decelerating. Furthermore, the structure-formation bound, together with the constraints Ω_m0 ∼ 0.25-0.3, -1 ≤ w_φ0 ≤ -0.6, provides, independently of initial conditions and other parameters, a necessary upper bound on μ, while the less conservative constraint -1 ≤ w_φ0 ≤ -0.93 gives a tighter bound. Special solutions are found to possess intervals of acceleration. For the almost-cosmological-constant case w_φ ∼ -1, the general relation Ω_φ(w_φ) is obtained. The generic (nonlinearized) late-time solution of the system in the plane (w_φ, Ω_φ) or (w_φ, q) is also derived

  4. OPINION: Safe exponential manufacturing

    Science.gov (United States)

    Phoenix, Chris; Drexler, Eric

    2004-08-01

    In 1959, Richard Feynman pointed out that nanometre-scale machines could be built and operated, and that the precision inherent in molecular construction would make it easy to build multiple identical copies. This raised the possibility of exponential manufacturing, in which production systems could rapidly and cheaply increase their productive capacity, which in turn suggested the possibility of destructive runaway self-replication. Early proposals for artificial nanomachinery focused on small self-replicating machines, discussing their potential productivity and their potential destructiveness if abused. In the light of controversy regarding scenarios based on runaway replication (so-called 'grey goo'), a review of current thinking regarding nanotechnology-based manufacturing is in order. Nanotechnology-based fabrication can be thoroughly non-biological and inherently safe: such systems need have no ability to move about, use natural resources, or undergo incremental mutation. Moreover, self-replication is unnecessary: the development and use of highly productive systems of nanomachinery (nanofactories) need not involve the construction of autonomous self-replicating nanomachines. Accordingly, the construction of anything resembling a dangerous self-replicating nanomachine can and should be prohibited. Although advanced nanotechnologies could (with great difficulty and little incentive) be used to build such devices, other concerns present greater problems. Since weapon systems will be both easier to build and more likely to draw investment, the potential for dangerous systems is best considered in the context of military competition and arms control.

  5. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    Science.gov (United States)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
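The Green-Kubo integration mentioned above computes the diffusion coefficient as D = (1/3)∫₀^∞ ⟨v(0)·v(t)⟩ dt in three dimensions. A minimal sketch, using a synthetic VACF with the short-time exponential decay the abstract describes (the amplitude and decay rate are arbitrary, not values from the paper):

```python
import numpy as np

def green_kubo_diffusion(vacf, dt):
    """Diffusion coefficient from the velocity autocorrelation function (3D):
    D = (1/3) * integral of <v(0).v(t)> dt, via the trapezoidal rule."""
    integral = dt * (0.5 * vacf[0] + vacf[1:-1].sum() + 0.5 * vacf[-1])
    return integral / 3.0

# Synthetic exponentially decaying VACF: <v(0).v(t)> = C0 * exp(-gamma * t).
C0, gamma, dt = 3.0, 2.0, 1e-3
t = np.arange(0.0, 10.0, dt)
vacf = C0 * np.exp(-gamma * t)

D = green_kubo_diffusion(vacf, dt)
print(D)  # analytic value is C0 / (3 * gamma) = 0.5
```

For a purely exponential VACF the integral is C0/γ, so D = C0/(3γ); in an actual DPD run `vacf` would instead be an ensemble average over particle velocities, and the slower-than-exponential intermediate-time tail would raise D above this estimate.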

  6. Discussion of the 3P0 model applied to the decay of mesons into two mesons

    International Nuclear Information System (INIS)

    Bonnaz, R.; Silvestre-Brac, B.

    1999-01-01

    The 3P0 model for the decay of a meson into two mesons is revisited. In particular, the formalism is extended in order to deal with an arbitrary form of the creation vertex and with exact meson wave functions. A careful analysis of both effects is performed and discussed. The model is then applied to a large class of transitions known experimentally. Two types of quark-antiquark potentials have been tested and compared. (author)

  7. Cluster radioactive decay within the preformed cluster model using relativistic mean-field theory densities

    International Nuclear Information System (INIS)

    Singh, BirBikram; Patra, S. K.; Gupta, Raj K.

    2010-01-01

    We have studied the (ground-state) cluster radioactive decays within the preformed cluster model (PCM) of Gupta and collaborators [R. K. Gupta, in Proceedings of the 5th International Conference on Nuclear Reaction Mechanisms, Varenna, edited by E. Gadioli (Ricerca Scientifica ed Educazione Permanente, Milano, 1988), p. 416; S. S. Malik and R. K. Gupta, Phys. Rev. C 39, 1992 (1989)]. The relativistic mean-field (RMF) theory is used to obtain the nuclear matter densities for the double folding procedure used to construct the cluster-daughter potential with the M3Y nucleon-nucleon interaction including exchange effects. Following the PCM approach, we have deduced empirically the preformation probability P0(emp) from the experimental data on both the α- and exotic cluster-decays, specifically of parents in the trans-lead region having doubly magic 208Pb or its neighboring nuclei as daughters. Interestingly, the RMF-densities-based nuclear potential supports the concept of preformation for both the α and heavier clusters in radioactive nuclei. P0α(emp) for α decays is almost constant (∼10^-2-10^-3) for all the parent nuclei considered here, and P0c(emp) for cluster decays of the same parents decreases with the size of the cluster emitted from the different parents. The results obtained for P0c(emp) are reasonable and are within two to three orders of magnitude of the well-accepted phenomenological model of Blendowske-Walliser for light clusters.

  8. Multivariate Matrix-Exponential Distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix......-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analogous to a well-known characterization theorem...

  9. Comparison between types I and II epithelial ovarian cancer using histogram analysis of monoexponential, biexponential, and stretched-exponential diffusion models.

    Science.gov (United States)

    Wang, Feng; Wang, Yuxiang; Zhou, Yan; Liu, Congrong; Xie, Lizhi; Zhou, Zhenyu; Liang, Dong; Shen, Yang; Yao, Zhihang; Liu, Jianyu

    2017-12-01

    To evaluate the utility of histogram analysis of monoexponential, biexponential, and stretched-exponential models to a dualistic model of epithelial ovarian cancer (EOC). Fifty-two patients with histopathologically proven EOC underwent preoperative magnetic resonance imaging (MRI) (including diffusion-weighted imaging [DWI] with 11 b-values) using a 3.0T system and were divided into two groups: types I and II. Apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudodiffusion coefficient (D*), perfusion fraction (f), distributed diffusion coefficient (DDC), and intravoxel water diffusion heterogeneity (α) histograms were obtained based on solid components of the entire tumor. The following metrics of each histogram were compared between the two types: 1) mean; 2) median; 3) 10th percentile and 90th percentile. Conventional MRI morphological features were also recorded. Significant morphological features for predicting EOC type were maximum diameter (P = 0.007), texture of lesion (P = 0.001), and peritoneal implants (P = 0.001). For ADC, D, f, DDC, and α, all metrics were significantly lower in type II than in type I. Histogram metrics of ADC, D, and DDC had significantly higher area under the receiver operating characteristic curve values than those of f and α on histogram analysis. ADC, D, and DDC have better performance than f and α; f and α may provide additional information. Level of Evidence: 4. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1797-1809. © 2017 International Society for Magnetic Resonance in Medicine.
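    The stretched-exponential model used in the study describes the diffusion-weighted signal as S(b) = S0·exp(-(b·DDC)^α). A minimal fitting sketch follows; the b-values and parameter values are illustrative assumptions for synthetic data, not the study's acquisition.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential DWI signal model: S(b) = S0 * exp(-(b * DDC)**alpha),
# where DDC is the distributed diffusion coefficient and alpha in (0, 1]
# measures intravoxel diffusion heterogeneity (alpha = 1 is monoexponential).
def stretched(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

# Synthetic 11-b-value acquisition (hypothetical b-values, s/mm^2).
b = np.array([0, 25, 50, 75, 100, 150, 200, 400, 600, 800, 1000], float)
s0_true, ddc_true, alpha_true = 1.0, 1.2e-3, 0.8   # assumed for the demo
signal = stretched(b, s0_true, ddc_true, alpha_true)

popt, _ = curve_fit(stretched, b, signal, p0=(1.0, 1e-3, 0.9),
                    bounds=(0.0, [2.0, 1e-2, 1.0]))
print(popt)  # recovers approximately (1.0, 1.2e-3, 0.8)
```

    Fitting this voxel-wise over the tumor's solid components and collecting the DDC and α values yields the histograms whose mean, median, and percentiles are compared above.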

  10. Stable exponential cosmological solutions with 3- and l-dimensional factor spaces in the Einstein-Gauss-Bonnet model with a Λ-term

    Energy Technology Data Exchange (ETDEWEB)

    Ivashchuk, V.D. [Peoples' Friendship University of Russia (RUDN University), Institute of Gravitation and Cosmology, Moscow (Russian Federation); Center for Gravitation and Fundamental Metrology, VNIIMS, Moscow (Russian Federation); Kobtsev, A.A. [Institute for Nuclear Research, RAS, Moscow (Russian Federation)

    2018-02-15

    A D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ is studied. We assume the metrics to be diagonal cosmological ones. For certain fine-tuned Λ, we find a class of solutions with exponential time dependence of two scale factors, governed by two Hubble-like parameters H > 0 and h, corresponding to factor spaces of dimensions 3 and l > 2, respectively and D = 1 + 3 + l. The fine-tuned Λ = Λ(x, l, α) depends upon the ratio h/H = x, l and the ratio α = α{sub 2}/α{sub 1} of two constants (α{sub 2} and α{sub 1}) of the model. For fixed Λ, α and l > 2 the equation Λ(x, l, α) = Λ is equivalent to a polynomial equation of either fourth or third order and may be solved in radicals (the example l = 3 is presented). For certain restrictions on x we prove the stability of the solutions in a class of cosmological solutions with diagonal metrics. A subclass of solutions with small enough variation of the effective gravitational constant G is considered. It is shown that all solutions from this subclass are stable. (orig.)

  11. The evolution of stellar exponential discs

    NARCIS (Netherlands)

    Ferguson, AMN; Clarke, CJ

    2001-01-01

    Models of disc galaxies which invoke viscosity-driven radial flows have long been known to provide a natural explanation for the origin of stellar exponential discs, under the assumption that the star formation and viscous time-scales are comparable. We present models which invoke simultaneous star

  12. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
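    For the two-parameter exponential distribution f(x) = (1/σ)·exp(-(x-μ)/σ), x ≥ μ, the maximum likelihood estimators have simple closed forms: μ̂ = min(xᵢ) and σ̂ = x̄ - min(xᵢ). A minimal sketch of this one estimator (of the seven compared); the sample size and true parameter values are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-parameter exponential: f(x) = (1/scale) * exp(-(x - loc) / scale), x >= loc.
loc, scale = 2.0, 1.5                      # arbitrary true values for the demo
x = loc + rng.exponential(scale, size=100_000)

# Maximum likelihood estimators (one of the seven methods the paper compares):
# loc_hat = min(x) and scale_hat = mean(x) - min(x).
loc_hat = x.min()
scale_hat = x.mean() - loc_hat

# Repeating this over many simulated samples and averaging (estimate - true)**2
# gives the mean square error used to rank the estimation methods.
print(loc_hat, scale_hat)
```

    Note that μ̂ = min(xᵢ) is biased upward by σ/n on average, which is why modified moment and modified maximum likelihood variants are also considered.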

  13. Standard model treatment of the radiative corrections to the neutron β-decay

    International Nuclear Information System (INIS)

    Bunatyan, G.G.

    2003-01-01

    Starting with the basic Lagrangian of the Standard Model, the radiative corrections to the neutron β-decay are obtained. The electroweak interactions are consistently taken into consideration according to the Weinberg-Salam theory. The effect of the strong quark-quark interactions on the neutron β-decay is parametrized by introducing the nucleon electromagnetic form factors and the weak nucleon transition current specified by the form factors gV, gA, ... The radiative corrections to the total decay probability W and to the asymmetry coefficient of the momentum distribution A are obtained to constitute δW ∼ 8.7%, δA ∼ -2%. The contribution to the radiative corrections due to allowance for the nucleon form factors and the nucleon excited states amounts to a few per cent of the whole value of the radiative corrections. The ambiguity in the description of the nucleon compositeness is surely what causes the uncertainties ∼ 0.1% in the evaluation of the neutron β-decay characteristics. For now, this puts bounds on the precision attainable in obtaining the element Vud of the CKM matrix and the gV, gA, ... values from experimental data processing.

  14. Sub-exponential spin-boson decoherence in a finite bath

    International Nuclear Information System (INIS)

    Wong, V.; Gruebele, M.

    2002-01-01

    We investigate the decoherence of a two-level system coupled to harmonic baths of 4-21 degrees of freedom, to baths with internal anharmonic couplings, and to baths with an additional 'solvent shell' (modes coupled to other bath modes, but not to the system). The discrete spectral densities are chosen to mimic the highly fluctuating spectral densities computed for real systems such as proteins. System decoherence is computed by exact quantum dynamics. With realistic parameter choices (finite temperature, reasonably large couplings), sub-exponential decoherence of the two-level system is observed. Empirically, the time dependence of decoherence can be fitted by power laws with small exponents. Intrabath anharmonic couplings are more effective at smoothing the spectral density and restoring exponential dynamics than additional bath modes or solvent shells. We conclude that at high temperature, the most important physical basis for exponential decays is anharmonicity of those few bath modes interacting most strongly with the system, not a large number of oscillators interacting with the system. We relate the current numerical simulations to models of anharmonically coupled oscillators, which also predict power-law dynamics. The potential utility of power-law decays in quantum computation and condensed-phase coherent control is also discussed.

  15. Transverse exponential stability and applications

    NARCIS (Netherlands)

    Andrieu, Vincent; Jayawardhana, Bayu; Praly, Laurent

    2016-01-01

    We investigate how the following properties are related to each other: i) A manifold is “transversally” exponentially stable; ii) The “transverse” linearization along any solution in the manifold is exponentially stable; iii) There exists a field of positive definite quadratic forms whose

  16. Higgs production and decay in models of a warped extra dimension with a bulk Higgs

    International Nuclear Information System (INIS)

    Archer, Paul R.; Carena, Marcela; Carmona, Adrian; Neubert, Matthias

    2015-01-01

    Warped extra-dimension models in which the Higgs boson is allowed to propagate in the bulk of a compact AdS5 space are conjectured to be dual to models featuring a partially composite Higgs boson. They offer a framework with which to investigate the implications of changing the scaling dimension of the Higgs operator, which can be used to reduce the constraints from electroweak precision data. In the context of such models, we calculate the cross section for Higgs production in gluon fusion and the H → γγ decay rate and show that they are finite (at one-loop order) as a consequence of gauge invariance. The extended scalar sector comprising the Kaluza-Klein excitations of the Standard Model scalars is constructed in detail. The largest effects are due to virtual KK fermions, whose contributions to the cross section and decay rate introduce a quadratic sensitivity to the maximum allowed value y* of the random complex entries of the 5D anarchic Yukawa matrices. We find an enhancement of the gluon-fusion cross section and a reduction of the H → γγ rate as well as of the tree-level Higgs couplings to fermions and electroweak gauge bosons. As a result, we perform a detailed study of the correlated signal strengths for different production mechanisms and decay channels as functions of y*, the mass scale of Kaluza-Klein resonances and the scaling dimension of the composite Higgs operator.

  17. The impact of accelerating faster than exponential population growth on genetic variation.

    Science.gov (United States)

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
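    One simple way to realize faster-than-exponential growth is to let the growth rate itself increase through time. The sketch below uses a hypothetical parametrization, r(t) = r0·exp(a·t), not the paper's coalescent-time models, to illustrate the claim that for a fixed initial growth rate, accelerating growth reaches a much larger current population size.

```python
import numpy as np

# Compare plain exponential growth with a simple faster-than-exponential model
# in which the growth rate itself increases through time. The parametrization
# r(t) = r0 * exp(a * t) and all parameter values are illustrative assumptions.
r0, a, N0 = 0.02, 0.02, 1e4   # shared initial growth rate r(0) = r0

def n_exponential(t):
    # N(t) = N0 * exp(r0 * t)
    return N0 * np.exp(r0 * t)

def n_accelerating(t):
    # rate r(s) = r0 * exp(a*s)  =>  N(t) = N0 * exp(r0 * (exp(a*t) - 1) / a)
    return N0 * np.exp(r0 * (np.exp(a * t) - 1.0) / a)

# With the same initial rate, accelerating growth reaches a far larger
# current population size.
print(n_exponential(100.0), n_accelerating(100.0))
```

    Conversely, conditioning on the same current size forces the accelerating model to have been smaller for most of the past, which is the intuition behind the reduced overall variation reported above.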

  18. Possible stretched exponential parametrization for humidity absorption in polymers.

    Science.gov (United States)

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
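    A stretched-exponential (Kohlrausch-type) uptake curve of the kind the paper studies can be fitted with standard nonlinear least squares. The functional form and parameter values below are illustrative assumptions on synthetic data, not those fitted to the polymer measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential (Kohlrausch-type) humidity uptake: absorbed mass
# approaches saturation as m(t) = m_inf * (1 - exp(-(t / tau)**beta)).
# The functional form and parameter values are illustrative assumptions.
def uptake(t, m_inf, tau, beta):
    return m_inf * (1.0 - np.exp(-(t / tau) ** beta))

t = np.linspace(0.1, 50.0, 60)            # start at t > 0: 0**beta has no gradient
data = uptake(t, 2.0, 8.0, 0.6)           # synthetic, noiseless "measurements"

popt, _ = curve_fit(uptake, t, data, p0=(1.5, 5.0, 0.8),
                    bounds=(0.0, [5.0, 50.0, 1.0]))
print(popt)  # approximately (2.0, 8.0, 0.6)
```

    The exponent β < 1 is what distinguishes the stretched form from simple first-order absorption; β = 1 recovers the ordinary exponential approach to saturation.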

  19. Leptonic decay of light vector mesons in an independent quark model

    International Nuclear Information System (INIS)

    Barik, N.; Dash, P.C.; Panda, A.R.

    1993-01-01

    Leptonic decay widths of light vector mesons are calculated in a framework based on the independent quark model with a scalar-vector harmonic potential. Assuming a strong correlation to exist between the quark-antiquark momenta inside the meson, so as to make their total momentum identically zero in the center-of-mass frame of the meson, we extract the quark and antiquark momentum distribution amplitudes from the bound quark eigenmode. Using the model parameters determined from earlier studies, we arrive at the leptonic decay widths of (ρ,ω,φ) as (6.26 keV, 0.67 keV, 1.58 keV) which are in very good agreement with the respective experimental data (6.77±0.32 keV, 0.6±0.02 keV, 1.37±0.05 keV)

  20. Analytic model of the radiation-dominated decay of a compact toroid

    International Nuclear Information System (INIS)

    Auerbach, S.P.

    1981-01-01

    The coaxial-gun, compact-torus experiments at LLNL and LASNL are believed to be radiation-dominated, in the sense that most or all of the input energy is lost by impurity radiation. This paper presents a simple analytic model of the radiation-dominated decay of a compact torus, and demonstrates that several striking features of the experiment (finite lifetime, linear current decay, insensitivity of the lifetime to density or stored magnetic energy) may also be explained by the hypothesis that impurity radiation dominates the energy loss. The model incorporates the essential features of the more elaborate 1 1/2-D simulations of Shumaker et al., yet is simple enough to be solved exactly. Based on the analytic results, a simple criterion is given for the maximum tolerable impurity density

  1. Explaining the Higgs decays at the LHC with an extended electroweak model

    International Nuclear Information System (INIS)

    Alves, Alexandre; Ramirez Barreto, E.; Dias, A.G.; Pires, S.C.A. de; Rodrigues da Silva, P.S.; Queiroz, Farinaldo S.

    2013-01-01

    We show that the observed enhancement in the diphoton decays of the recently discovered new boson at the LHC, which we assume to be a Higgs boson, can be naturally explained by a new doublet of charged vector bosons from extended electroweak models with SU(3)C x SU(3)L x U(1)X symmetry. These models are also rather economical in explaining the measured signal strengths, within the current experimental errors, demanding fewer assumptions and less parameter tuning. Our results show good agreement between the theoretical expected sensitivity to a 125-126 GeV Higgs boson and the experimental significance observed in the diphoton channel at the 8 TeV LHC. Effects of an invisible decay channel for the Higgs boson are also taken into account, in order to anticipate a possible confirmation of deficits in the branching ratios into ZZ*, WW*, bottom quarks, and tau leptons. (orig.)

  2. Simulation of radon short lived decay daughters' inhalation using the lung compartmental model

    International Nuclear Information System (INIS)

    Tomulescu, Vlad C.

    2002-01-01

    Radon and its short-lived decay daughters are the main natural source of radiation exposure for the population. Radon gas, released from soil, water or construction materials, produces by radioactive decay the following solid daughters: Po-218, Bi-214, Pb-214, and Po-214, which can attach to aerosols and consequently penetrate the organism by inhalation. The human respiratory tract can be approximated with the aid of a compartment model that takes into account the different anatomical structures exposed to contamination and irradiation, as well as the respective physiological processes. This model is associated with a mathematical equation system that describes the behavior of the radioactive material inside the body. The results represent the dose equivalent in different organs and tissues, as a function of the subject and the activity performed in the contaminating environment. (author)

  3. Searching for extensions to the standard model in rare kaon decays

    International Nuclear Information System (INIS)

    Sanders, G.H.

    1989-01-01

    Small effects that are beyond the current standard models of physics are often signatures for new physics, revealing fields and mass scales far removed from contemporary experimental capabilities. This perspective motivates sensitive searches for rare decays of the kaon. The current status of these searches is reviewed, new results are presented, and progress in the near future is discussed. Opportunities for exciting physics research at a hadron facility are noted. 5 refs., 8 figs., 1 tab

  4. Analysis of tooth decay data in Japan using asymmetric statistical models

    OpenAIRE

    Yamamoto, Kouji; Tomizawa, Sadao

    2012-01-01

    Kouji Yamamoto,1 Sadao Tomizawa2; 1Department of Medical Innovation, Osaka University Hospital, Osaka; 2Department of Information Sciences, Faculty of Science and Technology, Tokyo University of Science, Noda City, Chiba, Japan. Background: The aim of the present paper was to develop two new asymmetry probability models to analyze data for tooth decay from 363 women and 349 men aged 18–39 years who visited a dental clinic in Sapporo City, Japan, from 2001 to 2005. Methods: We analyzed th...

  5. Neutrinoless double beta decay in an SU(3)L x U(1)N model

    International Nuclear Information System (INIS)

    Pleitez, V.; Tonasse, M.D.

    1993-01-01

    A model for the electroweak interactions with SU(3)L x U(1)N gauge symmetry is considered. It is shown that it is the conservation of F = L + B which forbids massive neutrinos and the neutrinoless double beta decay, (ββ)0ν. Explicit and spontaneous breaking of F imply that the neutrinos have an arbitrary mass and that (ββ)0ν proceeds also with some contributions that do not depend explicitly on the neutrino mass. (author)

  6. Microwave background anisotropy and decaying-particle models for a flat universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Silk, J.

    1985-01-01

    The fine-scale anisotropy of the cosmic microwave background radiation, induced by primordial scale-invariant adiabatic density fluctuations, has been studied in flat cosmological models dominated by relativistic particles from the recent decay of a massive relic-particle species. We find that, if the relic-particle species consists of massive, unstable neutrinos, there is appreciable, and probably excessive, fine-scale anisotropy in the cosmic microwave background

  7. Semileptonic and radiative decays of the Bc meson in the light-front quark model

    International Nuclear Information System (INIS)

    Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2009-01-01

    We investigate the exclusive semileptonic Bc → (D, ηc, B, Bs)lνl and ηb → Bc lνl (l = e, μ, τ) decays using the light-front quark model constrained by the variational principle for the QCD-motivated effective Hamiltonian. The form factors f+(q2) and f-(q2) are obtained from the analytic continuation method in the q+ = 0 frame. While the form factor f+(q2) is free from the zero mode, the form factor f-(q2) is not free from the zero mode in the q+ = 0 frame. We quantify the zero-mode contributions to f-(q2) for various semileptonic Bc decays. Using our effective method to relate the non-wave-function vertex to the light-front valence wave function, we incorporate the zero-mode contribution as a convolution of the zero-mode operator with the initial and final state wave functions. Our results are then compared to the available experimental data and the results from other theoretical approaches. Since the prediction of the magnetic dipole Bc* → Bcγ decay turns out to be very sensitive to the mass difference between the Bc* and Bc mesons, the decay width Γ(Bc* → Bcγ) may help in determining the mass of Bc* experimentally. Furthermore, we compare the results from the harmonic oscillator potential and the linear potential and identify the decay processes that are sensitive to the choice of confining potential. From the future experimental data on these sensitive processes, one may obtain more realistic information on the potential between the quark and antiquark in the heavy meson system.

  8. The Matrix exponential, Dynamic Systems and Control

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    The matrix exponential can be found in various connections in analysis and control of dynamic systems. In this short note we are going to list a few examples. The matrix exponential usually pops up in connection with the sampling process, whether in a deterministic or a stochastic setting...... or as a tool for determining a Gramian matrix. This note is intended to be used in connection with the teaching of the course in Stochastic Adaptive Control (02421) given at Informatics and Mathematical Modelling (IMM), The Technical University of Denmark. This work is a result of a study of the literature....
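    One of the connections the note refers to, zero-order-hold sampling of a continuous-time linear system, can be sketched as follows. The example matrices are arbitrary; the block-matrix construction is a standard trick for obtaining both discretized matrices from a single matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Zero-order-hold sampling of a continuous-time linear system
#   dx/dt = A x + B u   ->   x[k+1] = Ad x[k] + Bd u[k],
# with Ad = expm(A*Ts) and Bd = (integral_0^Ts expm(A*s) ds) B.
# The block-matrix trick recovers both from a single expm call:
#   expm([[A, B], [0, 0]] * Ts) = [[Ad, Bd], [0, I]].
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # arbitrary stable example system
B = np.array([[0.0],
              [1.0]])
Ts = 0.1                              # sampling period

n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Md = expm(M * Ts)
Ad, Bd = Md[:n, :n], Md[:n, n:]
print(Ad)
print(Bd)
```

    For invertible A the same Bd can be written in closed form as A⁻¹(Ad - I)B, but the block-matrix form also works when A is singular.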

  9. Exponential Operators, Dobinski Relations and Summability

    International Nuclear Information System (INIS)

    Blasiak, P; Gawron, A; Horzela, A; Penson, K A; Solomon, A I

    2006-01-01

    We investigate properties of exponential operators preserving the particle number, using combinatorial methods developed in order to solve the boson normal ordering problem. In particular, we apply generalized Dobinski relations and methods of multivariate Bell polynomials which enable us to understand the meaning of perturbation-like expansions of exponential operators. Such expansions, obtained as formal power series, are everywhere divergent but the Padé summation method is shown to give results which agree very well with exact solutions obtained for simplified quantum models of one-mode bosonic systems.

  10. Gravitino and scalar {tau}-lepton decays in supersymmetric models with broken R-parity

    Energy Technology Data Exchange (ETDEWEB)

    Hajer, Jan

    2010-06-15

    Mildly broken R-parity is known to provide a solution to the cosmological gravitino problem in supergravity extensions of the Standard Model. In this work we consider new effects occurring in the R-parity breaking Minimal Supersymmetric Standard Model including right-handed neutrino superfields. We calculate the most general vacuum expectation values of neutral scalar fields including left- and right-handed scalar neutrinos. Additionally, we derive the corresponding mass mixing matrices of the scalar sector. We recalculate the neutrino mass generation mechanisms due to right-handed neutrinos as well as because of R-parity breaking. Furthermore, we obtain a so far unknown formula for the neutrino masses for the case where both mechanisms are effective. We then constrain the couplings to bilinear R-parity violating couplings in order to accommodate R-parity breaking to experimental results. In order to constrain the family structure with a U(1){sub Q} flavor symmetry, we furthermore embed the particle content into an SU(5) Grand Unified Theory. In this model we calculate the signal of decaying gravitino dark matter as well as the dominant decay channel of a likely NLSP, the scalar {tau}-lepton. Comparing the gravitino signal with results of the Fermi Large Area Telescope enables us to find a lower bound on the decay length of scalar {tau}-leptons in collider experiments. (orig.)

  11. Gravitino and scalar τ-lepton decays in supersymmetric models with broken R-parity

    International Nuclear Information System (INIS)

    Hajer, Jan

    2010-01-01

    Mildly broken R-parity is known to provide a solution to the cosmological gravitino problem in supergravity extensions of the Standard Model. In this work we consider new effects occurring in the R-parity breaking Minimal Supersymmetric Standard Model including right-handed neutrino superfields. We calculate the most general vacuum expectation values of neutral scalar fields including left- and right-handed scalar neutrinos. Additionally, we derive the corresponding mass mixing matrices of the scalar sector. We recalculate the neutrino mass generation mechanisms due to right-handed neutrinos as well as because of R-parity breaking. Furthermore, we obtain a so far unknown formula for the neutrino masses for the case where both mechanisms are effective. We then constrain the couplings to bilinear R-parity violating couplings in order to accommodate R-parity breaking to experimental results. In order to constrain the family structure with a U(1)Q flavor symmetry, we furthermore embed the particle content into an SU(5) Grand Unified Theory. In this model we calculate the signal of decaying gravitino dark matter as well as the dominant decay channel of a likely NLSP, the scalar τ-lepton. Comparing the gravitino signal with results of the Fermi Large Area Telescope enables us to find a lower bound on the decay length of scalar τ-leptons in collider experiments. (orig.)

  12. Corrections to the neutrinoless double-β-decay operator in the shell model

    Science.gov (United States)

    Engel, Jonathan; Hagen, Gaute

    2009-06-01

    We use diagrammatic perturbation theory to construct an effective shell-model operator for the neutrinoless double-β decay of Se82. The starting point is the same Bonn-C nucleon-nucleon interaction that is used to generate the Hamiltonian for recent shell-model calculations of double-β decay. After first summing high-energy ladder diagrams that account for short-range correlations and then adding diagrams of low order in the G matrix to account for longer-range correlations, we fold the two-body matrix elements of the resulting effective operator with transition densities from the recent shell-model calculation to obtain the overall nuclear matrix element that governs the decay. Although the high-energy ladder diagrams suppress this matrix element at very short distances as expected, they enhance it at distances between one and two fermis, so that their overall effect is small. The corrections due to longer-range physics are large, but cancel one another so that the fully corrected matrix element is comparable to that produced by the bare operator. This cancellation between large and physically distinct low-order terms indicates the importance of a reliable nonperturbative calculation.

  13. Higgs boson production and decay in little Higgs models with T-parity

    International Nuclear Information System (INIS)

    Chen, C.-R.; Tobe, Kazuhiro; Yuan, C.-P.

    2006-01-01

    We study Higgs boson production and decay in a certain class of little Higgs models with T-parity in which some T-parity partners of the Standard Model (SM) fermions gain their masses through Yukawa-type couplings. We find that the Higgs boson production cross section of a 120 GeV Higgs boson at the CERN LHC via the gg fusion process at one-loop level could be reduced by about 45%, 35% and 20%, as compared to its SM prediction, for a relatively low new particle mass scale f=600, 700 and 1000 GeV, respectively. On the other hand, the weak boson fusion cross section is close to the SM value. Furthermore, the Higgs boson decay branching ratio into the di-photon mode can be enhanced by about 35% in the small Higgs mass region in certain cases, because the total decay width of the Higgs boson in the little Higgs model is always smaller than that in the SM.

  14. Modelling the interactions between Pseudomonas putida and Escherichia coli O157:H7 in fish-burgers: use of the lag-exponential model and of a combined interaction index.

    Science.gov (United States)

    Speranza, B; Bevilacqua, A; Mastromatteo, M; Sinigaglia, M; Corbo, M R

    2010-08-01

    The objective of the current study was to examine the interactions between Pseudomonas putida and Escherichia coli O157:H7 in coculture studies on fish-burgers packed in air and under different modified atmospheres (30 : 40 : 30 O(2) : CO(2) : N(2), 5 : 95 O(2) : CO(2) and 50 : 50 O(2) : CO(2)), throughout storage at 8 degrees C. The lag-exponential model was applied to describe the microbial growth. To give a quantitative measure of the occurring microbial interactions, two simple parameters were developed: the combined interaction index (CII) and the partial interaction index (PII). Under air, the interaction was significant in the exponential growth phase (CII, 1.72), whereas under the modified atmospheres, the interactions were highly significant in both the exponential and the stationary phases (CII ranged from 0.33 to 1.18). PII values for E. coli O157:H7 were lower than those calculated for Ps. putida. The interactions occurring in the system affected both the E. coli O157:H7 and the pseudomonad subpopulations; the packaging atmosphere proved to be a key element. The article provides some useful information on the interactions occurring between E. coli O157:H7 and Ps. putida on fish-burgers. The proposed index successfully describes the competitive growth of both micro-organisms, also giving a quantitative measure of a qualitative phenomenon.
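    A minimal lag-exponential growth curve of the kind used to describe the counts can be sketched as follows. This is a common three-phase simplification with hypothetical parameters; the study's exact parametrization may differ.

```python
import numpy as np

# A simple three-phase lag-exponential growth curve: the log10 count stays at
# the inoculum level during the lag phase, grows linearly at rate mu during
# the exponential phase, and saturates at the maximum population density.
# All parameter values are hypothetical.
def lag_exponential(t, log_n0=3.0, log_nmax=8.0, lag=6.0, mu=0.4):
    log_n = np.where(t < lag, log_n0, log_n0 + mu * (t - lag))
    return np.minimum(log_n, log_nmax)

t = np.array([0.0, 6.0, 10.0, 30.0])   # hours
print(lag_exponential(t))   # [3.0, 3.0, 4.6, 8.0] log10 CFU/g
```

    Fitting such a curve to each species in monoculture and coculture gives the phase-wise growth parameters from which interaction indices like the CII and PII can be derived as ratios or differences.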

  15. Exponential rate of convergence in current reservoirs

    OpenAIRE

    De Masi, Anna; Presutti, Errico; Tsagkarogiannis, Dimitrios; Vares, Maria Eulalia

    2015-01-01

    In this paper, we consider a family of interacting particle systems on $[-N,N]$ that arises as a natural model for current reservoirs and Fick's law. We study the exponential rate of convergence to the stationary measure, which we prove to be of the order $N^{-2}$.

  16. Decoding β-decay systematics: A global statistical model for β- half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β- mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models and the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
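    As a rough illustration of the regression approach (not the authors' setup: they use Levenberg-Marquardt with Bayesian regularization on nuclear data, whereas this sketch trains a small feed-forward network with plain gradient descent and an L2 penalty on toy data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: a smooth function of two inputs playing the role of
# (Z, N) -> log half-life. Not nuclear data.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]

H = 16                                  # hidden units
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lam, lr = 1e-4, 0.1                     # L2 penalty and learning rate

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

losses = []
for _ in range(3000):
    h, pred = forward(X)
    err = pred - y
    losses.append(np.mean(err**2) + lam * (np.sum(W1**2) + np.sum(W2**2)))
    # backpropagation for the regularized mean-squared-error loss
    g_pred = 2 * err[:, None] / len(X)
    gW2 = h.T @ g_pred + 2 * lam * W2
    gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h**2)    # tanh'(u) = 1 - tanh(u)^2
    gW1 = X.T @ g_h + 2 * lam * W1
    gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

    The L2 penalty plays the role of the regularizer here; Bayesian regularization as in the paper additionally adapts the penalty weight during training.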

  17. Exponentiated Lomax Geometric Distribution: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Amal Soliman Hassan

    2017-09-01

    Full Text Available In this paper, a new four-parameter lifetime distribution, called the exponentiated Lomax geometric (ELG) distribution, is introduced. The new lifetime distribution contains the Lomax geometric and exponentiated Pareto geometric distributions as sub-models. Explicit algebraic formulas for the probability density function and the survival and hazard functions are derived. Various structural properties of the new model are derived, including the quantile function, Rényi entropy, moments, probability weighted moments, order statistics, and Lorenz and Bonferroni curves. The estimation of the model parameters is performed by the maximum likelihood method and inference for a large sample is discussed. The flexibility and potentiality of the new model in comparison with some other distributions are shown via an application to a real data set. We hope that the new model will be an adequate model for applications in various studies.
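    A hedged sketch of the kind of construction involved: an exponentiated Lomax CDF in one common parameterization, compounded with a geometric minimum. The paper's exact ELG parameterization (and whether it compounds a minimum or a maximum) may differ; this only illustrates the mechanics.

```python
def exp_lomax_cdf(x, alpha, beta, theta):
    """Exponentiated Lomax CDF in one common parameterization:
    F(x) = [1 - (1 + beta*x)**(-alpha)]**theta for x >= 0."""
    if x < 0:
        return 0.0
    return (1.0 - (1.0 + beta * x) ** (-alpha)) ** theta

def elg_cdf(x, alpha, beta, theta, p):
    """Geometric-minimum compounding of the exponentiated Lomax survival S(x):
    P(min of a Geometric(p) number of iid lifetimes <= x)
        = 1 - (1 - p) * S(x) / (1 - p * S(x))."""
    s = 1.0 - exp_lomax_cdf(x, alpha, beta, theta)
    return 1.0 - (1.0 - p) * s / (1.0 - p * s)
```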

  18. Detecting the Higgs bosons of supersymmetric models in Z0 decays

    International Nuclear Information System (INIS)

    Barnett, R.M.; Gamberini, G.

    1990-01-01

    We propose a method to detect the associated pair production, at the Z 0 resonance, of the light scalar and pseudoscalar Higgs bosons predicted by the minimal supersymmetric model. The method would be useful to study Higgs boson masses in the range 15-50 GeV. We consider the b anti-b/b anti-b and b anti-b/τ + τ - decay combinations of the Higgs pair. We exploit the angular distributions of the decay products in order to suppress the background and accurately determine the mass of the two Higgs particles. The number of events is small, but the signals are very distinct, and a limited study strongly suggests that the backgrounds will not obscure the signals. (orig.)

  19. Effect of atomic spontaneous decay on entanglement in the generalized Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Hessian, H.A.; Obada, A.-S.F.; Mohamed, A.-B.A.

    2010-01-01

    Some aspects of the irreversible dynamics of a generalized Jaynes-Cummings model are addressed. By working in the dressed-state representation, it is possible to split the dynamics of the entanglement and coherence. The exact solution of the master equation in the case of a high-Q cavity with atomic decay is found. Effects of the atomic spontaneous decay on the temporal evolution of partial entropies of the atom or the field, and of the total entropy as a quantitative measure of entanglement, are elucidated. The degree of entanglement, through the sum of the negative eigenvalues of the partially transposed density matrix and the negative mutual information, has been studied and compared with other measures.

  20. Effect of Friction Model and Tire Maneuvering on Tire-Pavement Contact Stress

    OpenAIRE

    Haichao Zhou; Guolin Wang; Yangmin Ding; Jian Yang; Chen Liang; Jing Fu

    2015-01-01

    This paper aims to simulate the effects of different friction models on tire braking. A truck radial tire (295/80R22.5) was modeled and the model was validated with tire deflection. An exponential decay friction model that considers the effect of sliding velocity on friction coefficients was adopted for analyzing braking performance. The result shows that the exponential decay friction model used for evaluating braking ability meets design requirements of an antilock braking system (ABS). The ti...
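    An exponential decay friction law of the kind described, in the form commonly used in finite-element contact definitions: the coefficient falls from a static value to a kinetic value as slip velocity grows. The specific constants below are assumptions for illustration, not values from the record.

```python
import math

def friction_coefficient(v, mu_s, mu_k, d_c):
    """Exponential-decay friction law:
        mu(v) = mu_k + (mu_s - mu_k) * exp(-d_c * v)
    v: slip velocity; mu_s: static coefficient; mu_k: kinetic coefficient;
    d_c: decay coefficient controlling how fast mu_s relaxes to mu_k."""
    return mu_k + (mu_s - mu_k) * math.exp(-d_c * v)

# At zero slip the law returns mu_s; at large slip it approaches mu_k.
mu_values = [friction_coefficient(v, mu_s=0.9, mu_k=0.7, d_c=0.6)
             for v in (0.0, 1.0, 2.0, 100.0)]
```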

  1. Exponential stability of delayed fuzzy cellular neural networks with diffusion

    International Nuclear Information System (INIS)

    Huang Tingwen

    2007-01-01

    The exponential stability of delayed fuzzy cellular neural networks (FCNN) with diffusion is investigated. Exponential stability, significant for applications of neural networks, is obtained under conditions that are easily verified by a new approach. Earlier results on the exponential stability of FCNN with time-dependent delay, a special case of the model studied in this paper, are improved without using the time-varying term condition: dτ(t)/dt < μ

  2. Electroweak radiative B-decays as a test of the Standard Model and beyond

    International Nuclear Information System (INIS)

    Tayduganov, A.

    2011-10-01

    Recently the radiative B-decay to strange axial-vector mesons, B → K 1 (1270)γ, was observed with a rather large branching ratio. This process is particularly interesting as the subsequent K 1 -decay into its three-body final state allows us to determine the polarization of the photon, which is mostly left(right)-handed for B-bar(B) in the Standard Model while various new physics models predict additional right(left)-handed components. In this thesis, a new method is proposed to determine the polarization, exploiting the full Dalitz plot distribution, which seems to reduce significantly the statistical errors on the polarization parameter λ γ measurement. This polarization measurement requires, however, detailed knowledge of the K 1 → Kππ strong interaction decays, namely, the complex pattern of various partial wave amplitudes into the several possible quasi-two-body channels as well as their relative phases. A number of experiments have been done to extract all this information, while various problems remain in the previous studies. In this thesis, we investigate the details of these problems. As a theoretical tool, we use the 3 P 0 quark-pair-creation model in order to improve our understanding of strong K 1 -decays. Finally we try to estimate some theoretical uncertainties: in particular, the one coming from the uncertainty on the K 1 mixing angle, and the effect of a possible 'off-set' phase in strong decay S-waves. According to our estimations, the systematic errors are found to be of the order of σ λ γ (th) ≤ 20%. On the other hand, we discuss the sensitivity of the future experiments, namely the SuperB factories and LHCb, to λ γ . Naively estimating the annual signal yields, we found the statistical error of the new method to be σ λ γ (stat) ≤ 10%, which turns out to be reduced by a factor 2 with respect to using the simple angular distribution. We also discuss a comparison to the other methods of the polarization measurement using

  3. Results on neutrinoless double beta decay search in GERDA. Background modeling and limit setting

    Energy Technology Data Exchange (ETDEWEB)

    Becerici Schmidt, Neslihan

    2014-07-22

    The search for the neutrinoless double beta decay (0νββ) process is primarily motivated by its potential of revealing the possible Majorana nature of the neutrino, in which the neutrino is identical to its antiparticle. It has also the potential to yield information on the intrinsic properties of neutrinos, if the underlying mechanism is the exchange of a light Majorana neutrino. The Gerda experiment is searching for 0νββ decay of {sup 76}Ge by operating high purity germanium (HPGe) detectors enriched in the isotope {sup 76}Ge (≈ 87%), directly in ultra-pure liquid argon (LAr). The first phase of physics data taking (Phase I) was completed in 2013 and has yielded 21.6 kg.yr of data. A background index of B∼10{sup -2} cts/(keV.kg.yr) at Q{sub ββ}=2039 keV has been achieved. A comprehensive background model of the Phase I energy spectrum is presented as the major topic of this dissertation. Decomposition of the background energy spectrum into the individual contributions from different processes provides many interesting physics results. The specific activity of {sup 39}Ar has been determined. The obtained result, A=(1.15±0.11) Bq/kg, is in good agreement with the values reported in literature. The contribution from {sup 42}K decays in LAr to the background spectrum has yielded a {sup 42}K({sup 42}Ar) specific activity of A=(106.2{sub -19.2}{sup +12.7}) μBq/kg, for which only upper limits exist in literature. The analysis of high energy events induced by α decays in the {sup 226}Ra chain indicated a total {sup 226}Ra activity of (3.0±0.9) μBq and a total initial {sup 210}Po activity of (0.18±0.01) mBq on the p{sup +} surfaces of the enriched semi-coaxial HPGe detectors. The half life of the two-neutrino double beta (2νββ) decay of {sup 76}Ge has been determined as T{sub 1/2}{sup 2ν}=(1.926±0.094).10{sup 21} yr, which is in good agreement with the result that was obtained with lower exposure and has been published by the Gerda collaboration.
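    The quoted 2νββ half-life converts to an expected decay rate per unit exposure via A = λN = ln 2 · N / T½. The isotopic fraction and molar mass below are assumed round numbers for illustration, not values taken from the record.

```python
import math

N_A = 6.02214076e23        # Avogadro's number, 1/mol
T_HALF_YR = 1.926e21       # measured 2nbb half-life of 76Ge, years
ENRICHMENT = 0.87          # assumed 76Ge mass fraction of the enriched detectors
MOLAR_MASS = 75.9e-3       # kg/mol for 76Ge (approximate)

# Number of 76Ge atoms per kg of enriched germanium (treating the
# enrichment as a mass fraction and ignoring the other isotopes' masses).
atoms_per_kg = ENRICHMENT / MOLAR_MASS * N_A

# Expected 2nbb decays per kg.yr of exposure: A = ln 2 * N / T_half
decays_per_kg_yr = math.log(2) / T_HALF_YR * atoms_per_kg
```

    This order-of-magnitude rate (a few thousand decays per kg·yr) is why the 2νββ spectrum dominates the low-background data and can be fitted precisely.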

  4. Results on neutrinoless double beta decay search in GERDA. Background modeling and limit setting

    International Nuclear Information System (INIS)

    Becerici Schmidt, Neslihan

    2014-01-01

    The search for the neutrinoless double beta decay (0νββ) process is primarily motivated by its potential of revealing the possible Majorana nature of the neutrino, in which the neutrino is identical to its antiparticle. It has also the potential to yield information on the intrinsic properties of neutrinos, if the underlying mechanism is the exchange of a light Majorana neutrino. The Gerda experiment is searching for 0νββ decay of 76 Ge by operating high purity germanium (HPGe) detectors enriched in the isotope 76 Ge (≈ 87%), directly in ultra-pure liquid argon (LAr). The first phase of physics data taking (Phase I) was completed in 2013 and has yielded 21.6 kg.yr of data. A background index of B∼10 -2 cts/(keV.kg.yr) at Q ββ =2039 keV has been achieved. A comprehensive background model of the Phase I energy spectrum is presented as the major topic of this dissertation. Decomposition of the background energy spectrum into the individual contributions from different processes provides many interesting physics results. The specific activity of 39 Ar has been determined. The obtained result, A=(1.15±0.11) Bq/kg, is in good agreement with the values reported in literature. The contribution from 42 K decays in LAr to the background spectrum has yielded a 42 K( 42 Ar) specific activity of A=(106.2 -19.2 +12.7 ) μBq/kg, for which only upper limits exist in literature. The analysis of high energy events induced by α decays in the 226 Ra chain indicated a total 226 Ra activity of (3.0±0.9) μBq and a total initial 210 Po activity of (0.18±0.01) mBq on the p + surfaces of the enriched semi-coaxial HPGe detectors. The half life of the two-neutrino double beta (2νββ) decay of 76 Ge has been determined as T 1/2 2ν =(1.926±0.094).10 21 yr, which is in good agreement with the result that was obtained with lower exposure and has been published by the Gerda collaboration. According to the model, the background in Q ββ ±5 keV window is resulting from close

  5. A final state interaction model for K and eta decay into three pions

    International Nuclear Information System (INIS)

    Angus, A.G.

    1973-07-01

    The Khuri-Treiman model is adapted in a relativistic formalism, with the electromagnetic mass differences of the pions in the final state taken into account, to produce new predictions for the relative decay rates and the slope parameters of the four reactions K→3π and the two reactions eta→3π. The pion-pion interaction is investigated in terms of the N/D method, as well as the normal pure pole approximations for the N functions. The Khuri-Treiman equations are solved for the best solutions from both the pure pole and the mixed pole and cut models. (author)

  6. A calculation of the ZH → γ H decay in the Littlest Higgs Model

    International Nuclear Information System (INIS)

    Aranda, J I; Ramirez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I

    2016-01-01

    New heavy neutral gauge bosons are predicted in many extensions of the Standard Model; these new bosons are associated with additional gauge symmetries. We present a preliminary calculation of the branching ratio for the decay of heavy neutral gauge bosons ( Z h ) into γ H in the most popular version of the Little Higgs models. The calculation involves the main contributions at the one-loop level, induced by fermions, scalars and gauge bosons. Preliminary results show a very suppressed branching ratio, of the order of 10 -6 . (paper)

  7. Analysis of Λb→Λμ+μ- Decay in Scalar Leptoquark Model

    Directory of Open Access Journals (Sweden)

    Shuai-Wei Wang

    2016-01-01

    Full Text Available We analyze the baryonic semileptonic decay Λb→Λμ+μ- in the scalar leptoquark models with X(3,2,7/6) and X(3,2,1/6) states, respectively. We also discuss the effects of these two NP models on some physical observables. For some measured observables, like the differential decay width, the longitudinal polarization of the dilepton system, the lepton-side forward-backward asymmetry, and the baryon-side forward-backward asymmetry, we find that the SM predictions are consistent with the current data in most q2 ranges, where the predictions of these two NP models also remain consistent with the current data within 1σ. However, in some q2 ranges, the SM predictions fail to meet the current data, while the contributions of these two NP models can meet them or come close to them. For the double-lepton polarization asymmetries, PLT, PTL, PNN, and PTT are sensitive to the scalar leptoquark model X(3,2,7/6) but not to X(3,2,1/6). However, PLN, PNL, PTN, and PNT are not sensitive to these two NP models.

  8. Rare Λb→Λ l+l- and Λb→Λ γ decays in the relativistic quark model

    Science.gov (United States)

    Faustov, R. N.; Galkin, V. O.

    2017-09-01

    Rare Λb→Λ l+l- and Λb→Λ γ decays are investigated in the relativistic quark model based on the quark-diquark picture of baryons. The decay form factors are calculated accounting for all relativistic effects, including relativistic transformations of baryon wave functions from rest to a moving reference frame and the contribution of the intermediate negative-energy states. The momentum-transfer-squared dependence of the form factors is explicitly determined in the whole accessible kinematical range. The calculated decay branching fractions, various forward-backward asymmetries for the rare decay Λb→Λ μ+μ-, are found to be consistent with recent detailed measurements by the LHCb Collaboration. Predictions for the Λb→Λ τ+τ- decay observables are given.

  9. Strange mass corrections to hyperonic semi-leptonic decays in statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyay, A.; Batra, M. [Thapar University, School of Physics and Material Science, Patiala (India)

    2013-12-15

    We study the spin distribution and weak decay coupling constant ratios for the strange baryon octet with SU(3) breaking effects. The baryon is taken as an ensemble of quark-gluon Fock states in the sea, with three valence quarks with definite spin, color and flavor quantum numbers. We apply the statistical model to calculate the probability of each Fock state, to analyze the impact of SU(3) breaking in the weak decays. The symmetry breaking effects are studied in terms of a parameter ''r'' whose best-fit value is obtained from the experimental data of semi-leptonic weak decay coupling constant ratios. We suggest the dominant contribution from H{sub 1}G{sub 8} (sea with spin one and color octet), where symmetry breaking corrections lead to deviations of the axial-vector matrix element ratio F/D from experimental values by 17%. We conclude that symmetry breaking also significantly affects the polarization of quarks in strange baryons. (orig.)

  10. Symmetrized exponential oscillator

    Czech Academy of Sciences Publication Activity Database

    Znojil, Miloslav

    2016-01-01

    Roč. 31, č. 34 (2016), č. článku 1650195. ISSN 0217-7323 R&D Projects: GA ČR GA16-22945S Institutional support: RVO:61389005 Keywords : quantum bound states * exactly solvable models * Bessel special function * transcendental secular equation * numerical precision Subject RIV: BE - Theoretical Physics Impact factor: 1.165, year: 2016

  11. The Inclusion of Arbitrary Load Histories in the Strength Decay Model for Stress Rupture

    Science.gov (United States)

    Reeder, James R.

    2014-01-01

    Stress rupture is a failure mechanism where failures can occur after a period of time, even though the material has seen no increase in load. Carbon/epoxy composite materials have demonstrated the stress rupture failure mechanism. In a previous work, a model was proposed for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures based on strength degradation. However, the original model was limited to constant load periods (holds) at constant load. The model was expanded in this paper to address arbitrary loading histories, and specifically the inclusion of ramp loadings up to holds and back down. The broadening of the model allows failures on loading to be treated as any other failure that may occur during testing instead of having to be treated as a special case. The inclusion of ramps can also influence the length of the "safe period" following proof loading that was previously predicted by the model. No stress rupture failures are predicted in a safe period because time is required for strength to decay from above the proof level to the lower level of loading. Although the model can predict failures during the ramp periods, no closed-form solution for the failure times could be derived. Therefore, two solution techniques were proposed. Finally, the model was used to design an experiment that could detect the difference between the strength decay model and a commonly used model for stress rupture. Although these types of models are necessary to help guide experiments for stress rupture, only experimental evidence will determine how well the model may predict actual material response. If the model can be shown to be accurate, current proof loading requirements may result in predicted safe periods as long as 10^13 years. COPV design requirements for stress rupture may then be relaxed, allowing more efficient designs, while still maintaining an acceptable level of safety.

  12. A simple proof of exponential decay of subcritical contact processes

    Czech Academy of Sciences Publication Activity Database

    Swart, Jan M.

    2018-01-01

    Roč. 170, 1-2 (2018), s. 1-9 ISSN 0178-8051 R&D Projects: GA ČR(CZ) GA16-15238S Institutional support: RVO:67985556 Keywords : subcritical contact process * sharpness of the phase transition * eigenmeasure Subject RIV: BA - General Mathematics Impact factor: 1.895, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0462694.pdf

  13. From superWIMPs to decaying dark matter. Models, bounds and indirect searches

    Energy Technology Data Exchange (ETDEWEB)

    Weniger, Christoph

    2010-06-15

    Despite numerous observational and theoretical efforts, the particle nature of dark matter remains unknown. Beyond the paradigmatic WIMPs (Weakly Interacting Massive Particles), many theoretically well-motivated models exist in which dark matter interacts with Standard Model particles much more weakly than electroweakly. In this case new phenomena occur, like the decay of dark matter or interference with the standard cosmology of the early Universe. In this thesis we study some of these aspects of superweakly coupled dark matter in general, and in the special case of hidden U(1){sub X} gauginos that kinetically mix with hypercharge. There, we will assume that the gauge group remains unbroken, similar to the Standard Model U(1){sub em}. We study different kinds of cosmological bounds, including bounds from thermal overproduction, from primordial nucleosynthesis and from structure formation. Furthermore, we study the possible cosmic-ray signatures predicted by this scenario, with emphasis on the electron and positron channel in light of the recent observations by PAMELA and Fermi LAT. Moreover we study the cosmic-ray signatures of decaying dark matter independently of concrete particle-physics models. In particular we analyze to what extent the rise in the positron fraction above 10 GeV, as observed by PAMELA, can be explained by dark matter decay. Lastly, we concentrate on related predictions for gamma-ray observations with the Fermi LAT, and propose to use the dipole-like anisotropy of the prompt gamma-ray dark matter signal to distinguish exotic dark matter contributions from the extragalactic gamma-ray background. (orig.)

  14. Rare decays of the Z and the standard model, 4th generation, and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Weiler, T.J.

    1989-01-01

    Several issues in rare decays of the Z are addressed. The rate for flavor-changing Z decay grows as the fourth power of the fermion masses internal to the quantum loop, and so offers a window to the existence of ultraheavy (m > M{sub W}) fermions. In the standard model, with three generations, BR(Z {yields} bs) < 10{sup -7} and BR(Z{yields}tc)<10{sup -13}. With four generations, BR(Z {yields} bb{sub 4}) may be as large as 10{sup -5} if m{sub b4} < M{sub Z}; and similarly for BR(Z {yields} N{sub 4}v), where N{sub 4} is the possibly heavy fourth generation neutrino. In supersymmetric and other two Higgs doublet models, BR(Z {yields} tc) may be as large as 5 {times} 10{sup -6} in the three generation scheme. With minimal supersymmetry, the reaction Z {yields} H{gamma} is guaranteed to go, with a parameter-dependent branching ratio of 10{sup -6 {plus minus} 3}. With mirror fermions or exotic E{sub 6} fermions, the branching ratios for Z {yields} ct (70 GeV), Z {yields} {mu}{tau}, and Z {yields} bb{sub 4} (70 GeV) are typically 10{sup -4}, 10{sup -4}, and 10{sup -3} respectively, clearly measurable at LEP. Depending on unknown quark masses, the Z may mix with vector (b{sub 4}{bar b}{sub 4}) and the W may mix with vector (t{bar b}) or (t{bar s}). CP violating asymmetries in flavor-changing Z decay are immeasurably small in the standard model, but may be large in supersymmetric and other nonstandard models. 28 refs.

  15. From superWIMPs to decaying dark matter. Models, bounds and indirect searches

    International Nuclear Information System (INIS)

    Weniger, Christoph

    2010-06-01

    Despite numerous observational and theoretical efforts, the particle nature of dark matter remains unknown. Beyond the paradigmatic WIMPs (Weakly Interacting Massive Particles), many theoretically well-motivated models exist in which dark matter interacts with Standard Model particles much more weakly than electroweakly. In this case new phenomena occur, like the decay of dark matter or interference with the standard cosmology of the early Universe. In this thesis we study some of these aspects of superweakly coupled dark matter in general, and in the special case of hidden U(1) X gauginos that kinetically mix with hypercharge. There, we will assume that the gauge group remains unbroken, similar to the Standard Model U(1) em . We study different kinds of cosmological bounds, including bounds from thermal overproduction, from primordial nucleosynthesis and from structure formation. Furthermore, we study the possible cosmic-ray signatures predicted by this scenario, with emphasis on the electron and positron channel in light of the recent observations by PAMELA and Fermi LAT. Moreover we study the cosmic-ray signatures of decaying dark matter independently of concrete particle-physics models. In particular we analyze to what extent the rise in the positron fraction above 10 GeV, as observed by PAMELA, can be explained by dark matter decay. Lastly, we concentrate on related predictions for gamma-ray observations with the Fermi LAT, and propose to use the dipole-like anisotropy of the prompt gamma-ray dark matter signal to distinguish exotic dark matter contributions from the extragalactic gamma-ray background. (orig.)

  16. Rare decays of the Z and the standard model, 4th generation, and beyond

    International Nuclear Information System (INIS)

    Weiler, T.J.

    1989-01-01

    Several issues in rare decays of the Z are addressed. The rate for flavor-changing Z decay grows as the fourth power of the fermion masses internal to the quantum loop, and so offers a window to the existence of ultraheavy (m > M W ) fermions. In the standard model, with three generations, BR(Z → bs) < 10 -7 and BR(Z→tc) < 10 -13 . With four generations, BR(Z → bb 4 ) may be as large as 10 -5 if m b4 < M Z ; and similarly for BR(Z → N 4 v), where N 4 is the possibly heavy fourth generation neutrino. In supersymmetric and other two Higgs doublet models, BR(Z → tc) may be as large as 5 x 10 -6 in the three generation scheme. With minimal supersymmetry, the reaction Z → Hγ is guaranteed to go, with a parameter-dependent branching ratio of 10 -6 ± 3 . With mirror fermions or exotic E 6 fermions, the branching ratios for Z → ct (70 GeV), Z → μτ, and Z → bb 4 (70 GeV) are typically 10 -4 , 10 -4 , and 10 -3 respectively, clearly measurable at LEP. Depending on unknown quark masses, the Z may mix with vector (b 4 bar b 4 ) and the W may mix with vector (t bar b) or (t bar s). CP violating asymmetries in flavor-changing Z decay are immeasurably small in the standard model, but may be large in supersymmetric and other nonstandard models. 28 refs

  17. Testing the standard model with precision calculations of semileptonic B-decays

    Energy Technology Data Exchange (ETDEWEB)

    Turczyk, Sascha S.

    2011-01-14

    Measurements in the flavour sector are very important to test the Standard Model, since most of the free parameters are related to flavour physics. We are discussing semileptonic B-meson decays, from which an important parameter, vertical stroke V{sub cb} vertical stroke, is extracted. First we discuss higher-order non-perturbative corrections in inclusive semileptonic decays of B mesons. We identify the relevant hadronic matrix elements up to 1/m{sup 5}{sub b} and estimate them using an approximation scheme. Within this approach the effects on the integrated rate and on kinematic moments are estimated. Similar estimates are presented for B {yields} X{sub s} + {gamma} decays. Furthermore we investigate the role of so-called ''intrinsic-charm'' operators in this decay, which appear first at order 1/m{sup 3}{sub b} in the heavy-quark expansion. We show by explicit calculation that - at scales {mu} {<=} m{sub c} - the contributions from ''intrinsic-charm'' effects can be absorbed into short-distance coefficient functions multiplying, for instance, the Darwin term. Then, the only remnant of ''intrinsic charm'' are logarithms of the form ln(m{sup 2}{sub c}/m{sup 2}{sub b}), which can be resummed by using renormalization-group techniques. As long as the dynamics at the charm quark scale is perturbative, {alpha}{sub s}(m{sub c}) << 1, this implies that no additional non-perturbative matrix elements aside from the Darwin and the spin-orbit term have to be introduced at order 1/m{sup 3}{sub b}. However, ''intrinsic charm'' leads at the next order to terms with inverse powers of the charm mass: 1/m{sup 3}{sub b} x 1/m{sup 2}{sub c}. Parametrically they complement the estimate of the potential impact of 1/m{sup 4}{sub b} contributions, which we will explore. In this context, we draw semiquantitative conclusions for the expected scale of weak annihilation in semileptonic B decays, both for its

  18. Testing the standard model with precision calculations of semileptonic B-decays

    International Nuclear Information System (INIS)

    Turczyk, Sascha S.

    2011-01-01

    Measurements in the flavour sector are very important to test the Standard Model, since most of the free parameters are related to flavour physics. We are discussing semileptonic B-meson decays, from which an important parameter, vertical stroke V cb vertical stroke, is extracted. First we discuss higher-order non-perturbative corrections in inclusive semileptonic decays of B mesons. We identify the relevant hadronic matrix elements up to 1/m 5 b and estimate them using an approximation scheme. Within this approach the effects on the integrated rate and on kinematic moments are estimated. Similar estimates are presented for B → X s + γ decays. Furthermore we investigate the role of so-called ''intrinsic-charm'' operators in this decay, which appear first at order 1/m 3 b in the heavy-quark expansion. We show by explicit calculation that - at scales μ ≤ m c - the contributions from ''intrinsic-charm'' effects can be absorbed into short-distance coefficient functions multiplying, for instance, the Darwin term. Then, the only remnant of ''intrinsic charm'' are logarithms of the form ln(m 2 c /m 2 b ), which can be resummed by using renormalization-group techniques. As long as the dynamics at the charm quark scale is perturbative, α s (m c ) << 1, this implies that no additional non-perturbative matrix elements aside from the Darwin and the spin-orbit term have to be introduced at order 1/m 3 b . However, ''intrinsic charm'' leads at the next order to terms with inverse powers of the charm mass: 1/m 3 b x 1/m 2 c . Parametrically they complement the estimate of the potential impact of 1/m 4 b contributions, which we will explore. In this context, we draw semiquantitative conclusions for the expected scale of weak annihilation in semileptonic B decays, both for its valence and non-valence components. The last part is dedicated to a complementary measurement of vertical stroke V cb vertical stroke from exclusive B → D (*) l anti ν l . Since this determination shows a slight tension with respect to the inclusive one, we investigate whether a non-standard-model contribution may distort the extraction. (orig.)

  19. Exponential Stabilization of Underactuated Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Pettersen, K.Y.

    1996-12-31

    Underactuated vehicles are vehicles with fewer independent control actuators than degrees of freedom to be controlled. Such vehicles may be used in inspection of sub-sea cables, inspection and maintenance of offshore oil drilling platforms, and similar tasks. This doctoral thesis discusses feedback stabilization of underactuated vehicles. The main objective has been to further develop methods from stabilization of nonholonomic systems to arrive at methods that are applicable to underactuated vehicles. A nonlinear model including both dynamics and kinematics is used to describe the vehicles, which may be surface vessels, spacecraft or autonomous underwater vehicles (AUVs). It is shown that for a certain class of underactuated vehicles the stabilization problem is not solvable by linear control theory. A new stability result for a class of homogeneous time-varying systems is derived and shown to be an important tool for developing continuous periodic time-varying feedback laws that stabilize underactuated vehicles without involving cancellation of dynamics. For position and orientation control of a surface vessel without a side thruster, a new continuous periodic feedback law is proposed that does not cancel any dynamics, and that exponentially stabilizes the origin of the underactuated surface vessel. A further issue considered is the stabilization of the attitude of an AUV. Finally, the thesis discusses stabilization of both position and attitude of an underactuated AUV. 55 refs., 28 figs.

  20. On Geodesic Exponential Kernels

    DEFF Research Database (Denmark)

    Feragen, Aasa; Lauze, François; Hauberg, Søren

    2015-01-01

    This extended abstract summarizes work presented at CVPR 2015 [1]. Standard statistics and machine learning tools require input data residing in a Euclidean space. However, many types of data are more faithfully represented in general nonlinear metric spaces or Riemannian manifolds, e.g. shapes, symmetric positive definite matrices, human poses or graphs. The underlying metric space captures domain-specific knowledge, e.g. non-linear constraints, which is available a priori. The intrinsic geodesic metric encodes this knowledge, often leading to improved statistical models.

  1. Weak electric and magnetic form factors for semileptonic baryon decays in an independent-quark model

    International Nuclear Information System (INIS)

    Barik, N.; Dash, B.K.; Das, M.

    1985-01-01

    Weak electric and magnetic form factors for semileptonic baryon decays are calculated in a relativistic quark model based on the Dirac equation with the independent-quark confining potential of the form (1+γ^0)V(r). The values obtained for (g_2/g_1), for various decay modes in a model with V(r) = a'r^2, are roughly of the same order as those predicted in the MIT bag model. However, in a similar model with V(r) = (a^(ν+1) r^ν + V_0), the (g_2/g_1) values agree with the nonrelativistic results of Donoghue and Holstein. Incorporating phenomenologically the effect of nonzero g_2 in the ratio (g_1/f_1), we have estimated the values for (f_2/f_1) for various semileptonic transitions. It is observed that SU(3)-symmetry breaking does not generate significant departures in (f_2/f_1) values from the corresponding Cabibbo values

  2. A model for the transport of radionuclides and their decay products through geological media

    International Nuclear Information System (INIS)

    Burkholder, H.C.; Rosinger, E.L.J.

    1979-09-01

    The one-dimensional transport of radionuclides and their decay products from an underground nuclear waste isolation site through the surrounding geologic media to a surface environment is modeled. An ambiguity in the application of the previously reported mathematical solution for this problem has been clarified. The results of applying the solution described here compare favorably with those of the former solution, but the present solution is computationally more efficient and less subject to numerical errors. This solution is being used by the authors and others to evaluate the sensitivity of potential radioactivity releases into the environment to the characteristics of various nuclear waste isolation systems. (author)

  3. Precise Calculation of Complex Radioactive Decay Chains

    National Research Council Canada - National Science Library

    Harr, Logan J

    2007-01-01

    ...). An application of the exponential moments function is used with a transmutation matrix in the calculation of complex radioactive decay chains to achieve greater precision than can be attained through current methods...
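The transmutation-matrix idea this record mentions can be sketched in a few lines: the coupled decay equations of a chain are written as dN/dt = M N and solved with a matrix exponential. The two-member chain and decay constants below are hypothetical, and the eigendecomposition is just one simple way to compute exp(Mt); this is not the paper's high-precision method.

```python
import numpy as np

# Hypothetical two-member chain A -> B -> (stable), decay constants in 1/s.
lam_a, lam_b = 2.0, 0.5

# Transmutation matrix M for dN/dt = M N, with N = [N_A, N_B]
M = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])

N0 = np.array([1.0, 0.0])  # start with pure parent

# Matrix exponential via eigendecomposition (eigenvalues are distinct here)
t = 1.5
w, V = np.linalg.eig(M)
N = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ N0

# Analytic Bateman solution for comparison
NA = np.exp(-lam_a * t)
NB = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
print(N, (NA, NB))
```

The matrix route generalizes directly to longer chains with branching: each decay path adds one off-diagonal entry to M.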

  4. Test of Colour Reconnection Models using Three-Jet Events in Hadronic Z Decays

    CERN Document Server

    Schael, S; Brunelière, R; De Bonis, I; Décamp, D; Goy, C; Jézéquel, S; Lees, J P; Martin, F; Merle, E; Minard, M N; Pietrzyk, B; Trocmé, B; Bravo, S; Casado, M P; Chmeissani, M; Crespo, J M; Fernández, E; Fernández-Bosman, M; Garrido, L; Martínez, M; Pacheco, A; Ruiz, H; Colaleo, A; Creanza, D; De Filippis, N; De Palma, M; Iaselli, G; Maggi, G; Maggi, M; Nuzzo, S; Ranieri, A; Raso, G; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Tricomi, A; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Abbaneo, D; Barklow, T; Buchmüller, O L; Cattaneo, M; Clerbaux, B; Drevermann, H; Forty, R W; Frank, M; Gianotti, F; Hansen, J B; Harvey, J; Hutchcroft, D E; Janot, P; Jost, B; Kado, M; Mato, P; Moutoussi, A; Ranjard, F; Rolandi, Luigi; Schlatter, W D; Teubert, F; Valassi, A; Videau, I; Badaud, F; Dessagne, S; Falvard, A; Fayolle, D; Gay, P; Jousset, J; Michel, B; Monteil, S; Pallin, D; Pascolo, J M; Perret, P; Hansen, J D; Hansen, J R; Hansen, P H; Kraan, A C; Nilsson, B S; Kyriakis, A; Markou, C; Simopoulou, E; Vayaki, A; Zachariadou, K; Blondel, A; Brient, J C; Machefert, F; Rougé, A; Videau, H L; Ciulli, V; Focardi, E; Parrini, G; Antonelli, A; Antonelli, M; Bencivenni, G; Bossi, F; Capon, G; Cerutti, F; Chiarella, V; Laurelli, P; Mannocchi, G; Murtas, G P; Passalacqua, L; Kennedy, J; Lynch, J G; Negus, P; O'Shea, V; Thompson, A S; Wasserbaech, S; Cavanaugh, R J; Dhamotharan, S; Geweniger, C; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Stenzel, H; Tittel, K; Wunsch, M; Beuselinck, R; Cameron, W; Davies, G; Dornan, P J; Girone, M; Marinelli, N; Nowell, J; Rutherford, S A; Sedgbeer, J K; Thompson, J C; White, R; Ghete, V M; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bouhova-Thacker, E; Bowdery, C K; Clarke, D P; Ellis, G; Finch, A J; Foster, F; Hughes, G; Jones, R W L; Pearson, M R; Robertson, N A; Smizanska, M; van der Aa, O; Delaere, C; Leibenguth, G; Lemaître, V; Blumenschein, U; Hölldorfer, F; Jakobs, K; 
Kayser, F; Müller, A S; Renk, B; Sander, H G; Schmeling, S; Wachsmuth, H W; Zeitnitz, C; Ziegler, T; Bonissent, A; Coyle, P; Curtil, C; Ealet, A; Fouchez, D; Payre, P; Tilquin, A; Ragusa, F; David, A; Dietl, H; Ganis, G; Hüttmann, K; Lütjens, G; Männer, W; Moser, H G; Settles, R; Villegas, M; Wolf, G; Boucrot, J; Callot, O; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacholkowska, A; Serin, L; Veillet, J J; Azzurri, P; Bagliesi, G; Boccali, T; Foà, L; Giammanco, A; Giassi, A; Ligabue, F; Messineo, A; Palla, F; Sanguinetti, G; Sciabà, A; Sguazzoni, G; Spagnolo, P; Tenchini, R; Venturi, A; Verdini, P G; Awunor, O; Blair, G A; Cowan, G; García-Bellido, A; Green, M G; Medcalf, T; Misiejuk, A; Strong, J A; Teixeira-Dias, P; Clifft, R W; Edgecock, T R; Norton, P R; Tomalin, I R; Ward, J J; Bloch-Devaux, B; Boumediene, D E; Colas, P; Fabbro, B; Lançon, E; Lemaire, M C; Locci, E; Pérez, P; Rander, J; Tuchming, B; Vallage, B; Litke, A M; Taylor, G; Booth, C N; Cartwright, S; Combley, F; Hodgson, P N; Lehto, M H; Thompson, L F; Böhrer, A; Brandt, S; Grupen, C; Hess, J; Ngac, A; Prange, G; Borean, C; Giannini, G; He, H; Pütz, J; Rothberg, J E; Armstrong, S R; Berkelman, K; Cranmer, K; Ferguson, D P S; Gao, Y; González, S; Hayes, O J; Hu, H; Jin, S; Kile, J; McNamara, P A; Nielsen, J; Pan, Y B; Von Wimmersperg-Töller, J H; Wiedenmann, W; Wu, J; Wu, S L; Wu, X; Zobernig, G; Dissertori, G

    2006-01-01

    Hadronic Z decays into three jets are used to test QCD models of colour reconnection (CR). A sensitive quantity is the rate of gluon jets with a gap in the particle rapidity distribution and zero jet charge. Gluon jets are identified by either energy-ordering or by tagging two b-jets. The rates predicted by two string-based tunable CR models, one implemented in JETSET (the GAL model), the other in ARIADNE, are too high and disfavoured by the data, whereas the rates from the corresponding non-CR standard versions of these generators are too low. The data can be described by the GAL model assuming a small value for the R_0 parameter in the range 0.01-0.02.

  5. Test of colour reconnection models using three-jet events in hadronic Z decays

    International Nuclear Information System (INIS)

    Schael, S.; Barate, R.; Bruneliere, R.; De Bonis, I.; Decamp, D.; Goy, C.; Jezequel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Trocme, B.; Bravo, S.; Casado, M.P.; Chmeissani, M.; Crespo, J.M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, L.; Martinez, M.; Pacheco, A.; Ruiz, H.; Colaleo, A.; Creanza, D.; De Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Barklow, T.; Buchmueller, O.; Cattaneo, M.; Clerbaux, B.; Drevermann, H.; Forty, R.W.; Frank, M.; Gianotti, F.; Hansen, J.B.; Harvey, J.; Hutchcroft, D.E.; Janot, P.; Jost, B.; Kado, M.; Mato, P.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Teubert, F.; Valassi, A.; Videau, I.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Jousset, J.; Michel, B.; Monteil, S.; Pallin, D.; Pascolo, J.M.; Perret, P.; Hansen, J.D.; Hansen, J.R.; Hansen, P.H.; Kraan, A.C.; Nilsson, B.S.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rouge, A.; Videau, H.; Ciulli, V.; Focardi, E.; Parrini, G.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bossi, F.; Capon, G.; Cerutti, F.; Chiarella, V.; Laurelli, P.; Mannocchi, G.; Murtas, G.P.; Passalacqua, L.; Kennedy, J.; Lynch, J.G.; Negus, P.; O'Shea, V.; Thompson, A.S.; Wasserbaech, S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E.E.; Putzer, A.; Stenzel, H.; Tittel, K.; Wunsch, M.; Beuselinck, R.; Cameron, W.; Davies, G.; Dornan, P.J.; Girone, M.; Marinelli, N.; Nowell, J.; Rutherford, S.A.; Sedgbeer, J.K.; Thompson, J.C.; White, R.; Ghete, V.M.; Girtler, P.; Jussel, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C.K.; Clarke, D.P.; Ellis, G.; Finch, A.J.; Foster, F.; Hughes, 
G.; Jones, R.W.L.; Pearson, M.R.; Robertson, N.A.; Smizanska, M.; van der Aa, O.; Delaere, C.; Leibenguth, G.; Lemaitre, V.; Blumenschein, U.; Hoelldorfer, F.; Jakobs, K.; Kayser, F.; Mueller, A.-S.; Renk, B.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Payre, P.; Tilquin, A.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Huettmann, K.; Luetjens, G.; Maenner, W.; Moser, H.-G.; Settles, R.; Villegas, M.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, P.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Foa, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Sanguinetti, G.; Sciaba, A.; Sguazzoni, G.; Spagnolo, P.; Tenchini, R.; Venturi, A.; Verdini, P.G.; Awunor, O.; Blair, G.A.; Cowan, G.; Garcia-Bellido, A.; Green, M.G.; Medcalf, T.; Misiejuk, A.; Strong, J.A.; Teixeira-Dias, P.; Clifft, R.W.; Edgecock, T.R.; Norton, P.R.; Tomalin, I.R.; Ward, J.J.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lancon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Tuchming, B.; Vallage, B.; Litke, A.M.; Taylor, G.; Booth, C.N.; Cartwright, S.; Combley, F.; Hodgson, P.N.; Lehto, M.; Thompson, L.F.; Boehrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Ngac, A.; Prange, G.; Borean, C.; Giannini, G.; He, H.; Putz, J.; Rothberg, J.; Armstrong, S.R.; Berkelman, K.; Cranmer, K.; Ferguson, D.P.S.; Gao, Y.; Gonzalez, S.; Hayes, O.J.; Hu, H.; Jin, S.; Kile, J.; McNamara III, P.A.; Nielsen, J.; Pan, Y.B.; von Wimmersperg-Toeller, J.H.; Wiedenmann, W.; Wu, J.; Wu, S.L.; Wu, X.; Zobernig, G.

    2006-01-01

    Hadronic Z decays into three jets are used to test QCD models of colour reconnection (CR). A sensitive quantity is the rate of gluon jets with a gap in the particle rapidity distribution and zero jet charge. Gluon jets are identified by either energy-ordering or by tagging two b-jets. The rates predicted by two string-based tunable CR models, one implemented in JETSET (the GAL model), the other in ARIADNE, are too high and disfavoured by the data, whereas the rates from the corresponding non-CR standard versions of these generators are too low. The data can be described by the GAL model assuming a small value for the R_0 parameter in the range 0.01-0.02. (orig.)

  6. A Search for Beyond Standard Model Light Bosons Decaying into Muon Pairs

    CERN Document Server

    CMS Collaboration

    2016-01-01

    A dataset corresponding to $2.8~\\mathrm{fb}^{-1}$ of proton-proton collisions at $\\sqrt{s} = 13~\\mathrm{TeV}$ was recorded by the CMS experiment at the CERN LHC. These data are used to search for new light bosons with a mass in the range $0.25-8.5~\\mathrm{GeV}/c^2$ decaying into muon pairs. No excess is observed in the data, and a model-independent upper limit on the product of the cross section, branching fraction and acceptance is derived. The results are interpreted in the context of two benchmark models, namely, the next-to-minimal supersymmetric standard model, and dark SUSY models including those predicting a non-negligible light boson lifetime.

  7. Generalized approach to non-exponential relaxation

    Indian Academy of Sciences (India)

    Non-exponential relaxation is a universal feature of systems as diverse as glasses, spin ... which changes from a simple exponential to a stretched exponential and a power law by increasing the constraints in the system. ...

  8. Rare B-meson decays in SU(2)LxSU(2)RxU(1) model

    International Nuclear Information System (INIS)

    Asatryan, H.M.; Ioannissian, A.N.

    1989-01-01

    Rare B-meson decays are investigated in left-right symmetric models. The scalar-particle contribution to the amplitude of the b → sγ decay is calculated. It is shown that this contribution can be essential even for scalar-particle masses of about several TeV. The effects due to the left-right symmetry and scalar particles can be detected by measuring the photon polarization in the decay B → K*γ. 9 refs.; 1 fig.; 1 tab

  9. Review of "Going Exponential: Growing the Charter School Sector's Best"

    Science.gov (United States)

    Garcia, David

    2011-01-01

    This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…

  10. Soccer in Indiana and models for non-leptonic decays of heavy flavours

    International Nuclear Information System (INIS)

    Bigi, I.I.

    1989-01-01

    Various descriptions of non-leptonic charm decays are reviewed and their relative strengths and weaknesses are listed. I conclude that it is mainly (though not necessarily solely) a destructive interference in nonleptonic D^+ decays that shapes the decays of charm mesons. Some more subtle features in these decays are discussed in a preview of future research before I address the presently confused situation in D_s decays. Finally I give a brief theoretical discussion of inclusive and exclusive non-leptonic decays of beauty mesons

  11. Soccer in Indiana and models for non-leptonic decays of heavy flavors

    International Nuclear Information System (INIS)

    Bigi, I.I.

    1989-01-01

    Various descriptions of non-leptonic charm decays are reviewed and their relative strengths and weaknesses are listed. The author concludes that it is mainly (though not necessarily solely) a destructive interference in nonleptonic D^+ decays that shapes the decays of charm mesons. Some more subtle features in these decays are discussed in a preview of future research before he addresses the presently confused situation in D_s decays. Finally, he gives a brief theoretical discussion of inclusive and exclusive non-leptonic decays of beauty mesons. 13 refs., 1 tab

  12. EVENT GENERATION OF STANDARD MODEL HIGGS DECAY TO DIMUON PAIRS USING PYTHIA SOFTWARE

    CERN Document Server

    Yusof, Adib

    2015-01-01

    My project for the CERN Summer Student Programme 2015 is on event generation of Standard Model Higgs decay to dimuon pairs using the Pythia software. Briefly, Pythia (specifically, Pythia 8.1) is a program for the generation of high-energy physics events that is able to describe collisions at any given energy between elementary particles such as electrons, positrons, protons and antiprotons. It contains theory and models for a number of physics aspects, including hard and soft interactions, parton distributions, initial-state and final-state parton showers, multiparton interactions, fragmentation and decay. All programming code is written in C++ for this version (the previous version used FORTRAN) and can be linked to the ROOT software for displaying output in the form of histograms. For my project, I need to generate events for the Standard Model Higgs boson decaying into muon-antimuon pairs (H → μ+μ-) to study the expected significance value for this particular process at a centre-of-mass energy of 13 TeV...

  13. Application of the Laplace transform method for computational modelling of radioactive decay series

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Deise L.; Damasceno, Ralf M.; Barros, Ricardo C. [Univ. do Estado do Rio de Janeiro (IME/UERJ) (Brazil). Programa de Pos-graduacao em Ciencias Computacionais

    2012-03-15

    It is well known that when spent fuel is removed from the core, it is still composed of a considerable amount of radioactive elements with significant half-lives. Most actinides, in particular plutonium, fall into this category, and have to be safely disposed of. One solution is to store the long-lived spent fuel as it is, by encasing and burying it deep underground in a stable geological formation. This implies estimating the transmutation of these radioactive elements with time. Therefore, we describe in this paper the application of the Laplace transform technique in matrix formulation to analytically solve initial value problems that mathematically model radioactive decay series. Given the initial amount of each type of radioactive isotope in the decay series, the computer code generates the amount at a given time of interest, or may plot a graph of the evolution in time of the amount of each type of isotope in the series. This computer code, which we refer to as the LTRad_L code, where L is the number of types of isotopes belonging to the series, was developed using the Scilab free platform for numerical computation and can model one segment or the entire chain of any of the three radioactive series existing on Earth today. Numerical results are given for typical model problems to illustrate the computer code's efficiency and accuracy. (orig.)

  14. Application of the Laplace transform method for computational modelling of radioactive decay series

    International Nuclear Information System (INIS)

    Oliveira, Deise L.; Damasceno, Ralf M.; Barros, Ricardo C.

    2012-01-01

    It is well known that when spent fuel is removed from the core, it is still composed of a considerable amount of radioactive elements with significant half-lives. Most actinides, in particular plutonium, fall into this category, and have to be safely disposed of. One solution is to store the long-lived spent fuel as it is, by encasing and burying it deep underground in a stable geological formation. This implies estimating the transmutation of these radioactive elements with time. Therefore, we describe in this paper the application of the Laplace transform technique in matrix formulation to analytically solve initial value problems that mathematically model radioactive decay series. Given the initial amount of each type of radioactive isotope in the decay series, the computer code generates the amount at a given time of interest, or may plot a graph of the evolution in time of the amount of each type of isotope in the series. This computer code, which we refer to as the LTRad_L code, where L is the number of types of isotopes belonging to the series, was developed using the Scilab free platform for numerical computation and can model one segment or the entire chain of any of the three radioactive series existing on Earth today. Numerical results are given for typical model problems to illustrate the computer code's efficiency and accuracy. (orig.)
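The Laplace-transform technique these two records describe can be sketched symbolically: transforming the decay ODEs turns them into algebra in the Laplace variable s, and inverting recovers the Bateman solutions. The two-member chain below is a hypothetical minimal example (the LTRad_L code itself is written in Scilab and handles full series).

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam1, lam2 = sp.symbols('lambda1 lambda2', positive=True)
N10 = sp.Symbol('N10', positive=True)  # initial amount of the parent

# Laplace-transforming dN1/dt = -lam1*N1 and dN2/dt = lam1*N1 - lam2*N2
# (with N2(0) = 0) gives simple algebraic relations in s:
N1s = N10 / (s + lam1)
N2s = lam1 * N1s / (s + lam2)

# Inverting back to the time domain yields the Bateman solutions
N1t = sp.inverse_laplace_transform(N1s, s, t)
N2t = sp.inverse_laplace_transform(N2s, s, t)
print(sp.simplify(N1t))
print(sp.simplify(N2t))
```

For longer chains the same algebra chains together: each daughter's transform is the parent's transform times lam_parent/(s + lam_daughter), which is what the matrix formulation organizes.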

  15. Decay dynamics of blue-green luminescence in meso-porous MCM-41 nanotubes

    International Nuclear Information System (INIS)

    Lee, Y.C.; Liu, Y.L.; Wang, C.K.; Shen, J.L.; Cheng, P.W.; Cheng, C.F.; Ko, C.-H.; Lin, T.Y.

    2005-01-01

    Time-resolved photoluminescence (PL) was performed to investigate the decay of blue-green luminescence in MCM-41 nanotubes. The PL decay exhibits a clear nonexponential profile, which can be fitted by a stretched exponential function. In the temperature range from 50 to 300 K the photogenerated carriers become thermally activated with a characteristic energy of 29 meV, which is an indication of the phonon-assisted nonradiative process. The temperature dependence of the lifetime of PL decay has been explained using a model based on the radiative recombination of localized carriers and the phonon-assisted nonradiative recombination
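Fitting a stretched-exponential (Kohlrausch) profile of the kind this record uses is a standard least-squares exercise. The lifetime, stretch exponent and noise level below are invented for the demonstration and are not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential decay commonly used for PL transients:
# I(t) = I0 * exp(-(t/tau)**beta), with 0 < beta <= 1.
def stretched_exp(t, i0, tau, beta):
    return i0 * np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(0)
t = np.linspace(0.01, 10.0, 200)              # time axis (arbitrary units)
clean = stretched_exp(t, 1.0, 2.0, 0.6)       # "true" parameters
noisy = clean + rng.normal(0.0, 0.005, t.size)

# Recover the parameters from the noisy transient
popt, _ = curve_fit(stretched_exp, t, noisy, p0=(1.0, 1.0, 0.8))
print(popt)
```

A fitted beta well below 1 is the usual signature of the nonexponential profile the record describes; beta near 1 would indicate simple exponential decay.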

  16. Exponential x-ray transform

    International Nuclear Information System (INIS)

    Hazou, I.A.

    1986-01-01

    In emission computed tomography one wants to determine the location and intensity of radiation emitted by sources in the presence of an attenuating medium. If the attenuation is known everywhere and equals a constant α in a convex neighborhood of the support of f, then the problem reduces to that of inverting the exponential x-ray transform P_α. The exponential x-ray transform P_μ, with the attenuation μ variable, is of interest mathematically. For the exponential x-ray transform in two dimensions, it is shown that for a large class of approximate δ-functions E, convolution kernels K exist for use in the convolution backprojection algorithm. For the case where the attenuation is constant, exact formulas are derived for calculating the convolution kernels from radial point spread functions. From these an exact inversion formula for the constantly attenuated transform is obtained

  17. Search for the decay stau --> tau + gravitino in the framework of the Minimal Gauge Mediated SUSY Breaking models

    CERN Document Server

    Cavallo, F R

    1997-01-01

    A search for these decays was carried out in the context of Gauge Mediated SUSY Breaking models, using the data collected by DELPHI in 1995 and 1996 at the center of mass energies of 133, 161 and 172 GeV. No evidence of these processes was found for a decay length ranging from ~ 1mm to ~ 20cm and limits were derived on the gravitino and scalar tau masses.

  18. $\\beta$-asymmetry measurements in nuclear $\\beta$-decay as a probe for non-standard model physics

    CERN Multimedia

    Roccia, S

    2002-01-01

    We propose to perform a series of measurements of the $\\beta$-asymmetry parameter in the decay of selected nuclei, in order to investigate the presence of possible time reversal invariant tensor contributions to the weak interaction. The measurements have the potential to improve by a factor of about four on the present limits for such non-standard model contributions in nuclear $\\beta$-decay.

  19. Simplifying the Mathematical Treatment of Radioactive Decay

    Science.gov (United States)

    Auty, Geoff

    2011-01-01

    Derivation of the law of radioactive decay is considered without prior knowledge of calculus or the exponential series. Calculus notation and exponential functions are used because ultimately they cannot be avoided, but they are introduced in a simple way and explained as needed. (Contains 10 figures, 1 box, and 1 table.)
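In the same calculus-free spirit as this record, the decay law can be motivated numerically: remove a fixed fraction λ·Δt of the nuclei at each small step and watch the repeated multiplication approach the exponential. The decay constant and step size here are arbitrary demo values.

```python
import math

# Discrete-step view of radioactive decay, no calculus required:
# in each short interval dt, a fixed fraction lam*dt of the nuclei decays.
lam = 0.3        # decay constant (1/s), arbitrary for the demo
n0 = 1.0e6       # initial number of nuclei
dt = 1.0e-4      # time step (s)
t_end = 5.0      # total simulated time (s)

n = n0
for _ in range(int(t_end / dt)):
    n -= lam * n * dt          # N_{k+1} = N_k * (1 - lam*dt)

# Repeated multiplication by (1 - lam*dt) approaches N0 * exp(-lam*t)
print(n, n0 * math.exp(-lam * t_end))
```

Halving dt halves the remaining discrepancy, which is exactly the limiting argument that replaces the exponential series in the article's treatment.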

  20. Unwrapped phase inversion with an exponential damping

    KAUST Repository

    Choi, Yun Seok

    2015-07-28

    Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes a result corresponding to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains the wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate many of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples demonstrated that the unwrapped phase inversion with a strong exponential damping generated convergent long-wavelength updates without low-frequency information. The result can then be used as a good starting model for a subsequent inversion with reduced damping, eventually leading to conventional waveform inversion.
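The cycle-skipping ambiguity this record attacks comes from the phase being observed only modulo 2π. A toy numpy sketch of the unwrap step (the traveltime delay and frequency band are invented; the paper's exponential damping of the wavefield, which suppresses residues before unwrapping, is only noted in a comment):

```python
import numpy as np

# Frequency-domain phase of a single delayed arrival: phi(w) = -w * t0.
# Measured phase is known only modulo 2*pi ("wrapped"), which is the
# cycle-skipping ambiguity that unwrapped-phase inversion removes.
# (In the paper, strong exponential time damping exp(-a*t) is applied to
# the wavefield first, so the phase map has few residues and unwraps easily.)
t0 = 0.8                          # traveltime delay (s), hypothetical
w = np.linspace(0.0, 40.0, 400)   # angular frequencies (rad/s)
true_phase = -w * t0

wrapped = np.angle(np.exp(1j * true_phase))   # folded into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # remove the 2*pi jumps

print(np.max(np.abs(unwrapped - true_phase)))
```

The 1D unwrap succeeds because adjacent samples differ by less than π; the 2D phase residual maps of the paper are harder precisely because that condition can fail around residues.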

  1. Experimental investigations on scaled models for the SNR-2 decay heat removal by natural convection

    International Nuclear Information System (INIS)

    Hoffmann, H.; Weinberg, D.; Tschoeke, H.; Frey, H.H.; Pertmer, G.

    1986-01-01

    Scaled water models are used to demonstrate the functioning of decay heat removal by natural convection for the SNR-2. The 2D and 3D models were designed to match the characteristic numbers (Richardson, Peclet) of the reactor. In the experiments on 2D models the position of the immersed cooler (IC) and the power were varied. Temperature fields and velocities were measured. The IC installed as a separate component in the hot plenum resulted in a very complex flow behavior and low temperatures. Integrating the IC in the IHX gave a very simple circulating flow and high temperatures within the hot plenum. With increasing power only slightly rising temperature differences within the core and IC were detected. Recalculations using the COMMIX 1B code gave qualitatively satisfactory results. (author)

  2. Dalitz plot analysis of the $D^+ \\rightarrow K^- K^+ K^+$ decay with the isobar model

    CERN Document Server

    The LHCb Collaboration

    2016-01-01

    This note presents a study of the $K^-K^+$ S-wave amplitude in doubly Cabibbo-suppressed ${D^+ \\rightarrow K^- K^+ K^+}$ decays performed using $2 \\text{fb}^{-1}$ of data collected by the LHCb detector in $pp$ collisions at $8~\\text{TeV}$ centre-of-mass energy. The Dalitz plot is studied under the assumption of the isobar model for resonance scattering. Models with combinations of resonant states are tested. Fits of comparable quality are obtained for different $K^-K^+$ S-wave parameterizations. The results obtained indicate that a variation of the S-wave phase at both ends of the $K^-K^+$ spectrum is needed to describe the data. Further studies beyond the naïve isobar model are foreseen to understand the $K^-K^+$ S-wave.

  3. Model independent measurement of the leptonic kaon decay $K^\\pm \\to \\mu^\\pm \

    CERN Document Server

    Bizzeti, Andrea

    2018-01-01

    Two recent results on rare kaon decays are presented, based on $\\sim 2 \\times 10^{11} K^{\\pm}$ decays recorded by the NA48/2 experiment at CERN SPS in 2003 and 2004. The branching ratio of the rare leptonic decay $K^{\\pm} \\to \\mu^{\\pm} \

  4. Model independent measurement of the leptonic kaon decay $K^\\pm \\to \\mu^\\pm \

    CERN Document Server

    Bizzeti, Andrea

    2017-01-01

    Two recent results on rare kaon decays are presented, based on $\\sim 2 \\times 10^{11} K^{\\pm}$ decays recorded by the NA48/2 experiment at CERN SPS in 2003 and 2004. The branching ratio of the rare leptonic decay $K^{\\pm} \\to \\mu^{\\pm} \

  5. Short term memory decays and high presentation rates hurry this decay: The Murdock free recall experiments interpreted in the Tagging/Retagging model

    OpenAIRE

    Tarnow, Dr. Eugen

    2009-01-01

    I show that the curious free recall data of Murdock (1962) can be explained by the Tagging/Retagging model of short term memory (Tarnow, 2009 and 2008) in which a short term memory item is a tagged long term memory item. The tagging (linear in time) corresponds to the synaptic process of exocytosis and the loss of tagging (logarithmic in time) corresponds to synaptic endocytosis. The Murdock recent item recall probabilities follow a logarithmic decay with time of recall. The slope of the d...

  6. Higgs boson decays and production via gluon fusion at LHC in littlest Higgs models with T parity

    International Nuclear Information System (INIS)

    Wang Lei; Yang Jinmin

    2009-01-01

    We study the Higgs boson decays and production via gluon fusion at the LHC as a probe of two typical littlest Higgs models which introduce a top quark partner with different (even and odd) T parity to cancel the Higgs mass quadratic divergence contributed by the top quark. For each model, we consider two different choices for the down-type quark Yukawa couplings. We first examine the branching ratios of the Higgs boson decays and then study the production via gluon fusion followed by the decay into two photons or two weak gauge bosons. We find that the predictions can be quite different for different models or different choices of down-type quark Yukawa couplings, and all these predictions can sizably deviate from the standard model predictions. So the Higgs boson processes at the LHC can be a sensitive probe for these littlest Higgs models.

  7. Hyperbolic Cosine–Exponentiated Exponential Lifetime Distribution and its Application in Reliability

    Directory of Open Access Journals (Sweden)

    Omid Kharazmi

    2017-02-01

    Full Text Available Recently, Kharazmi and Saadatinik (2016) introduced a new family of lifetime distributions called the hyperbolic cosine-F (HCF) distribution. The present paper focuses on a special case of the HCF family with the exponentiated exponential distribution as the baseline (HCEE). Various properties of the proposed distribution are derived, including explicit expressions for the moments, quantiles, mode, moment generating function, failure rate function, mean residual lifetime, order statistics and an expression for the entropy. Parameters of the HCEE distribution are estimated by eight methods: maximum likelihood, Bayesian, maximum product of spacings, parametric bootstrap, non-parametric bootstrap, percentile, least-squares and weighted least-squares. A simulation study is conducted to examine the bias and mean square error of the maximum likelihood estimators. Finally, one real data set is analyzed for illustrative purposes, and it is observed that the proposed model fits better than the Weibull, gamma and generalized exponential distributions.
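A numerical sketch of the HCF construction with an exponentiated exponential baseline. The density form g(x) = f(x) cosh(F(x)) / sinh(1) is an assumption inferred from the family's name and normalization (substituting u = F(x) gives ∫₀¹ cosh(u) du = sinh(1)), not a formula quoted from the paper; the shape and rate values are arbitrary.

```python
import numpy as np

# Assumed HCF construction: g(x) = f(x) * cosh(F(x)) / sinh(1).
# Baseline exponentiated exponential with cdf F(x) = (1 - exp(-lam*x))**alpha.
alpha, lam = 2.0, 1.5   # arbitrary shape and rate for the demo

def F(x):
    return (1.0 - np.exp(-lam * x)) ** alpha

def f(x):
    return alpha * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x)) ** (alpha - 1.0)

def g(x):
    return f(x) * np.cosh(F(x)) / np.sinh(1.0)

# Numerical check that the assumed g integrates to 1 over (0, inf)
x = np.linspace(1e-6, 50.0, 200_001)
y = g(x)
total = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0
print(total)
```

The same substitution argument shows any baseline cdf F produces a proper density under this construction, which is what makes HCF a family rather than a single distribution.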

  8. Neutrinoless double-beta decay in left-right symmetric models

    International Nuclear Information System (INIS)

    Picciotto, C.E.; Zahir, M.S.

    1982-06-01

    Neutrinoless double-beta decay is calculated via doubly charged Higgs bosons, which occur naturally in left-right symmetric models. We find that the comparison with known half-lives yields values of phenomenological parameters which are compatible with earlier analyses of neutral current data. In particular, we obtain a right-handed gauge-boson mass lower bound of the order of 240 GeV. Using this result and expressions for neutrino masses derived in a parity non-conserving left-right symmetric model, we obtain m(ν_e) < 1.5 eV, m(ν_μ) < 0.05 MeV and m(ν_τ) < 18 MeV

  9. Representation of the radiative strength functions in the practical model of cascade gamma decay

    International Nuclear Information System (INIS)

    Vu, D.C.; Sukhovoj, A.M.; Mitsyna, L.V.; Zeinalov, Sh.; Jovancevic, N.; Knezevic, D.; Krmar, M.; Dragic, A.

    2016-01-01

    The practical model of the cascade gamma decay of neutron resonances developed in Dubna allows one to obtain, from the fitted intensities of two-step cascades, parameters of both the level density and the partial widths for emission of nuclear-reaction products. In the presented variant of the model, the number of phenomenological representations is minimized. Analysis of new results confirms the previous finding that the dynamics of the interaction between Fermi- and Bose-nuclear states depends on the shape of the nucleus. It also follows from the ratios of the densities of vibrational and quasi-particle levels that this interaction exists at least up to the neutron binding energy and probably differs for nuclei with different parities of nucleons.

  10. Decays of open charmed mesons in the extended Linear Sigma Model

    Directory of Open Access Journals (Sweden)

    Eshraim Walaa I.

    2014-01-01

    Full Text Available We enlarge the so-called extended linear sigma model (eLSM) by including the charm quark according to the global U(4)_R × U(4)_L chiral symmetry. In the eLSM, besides scalar and pseudoscalar mesons, vector and axial-vector mesons are also present. Almost all the parameters of the model were fixed in a previous study of mesons below 2 GeV. In the extension to the four-flavor case, only three additional parameters (all of them related to the bare mass of the charm quark) appear. We compute the OZI-dominant strong decays of open charmed mesons. The results are compatible with the experimental data, although the theoretical uncertainties are still large.

  11. B→K1l+l- decays in a family non-universal Z' model

    International Nuclear Information System (INIS)

    Li, Ying; Hua, Juan; Yang, Kwei-Chou

    2011-01-01

    The implications of the family non-universal Z′ model in the B→K₁(1270,1400)ℓ⁺ℓ⁻ (ℓ=e,μ,τ) decays are explored, where the mass eigenstates K₁(1270,1400) are mixtures of the ¹P₁ and ³P₁ states with mixing angle θ. In this work, considering the Z′ boson and setting the mixing angle θ=(-34±13)°, we analyze the branching ratio, the dilepton invariant mass spectrum, the normalized forward-backward asymmetry and the lepton polarization asymmetries of each decay mode. We find that all observables of B→K₁(1270)μ⁺μ⁻ are sensitive to the Z′ contribution. Moreover, the observables of B→K₁(1400)μ⁺μ⁻ have a relatively strong θ-dependence; thus, the Z′ contribution will be buried by the uncertainty of the mixing angle θ. Furthermore, the zero-crossing position in the FBA spectrum of B→K₁(1270)μ⁺μ⁻ at low dilepton mass will move in the positive direction with the Z′ contribution. For the tau modes, the effects of the Z′ are not remarkable due to the small phase space. These results could be tested at the running LHCb experiment and super-B factory. (orig.)

  12. Shell model with several particles in the continuum: application to the two-proton decay

    International Nuclear Information System (INIS)

    Rotureau, J.

    2005-02-01

    The recent experimental results concerning nuclei at the limit of stability close to the drip lines, and in particular the two-proton emitters, require the development of new methodologies to reliably calculate and understand the properties of these exotic systems. In this work we have extended the Shell Model Embedded in the Continuum (SMEC) in order to describe the coupling with two particles in the scattering continuum. We have obtained a microscopic description of two-proton emission that takes into account the antisymmetrization of the total wavefunction, the configuration mixing and the three-body asymptotics. We have studied the decay of the 1₂⁻ state in ¹⁸Ne in two limiting cases: (i) a sequential emission of two protons through the correlated continuum of ¹⁷F and (ii) emission of a ²He cluster that disintegrates because of the final-state interaction (diproton emission). Independently of the choice of the effective interaction, we have observed that the two-proton emission of the 1₂⁻ state in ¹⁸Ne is mainly a sequential process; the ratio between the widths of the diproton emission and the sequential decay does not exceed 8% in any case. (author)

  13. Radiative leptonic B{sub c} decay in the relativistic independent quark model

    Energy Technology Data Exchange (ETDEWEB)

    Barik, N [Department of Physics, Utkal University, Bhubaneswar-751004 (India); Naimuddin, Sk; Dash, P C [Department of Physics, Prananath Autonomous College, Khurda-752057 (India); Kar, Susmita [Department of Physics, North Orissa University, Baripada-757003 (India)

    2008-12-01

    The radiative leptonic decay B{sub c}{sup -}{yields}{mu}{sup -}{nu}{sub {mu}}{gamma} is analyzed in its leading order in a relativistic independent quark model based on a confining potential in an equally mixed scalar-vector harmonic form. The branching ratio for this decay in the vanishing lepton mass limit is obtained as Br(B{sub c}{yields}{mu}{nu}{sub {mu}}{gamma})=6.83x10{sup -5}, which includes the contributions of the internal bremsstrahlung and structure-dependent diagrams at the level of the quark constituents. The contributions of the bremsstrahlung and the structure-dependent diagrams, as well as their additive interference parts, are compared and found to be of the same order of magnitude. Finally, the predicted photon energy spectrum is observed here to be almost symmetrical about the peak value of the photon energy at E-tilde{sub {gamma}}{approx_equal}(M{sub B{sub c}}/4), which may be quite accessible experimentally at the LHC in the near future.

  14. Development of the α-decay theory of spherical nuclei by means of the shell model

    International Nuclear Information System (INIS)

    Holan, S.

    1978-01-01

    The new results achieved within the α-decay theory of spherical nuclei with a (2)-(5) integral formula, unaffected by arbitrary parameters, taking into account the finite shape of the α particle and using a basis of Woods-Saxon single-particle functions to describe the initial and final nuclei, may be summarized as follows: through α-width calculations performed for many spherical nuclei it has been proved that the experimental classification of α-transitions into favoured and unfavoured transitions, as well as the hyperfine structure of the transitions, can be theoretically explained if the nucleon-nucleon correlations are considered in the description of the initial and final nuclei. The absolute values of the theoretical α-widths obtained are about 10² times smaller than the experimental ones. This might be due to an oversimplified approximation of the α-particle-daughter-nucleus interaction potential or to an inaccuracy of the model functions used in describing the nucleus decay in the surface region. (author)

  15. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
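The defining idea above ("replace the intensity parameter of an exponential distribution by a matrix") can be checked numerically: for a phase-type distribution with initial vector α and sub-intensity matrix T, the CDF is F(x) = 1 − α·exp(Tx)·1. A minimal sketch, using an Erlang(2) example whose closed form is known (the particular α and T are illustrative, not from the book):

```python
import numpy as np
from scipy.linalg import expm

# Phase-type distribution: time to absorption of a Markov jump process.
# alpha is the initial distribution over the transient states, T the
# sub-intensity matrix; F(x) = 1 - alpha @ expm(T*x) @ 1.
alpha = np.array([1.0, 0.0])          # start in state 1
T = np.array([[-2.0,  2.0],           # two exponential phases in series,
              [ 0.0, -2.0]])          # rate 2 each -> Erlang(2, rate=2)
ones = np.ones(2)

def cdf(x):
    return 1.0 - alpha @ expm(T * x) @ ones

# Erlang(2, rate=2) CDF in closed form: 1 - (1 + 2x) * exp(-2x)
x = 1.3
print(cdf(x), 1 - (1 + 2 * x) * np.exp(-2 * x))   # the two values agree
```

The same `cdf` works unchanged for any sub-intensity matrix, which is what makes PH distributions convenient building blocks in the queueing and risk applications the book treats.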

  16. The exponentiated generalized Pareto distribution | Adeyemi | Ife ...

    African Journals Online (AJOL)

    Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...

  17. Search for the standard model Higgs boson in the dimuon decay channel with the ATLAS detector

    International Nuclear Information System (INIS)

    Rudolph, Joerg Christian

    2014-01-01

    The search for the Standard Model Higgs boson was one of the key motivations to build the world's largest particle physics experiment to date, the Large Hadron Collider (LHC). This thesis is equally driven by this search, and it investigates the direct muonic decay of the Higgs boson. The decay into muons has several advantages: it provides a very clear final state with two muons of opposite charge, which can easily be detected. In addition, the muonic final state has an excellent mass resolution, such that an observed resonance can be pinned down in one of its key properties: its mass. Unfortunately, the decay of a Standard Model Higgs boson into a pair of muons is very rare; only two out of 10000 Higgs bosons are predicted to exhibit this decay. On top of that, the non-resonant Standard Model background arising from the Z/γ* → μμ process has a very similar signature, while possessing a much higher cross-section: for one produced Higgs boson, approximately 1.5 million Z bosons are produced at the LHC at a centre-of-mass energy of 8 TeV. Two related analyses are presented in this thesis: the investigation of 20.7 fb⁻¹ of the proton-proton collision dataset recorded by the ATLAS detector in 2012, referred to as the standalone analysis, and the combined analysis as the search in the full run-I dataset consisting of proton-proton collision data recorded in 2011 and 2012, which corresponds to an integrated luminosity of L_int = 24.8 fb⁻¹. In each case, the dimuon invariant mass spectrum is examined for a narrow resonance on top of the continuous background distribution. The dimuon phenomenology and ATLAS detector performance serve as the foundations to develop analytical models describing the spectra. Using these analytical parametrisations for the signal and background mass distributions, the sensitivity of the analyses to systematic uncertainties due to Monte-Carlo simulation mismodeling is minimised. These residual systematic uncertainties are

  18. Floquet states of a kicked particle in a singular potential: Exponential and power-law profiles

    Science.gov (United States)

    Paul, Sanku; Santhanam, M. S.

    2018-03-01

    It is well known that, in the chaotic regime, all the Floquet states of the kicked rotor system display an exponential profile resulting from dynamical localization. If the kicked rotor is placed in an additional stationary infinite potential well, its Floquet states display a power-law profile. It has also been suggested, in general, that the Floquet states of periodically kicked systems with singularities in the potential would have a power-law profile. In this work, we study the Floquet states of a kicked particle in a finite potential barrier. By varying the height of the finite potential barrier, the nature of the transition in the Floquet states from an exponential to a power-law decay profile is studied. We map this system to a tight-binding model and show that the nature of the decay profile depends on the energy band spanned by the Floquet states (in the unperturbed basis) relative to the potential height. This property can also be inferred from the statistics of Floquet eigenvalues and eigenvectors. This leads to an unusual scenario in which the level spacing distribution, as a window into the spectral correlations, is not a unique characteristic for the entire system.
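The exponential profile from dynamical localization is the same phenomenon seen in a disordered 1D tight-binding chain (the kicked rotor maps to such a model with pseudo-random on-site energies). A quick numerical illustration with genuinely random disorder (chain length, disorder strength, and the fitted-slope check are illustrative choices, not the paper's model):

```python
import numpy as np

# Exponentially localised eigenstates of a 1D disordered tight-binding
# chain: H_ii uniform in [-W/2, W/2], nearest-neighbour hopping 1.
rng = np.random.default_rng(1)
N, W = 400, 3.0
H = (np.diag(rng.uniform(-W / 2, W / 2, N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
vals, vecs = np.linalg.eigh(H)

psi = np.abs(vecs[:, N // 2])          # a mid-spectrum eigenstate
center = np.argmax(psi)
d = np.abs(np.arange(N) - center)      # distance from localisation centre

# log|psi| falls roughly linearly with distance -> exponential profile;
# mask out amplitudes below numerical noise before fitting.
mask = psi > 1e-12
slope = np.polyfit(d[mask], np.log(psi[mask]), 1)[0]
print(slope)   # negative: |psi(n)| ~ exp(-|n - n0| / xi)
```

The magnitude of the (negative) slope is the inverse localization length; in the power-law regime discussed in the paper, the same fit of log|ψ| against log-distance, rather than distance, would be the straight line.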

  19. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, thus it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.
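To illustrate the amplitude imbalance the deconvolution-based objective is meant to fix, a toy two-event trace can be damped with w(t) = exp(−σt); the later arrival is suppressed by roughly exp(σΔt) relative to the earlier one. All constants are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

# A trace with two equal-strength Gaussian events at t = 1 s and t = 4 s.
dt, nt = 0.004, 1500                      # 6 s record
t = np.arange(nt) * dt
wavelet = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
trace = wavelet(1.0) + wavelet(4.0)

# Exponential damping, as used to synthesise artificial low frequencies.
sigma = 1.0                               # damping constant in 1/s (assumed)
damped = trace * np.exp(-sigma * t)

# The late event is suppressed by ~ exp(sigma * 3) relative to the early
# one -- exactly the unbalanced amplitude a plain least-squares misfit
# would mishandle, and a normalising deconvolution filter absorbs.
early = np.abs(damped[np.abs(t - 1.0) < 0.2]).max()
late = np.abs(damped[np.abs(t - 4.0) < 0.2]).max()
print(early / late)   # ~ exp(3) ≈ 20
```

In the frequency domain, the damped trace corresponds to evaluating the spectrum at the complex frequency σ + iω, which is where the "artificial low frequencies" come from.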

  20. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, thus it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  1. A note on exponential convergence of neural networks with unbounded distributed delays

    Energy Technology Data Exchange (ETDEWEB)

    Chu Tianguang [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)]. E-mail: chutg@pku.edu.cn; Yang Haifeng [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)

    2007-12-15

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network.

  2. A note on exponential convergence of neural networks with unbounded distributed delays

    International Nuclear Information System (INIS)

    Chu Tianguang; Yang Haifeng

    2007-01-01

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network

  3. Top quark rare decays via loop-induced FCNC interactions in extended mirror fermion model

    Science.gov (United States)

    Hung, P. Q.; Lin, Yu-Xiang; Nugroho, Chrisna Setyo; Yuan, Tzu-Chiang

    2018-02-01

    Flavor changing neutral current (FCNC) interactions in which a top quark t decays into Xq, with X a neutral gauge or Higgs boson and q an up or charm quark, are highly suppressed in the Standard Model (SM) due to the Glashow-Iliopoulos-Maiani mechanism. While current limits on the branching ratios of these processes have been established at the order of 10⁻⁴ by the Large Hadron Collider experiments, SM predictions are at least nine orders of magnitude below. In this work, we study some of these FCNC processes in the context of an extended mirror fermion model, originally proposed to implement the electroweak-scale seesaw mechanism for non-sterile right-handed neutrinos. We show that one can probe the process t → Zc for a wide range of parameter space, with branching ratios varying from 10⁻⁶ to 10⁻⁸, comparable with various new-physics models including the general two-Higgs-doublet model with or without tree-level flavor violation, the minimal supersymmetric standard model with or without R-parity, and extra-dimension models.

  4. Exponential Shear Flow of Linear, Entangled Polymeric Liquids

    DEFF Research Database (Denmark)

    Neergaard, Jesper; Park, Kyungho; Venerus, David C.

    2000-01-01

    A previously proposed reptation model is used to interpret exponential shear flow data taken on an entangled polystyrene solution. Both shear and normal stress measurements are made during exponential shear using mechanical means. The model is capable of explaining all trends seen in the data..., and suggests a novel analysis of the data. This analysis demonstrates that exponential shearing flow is no more capable of stretching polymer chains than is inception of steady shear at comparable instantaneous shear rates. In fact, all exponential shear flow stresses measured are bounded quantitatively

  5. On the gravitational wave production from the decay of the Standard Model Higgs field after inflation

    CERN Document Server

    Figueroa, Daniel G; Torrentí, Francisco

    2016-01-01

    During or towards the end of inflation, the Standard Model (SM) Higgs forms a condensate with a large amplitude. Following inflation, the condensate oscillates, decaying non-perturbatively into the rest of the SM species. The resulting out-of-equilibrium dynamics converts a fraction of the energy available into gravitational waves (GW). We study this process using classical lattice simulations in an expanding box, following the energetically dominant electroweak gauge bosons $W^\\pm$ and $Z$. We characterize the GW spectrum as a function of the running couplings, the Higgs initial amplitude, and the post-inflationary expansion rate. As long as the SM is decoupled from the inflationary sector, the generation of this background is universally expected, independently of the nature of inflation. Our study demonstrates the efficiency of GW emission by gauge fields undergoing parametric resonance. The initial energy of the Higgs condensate represents, however, only a tiny fraction of the inflationary energy. Consequently, th...

  6. Representation of radiative strength functions within a practical model of cascade gamma decay

    Energy Technology Data Exchange (ETDEWEB)

    Vu, D. C., E-mail: vuconghnue@gmail.com; Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Zeinalov, Sh., E-mail: zeinal@nf.jinr.ru [Joint Institute for Nuclear Research (Russian Federation); Jovancevic, N., E-mail: nikola.jovancevic@df.uns.ac.rs; Knezevic, D., E-mail: david.knezevic@df.uns.ac.rs; Krmar, M., E-mail: krmar@df.uns.ac.rs [University of Novi Sad, Department of Physics, Faculty of Sciences (Serbia); Dragic, A., E-mail: dragic@ipb.ac.rs [Institute of Physics Belgrade (Serbia)

    2017-03-15

    A practical model developed at the Joint Institute for Nuclear Research (JINR, Dubna) in order to describe the cascade gamma decay of neutron resonances makes it possible to determine simultaneously, from an approximation of the intensities of two-step cascades, parameters of nuclear level densities and partial widths with respect to the emission of nuclear-reaction products. The number of phenomenological ideas used is minimized in the model version considered in the present study. An analysis of new results confirms what was obtained earlier for the dependence of the dynamics of the interaction of fermion and boson nuclear states on the nuclear shape. From the ratio of the level densities for excitations of the vibrational and quasiparticle types, it also follows that this interaction manifests itself in the region around the neutron binding energy and is probably different in nuclei that have different parities of nucleons.

  7. Identification of shell-model states in $^{135}$Sb populated via $\\beta^{-}$ decay of $^{135}$Sn

    CERN Document Server

    Shergur, J; Brown, B A; Cederkäll, J; Dillmann, I; Fraile-Prieto, L M; Hoff, P; Joinet, A; Köster, U; Kratz, K L; Pfeiffer, B; Walters, W B; Wöhr, A

    2005-01-01

    The $\\beta$- decay of $^{135}$Sn was studied at CERN/ISOLDE using a resonance ionization laser ion source and mass separator to achieve elemental and mass selectivity, respectively. $\\gamma$-ray singles and $\\gamma\\gamma$ coincidence spectra were collected as a function of time with the laser on and with the laser off. These data were used to establish the positions of new levels in $^{135}$Sb, including new low-spin states at 440 and 798 keV, which are given tentative spin and parity assignments of 3/2$^{+}$ and 9/2$^{+}$, respectively. The observed levels of $^{135}$Sb are compared with shell-model calculations using different single-particle energies and different interactions.

  8. The probability of the false vacuum decay

    International Nuclear Information System (INIS)

    Kiselev, V.; Selivanov, K.

    1983-01-01

    The closed expression for the probability of false vacuum decay in (1+1) dimensions is given. The probability of false vacuum decay is expressed as the product of an exponential quasiclassical factor and a functional determinant of the given form. A method for calculation of this determinant is developed and a complete answer for (1+1) dimensions is given.

  9. Pulsational Pair-instability Model for Superluminous Supernova PTF12dam:Interaction and Radioactive Decay

    Energy Technology Data Exchange (ETDEWEB)

    Tolstov, Alexey; Nomoto, Ken’ichi; Blinnikov, Sergei; Quimby, Robert [Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan); Sorokina, Elena [Sternberg Astronomical Institute, M.V.Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Baklanov, Petr, E-mail: alexey.tolstov@ipmu.jp [Institute for Theoretical and Experimental Physics (ITEP), 117218 Moscow (Russian Federation)

    2017-02-01

    Being a superluminous supernova, PTF12dam can be explained by a {sup 56}Ni-powered model, a magnetar-powered model, or an interaction model. We propose that PTF12dam is a pulsational pair-instability supernova, where the outer envelope of a progenitor is ejected during the pulsations. Thus, it is powered by a double energy source: radioactive decay of {sup 56}Ni and a radiative shock in a dense circumstellar medium. To describe multicolor light curves and spectra, we use radiation-hydrodynamics calculations of the STELLA code. We found that light curves are well described in the model with 40 M {sub ⊙} ejecta and 20–40 M {sub ⊙} circumstellar medium. The ejected {sup 56}Ni mass is about 6 M {sub ⊙}, which results from explosive nucleosynthesis with large explosion energy (2–3)×10{sup 52} erg. In comparison with alternative scenarios of pair-instability supernova and magnetar-powered supernova, in the interaction model, all the observed main photometric characteristics are well reproduced: multicolor light curves, color temperatures, and photospheric velocities.
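The radioactive-decay part of the double energy source follows the usual Bateman exponentials for the 56Ni → 56Co → 56Fe chain. A minimal sketch (the half-lives are the standard values; the normalisation is arbitrary, since the 6 M⊙ nickel mass of the paper's model only scales the curve):

```python
import numpy as np

# Bateman solution for 56Ni -> 56Co -> 56Fe.
t_half_ni, t_half_co = 6.08, 77.2            # half-lives in days
l1 = np.log(2) / t_half_ni                   # 56Ni decay constant
l2 = np.log(2) / t_half_co                   # 56Co decay constant

def n_ni(t, n0=1.0):
    # Parent: simple exponential decay.
    return n0 * np.exp(-l1 * t)

def n_co(t, n0=1.0):
    # Daughter, starting from pure 56Ni at t = 0.
    return n0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

t = 30.0   # days after explosion
print(n_ni(t), n_co(t))   # by ~1 month, 56Co dominates the heating
```

The instantaneous decay power is then a weighted sum of these two abundances (with the per-decay energies of 56Ni and 56Co), which is the smooth declining component on top of which the circumstellar-interaction luminosity sits in the model above.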

  10. Accelerating cosmologies from exponential potentials

    International Nuclear Information System (INIS)

    Neupane, Ishwaree P.

    2003-11-01

    It is known that exponential potentials of the form V ∼ exp(-2cφ/M_p) arising from the hyperbolic or flux compactification of higher-dimensional theories are of interest for obtaining short periods of accelerated cosmological expansion. Using a similar potential, but derived for the combined case of hyperbolic-flux compactification, we study four-dimensional flat (or open) FRW cosmologies and give analytic (and numerical) solutions with exponential behavior of the scale factors. We show that, for the M-theory motivated potentials, the cosmic acceleration of the universe can be eternal if the spatial curvature of the 4d spacetime is negative, while the acceleration is only transient for a spatially flat universe. We also briefly discuss the mass of massive Kaluza-Klein modes and the dynamical stabilization of the compact hyperbolic extra dimensions. (author)
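For a flat FRW universe with a single exponential potential, the acceleration condition can be checked numerically: writing V = V0·e^{−λφ} in units M_p = 1, the flat-space scaling solution has a ∝ t^{2/λ²} and accelerates iff λ² < 2. A minimal sketch with illustrative initial data (this is the textbook single-exponential case, not the paper's combined hyperbolic-flux potential):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flat FRW + scalar field, M_p = 1:
#   H^2 = (phi_dot^2/2 + V)/3,   phi_ddot = -3*H*phi_dot - V'(phi)
lam, V0 = 1.0, 1.0                      # lam^2 < 2 -> accelerating attractor
V = lambda p: V0 * np.exp(-lam * p)     # V'(phi) = -lam * V(phi)

def rhs(t, y):
    phi, dphi, loga = y
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)
    return [dphi, -3.0 * H * dphi + lam * V(phi), H]

sol = solve_ivp(rhs, [0.0, 50.0], [0.0, 1.0, 0.0], rtol=1e-8)
phi, dphi, _ = sol.y[:, -1]

# a''/a > 0  <=>  phi_dot^2 < V(phi); on the attractor phi_dot^2 = 0.4*V
print(dphi**2 < V(phi))   # True for lam = 1
```

Repeating the run with λ² > 2 flips the inequality once the attractor is reached, reproducing the transient-versus-sustained distinction for flat space; the open-universe (eternal acceleration) case adds a curvature term to H² that this sketch omits.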

  11. Science in an Exponential World

    Science.gov (United States)

    Szalay, Alexander

    The amount of scientific information is doubling every year. This exponential growth is fundamentally changing every aspect of the scientific process - the collection, analysis and dissemination of scientific information. Our traditional paradigm for scientific publishing assumes a linear world, where the number of journals and articles remains approximately constant. The talk presents the challenges of this new paradigm and shows examples of how some disciplines are trying to cope with the data avalanche. In astronomy, the Virtual Observatory is emerging as a way to do astronomy in the 21st century. Other disciplines are also in the process of creating their own Virtual Observatories, on every imaginable scale of the physical world. We will discuss how long this exponential growth can continue.

  12. Exponential asymptotics of homoclinic snaking

    International Nuclear Information System (INIS)

    Dean, A D; Matthews, P C; Cox, S M; King, J R

    2011-01-01

    We study homoclinic snaking in the cubic-quintic Swift–Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319–54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement

  13. Limit laws for exponential families

    OpenAIRE

    Balkema, August A.; Klüppelberg, Claudia; Resnick, Sidney I.

    1999-01-01

    For a real random variable [math] with distribution function [math] , define ¶ [math] ¶ The distribution [math] generates a natural exponential family of distribution functions [math] , where ¶ [math] ¶ We study the asymptotic behaviour of the distribution functions [math] as [math] increases to [math] . If [math] then [math] pointwise on [math] . It may still be possible to obtain a non-degenerate weak limit law [math] by choosing suitable scaling and centring constants [math] an...

  14. Modeling of the behavior of radon and its decay products in dwelling, and experimental validation of the model

    International Nuclear Information System (INIS)

    Gouronnec, A.M.; Robe, M.C.; Montassier, N.; Boulaud, D.

    1993-01-01

    A model of the type written by Jacobi is adapted to indoor air to describe the behavior of radon and its decay products within a dwelling, and is then extended to a system of several storeys. To begin validating the model, computed data are compared with field measurements. A first observation is that the model is consistent with the data we have, but it is important to develop an exhaustive set of experimental data and to obtain as faithful a representation as possible of the mean situation; this especially concerns the ventilation rate of the enclosure and the rate of attachment to airborne particles. Further work should also be done to model deposition on surfaces. (orig.). (6 refs., 4 tabs.)
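A minimal steady-state balance of the Jacobi type can be sketched as follows, ignoring the attached/unattached split and the multi-storey coupling the paper adds; the ventilation and deposition rates, and the radon level, are assumed values:

```python
import numpy as np

# Steady state of the short-lived radon progeny chain 218Po -> 214Pb -> 214Bi
# in a room: production by decay of the parent balances decay + ventilation
# (lam_v) + an effective deposition/attachment removal rate (lam_d).
lam = np.array([np.log(2) / (3.05 / 60),    # 218Po half-life 3.05 min (1/h)
                np.log(2) / (26.8 / 60),    # 214Pb 26.8 min
                np.log(2) / (19.9 / 60)])   # 214Bi 19.9 min
lam_v, lam_d = 0.5, 2.0                     # 1/h, assumed values
A_rn = 50.0                                 # radon activity, Bq/m^3 (assumed)

# Activity balance: A_i = lam_i * A_{i-1} / (lam_i + lam_v + lam_d)
A = np.zeros(3)
prev = A_rn
for i in range(3):
    A[i] = lam[i] * prev / (lam[i] + lam_v + lam_d)
    prev = A[i]

# Equilibrium factor via the standard PAEC weights for the three progeny.
F = A @ np.array([0.105, 0.515, 0.380]) / A_rn
print(A, F)   # F well below 1: removal keeps progeny out of equilibrium
```

Raising the ventilation or deposition rate lowers F, which is exactly why the paper flags those two parameters as the critical inputs to pin down experimentally.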

  15. Decay properties of charm and bottom mesons in a quantum isotonic nonlinear oscillator potential model

    Energy Technology Data Exchange (ETDEWEB)

    Rahmani, S.; Hassanabadi, H. [Shahrood University of Technology, Physics Department, Shahrood (Iran, Islamic Republic of)

    2017-09-15

    Employing a generalized quantum isotonic oscillator potential, we determine the wave function for a mesonic system in the nonrelativistic formalism. We then investigate branching ratios of leptonic decays for heavy-light mesons containing a charm quark. Next, by applying the Isgur-Wise function, we obtain branching ratios of semileptonic decays for mesons containing a bottom quark. The weak decay of the B_c meson is also analyzed to study its lifetime. A comparison with other available theoretical approaches is presented. (orig.)

  16. Probing new physics models of neutrinoless double beta decay with SuperNEMO

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, R. [CNRS/IN2P3, IPHC, Universite de Strasbourg, Strasbourg (France); Augier, C.; Bongrand, M.; Garrido, X.; Jullian, S.; Sarazin, X.; Simard, L. [CNRS/IN2P3, LAL, Universite Paris-Sud 11, Orsay (France); Baker, J.; Caffrey, A.J.; Horkley, J.J.; Riddle, C.L. [INL, Idaho Falls, ID (United States); Barabash, A.S.; Konovalov, S.I.; Umatov, V.I.; Vanyushin, I.A. [Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Basharina-Freshville, A.; Evans, J.J.; Flack, R.; Holin, A.; Kauer, M.; Richards, B.; Saakyan, R.; Thomas, J.; Vasiliev, V.; Waters, D. [University College London, London (United Kingdom); Brudanin, V.; Egorov, V.; Kochetov, O.; Nemchenok, I.; Timkin, V.; Tretyak, V.; Vasiliev, R. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Cebrian, S.; Dafni, T.; Irastorza, I.G.; Gomez, H.; Iguaz, F.J.; Luzon, G.; Rodriguez, A. [University of Zaragoza, Zaragoza (Spain); Chapon, A.; Durand, D.; Guillon, B.; Mauger, F. [Universite de Caen, LPC Caen, ENSICAEN, Caen (France); Chauveau, E.; Hubert, P.; Hugon, C.; Lutter, G.; Marquet, C.; Nachab, A.; Nguyen, C.H.; Perrot, F.; Piquemal, F.; Ricol, J.S. [UMR 5797, Universite de Bordeaux, Centre d' Etudes Nucleaires de Bordeaux Gradignan, Gradignan (France); UMR 5797, CNRS/IN2P3, Centre d' Etudes Nucleaires de Bordeaux Gradignan, Gradignan (France); Deppisch, F.F.; Jackson, C.M.; Nasteva, I.; Soeldner-Rembold, S. [Univ. of Manchester (United Kingdom); Diaz, J.; Monrabal, F.; Serra, L.; Yahlali, N. [CSIC - Univ. de Valencia, IFIC (Spain); Fushima, K.I. [Tokushima Univ., Tokushima (Japan); Holy, K.; Povinec, P.P.; Simkovic, F. [Comenius Univ., FMFI, Bratislava (Slovakia); Ishihara, N. [KEK, Tsukuba, Ibaraki (Japan); Kovalenko, V. [CNRS/IN2P3, IPHC, Univ. de Strasbourg (France); Joint Inst. for Nuclear Research, Dubna (Russian Federation); Lamhamdi, T. [USMBA, Fes (Morocco); Lang, K.; Pahlka, R.B. [Univ. of Texas, Austin, TX (United States)] (and others)

    2010-12-15

    The possibility to probe new physics scenarios of light Majorana neutrino exchange and right-handed currents at the planned next-generation neutrinoless double β decay experiment SuperNEMO is discussed. Its ability to study different isotopes and track the outgoing electrons provides the means to discriminate between different underlying mechanisms for neutrinoless double β decay by measuring the decay half-life and the electron angular and energy distributions. (orig.)

  17. Testable Implications of Quasi-Hyperbolic and Exponential Time Discounting

    OpenAIRE

    Echenique, Federico; Imai, Taisuke; Saito, Kota

    2014-01-01

    We present the first revealed-preference characterizations of the models of exponential time discounting, quasi-hyperbolic time discounting, and other time-separable models of consumers’ intertemporal decisions. The characterizations provide non-parametric revealed-preference tests, which we take to data using the results of a recent experiment conducted by Andreoni and Sprenger (2012). For such data, we find that less than half the subjects are consistent with exponential discounting, and on...
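    The two discounting models being tested can be sketched as weight sequences: exponential discounting assigns weight δ^t to period t, while quasi-hyperbolic (β-δ) discounting additionally scales every future period by a factor β < 1, producing present bias. The parameter values below are illustrative, not estimates from the experiment discussed in the abstract.

```python
def exponential_weights(delta, horizon):
    """w(t) = delta**t for t = 0..horizon."""
    return [delta ** t for t in range(horizon + 1)]

def quasi_hyperbolic_weights(beta, delta, horizon):
    """w(0) = 1, w(t) = beta * delta**t for t >= 1 (present bias when beta < 1)."""
    return [1.0] + [beta * delta ** t for t in range(1, horizon + 1)]

exp_w = exponential_weights(delta=0.95, horizon=3)
qh_w = quasi_hyperbolic_weights(beta=0.7, delta=0.95, horizon=3)
# With beta < 1 every future period is discounted more heavily than under
# the exponential model, but the ratio between consecutive *future* periods
# is the same 0.95 in both models: the quasi-hyperbolic distortion is
# concentrated at the boundary between "now" and "later".
```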

  18. Multiple preequilibrium decay processes

    International Nuclear Information System (INIS)

    Blann, M.

    1987-11-01

    Several treatments of multiple preequilibrium decay are reviewed with emphasis on the exciton and hybrid models. We show the expected behavior of this decay mode as a function of incident nucleon energy. The algorithms used in the hybrid model treatment are reviewed, and comparisons are made between predictions of the hybrid model and a broad range of experimental results. 24 refs., 20 figs.

  19. The McDonald exponentiated gamma distribution and its statistical properties

    OpenAIRE

    Al-Babtain, Abdulhakim A; Merovci, Faton; Elbatal, Ibrahim

    2015-01-01

    In this paper, we propose a five-parameter lifetime model called the McDonald exponentiated gamma distribution to extend the beta exponentiated gamma, Kumaraswamy exponentiated gamma, and exponentiated gamma distributions, among several other models. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the rth moment. We discuss estimation of the parameters by maximum likelihood and provide the information matrix. AMS Subject Classificatio...

  20. A new model for ancient DNA decay based on paleogenomic meta-analysis.

    Science.gov (United States)

    Kistler, Logan; Ware, Roselyn; Smith, Oliver; Collins, Matthew; Allaby, Robin G

    2017-06-20

    The persistence of DNA over archaeological and paleontological timescales in diverse environments has led to a revolutionary body of paleogenomic research, yet the dynamics of DNA degradation are still poorly understood. We analyzed 185 paleogenomic datasets and compared DNA survival with environmental variables and sample ages. We find cytosine deamination follows a conventional thermal age model, but we find no correlation between DNA fragmentation and sample age over the timespans analyzed, even when controlling for environmental variables. We propose a model for ancient DNA decay wherein fragmentation rapidly reaches a threshold, then subsequently slows. The observed loss of DNA over time may be due to a bulk diffusion process in many cases, highlighting the importance of tissues and environments creating effectively closed systems for DNA preservation. This model of DNA degradation is largely based on mammal bone samples due to published genomic dataset availability. Continued refinement to the model to reflect diverse biological systems and tissue types will further improve our understanding of ancient DNA breakdown dynamics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
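    A minimal sketch of the contrast drawn in the abstract: under conventional first-order decay the mean fragment length falls toward zero with sample age, whereas under a threshold model fragmentation approaches a floor and then effectively stalls, so old samples no longer correlate with age. All parameters below are illustrative, not fitted values from the meta-analysis.

```python
import math

def exponential_fragment_length(l0, rate, age):
    """Conventional first-order model: l(t) = l0 * exp(-rate * t), decaying to zero."""
    return l0 * math.exp(-rate * age)

def threshold_fragment_length(l0, l_floor, rate, age):
    """Threshold model: length decays toward a floor l_floor instead of zero."""
    return l_floor + (l0 - l_floor) * math.exp(-rate * age)

ages = [0, 1e3, 1e4, 1e5]  # years (illustrative)
exp_lengths = [exponential_fragment_length(500, 1e-3, a) for a in ages]
thr_lengths = [threshold_fragment_length(500, 60, 1e-3, a) for a in ages]
# In the threshold model, samples older than a few thousand years all sit
# near the ~60 bp floor, mimicking the observed lack of correlation between
# fragmentation and age; the exponential model instead predicts vanishing
# fragment lengths at great age.
```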