WorldWideScience

Sample records for linear-no threshold theory

  1. An experimental test of the linear no-threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1990-01-01

    There is a substantial body of quantitative information on radiation-induced cancer at high dose, but there are no data at low dose. The usual method for estimating effects of low-level radiation is to assume a linear no-threshold dependence. If this linear no-threshold assumption were not used, essentially all fears about radiation would disappear. Since these fears are costing tens of billions of dollars, it is most important that the linear no-threshold theory be tested at low dose. An opportunity for possibly testing the linear no-threshold concept at low dose is now available due to radon in homes. The purpose of this paper is to attempt to use these data to test the linear no-threshold theory.

  2. Test of the linear-no threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1994-01-01

    We recently completed a compilation of radon measurements from available sources which gives the average radon level in homes for 1,730 counties, well over half of all U.S. counties, comprising about 90% of the total U.S. population. Epidemiologists normally study the relationship between mortality risks to individuals, m, vs. their personal exposure, r, whereas an ecological study like ours deals with the relationship between the average risk to groups of individuals (populations of counties) and their average exposure. It is well known to epidemiologists that, in general, the average dose does not determine the average risk, and to assume otherwise is called 'the ecological fallacy'. However, it is easy to show that, in testing a linear-no threshold theory, 'the ecological fallacy' does not apply; in that theory, the average dose does determine the average risk. This is widely recognized from the fact that 'person-rem' determines the number of deaths: dividing person-rem by population gives the average dose, and dividing the number of deaths by population gives the mortality rate. Because of the 'ecological fallacy', epidemiology textbooks often state that an ecological study cannot determine a causal relationship between risk and exposure. That may be true, but it is irrelevant here because the purpose of our study is not to determine a causal relationship; it is rather to test the linear-no threshold dependence of m on r. (author)
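
    The claim that the 'ecological fallacy' does not apply under a linear no-threshold model reduces to a one-line averaging identity; a minimal sketch of that step (the notation is mine, not the record's):

```latex
% If each individual's risk is linear in that individual's exposure, m_i = a + b r_i,
% then averaging over the N residents of a county gives
\[
  \bar{m} \;=\; \frac{1}{N}\sum_{i=1}^{N} m_i
          \;=\; \frac{1}{N}\sum_{i=1}^{N}\left(a + b\,r_i\right)
          \;=\; a + b\,\bar{r},
\]
% so the county-average risk is determined by the county-average exposure alone.
% Under any nonlinear dose-response this step fails and the ecological fallacy returns.
```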

  3. A test of the linear-no threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1990-01-01

    It has been pointed out that, while an ecological study cannot determine whether radon causes lung cancer, it can test the validity of a linear-no threshold relationship between them. The linear-no threshold theory predicts a substantial positive correlation between the average radon exposure in various counties and their lung cancer mortality rates. Data on living areas of houses in 411 counties from all parts of the United States exhibit, rather, a substantial negative correlation, with the slopes of the lines of regression differing from zero by 10 and 7 standard deviations for males and females, respectively, and from the positive slope predicted by the theory by at least 16 and 12 standard deviations. When the data are segmented into 23 groups of states or into 7 regions of the country, the predominantly negative slopes and correlations persist, applying to 18 of the 23 state groups and 6 of the 7 regions. Five state-sponsored studies are analyzed, and four of these give a strong negative slope (the other gives a weak positive slope, in agreement with our data for that state). A strong negative slope is also obtained in our data on basements in 253 counties. A random-selection, no-charge study of 39 high and low lung cancer counties (plus 4 low-population states) gives a much stronger negative correlation. When nine potential confounding factors are included in a multiple linear regression analysis, the discrepancy with theory is reduced only to 12 and 8.5 standard deviations for males and females, respectively. When the data are segmented into four groups by population, the multiple regression vs. radon level gives a strong negative slope for each of the four groups. Other considerations are introduced to reduce the discrepancy, but it remains very substantial.
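
    As a rough illustration of how such slope discrepancies are expressed in standard deviations, the sketch below fits an ordinary least-squares line to synthetic county data and compares the fitted slope with a hypothetical theory-predicted slope; all numbers are invented for illustration and are not the paper's data.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's): regress lung-cancer rate on
# county mean radon level and express the gap between the fitted slope and a
# theory-predicted slope in units of the slope's standard error ("standard deviations").
rng = np.random.default_rng(0)
radon = rng.uniform(0.5, 6.0, 400)                  # hypothetical county mean radon levels
rate = 60.0 - 5.0 * radon + rng.normal(0, 8, 400)   # hypothetical lung-cancer rates

X = np.column_stack([np.ones_like(radon), radon])
coef = np.linalg.lstsq(X, rate, rcond=None)[0]
resid = rate - X @ coef
sigma2 = resid @ resid / (len(rate) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)               # OLS covariance of the coefficients
b, se_b = coef[1], np.sqrt(cov[1, 1])

b_theory = 7.0   # illustrative positive slope a linear no-threshold model might predict
print(f"fitted slope: {b:.2f} +/- {se_b:.2f}")
print(f"discrepancy from zero:   {abs(b) / se_b:.1f} SD")
print(f"discrepancy from theory: {abs(b - b_theory) / se_b:.1f} SD")
```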

  4. Checking the foundation: recent radiobiology and the linear no-threshold theory.

    Science.gov (United States)

    Ulsh, Brant A

    2010-12-01

    The linear no-threshold (LNT) theory has been adopted as the foundation of radiation protection standards and risk estimation for several decades. The "microdosimetric argument" has been offered in support of the LNT theory. This argument postulates that energy is deposited in critical cellular targets by radiation in a linear fashion across all doses down to zero, and that this in turn implies a linear relationship between dose and biological effect across all doses. This paper examines whether the microdosimetric argument holds at the lowest levels of biological organization following low dose, low dose-rate exposures to ionizing radiation. The assumptions of the microdosimetric argument are evaluated in light of recent radiobiological studies on radiation damage in biological molecules and cellular and tissue level responses to radiation damage. There is strong evidence that radiation initially deposits energy in biological molecules (e.g., DNA) in a linear fashion, and that this energy deposition results in various forms of prompt DNA damage that may be produced in a pattern that is distinct from endogenous (e.g., oxidative) damage. However, a large and rapidly growing body of radiobiological evidence indicates that cell and tissue level responses to this damage, particularly at low doses and/or dose-rates, are nonlinear and may exhibit thresholds. To the extent that responses observed at lower levels of biological organization in vitro are predictive of carcinogenesis observed in vivo, this evidence directly contradicts the assumptions upon which the microdosimetric argument is based.

  5. Test of the linear-no threshold theory of radiation carcinogenesis for inhaled radon decay products

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1995-01-01

    Data on lung cancer mortality rates vs. average radon concentration in homes for 1,601 U.S. counties are used to test the linear-no threshold theory. The widely recognized problems with ecological studies, as applied to this work, are addressed extensively. With or without corrections for variations in smoking prevalence, there is a strong tendency for lung cancer rates to decrease with increasing radon exposure, in sharp contrast to the increase expected from the theory. The discrepancy in slope is about 20 standard deviations. It is shown that uncertainties in lung cancer rates, radon exposures, and smoking prevalence are not important and that confounding by 54 socioeconomic factors, by geography, and by altitude and climate can explain only a small fraction of the discrepancy. Effects of known radon-smoking prevalence correlations - rural people have higher radon levels and smoke less than urban people, and smokers are exposed to less radon than non-smokers - are calculated and found to be trivial. In spite of extensive efforts, no potential explanation for the discrepancy other than failure of the linear-no threshold theory for carcinogenesis from inhaled radon decay products could be found. (author)

  6. Test of the linear-no threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1998-01-01

    It is shown that testing the linear-no threshold theory (L-NT) of radiation carcinogenesis is extremely important and that lung cancer resulting from exposure to radon in homes is the best tool for doing this. A study of lung cancer rates vs. radon exposure in U.S. counties, reported in 1995, is reviewed. It shows, with extremely powerful statistics, that lung cancer rates decrease with increasing radon exposure, in sharp contrast to the prediction of L-NT, with a discrepancy of over 20 standard deviations. Very extensive efforts were made to explain an appreciable part of this discrepancy consistently with L-NT, with no success; it was concluded that L-NT fails, grossly exaggerating the cancer risk of low-level radiation. Two updating studies reported in 1996 are also reviewed. New updating studies utilizing more recent lung cancer statistics and considering 450 new potential confounding factors are reported. All updates reinforce the previous conclusion, and the discrepancy with L-NT is increased. (author)

  7. Multi-stratified multiple regression tests of the linear/no-threshold theory of radon-induced lung cancer

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1992-01-01

    A plot of lung-cancer rates versus radon exposures in 965 US counties, or in all US states, has a strong negative slope, b, in sharp contrast to the strong positive slope predicted by linear/no-threshold theory. The discrepancy between these slopes exceeds 20 standard deviations (SD). Including smoking frequency in the analysis substantially improves fits to a linear relationship but has little effect on the discrepancy in b, because correlations between smoking frequency and radon levels are quite weak. Including 17 socioeconomic variables (SEV) in multiple regression analysis reduces the discrepancy to 15 SD. Data were divided into segments by stratifying on each SEV in turn, and on geography, and on both simultaneously, giving over 300 data sets to be analyzed individually, but negative slopes predominated. The slope is negative whether one considers only the most urban counties or only the most rural; only the richest or only the poorest; only the richest in the South Atlantic region or only the poorest in that region, and so on; and for all the strata in between. Since this is an ecological study, the well-known problems with ecological studies were investigated and found not to be applicable here. The 'ecological fallacy' was shown not to apply in testing a linear/no-threshold theory, and the vulnerability to confounding is greatly reduced when confounding factors are only weakly correlated with radon levels, as is generally the case here. All confounding factors known to correlate with radon and with lung cancer were investigated quantitatively and found to have little effect on the discrepancy.

  8. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    International Nuclear Information System (INIS)

    Beyea, Jan

    2017-01-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  9. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    Energy Technology Data Exchange (ETDEWEB)

    Beyea, Jan, E-mail: jbeyea@cipi.com

    2017-04-15

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  10. Validity of the linear no-threshold theory of radiation carcinogenesis at low doses

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1999-01-01

    A great deal is known about the cancer risk of high radiation doses from studies of Japanese A-bomb survivors, patients exposed for medical therapy, occupational exposures, etc. But the vast majority of important applications deal with much lower doses, usually accumulated at much lower dose rates, referred to as 'low-level radiation' (LLR). Conventionally, the cancer risk from LLR has been estimated by the use of linear no-threshold theory (LNT). For example, it is assumed that the cancer risk from 0.01 Sv (1 rem) of dose is 0.01 times the risk from 1 Sv (100 rem). In recent years, the former risk estimates have often been reduced by a 'dose and dose rate reduction factor', which is taken to be a factor of 2. But otherwise, the LNT is frequently used for doses as low as one hundred-thousandth of those for which there is direct evidence of cancer induction by radiation. It is the origin of the commonly used expression 'no level of radiation is safe' and the consequent public fear of LLR. The importance of this use of the LNT cannot be exaggerated; it enters into many applications in the nuclear industry. The LNT paradigm has also been carried over to chemical carcinogens, leading to severe restrictions on use of cleaning fluids, organic chemicals, pesticides, etc. If the LNT were abandoned for radiation, it would probably also be abandoned for chemical carcinogens. In view of these facts, it is important to consider the validity of the LNT. That is the purpose of this paper. (author)
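
    A minimal sketch of the extrapolation arithmetic described above; the risk coefficient and the DDREF of 2 are assumptions for illustration, not values taken from the record.

```python
# Hedged sketch of LNT scaling with a dose and dose-rate effectiveness factor (DDREF).
risk_per_sv = 0.05        # hypothetical lifetime cancer risk per sievert from high-dose data
ddref = 2.0               # dose and dose-rate reduction factor mentioned in the abstract
low_dose_sv = 0.01        # 0.01 Sv (1 rem)

# Under LNT, risk scales linearly with dose; the DDREF halves the high-dose slope.
lnt_risk = (risk_per_sv / ddref) * low_dose_sv
print(f"LNT risk estimate at {low_dose_sv} Sv: {lnt_risk:.1e}")   # 2.5e-04
```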

  11. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations).

    Science.gov (United States)

    Beyea, Jan

    2017-04-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  13. Problems in the radon versus lung cancer test of the linear no-threshold theory and a procedure for resolving them

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1996-01-01

    It has been shown that lung cancer rates in U.S. counties, with or without correction for smoking, decrease with increasing radon exposure, in sharp contrast to the increase predicted by the linear-no threshold (LNT) theory. The discrepancy is by 20 standard deviations, and very extensive efforts to explain it were not successful. Unless a plausible explanation for this discrepancy (or conflicting evidence) can be found, continued use of the LNT theory is a violation of 'the scientific method'. Nevertheless, LNT continues to be accepted and used by all official and governmental organizations, such as the International Commission on Radiological Protection, the National Council on Radiation Protection and Measurements, the National Academy of Sciences Board on Radiation Effects Research, the U.S. Nuclear Regulatory Commission, the Environmental Protection Agency, etc., and there has been no move by any of these bodies to discontinue or limit its use. Assuming that they rely on the scientific method, this clearly implies that they have a plausible explanation for the discrepancy. The author has made great efforts to discover these 'plausible explanations' by inquiries through various channels, and the purpose of this paper is to describe and discuss them.

  14. Thresholds, switches and hysteresis in hydrology from the pedon to the catchment scale: a non-linear systems theory

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others, is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
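
    To make the Preisach construction mentioned above concrete, here is a minimal sketch of a Preisach-type model: a triangular grid of relay operators with switching thresholds beta <= alpha, whose weighted sum produces branching, rate-independent hysteresis. The grid size, uniform weighting, and input history are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

# Minimal Preisach-type hysteresis sketch: each relay switches "up" when the input
# exceeds its alpha threshold and "down" when the input drops below its beta threshold;
# the output is the fraction of relays that are up (uniform Preisach weight for simplicity).
def preisach_response(input_series, n=60):
    alphas, betas = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    mask = betas <= alphas                  # Preisach half-plane beta <= alpha
    state = np.zeros((n, n))                # all relays start "down"
    out = []
    for u in input_series:
        state[(u >= alphas) & mask] = 1.0   # exceeding alpha switches a relay up
        state[(u <= betas) & mask] = 0.0    # dropping below beta switches it down
        out.append(state[mask].mean())
    return np.array(out)

# Wetting branch followed by a drying branch: a hysteresis loop, not a single curve.
u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0, 50)])
y = preisach_response(u)
print(y[25], y[75])   # nearly the same input value (~0.5) on the two branches, very different outputs
```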

  15. Linear, no threshold response at low doses of ionizing radiation: ideology, prejudice and science

    International Nuclear Information System (INIS)

    Kesavan, P.C.

    2014-01-01

    The linear, no threshold (LNT) response model assumes that there is no threshold dose for the radiation-induced genetic effects (heritable mutations and cancer), and it forms the current basis for radiation protection standards for radiation workers and the general public. The LNT model is, however, based more on ideology than valid radiobiological data. Further, phenomena such as 'radiation hormesis', 'radioadaptive response', 'bystander effects' and 'genomic instability' are now demonstrated to be radioprotective and beneficial. More importantly, the 'differential gene expression' reveals that qualitatively different proteins are induced by low and high doses. This finding negates the LNT model which assumes that qualitatively similar proteins are formed at all doses. Thus, all available scientific data challenge the LNT hypothesis. (author)

  16. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    A theory of threshold phenomena in quantum scattering is developed in terms of the reduced scattering matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. The magnitude of a threshold effect is related to the spectroscopic factor of the zero-energy neutron state. The theory of threshold phenomena, based on the reduced scattering matrix, establishes relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, the s-wave threshold anomaly and compound-nucleus resonant scattering, and the p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi-resonant scattering is enhanced provided the neutron threshold state has a large spectroscopic amplitude. The theory contains, as limiting cases, cusp theories and also the results of different nuclear reaction models such as the charge exchange, weak coupling, Bohr, and Hauser-Feshbach models. (author)

  17. Threshold Theory Tested in an Organizational Setting

    DEFF Research Database (Denmark)

    Christensen, Bo T.; Hartmann, Peter V. W.; Hedegaard Rasmussen, Thomas

    2017-01-01

    A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative potential, but above this cutoff point, there is no correlation. Support for the threshold theory of creativity was found, in that the correlation between IQ and innovativeness was positive and significant below a cutoff point of IQ 120. Above the cutoff, no significant relation was identified, and the two correlations differed significantly. The finding was stable across distinct parts of the sample, providing support for the theory, although the correlations in all subsamples were small. The findings lend support to the existence of threshold effects using perceptual measures of behavior in real...
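
    A sketch of the kind of threshold analysis described above, on synthetic data: correlations are computed separately below and above an IQ cutoff of 120 and compared via Fisher's z transformation. The data-generating model and sample size are assumptions for illustration only, not the study's data.

```python
import numpy as np

# Synthetic illustration of a threshold (segmented-correlation) test at IQ 120.
rng = np.random.default_rng(1)
iq = rng.normal(110, 15, 4000)
# Innovativeness rises with IQ below 120, is flat above 120, plus noise (assumed model).
innov = np.where(iq < 120, 0.02 * iq, 0.02 * 120) + rng.normal(0, 1, 4000)

def fisher_z(r, n):
    """Fisher z-transform of a correlation and its approximate standard error."""
    return np.arctanh(r), 1.0 / np.sqrt(n - 3)

lo, hi = iq < 120, iq >= 120
r_lo = np.corrcoef(iq[lo], innov[lo])[0, 1]
r_hi = np.corrcoef(iq[hi], innov[hi])[0, 1]
z_lo, se_lo = fisher_z(r_lo, lo.sum())
z_hi, se_hi = fisher_z(r_hi, hi.sum())
z_diff = (z_lo - z_hi) / np.hypot(se_lo, se_hi)
print(f"r below cutoff = {r_lo:.2f}, r above cutoff = {r_hi:.2f}, z for difference = {z_diff:.1f}")
```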

  18. Linear-No-Threshold Default Assumptions for Noncancer and Nongenotoxic Cancer Risks: A Mathematical and Biological Critique.

    Science.gov (United States)

    Bogen, Kenneth T

    2016-03-01

    To improve U.S. Environmental Protection Agency (EPA) dose-response (DR) assessments for noncarcinogens and for nonlinear mode of action (MOA) carcinogens, the 2009 NRC Science and Decisions Panel recommended that the adjustment-factor approach traditionally applied to these endpoints should be replaced by a new default assumption that both endpoints have linear-no-threshold (LNT) population-wide DR relationships. The panel claimed this new approach is warranted because population DR is LNT when any new dose adds to a background dose that explains background levels of risk, and/or when there is substantial interindividual heterogeneity in susceptibility in the exposed human population. Mathematically, however, the first claim is either false or effectively meaningless and the second claim is false. Any dose- and population-response relationship that is statistically consistent with an LNT relationship may instead be an additive mixture of just two quasi-threshold DR relationships, which jointly exhibit low-dose S-shaped, quasi-threshold nonlinearity just below the lower end of the observed "linear" dose range. In this case, LNT extrapolation would necessarily overestimate increased risk by increasingly large relative magnitudes at diminishing values of above-background dose. The fact that chemically-induced apoptotic cell death occurs by unambiguously nonlinear, quasi-threshold DR mechanisms is apparent from recent data concerning this quintessential toxicity endpoint. The 2009 NRC Science and Decisions Panel claims and recommendations that default LNT assumptions be applied to DR assessment for noncarcinogens and nonlinear MOA carcinogens are therefore not justified either mathematically or biologically. © 2015 The Author. Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
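
    The mathematical point about mixtures can be illustrated with a short numerical sketch: a 50/50 mixture of two steep Hill-type (quasi-threshold) curves is fit well by a straight line over an "observed" dose range, yet lies far below the linear extrapolation at lower doses. All parameters here are illustrative choices of mine, not the paper's.

```python
import numpy as np

def hill(d, d50, n):
    """Steep sigmoidal (quasi-threshold) dose-response curve."""
    return d**n / (d50**n + d**n)

dose_obs = np.linspace(1.0, 10.0, 20)                  # "observed" dose range
mix = 0.5 * hill(dose_obs, 2.0, 4) + 0.5 * hill(dose_obs, 6.0, 4)

slope, intercept = np.polyfit(dose_obs, mix, 1)        # LNT-style straight-line fit
r = np.corrcoef(dose_obs, mix)[0, 1]

d_low = 0.2                                            # below the observed range
mix_low = 0.5 * hill(d_low, 2.0, 4) + 0.5 * hill(d_low, 6.0, 4)
print(f"linear fit over observed range: r = {r:.3f}")
print(f"linear extrapolation at d={d_low}: {slope * d_low + intercept:.3f}")
print(f"mixture model at d={d_low}:        {mix_low:.5f}")   # orders of magnitude lower
```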

  19. Reaction thresholds in doubly special relativity

    International Nuclear Information System (INIS)

    Heyman, Daniel; Major, Seth; Hinteleitner, Franz

    2004-01-01

    Two theories of special relativity with an additional invariant scale, 'doubly special relativity', are tested with calculations of particle process kinematics. Using the Judes-Visser modified conservation laws, thresholds are studied in both theories. In contrast with some linear approximations, which allow for particle processes forbidden in special relativity, both the Amelino-Camelia and Magueijo-Smolin frameworks allow no additional processes. To first order, the Amelino-Camelia framework thresholds are lowered and the Magueijo-Smolin framework thresholds may be raised or lowered

  20. Three caveats for linear stability theory: Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Greenside, H.S.

    1984-06-01

    Recent theories and experiments challenge the applicability of linear stability theory near the onset of buoyancy-driven (Rayleigh-Benard) convection. This stability theory, based on small perturbations of infinite parallel rolls, is found to miss several important features of the convective flow. The reason is that the lateral boundaries have a profound influence on the possible wave numbers and flow patterns even for the largest cells studied. Also, the nonlinear growth of incoherent unstable modes distorts the rolls, leading to a spatially disordered and sometimes temporally nonperiodic flow. Finally, the relation of the skewed varicose instability to the onset of turbulence (nonperiodic time dependence) is examined. Linear stability theory may not suffice to predict the onset of time dependence in large cells close to threshold

  1. The risk of low doses of ionising radiation and the linear no threshold relationship debate

    International Nuclear Information System (INIS)

    Tubiana, M.; Masse, R.; Vathaire, F. de; Averbeck, D.; Aurengo, A.

    2007-01-01

    The ICRP and BEIR VII reports recommend a linear no-threshold (LNT) relationship for estimating the excess cancer risk induced by ionising radiation (IR), but the 2005 report of the French Academies of Medicine and Science concludes that it overestimates the risk at low and very low doses. The bases of the LNT are challenged by recent biological and animal experimental studies which show that the defence against IR involves the cell microenvironment and the immune system. The defence mechanisms against low doses are different and comparatively more effective than those against high doses. Against low doses, cell death predominates; against high doses, DNA repair is activated in order to preserve tissue function. These mechanisms provide multicellular organisms with an effective and low-cost defence system. The differences between low- and high-dose defence mechanisms are obvious for alpha emitters, which show threshold effects at several grays. These differences undermine epidemiological studies which, for reasons of statistical power, amalgamate high- and low-dose exposure data, since doing so implies that IR cancer induction and defence mechanisms are similar in both cases. Low-dose IR risk estimates should rely on specific epidemiological studies restricted to low-dose exposures and taking potential confounding factors precisely into account. A preliminary synthesis of cohort studies for which low-dose data (< 100 mSv) were available shows no significant excess risk, either for solid cancers or for leukemias. (authors)

  2. Surveys of radon levels in homes in the United States: A test of the linear-no-threshold dose-response relationship for radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1987-01-01

    The University of Pittsburgh Radon Project for large-scale measurements of radon concentrations in homes is described. Its principal research goal is to test the linear-no threshold dose-response relationship for radiation carcinogenesis by determining average radon levels in the 25 U.S. counties (within certain population ranges) with the highest and lowest lung cancer rates. The theory predicts that the former should have about 3 times higher average radon levels than the latter, under the assumption that any correlation between exposure to radon and exposure to other causes of lung cancer is weak. The validity of this assumption is tested with data on average radon level vs. replies to items on questionnaires; there is little correlation between radon levels in houses and the smoking habits, educational attainment, or economic status of the occupants, or with urban vs. rural environs, which is an indicator of exposure to air pollution.

  3. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimator which combines dimension reduction and thresholding. The introduction of a threshold rule allows consistency to be obtained under broad assumptions, as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits one to easily obtain mean squ...
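
    A rough sketch of the projection-plus-thresholding idea in a functional linear model, using a cosine basis and a hard threshold on estimated coefficients; the basis size, threshold level, and data-generating model are illustrative assumptions and not the authors' exact estimator.

```python
import numpy as np

# Functional linear model Y_i = <X_i, beta> + noise, estimated by projecting onto a
# trigonometric basis and hard-thresholding small estimated coefficients.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
basis = np.vstack([np.ones_like(t)] +
                  [np.sqrt(2) * np.cos(np.pi * k * t) for k in range(1, 30)])  # 30 x 200

true_coef = np.r_[1.0, 0.5, -0.3, np.zeros(27)]      # sparse slope function in the basis
beta = basis.T @ true_coef
X = rng.normal(size=(300, 200))                      # functional covariates on the grid
y = X @ beta / 200 + rng.normal(0, 0.05, 300)        # discretized <X_i, beta> plus noise

scores = X @ basis.T / 200                           # projection: basis scores of each curve
coef_hat = np.linalg.lstsq(scores, y, rcond=None)[0]
coef_hat[np.abs(coef_hat) < 0.15] = 0.0              # hard-threshold small coefficients
beta_hat = basis.T @ coef_hat                        # reconstructed slope function
print("kept basis coefficients:", np.flatnonzero(coef_hat))
```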

  4. Evaluating the "Threshold Theory": Can Head Impact Indicators Help?

    Science.gov (United States)

    Mihalik, Jason P; Lynall, Robert C; Wasserman, Erin B; Guskiewicz, Kevin M; Marshall, Stephen W

    2017-02-01

    This study aimed to determine the clinical utility of biomechanical head impact indicators by measuring the sensitivity, specificity, positive predictive value (PV+), and negative predictive value (PV-) of multiple thresholds. Head impact biomechanics (n = 283,348) from 185 football players in one Division I program were collected. A multidisciplinary clinical team independently made concussion diagnoses (n = 24). We dichotomized each impact using diagnosis (yes = 24, no = 283,324) and across a range of plausible impact indicator thresholds (10g increments beginning with a resultant linear head acceleration of 50g and ending with 120g). Some thresholds had adequate sensitivity, specificity, and PV-. All thresholds had low PV+, with the best recorded PV+ less than 0.4% when accounting for all head impacts sustained by our sample. Even when conservatively adjusting the frequency of diagnosed concussions by a factor of 5 to account for unreported/undiagnosed injuries, the PV+ of head impact indicators at any threshold was no greater than 1.94%. Although specificity and PV- appear high, the low PV+ would generate many unnecessary evaluations if these indicators were the sole diagnostic criteria. The clinical diagnostic value of head impact indicators is considerably questioned by these data. Notwithstanding, valid sensor technologies continue to offer objective data that have been used to improve player safety and reduce injury risk.
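
    A back-of-the-envelope sketch of why PV+ is necessarily low at this prevalence: with 24 diagnosed concussions among 283,348 impacts, even a highly specific threshold produces far more false positives than true positives. The sensitivity and specificity values below are assumptions for illustration, not the study's measured values.

```python
# Positive predictive value from assumed sensitivity/specificity at the study's prevalence.
n_impacts = 283_348
n_concussions = 24

sensitivity = 0.75   # assumed fraction of diagnosed concussions exceeding the threshold
specificity = 0.99   # assumed fraction of non-injurious impacts below the threshold

tp = sensitivity * n_concussions                      # expected true positives
fp = (1 - specificity) * (n_impacts - n_concussions)  # expected false positives
ppv = tp / (tp + fp)
print(f"positive predictive value: {ppv:.4%}")        # well under 1% even at 99% specificity
```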

  5. Permitted and forbidden sets in symmetric threshold-linear networks.

    Science.gov (United States)

    Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques

    2003-03-01

    The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
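
    A minimal simulation of the symmetric threshold-linear dynamics studied in the paper, dx/dt = -x + [Wx + b]+, showing convergence to a fixed point in which only a "permitted" subset of units remains active; the particular weight matrix and inputs are my own illustrative choices, not taken from the paper.

```python
import numpy as np

# Symmetric threshold-linear network with mutual inhibition, integrated by forward Euler.
W = np.array([[ 0.0, -0.6, -0.6],
              [-0.6,  0.0, -0.6],
              [-0.6, -0.6,  0.0]])        # symmetric, mutually inhibitory weights (assumed)
b = np.array([1.0, 0.8, 0.6])             # external inputs (assumed)

x = np.array([0.5, 0.5, 0.5])
dt = 0.01
for _ in range(20_000):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))   # rectified (threshold-linear) dynamics

print("fixed point:", np.round(x, 3))                       # ~ [0.81, 0.31, 0.0]
print("active (permitted) set:", np.flatnonzero(x > 1e-6))  # the weakest-driven unit shuts off
```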

  6. Linear theory on temporal instability of megahertz faraday waves for monodisperse microdroplet ejection.

    Science.gov (United States)

    Tsai, Shirley C; Tsai, Chen S

    2013-08-01

    A linear theory on temporal instability of megahertz Faraday waves for monodisperse microdroplet ejection based on mass conservation and linearized Navier-Stokes equations is presented using the most recently observed micrometer-sized droplet ejection from a millimeter-sized spherical water ball as a specific example. The theory is verified in the experiments utilizing silicon-based multiple-Fourier horn ultrasonic nozzles at megahertz frequency to facilitate temporal instability of the Faraday waves. Specifically, the linear theory not only correctly predicted the Faraday wave frequency and onset threshold of Faraday instability, the effect of viscosity, the dynamics of droplet ejection, but also established the first theoretical formula for the size of the ejected droplets, namely, the droplet diameter equals four-tenths of the Faraday wavelength involved. The high rate of increase in Faraday wave amplitude at megahertz drive frequency subsequent to onset threshold, together with enhanced excitation displacement on the nozzle end face, facilitated by the megahertz multiple Fourier horns in resonance, led to high-rate ejection of micrometer-sized monodisperse droplets (>10⁷ droplets/s) at low electrical drive power (<1 W) with short initiation time (<0.05 s). This is in stark contrast to the Rayleigh-Plateau instability of a liquid jet, which ejects one droplet at a time. The measured diameters of the droplets ranging from 2.2 to 4.6 μm at 2 to 1 MHz drive frequency fall within the optimum particle size range for pulmonary drug delivery.
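
    As an order-of-magnitude check on the stated droplet-size result, the sketch below combines the standard capillary-wave dispersion relation with the fact that Faraday waves oscillate at half the drive frequency, then applies the "diameter ≈ 0.4 × Faraday wavelength" rule quoted above. The water properties and the choice of dispersion relation are my assumptions, not the paper's derivation.

```python
import numpy as np

# Estimate Faraday wavelength and droplet size for the drive frequencies quoted above.
rho, sigma = 1000.0, 0.072          # water density (kg/m^3) and surface tension (N/m), assumed

for f_drive in (1e6, 2e6):          # 1 and 2 MHz drive frequencies from the abstract
    f_faraday = f_drive / 2.0       # Faraday waves oscillate at half the drive frequency
    omega = 2 * np.pi * f_faraday
    k = (rho * omega**2 / sigma) ** (1 / 3)   # capillary-wave dispersion: omega^2 = sigma k^3 / rho
    wavelength = 2 * np.pi / k
    print(f"{f_drive/1e6:.0f} MHz drive: lambda = {wavelength*1e6:.1f} um, "
          f"droplet diameter ~ {0.4 * wavelength * 1e6:.1f} um")
# Yields roughly 5 um and 3 um diameters, consistent with the 2.2-4.6 um range reported.
```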

  7. The Theory of Linear Prediction

    CERN Document Server

    Vaidyanathan, PP

    2007-01-01

    Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vecto

  8. Linear contextual modal type theory

    DEFF Research Database (Denmark)

    Schack-Nielsen, Anders; Schürmann, Carsten

    When one implements a logical framework based on linear type theory, for example the Celf system [?], one is immediately confronted with questions about its equational theory and how to deal with logic variables. In this paper, we propose linear contextual modal type theory that gives a mathematical account of the nature of logic variables. Our type theory is conservative over intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. Our main contributions include a mechanically checked proof of soundness and a working implementation.

  9. Employing Theories Far beyond Their Limits - Linear Dichroism Theory.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2018-05-15

    Using linearly polarized light, it is possible in the case of ordered structures, such as stretched polymers or single crystals, to determine the orientation of the transition moments of electronic and vibrational transitions. This not only helps to resolve overlapping bands, but also to assign the symmetry species of the transitions and to elucidate the structure. To perform spectral evaluation quantitatively, an approach sometimes called "Linear Dichroism Theory" is very often used. This approach links the relative orientation of the transition moment and polarization direction to the quantity absorbance. This linkage is highly questionable for several reasons. First of all, absorbance is a quantity that is by its definition not compatible with Maxwell's equations. Furthermore, absorbance seems not to be the quantity which is generally compatible with linear dichroism theory. In addition, linear dichroism theory disregards that it is not only the angle between transition moment and polarization direction, but also the angle between sample surface and transition moment, that influences band shape and intensity. Accordingly, the often invoked "magic angle" has never existed, and the orientation distribution influences spectra to a much higher degree than it would if linear dichroism theory held strictly. A last point that is completely ignored by linear dichroism theory is the fact that partially oriented or randomly-oriented samples usually consist of ordered domains. It is their size relative to the wavelength of light that can also greatly influence a spectrum. All these findings can help to elucidate orientation to a much higher degree by optical methods than currently thought possible by the users of linear dichroism theory. Hence, it is the goal of this contribution to point out these shortcomings of linear dichroism theory to its users, to stimulate efforts to overcome the long-lasting stagnation of this important field. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA.

  10. Linear non-threshold (LNT) radiation hazards model and its evaluation

    International Nuclear Information System (INIS)

    Min Rui

    2011-01-01

    In order to introduce the linear non-threshold (LNT) model used in studies of the dose effect of radiation hazards and to evaluate its application, a comprehensive analysis of the literature was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. A repairable-conditionally repairable model of cell radiation effects can account well for cell survival curves over the whole range of high, medium and low absorbed doses. There are still many uncertainties in assessment models of the effective dose from internal radiation based on the LNT assumptions and the individual mean organ equivalent dose, and it is necessary to establish gender-specific voxel human models, taking gender differences into account. In summary, the advantages and disadvantages of the various models coexist. Until a new theory and a new model are established, retaining the LNT model is still the most scientific attitude. (author)

  11. Molecular biology, epidemiology, and the demise of the linear no-threshold hypothesis

    International Nuclear Information System (INIS)

    Pollycove, M.

    1998-01-01

    The LNT hypothesis is the basic principle of all radiation protection policy. This theory assumes that all radiation doses, even those close to zero, are harmful in linear proportion to dose and that all doses produce a proportionate number of harmful mutations, i.e., mis- or unrepaired DNA alterations. The LNT theory is used to generate collective dose calculations of the number of deaths produced by minute fractions of background radiation. Current molecular biology reveals an enormous amount of relentless metabolic oxidative free radical damage with mis/unrepaired alterations of DNA. The corresponding mis/unrepaired DNA alterations produced by background radiation are negligible. These DNA alterations are effectively disposed of by the DNA damage-control biosystem of antioxidant prevention, enzymatic repair, and mutation removal. High-dose radiation injures this biosystem with associated risk increments of mortality and cancer mortality. Low-dose radiation stimulates DNA damage-control with associated epidemiologic observations of risk decrements of mortality and cancer mortality, i.e., hormesis. How can this 40-year-old LNT paradigm continue to be the operative principle of radiation protection policy despite the contradictory scientific observations of both molecular biology and epidemiology and the lack of any supportive human data? The increase of public fear through repeated statements of deaths caused by 'deadly' radiation has engendered an enormous increase in expenditures now required to 'protect' the public from all applications of nuclear technology: medical, research, energy, disposal, and cleanup remediation. Government funds are allocated to appointed committees, the research they support, and to multiple environmental and regulatory agencies. The LNT theory and multibillion dollar radiation activities have now become a symbiotic self-sustaining powerful political and economic force. (author)

  12. Linear system theory

    Science.gov (United States)

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.

  13. Response to, "On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith.".

    Science.gov (United States)

    Beyea, Jan

    2016-07-01

    It is not true that successive groups of researchers from academia and research institutions-scientists who served on panels of the US National Academy of Sciences (NAS)-were duped into supporting a linear no-threshold model (LNT) by the opinions expressed in the genetic panel section of the 1956 "BEAR I" report. Successor reports had their own views of the LNT model, relying on mouse and human data, not fruit fly data. Nor was the 1956 report biased and corrupted, as has been charged in an article by Edward J. Calabrese in this journal. With or without BEAR I, the LNT model would likely have been accepted in the US for radiation protection purposes in the 1950's. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Linear network theory

    CERN Document Server

    Sander, K F

    1964-01-01

    Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies

  15. Linear spaces: history and theory

    OpenAIRE

    Albrecht Beutelspacher

    1990-01-01

    Linear spaces are among the most fundamental geometric and combinatorial structures. In this paper I would like to give an overview of the theory of embedding finite linear spaces in finite projective planes.

  16. Theory of linear operations

    CERN Document Server

    Banach, S

    1987-01-01

    This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a₁x₁ + a₂x₂ + ... + aₙxₙ of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.

  17. Low level radiation: how the linear no-threshold model protects the safety of Canadians

    International Nuclear Information System (INIS)

    Anon.

    2010-01-01

    The linear no-threshold model is a risk model used worldwide by most nuclear regulatory health organizations to establish dose limits for workers and the public. It is at the heart of the approach adopted by the Canadian Nuclear Safety Commission (CCSN) in matters of radiation protection. The linear no-threshold model prudently presumes that there is a direct link between radiation exposure and cancer rate. There is no scientific evidence that chronic exposure to radiation doses below 100 millisievert (mSv) leads to harmful health effects. Several scientific reports have highlighted evidence suggesting that low levels of radiation are less harmful than the linear no-threshold model predicts. Since the linear no-threshold model presumes that any radiation exposure carries some risk, the ALARA principle obliges licensees to keep radiation exposure at the lowest level reasonably achievable, social and economic factors being taken into account. The ALARA principle constitutes a basic principle of the CCSN approach to radiation protection. With respect to radiation protection, the CCSN takes a cautious approach that protects the health and safety of Canadians and their environment. (N.C.)

  18. Dynamical fusion thresholds in macroscopic and microscopic theories

    International Nuclear Information System (INIS)

    Davies, K.T.R.; Sierk, A.J.; Nix, J.R.

    1983-01-01

    Macroscopic and microscopic results demonstrating the existence of dynamical fusion thresholds are presented. For macroscopic theories, it is shown that the extra-push dynamics is sensitive to some details of the models used, e.g. the shape parametrization and the type of viscosity. The dependence of the effect upon the charge and angular momentum of the system is also studied. Calculated macroscopic results for mass-symmetric systems are compared to experimental mass-asymmetric results by use of a tentative scaling procedure, which takes into account both the entrance-channel and the saddle-point regions of configuration space. Two types of dynamical fusion thresholds occur in TDHF studies: (1) the microscopic analogue of the macroscopic extra push threshold, and (2) the relatively high energy at which the TDHF angular momentum window opens. Both of these microscopic thresholds are found to be very sensitive to the choice of the effective two-body interaction

  19. A biological basis for the linear non-threshold dose-response relationship for low-level carcinogen exposure

    International Nuclear Information System (INIS)

    Albert, R.E.

    1981-01-01

    This chapter examines low-level dose-response relationships in terms of the two-stage mouse tumorigenesis model. It analyzes the feasibility of the linear non-threshold dose-response model, which was first adopted for use in the assessment of cancer risks from ionizing radiation and more recently from chemical carcinogens. It finds that both the interaction of B(a)P with epidermal DNA of the mouse skin and the dose-response relationship for the initiation stage of mouse skin tumorigenesis exhibit a linear non-threshold dose-response relationship. It concludes that low-level exposure to environmental carcinogens has a linear non-threshold dose-response relationship, with the carcinogen acting as an initiator and the promoting action being supplied by the factors responsible for the background cancer rate in the target tissue.

  20. Modification of linear response theory for mean-field approximations

    NARCIS (Netherlands)

    Hütter, M.; Öttinger, H.C.

    1996-01-01

    In the framework of statistical descriptions of many particle systems, the influence of mean-field approximations on the linear response theory is studied. A procedure, analogous to one where no mean-field approximation is involved, is used in order to determine the first order response of the

  1. Numerical linear algebra theory and applications

    CERN Document Server

    Beilina, Larisa; Karchevskii, Mikhail

    2017-01-01

    This book combines a solid theoretical background in linear algebra with practical algorithms for numerical solution of linear algebra problems. Developed from a number of courses taught repeatedly by the authors, the material covers topics like matrix algebra, theory for linear systems of equations, spectral theory, vector and matrix norms combined with main direct and iterative numerical methods, least squares problems, and eigen problems. Numerical algorithms illustrated by computer programs written in MATLAB® are also provided as supplementary material on SpringerLink to give the reader a better understanding of professional numerical software for the solution of real-life problems. Perfect for a one- or two-semester course on numerical linear algebra, matrix computation, and large sparse matrices, this text will interest students at the advanced undergraduate or graduate level.

  2. Algebraic Theory of Linear Viscoelastic Nematodynamics

    International Nuclear Information System (INIS)

    Leonov, Arkady I.

    2008-01-01

    This paper consists of two parts. The first one develops an algebraic theory of linear anisotropic nematic 'N-operators' built up on the additive group of traceless second-rank 3D tensors. These operators have been implicitly used in continual theories of nematic liquid crystals and weakly elastic nematic elastomers. It is shown that there exists a non-commutative, multiplicative group N₆ of N-operators built up on a manifold in the 6D space of parameters. Positive N-operators, which in physical applications satisfy thermodynamic stability constraints, do not generally form a subgroup of the group N₆. A three-parametric, commutative, transversal-isotropic subgroup S₃ ⊂ N₆ of positive symmetric nematic operators is also briefly discussed. The special case of singular, non-negative symmetric N-operators reveals the algebraic structure of nematic soft deformation modes. The second part of the paper develops a theory of linear viscoelastic nematodynamics applicable to liquid crystalline polymers. The viscous and elastic nematic components of the theory are described using the Leslie-Ericksen-Parodi (LEP) approach for viscous nematics and the de Gennes free energy for weakly elastic nematic elastomers. The case of an applied external magnetic field exemplifies the occurrence of non-symmetric stresses. In spite of the multi-parametric (ten-parameter) character of the theory, the use of nematic operators presents it in a transparent form. When the magnetic field is absent, the theory simplifies to a six-parameter symmetric case, and takes an extremely simple, two-parametric form for viscoelastic nematodynamics with possible soft deformation modes. It is shown that linear nematodynamics is always reducible to LEP-like equations in which the coefficients are replaced by linear memory functionals whose parameters are calculated from the original viscosities and moduli.

  3. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2015-01-01

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  4. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu

    2015-10-15

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  5. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Y.

    1994-01-01

    Adiabatic energy eigenvalues of H₂⁺ are computed for complex values of the internuclear distance R. The infinite number of bound-state eigenenergies is represented by a function ε(R) that is single valued on a multisheeted Riemann surface. A region is found where ε(R) and the corresponding eigenfunctions exhibit harmonic-oscillator structure characteristic of electron motion on a potential saddle. The Schroedinger equation is solved in the adiabatic approximation along a path in the complex R plane to compute ionization cross sections. The cross section thus obtained joins the Wannier threshold region with the keV energy region, but the exponent near the ionization threshold disagrees with well-accepted values. Accepted values are obtained when a lowest-order diabatic correction is employed, indicating that adiabatic approximations do not give the correct zero-velocity limit for ionization cross sections. Semiclassical eigenvalues for general top-of-barrier motion are given, and the theory is applied to the ionization of atomic hydrogen by electron impact. The theory with a first diabatic correction gives the Wannier threshold law even for this case.

  6. Canonical perturbation theory in linearized general relativity theory

    International Nuclear Information System (INIS)

    Gonzales, R.; Pavlenko, Yu.G.

    1986-01-01

    Canonical perturbation theory in linearized general relativity theory is developed. It is shown that the evolution of an arbitrary dynamical quantity, governed by the interaction of particles with the gravitational and electromagnetic fields, can be represented as a series, each term of which corresponds to the contribution of a certain spontaneous or induced process. The main concepts of the approach are presented in the approximation of a weak gravitational field.

  7. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

    Linear Programming provides an in-depth look at simplex-based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This is then followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.

  8. Groundwater decline and tree change in floodplain landscapes: Identifying non-linear threshold responses in canopy condition

    Directory of Open Access Journals (Sweden)

    J. Kath

    2014-12-01

    Full Text Available Groundwater decline is widespread, yet its implications for natural systems are poorly understood. Previous research has revealed links between groundwater depth and tree condition; however, critical thresholds which might indicate ecological ‘tipping points’ associated with rapid and potentially irreversible change have been difficult to quantify. This study collated data for two dominant floodplain species, Eucalyptus camaldulensis (river red gum) and E. populnea (poplar box), from 118 sites in eastern Australia where significant groundwater decline has occurred. Boosted regression trees, quantile regression and Threshold Indicator Taxa Analysis were used to investigate the relationship between tree condition and groundwater depth. Distinct non-linear responses were found, with groundwater depth thresholds identified in the range from 12.1 m to 22.6 m for E. camaldulensis and 12.6 m to 26.6 m for E. populnea, beyond which canopy condition declined abruptly. Non-linear threshold responses in canopy condition in these species may be linked to rooting depth, with chronic groundwater decline decoupling trees from deep soil moisture resources. The quantification of groundwater depth thresholds is likely to be critical for management aimed at conserving groundwater dependent biodiversity. Identifying thresholds will be important in regions where water extraction and drying climates may contribute to further groundwater decline. Keywords: Canopy condition, Dieback, Drought, Tipping point, Ecological threshold, Groundwater dependent ecosystems

  9. Linear radial pulsation theory. Lecture 5

    International Nuclear Information System (INIS)

    Cox, A.N.

    1983-01-01

    We describe a method for getting an equilibrium stellar envelope model using as input the total mass, the envelope mass, the surface effective temperature, the total surface luminosity, and the composition of the envelope. Then, with the structure of the envelope model known, we present a method for obtaining the radial pulsation periods and growth rates for low-order modes. The large-amplitude pulsations observed for the yellow and red giants and supergiants are always these radial modes, but for the stars nearer the main sequence, as for all of our stars and for the white dwarfs, there frequently are nonradial modes occurring also. Application of linear radial pulsation theory is made to the giant star sigma Scuti variables, while the linear nonradial theory will be used for the B stars in later lectures.

  10. Problems of linear electron (polaron) transport theory in semiconductors

    CERN Document Server

    Klinger, M I

    1979-01-01

    Problems of Linear Electron (Polaron) Transport Theory in Semiconductors summarizes and discusses the development of areas in electron transport theory in semiconductors, with emphasis on the fundamental aspects of the theory and the essential physical nature of the transport processes. The book is organized into three parts. Part I focuses on some general topics in the theory of transport phenomena: the general dynamical theory of linear transport in dissipative systems (Kubo formulae) and the phenomenological theory. Part II deals with the theory of polaron transport in a crystalline semicon

  11. Linear conversion theory on the second harmonic emission from a plasma filament

    International Nuclear Information System (INIS)

    Tan Weihan; Gu Min

    1989-01-01

    The linear conversion theory of laser-produced plasma filaments is studied. Calculations of the energy flux of the second harmonic emission on the basis of the planar wave-plasma interaction model show that there is no 2ω₀ harmonic emission in the direction perpendicular to the incident laser, in contradiction with the experiments. A linear conversion theory of second harmonic emission from a plasma filament is therefore proposed, which yields intense 2ω₀ harmonic emission in the direction perpendicular to the incident laser, in agreement with the experiments. (author)

  12. Game Theory and its Relationship with Linear Programming Models ...

    African Journals Online (AJOL)

    Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and the linear programming problem are closely related subjects, since any computing method devised for ...

  13. Demystifying nuclear power: the linear non-threshold model and its use for evaluating radiation effects on living organisms

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alexandre F.; Vasconcelos, Miguel F.; Vergueiro, Sophia M. C.; Lima, Suzylaine S., E-mail: alex.ramos@usp.br [Universidade de São Paulo (USP), SP (Brazil). Núcleo Interdisciplinar de Modelagem de Sistemas Complexos

    2017-07-01

    Recently, a new variable has been introduced into nuclear power expansion policy: public opinion. That variable challenges the nuclear community to develop new programs aimed at educating sectors of society that are interested in energy generation but not necessarily familiar with concepts of the nuclear field. Here we approach this challenge by discussing how a misconception about the use of theories in science has misled the interpretation of the consequences of the Chernobyl accident. That discussion has been presented to students from fields related to the Environmental Sciences and Humanities and has helped to elucidate that an extrapolation such as the Linear Non-Threshold model is a hypothesis to be tested experimentally rather than a theoretical tool with predictive power. (author)

  14. Demystifying nuclear power: the linear non-threshold model and its use for evaluating radiation effects on living organisms

    International Nuclear Information System (INIS)

    Ramos, Alexandre F.; Vasconcelos, Miguel F.; Vergueiro, Sophia M. C.; Lima, Suzylaine S.

    2017-01-01

    Recently, a new variable has been introduced into nuclear power expansion policy: public opinion. That variable challenges the nuclear community to develop new programs aimed at educating sectors of society that are interested in energy generation but not necessarily familiar with concepts of the nuclear field. Here we approach this challenge by discussing how a misconception about the use of theories in science has misled the interpretation of the consequences of the Chernobyl accident. That discussion has been presented to students from fields related to the Environmental Sciences and Humanities and has helped to elucidate that an extrapolation such as the Linear Non-Threshold model is a hypothesis to be tested experimentally rather than a theoretical tool with predictive power. (author)

  15. An enstrophy-based linear and nonlinear receptivity theory

    Science.gov (United States)

    Sengupta, Aditi; Suman, V. K.; Sengupta, Tapan K.; Bhaumik, Swagata

    2018-05-01

    In the present research, a new theory of instability based on enstrophy is presented for incompressible flows. Explaining instability through enstrophy is counter-intuitive, as enstrophy has usually been associated with dissipation for the Navier-Stokes equation (NSE). The developed theory is valid for both linear and nonlinear stages of disturbance growth. A previously developed nonlinear theory of incompressible flow instability based on total mechanical energy, described in the work of Sengupta et al. ["Vortex-induced instability of an incompressible wall-bounded shear layer," J. Fluid Mech. 493, 277-286 (2003)], is used for comparison with the present enstrophy-based theory. The developed equations for disturbance enstrophy and disturbance mechanical energy are derived from the NSE without any simplifying assumptions, in contrast to other classical linear/nonlinear theories. The theory is tested for bypass transition caused by a free-stream convecting vortex over a zero-pressure-gradient boundary layer. We explain the creation of smaller scales in the flow by a cascade of enstrophy, which creates rotationality, in general inhomogeneous flows. Linear and nonlinear versions of the theory help explain the vortex-induced instability problem under consideration.

  16. Threshold Concept Theory as an Enabling Constraint: A Facilitated Practitioner Action Research Study

    Science.gov (United States)

    Harlow, Ann; Cowie, Bronwen; McKie, David; Peter, Mira

    2017-01-01

    International interest is growing in how threshold concept theory can transform tertiary teaching and learning. A facilitated practitioner action research project investigating the potential of threshold concepts across several disciplines offers a practical contribution and helps to consolidate this international field of research. In this…

  17. Thresholds and criteria for evaluating and communicating impact significance in environmental statements: 'See no evil, hear no evil, speak no evil'?

    International Nuclear Information System (INIS)

    Wood, Graham

    2008-01-01

    The evaluation and communication of the significance of environmental effects remains a critical yet poorly understood component of EIA theory and practice. Following a conceptual overview of the generic dimensions of impact significance in EIA, this paper reports upon the findings of an empirical study of recent environmental impact statements that considers the treatment of significance for impacts concerning landscape ('see no evil') and noise ('hear no evil'), focussing specifically upon the evaluation and communication of impact significance ('speak no evil') in UK practice. Particular attention is given to the use of significance criteria and thresholds, including the development of a typology of approaches applied within the context of noise and landscape/visual impacts. Following a broader discussion of issues surrounding the formulation, application and interpretation of significance criteria, conclusions and recommendations relevant to wider EIA practice are suggested

  18. Scattering theory of the linear Boltzmann operator

    International Nuclear Information System (INIS)

    Hejtmanek, J.

    1975-01-01

    In time-dependent scattering theory we know three important examples: the wave equation around an obstacle, the Schroedinger and the Dirac equation with a scattering potential. In this paper another example, from time-dependent linear transport theory, is added and considered in full detail. First the linear Boltzmann operator in certain Banach spaces is rigorously defined, and then the existence of the Moeller operators is proved by use of the theorem of Cook-Jauch-Kuroda, which is generalized to the case of a Banach space. (orig.)

  19. Linear response theory for quantum open systems

    OpenAIRE

    Wei, J. H.; Yan, YiJing

    2011-01-01

    Based on the theory of Feynman's influence functional and its hierarchical equations of motion, we develop a linear response theory for quantum open systems. Our theory provides an effective way to calculate dynamical observables of a quantum open system at its steady state, which can be applied to various fields of non-equilibrium condensed matter physics.

  20. Optimal threshold estimation for binary classifiers using game theory.

    Science.gov (United States)

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1 - FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
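
    The minimax operating point described above is the threshold at which the ROC curve meets the descending diagonal, i.e. where sensitivity equals specificity. The following Python sketch is an illustration only (the toy data and variable names are ours, not the paper's): it scans candidate thresholds and keeps the one minimizing |TPR - (1 - FPR)|.

        import numpy as np

        def minimax_threshold(scores, labels):
            """Return the score threshold where sensitivity ~ specificity,
            i.e. where the ROC curve crosses the descending diagonal."""
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=int)
            pos, neg = scores[labels == 1], scores[labels == 0]
            best_t, best_gap = None, np.inf
            for t in np.unique(scores):
                tpr = np.mean(pos >= t)        # sensitivity
                fpr = np.mean(neg >= t)        # 1 - specificity
                gap = abs(tpr - (1.0 - fpr))   # distance from the descending diagonal
                if gap < best_gap:
                    best_t, best_gap = t, gap
            return best_t

        # Toy example with hypothetical classifier scores
        rng = np.random.default_rng(0)
        scores = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200)])
        labels = np.concatenate([np.ones(200, int), np.zeros(200, int)])
        print("minimax threshold:", minimax_threshold(scores, labels))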

  1. Polarization properties of below-threshold harmonics from aligned molecules H2+ in linearly polarized laser fields.

    Science.gov (United States)

    Dong, Fulong; Tian, Yiqun; Yu, Shujuan; Wang, Shang; Yang, Shiping; Chen, Yanjun

    2015-07-13

    We investigate the polarization properties of below-threshold harmonics from aligned molecules in linearly polarized laser fields numerically and analytically. We focus on lower-order harmonics (LOHs). Our simulations show that the ellipticity of below-threshold LOHs depends strongly on the orientation angle and differs significantly for different harmonic orders. Our analysis reveals that this LOH ellipticity is closely associated with resonance effects and the axis symmetry of the molecule. These results shed light on the complex generation mechanism of below-threshold harmonics from aligned molecules.

  2. Spectral theories for linear differential equations

    International Nuclear Information System (INIS)

    Sell, G.R.

    1976-01-01

    The use of spectral analysis in the study of linear differential equations with constant coefficients is not only a fundamental technique but also leads to far-reaching consequences in describing the qualitative behaviour of the solutions. The spectral analysis, via the Jordan canonical form, will not only lead to a representation theorem for a basis of solutions, but will also give a rather precise statement of the (exponential) growth rates of various solutions. Various attempts have been made to extend this analysis to linear differential equations with time-varying coefficients. The most complete such extension is the Floquet theory for equations with periodic coefficients. For time-varying linear differential equations with aperiodic coefficients several authors have attempted to ''extend'' the Floquet theory. The precise meaning of such an extension is itself a problem, and we present here several attempts in this direction that are related to the general problem of extending the spectral analysis of equations with constant coefficients. The main purpose of this paper is to introduce some problems of current research. The primary problem we shall examine occurs in the context of linear differential equations with almost periodic coefficients. We call it ''the Floquet problem''. (author)
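
    As a minimal reminder of the constant-coefficient case described above (a toy example of our own, not taken from the paper), the eigenvalues of the coefficient matrix directly give the exponential growth or decay rates of a basis of solutions of x' = Ax:

        import numpy as np

        # Constant-coefficient linear system x' = A x: the spectrum of A fixes the
        # exponential growth/decay rates of a basis of solutions.
        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])
        eigvals = np.linalg.eigvals(A)
        print("eigenvalues:", eigvals)                     # -1 and -2
        print("growth rates (real parts):", eigvals.real)  # negative => decaying solutions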

  3. Sensitivity theory for general non-linear algebraic equations with constraints

    International Nuclear Information System (INIS)

    Oblow, E.M.

    1977-04-01

    Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas since it is difficult to repeatedly solve complex non-linear equations to find out the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints which can be solved using linear programming techniques is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method to practical problems.
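
    For the constrained linear case mentioned above, the sensitivity of the optimal objective to perturbations of the constraint data is given by the dual (marginal) values, so the effect of an input change can be read off without re-solving the whole problem. The sketch below is a generic illustration with an invented toy LP, not the authors' formulation; it assumes a recent SciPy whose HiGHS-based linprog exposes the duals as res.ineqlin.marginals, and cross-checks one of them by a finite-difference re-solve.

        import numpy as np
        from scipy.optimize import linprog

        # Toy LP: minimize c.x subject to A_ub x <= b_ub, x >= 0
        c = np.array([-3.0, -5.0])            # i.e. maximize 3*x1 + 5*x2
        A_ub = np.array([[1.0, 0.0],
                         [0.0, 2.0],
                         [3.0, 2.0]])
        b_ub = np.array([4.0, 12.0, 18.0])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
        print("optimum:", res.fun, "at", res.x)
        # Dual (marginal) values: sensitivity of the optimal objective to b_ub
        print("constraint marginals:", res.ineqlin.marginals)

        # Cross-check the third marginal with a finite-difference re-solve
        eps = 1e-3
        b_pert = b_ub.copy(); b_pert[2] += eps
        res2 = linprog(c, A_ub=A_ub, b_ub=b_pert, bounds=[(0, None)] * 2, method="highs")
        print("finite-difference estimate:", (res2.fun - res.fun) / eps)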

  4. Threshold defect production in silicon determined by density functional theory molecular dynamics simulations

    International Nuclear Information System (INIS)

    Holmstroem, E.; Kuronen, A.; Nordlund, K.

    2008-01-01

    We studied threshold displacement energies for creating stable Frenkel pairs in silicon using density functional theory molecular dynamics simulations. The average threshold energy over all lattice directions was found to be 36 ± 2(stat) ± 2(syst) eV, and the thresholds in two specific lattice directions were found to be 20 ± 2(syst) eV and 12.5 ± 1.5(syst) eV, respectively. Moreover, we found that in most studied lattice directions a bond defect complex is formed with a lower threshold than a Frenkel pair. The average threshold energy for producing either a bond defect or a Frenkel pair was found to be 24 ± 1(stat) ± 2(syst) eV.

  5. Approximate Stream Function wavemaker theory for highly non-linear waves in wave flumes

    DEFF Research Database (Denmark)

    Zhang, H.W.; Schäffer, Hemming Andreas

    2007-01-01

    An approximate Stream Function wavemaker theory for highly non-linear regular waves in flumes is presented. This theory is based on an ad hoc unified wave-generation method that combines linear fully dispersive wavemaker theory and wave generation for non-linear shallow water waves. This is done by applying a dispersion correction to the paddle position obtained for non-linear long waves. The method is validated by a number of wave flume experiments while comparing with results of linear wavemaker theory, second-order wavemaker theory and Cnoidal wavemaker theory within its range of application.

  6. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    Science.gov (United States)

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, and herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. The genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores, as well as correlation between estimated breeding values, were computed from 85,118 calving records. The results showed few differences between linear and threshold models, even though correlations between estimated breeding values from subsets of data for sires with progeny were 17 and 23% greater for direct and maternal genetic effects, respectively, with the linear model than with the threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
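
    A standard way to relate heritabilities estimated on the observed (linear-model) scale to those on the underlying liability (threshold-model) scale is the Dempster-Lerner transformation. The snippet below is a generic illustration of that textbook formula with a hypothetical incidence value; it is not the authors' analysis and is not expected to reproduce the estimates quoted above.

        import numpy as np
        from scipy.stats import norm

        def h2_liability(h2_observed, incidence):
            """Dempster-Lerner conversion of an observed-scale heritability
            (from a linear model on categorical records) to the liability scale."""
            p = incidence
            z = norm.pdf(norm.ppf(1.0 - p))   # standard normal density at the threshold
            return h2_observed * p * (1.0 - p) / z**2

        # Hypothetical example: 8% observed-scale heritability, 30% incidence of difficult calvings
        print(round(h2_liability(0.08, 0.30), 3))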

  7. Methods in half-linear asymptotic theory

    Directory of Open Access Journals (Sweden)

    Pavel Rehak

    2016-10-01

    Full Text Available We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation $$ \big(r(t)|y'|^{\alpha-1}\hbox{sgn}\, y'\big)' = p(t)|y|^{\alpha-1}\hbox{sgn}\, y, $$ where r(t) and p(t) are positive continuous functions on $[a,\infty)$, $\alpha\in(1,\infty)$. The aim of this article is twofold. On the one hand, we show applications of a wide variety of tools, like the Karamata theory of regular variation, the de Haan theory, the Riccati technique, comparison theorems, the reciprocity principle, a certain transformation of dependent variable, and principal solutions. On the other hand, we solve open problems posed in the literature and generalize existing results. Most of our observations are new also in the linear case.

  8. Linear {GLP}-algebras and their elementary theories

    Science.gov (United States)

    Pakhomov, F. N.

    2016-12-01

    The polymodal provability logic {GLP} was introduced by Japaridze in 1986. It is the provability logic of certain chains of provability predicates of increasing strength. Every polymodal logic corresponds to a variety of polymodal algebras. Beklemishev and Visser asked whether the elementary theory of the free {GLP}-algebra generated by the constants \\mathbf{0}, \\mathbf{1} is decidable [1]. For every positive integer n we solve the corresponding question for the logics {GLP}_n that are the fragments of {GLP} with n modalities. We prove that the elementary theory of the free {GLP}_n-algebra generated by the constants \\mathbf{0}, \\mathbf{1} is decidable for all n. We introduce the notion of a linear {GLP}_n-algebra and prove that all free {GLP}_n-algebras generated by the constants \\mathbf{0}, \\mathbf{1} are linear. We also consider the more general case of the logics {GLP}_α whose modalities are indexed by the elements of a linearly ordered set α: we define the notion of a linear algebra and prove the latter result in this case.

  9. Alternative theories of the non-linear negative mass instability

    International Nuclear Information System (INIS)

    Channell, P.J.

    1974-01-01

    A theory of the non-linear negative mass instability is extended to include resistance. The basic assumption is explained physically and an alternative theory is offered. The two theories are compared computationally. 7 refs., 8 figs

  10. Linearized propulsion theory of flapping airfoils revisited

    Science.gov (United States)

    Fernandez-Feria, Ramon

    2016-11-01

    A vortical impulse theory is used to compute the thrust of a plunging and pitching airfoil in forward flight within the framework of linear potential flow theory. The result is significantly different from the classical one of Garrick that considered the leading-edge suction and the projection in the flight direction of the pressure force. By taking into account the complete vorticity distribution on the airfoil and the wake, the mean thrust coefficient contains a new term that generalizes the leading-edge suction term and depends on the Theodorsen function C(k) and on a new complex function C₁(k) of the reduced frequency k. The main qualitative difference with Garrick's theory is that the propulsive efficiency tends to zero as the reduced frequency increases to infinity (as 1/k), in contrast to Garrick's efficiency that tends to a constant (1/2). Consequently, for pure pitching and combined pitching and plunging motions, the maximum of the propulsive efficiency is not reached as k → ∞ as in Garrick's theory, but at a finite value of the reduced frequency that depends on the remaining non-dimensional parameters. The present analytical results are in good agreement with experimental data and numerical results for small amplitude oscillations. Supported by the Ministerio de Economia y Competitividad of Spain Grant No. DPI2013-40479-P.

  11. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers: The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
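
    The central operation of the Bayes linear approach described above is the adjusted expectation, which updates prior judgements using only means, variances and covariances rather than full probability specifications. The snippet below is a minimal generic sketch of that adjustment with invented toy numbers; it is not an example taken from the book.

        import numpy as np

        def adjusted_expectation(EB, ED, cov_BD, var_D, d):
            """Bayes linear adjusted expectation:
            E_D(B) = E(B) + Cov(B, D) Var(D)^{-1} (d - E(D))."""
            return EB + cov_BD @ np.linalg.solve(var_D, d - ED)

        # Toy example: one quantity of interest B, two observed quantities D
        EB = np.array([1.0])                      # prior expectation of B
        ED = np.array([2.0, 0.5])                 # prior expectation of D
        cov_BD = np.array([[0.8, 0.3]])           # prior covariance between B and D
        var_D = np.array([[1.0, 0.2],
                          [0.2, 0.5]])            # prior variance matrix of D
        d = np.array([2.6, 0.4])                  # observed values of D
        print(adjusted_expectation(EB, ED, cov_BD, var_D, d))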

  12. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    Science.gov (United States)

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for the linear sire model, linear animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variation of survival traits was small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.

  13. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  14. Linear circuits, systems and signal processing: theory and application

    International Nuclear Information System (INIS)

    Byrnes, C.I.; Saeks, R.E.; Martin, C.F.

    1988-01-01

    In part because of its universal role as a first approximation of more complicated behaviour and in part because of the depth and breadth of its principal paradigms, the study of linear systems continues to play a central role in control theory and its applications. Enhancing more traditional applications to aerospace and electronics, application areas such as econometrics, finance, and speech and signal processing have contributed to a renaissance in areas such as realization theory and classical automatic feedback control. Thus, the last few years have witnessed a remarkable research effort expended in understanding both new algorithms and new paradigms for modeling and realization of linear processes and in the analysis and design of robust control strategies. The papers in this volume reflect these trends in both the theory and applications of linear systems and were selected from the invited and contributed papers presented at the 8th International Symposium on the Mathematical Theory of Networks and Systems held in Phoenix on June 15-19, 1987

  15. On R⁴ threshold corrections in type IIB string theory and (p,q)-string instantons

    International Nuclear Information System (INIS)

    Kiritsis, E.; Pioline, B.

    1997-01-01

    We obtain the exact non-perturbative thresholds of R⁴ terms in type IIB string theory compactified to eight and seven dimensions. These thresholds are given by the perturbative tree-level and one-loop results together with the contribution of the D-instantons and of the (p,q)-string instantons. The invariance under U-duality is made manifest by rewriting the sum as a non-holomorphic, invariant modular function of the corresponding discrete U-duality group. In the eight-dimensional case, the threshold is the sum of an order-1 Eisenstein series for SL(2,Z) and an order-3/2 Eisenstein series for SL(3,Z). The seven-dimensional result is given by the order-3/2 Eisenstein series for SL(5,Z). We also conjecture formulae for the non-perturbative thresholds in lower-dimensional compactifications and discuss the relation with M-theory. (orig.)
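
    For reference, the non-holomorphic Eisenstein series of order s for SL(2,Z), written in terms of the relevant modulus τ = τ₁ + iτ₂, is conventionally defined as $$ E_s(\tau) = \sum_{(m,n)\neq(0,0)} \frac{\tau_2^{\,s}}{|m+n\tau|^{2s}} $$ (a standard definition added here for context, not a formula quoted from the paper). For s = 3/2 its weak-coupling (large τ₂) expansion contains the tree-level and one-loop terms together with exponentially suppressed D-instanton contributions.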

  16. Social psychological approach to the problem of threshold

    International Nuclear Information System (INIS)

    Nakayachi, Kazuya

    1999-01-01

    This paper discusses the threshold of carcinogen risk from the viewpoint of social psychology. First, the results of a survey suggesting that renunciation of the Linear No-Threshold (LNT) hypothesis would have no influence on the public acceptance (PA) of nuclear power plants are reported. Second, the relationship between the adoption of the LNT hypothesis and the standardization of management for various risks are discussed. (author)

  17. A Near-Threshold Shape Resonance in the Valence-Shell Photoabsorption of Linear Alkynes

    Energy Technology Data Exchange (ETDEWEB)

    Jacovella, U.; Holland, D. M. P.; Boyé-Péronne, S.; Gans, Bérenger; de Oliveira, N.; Ito, K.; Joyeux, D.; Archer, L. E.; Lucchese, R. R.; Xu, Hong; Pratt, S. T.

    2015-12-17

    The room-temperature photoabsorption spectra of a number of linear alkynes with internal triple bonds (e.g., 2-butyne, 2-pentyne, and 2- and 3-hexyne) show similar resonances just above the lowest ionization threshold of the neutral molecules. These features result in a substantial enhancement of the photoabsorption cross sections relative to the cross sections of alkynes with terminal triple bonds (e.g., propyne, 1-butyne, 1-pentyne,...). Based on earlier work on 2-butyne [Xu et al., J. Chem. Phys. 2012, 136, 154303], these features are assigned to excitation from the neutral highest occupied molecular orbital (HOMO) to a shape resonance with g (l = 4) character and approximate π symmetry. This generic behavior results from the similarity of the HOMOs in all internal alkynes, as well as the similarity of the corresponding gπ virtual orbital in the continuum. Theoretical calculations of the absorption spectrum above the ionization threshold for the 2- and 3-alkynes show the presence of a shape resonance when the coupling between the two degenerate or nearly degenerate π channels is included, with a dominant contribution from l = 4. These calculations thus confirm the qualitative arguments for the importance of the l = 4 continuum near threshold for internal alkynes, which should also apply to other linear internal alkynes and alkynyl radicals. The 1-alkynes do not have such high partial waves present in the shape resonance. The lower l partial waves in these systems are consistent with the broader features observed in the corresponding spectra.

  18. Linear algebra and group theory for physicists

    CERN Document Server

    Rao, K N Srinivasa

    2006-01-01

    Professor Srinivasa Rao's text on Linear Algebra and Group Theory is directed to undergraduate and graduate students who wish to acquire a solid theoretical foundation in these mathematical topics which find extensive use in physics. Based on courses delivered during Professor Srinivasa Rao's long career at the University of Mysore, this text is remarkable for its clear exposition of the subject. Advanced students will find a range of topics such as the Representation theory of Linear Associative Algebras, a complete analysis of Dirac and Kemmer algebras, Representations of the Symmetric group via Young Tableaux, a systematic derivation of the Crystallographic point groups, a comprehensive and unified discussion of the Rotation and Lorentz groups and their representations, and an introduction to Dynkin diagrams in the classification of Lie groups. In addition, the first few chapters on Elementary Group Theory and Vector Spaces also provide useful instructional material even at an introductory level. An author...

  19. A local homology theory for linearly compact modules

    International Nuclear Information System (INIS)

    Nguyen Tu Cuong; Tran Tuan Nam

    2004-11-01

    We introduce a local homology theory for linearly compact modules which is in some sense dual to the local cohomology theory of A. Grothendieck. Some basic properties of local homology modules are shown, such as vanishing and non-vanishing results and the noetherianness of local homology modules. By using duality, we extend some well-known results in the theory of local cohomology of A. Grothendieck. (author)

  20. Linearized modified gravity theories with a cosmological term: advance of perihelion and deflection of light

    Science.gov (United States)

    Özer, Hatice; Delice, Özgür

    2018-03-01

    Two different ways of generalizing Einstein's general theory of relativity with a cosmological constant to Brans–Dicke type scalar–tensor theories are investigated in the linearized field approximation. In the first case a cosmological constant term is coupled to a scalar field linearly, whereas in the second case an arbitrary potential plays the role of a variable cosmological term. We see that the former configuration leads to a massless scalar field whereas the latter leads to a massive scalar field. General solutions of these linearized field equations for both cases are obtained corresponding to a static point mass. Geodesics of these solutions are also presented, and solar system effects such as the advance of the perihelion, deflection of light rays and gravitational redshift are discussed. In general relativity a cosmological constant has no role in these phenomena. We see that for the Brans–Dicke theory the cosmological constant also has no effect on these phenomena. This is because solar system observations require very large values of the Brans–Dicke parameter, and for such large values the correction terms to these phenomena become identical to those of GR. This result is also observed for the theory with an arbitrary potential if the mass of the scalar field is very light. For a very heavy scalar field, however, there is no such limit on the value of this parameter, and there are ranges of this parameter where these contributions may become relevant on these scales. Galactic and intergalactic dynamics is also discussed for these theories in the latter part of the paper, with similar conclusions.

  1. Coarse-graining free theories with gauge symmetries: the linearized case

    International Nuclear Information System (INIS)

    Bahr, Benjamin; Dittrich, Bianca; He Song

    2011-01-01

    Discretizations of continuum theories often do not preserve the gauge symmetry content. This occurs in particular for diffeomorphism symmetry in general relativity, which leads to severe difficulties in both canonical and covariant quantization approaches. We discuss here the method of perfect actions, which attempts to restore gauge symmetries by mirroring exactly continuum physics on a lattice via a coarse graining process. Analytical results can only be obtained via a perturbative approach, for which we consider the first step, namely the coarse graining of the linearized theory. The linearized gauge symmetries are exact also in the discretized theory; hence, we develop a formalism to deal with gauge systems. Finally, we provide a discretization of linearized gravity as well as a coarse graining map and show that with this choice the three-dimensional (3D) linearized gravity action is invariant under coarse graining.

  2. Formulated linear programming problems from game theory and its ...

    African Journals Online (AJOL)

    Formulated linear programming problems from game theory and its computer implementation using the Tora package. ... Game theory, a branch of operations research, examines the various concepts of decision ...

  3. A practical threshold concept for simple and reasonable radiation protection

    International Nuclear Information System (INIS)

    Kaneko, Masahito

    2002-01-01

    A half century ago it was assumed for the purpose of protection that radiation risks are linearly proportional to dose at all levels of dose. The Linear No-Threshold (LNT) hypothesis has greatly contributed to the minimization of doses received by workers and members of the public, while it has brought about 'radiophobia' and unnecessary over-regulation. Now that the existence of bio-defensive mechanisms such as DNA repair, apoptosis and adaptive response is well recognized, the linearity assumption can be called 'unscientific'. Evidence increasingly implies that there are thresholds in the risks of radiation. A concept of 'practical' thresholds is proposed, and the classification of radiation effects into 'stochastic' and 'deterministic' should be abandoned. 'Practical' thresholds are dose levels below which induction of detectable radiogenic cancers or hereditary effects is not expected. There seems to be no evidence of deleterious health effects from radiation exposures at the current dose limits (50 mSv/y for workers and 5 mSv/y for members of the public), which have been adopted worldwide in the latter half of the 20th century. Those limits are assumed to have been set below certain 'practical' thresholds. As workers and members of the public do not gain benefits from being exposed, excepting intentional irradiation for medical purposes, their radiation exposures should be kept below 'practical' thresholds. There is no use for the 'justification' and 'optimization' (ALARA) principles, because there are no 'radiation detriments' as long as exposures are maintained below 'practical' thresholds. Accordingly, the ethical issue of 'justification', allowing benefit to society to offset radiation detriments to individuals, can be resolved. And also the ethical issue of 'optimization', exchanging health or safety for economic gain, can be resolved. The ALARA principle should be applied to the probability (risk) of exceeding relevant dose limits instead of to normal exposures.

  4. Renormalizability of Reggeon field theory taking into account thresholds and ''mass'' terms at D = 2

    International Nuclear Information System (INIS)

    Eremyan, S.S.; Nazaryan, A.E.

    1982-01-01

    It is shown that the inclusion of the Reggeon production thresholds ξ₀ = ln(M²/s₀) ≈ 2 in Reggeon field theory causes the ε_c expansion to become analytic at ε_c = 2, and it becomes possible to simultaneously take ε_c → 2 and E → 0, which corresponds to the physical dimensionality of space in the limit of asymptotic energies. The introduction of thresholds makes it easier to carry out perturbative calculations at D = 2 by removing the ultraviolet divergences of the theory and is useful for a smooth joining of the perturbative and asymptotic solutions.

  5. Linear bosonic and fermionic quantum gauge theories on curved spacetimes

    International Nuclear Information System (INIS)

    Hack, Thomas-Paul; Schenkel, Alexander

    2012-05-01

    We develop a general setting for the quantization of linear bosonic and fermionic field theories subject to local gauge invariance and show how standard examples such as linearized Yang-Mills theory and linearized general relativity fit into this framework. Our construction always leads to a well-defined and gauge-invariant quantum field algebra, the centre and representations of this algebra, however, have to be analysed on a case-by-case basis. We discuss an example of a fermionic gauge field theory where the necessary conditions for the existence of Hilbert space representations are not met on any spacetime. On the other hand, we prove that these conditions are met for the Rarita-Schwinger gauge field in linearized pure N=1 supergravity on certain spacetimes, including asymptotically flat spacetimes and classes of spacetimes with compact Cauchy surfaces. We also present an explicit example of a supergravity background on which the Rarita-Schwinger gauge field can not be consistently quantized.

  6. Linear bosonic and fermionic quantum gauge theories on curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Thomas-Paul [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Schenkel, Alexander [Bergische Univ., Wuppertal (Germany). Fachgruppe Physik

    2012-05-15

    We develop a general setting for the quantization of linear bosonic and fermionic field theories subject to local gauge invariance and show how standard examples such as linearized Yang-Mills theory and linearized general relativity fit into this framework. Our construction always leads to a well-defined and gauge-invariant quantum field algebra, the centre and representations of this algebra, however, have to be analysed on a case-by-case basis. We discuss an example of a fermionic gauge field theory where the necessary conditions for the existence of Hilbert space representations are not met on any spacetime. On the other hand, we prove that these conditions are met for the Rarita-Schwinger gauge field in linearized pure N=1 supergravity on certain spacetimes, including asymptotically flat spacetimes and classes of spacetimes with compact Cauchy surfaces. We also present an explicit example of a supergravity background on which the Rarita-Schwinger gauge field can not be consistently quantized.

  7. Análise genética de escores de avaliação visual de bovinos com modelos bayesianos de limiar e linear Genetic analysis for visual scores of bovines with the linear and threshold bayesian models

    Directory of Open Access Journals (Sweden)

    Carina Ubirajara de Faria

    2008-07-01

    Full Text Available The objective of this work was to compare the estimates of genetic parameters obtained in single-trait and two-trait Bayesian analyses, under linear and threshold animal models, considering categorical morphological traits of bovines of the Nelore breed. Data on musculature, physical structure and conformation were obtained between the years 2000 and 2005 from 3,864 bovines of the Nelore breed from 13 participant farms of the Nelore Brazil Program. Single-trait and two-trait Bayesian analyses were performed under linear and threshold animal models. In general, the linear and threshold models were efficient in estimating genetic parameters for visual scores in single-trait Bayesian analyses. In the two-trait analyses it was observed that, with the use of continuous and categorical data, the threshold model provided genetic correlation estimates of greater magnitude than those of the linear model, and that, with the use of categorical data, the heritability estimates were similar. The advantage of the linear model was the shorter time spent processing the analyses. In the genetic evaluation of animals for visual scores, the use of the threshold or linear model did not influence the ranking of animals by predicted breeding values, which indicates that both models can be used in genetic improvement programs.

  8. The linearization method in hydrodynamical stability theory

    CERN Document Server

    Yudovich, V I

    1989-01-01

    This book presents the theory of the linearization method as applied to the problem of steady-state and periodic motions of continuous media. The author proves infinite-dimensional analogues of Lyapunov's theorems on stability, instability, and conditional stability for a large class of continuous media. In addition, semigroup properties for the linearized Navier-Stokes equations in the case of an incompressible fluid are studied, and coercivity inequalities and completeness of a system of small oscillations are proved.

  9. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
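
    The signal-detection procedure sketched in the abstract, comparing peak and trough firing rates to decide whether a motion direction is detectable, can be illustrated with a generic neurometric calculation. The Python sketch below uses entirely hypothetical firing-rate data and parameters (it is not the study's analysis code): it computes d′ between peak and trough responses at several stimulus amplitudes and interpolates the amplitude at which d′ = 1, a common threshold criterion.

        import numpy as np

        rng = np.random.default_rng(1)
        amplitudes = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # cm/s^2 (hypothetical)
        gain, baseline = 0.4, 40.0   # spikes/s per (cm/s^2), spontaneous rate (hypothetical)

        def dprime(peak, trough):
            """Discriminability of peak vs trough firing-rate distributions."""
            return (peak.mean() - trough.mean()) / np.sqrt(0.5 * (peak.var() + trough.var()))

        dps = []
        for a in amplitudes:
            # Simulated trial-by-trial firing rates at response peak and trough
            peak = rng.normal(baseline + gain * a, 5.0, 100)
            trough = rng.normal(baseline - gain * a, 5.0, 100)
            dps.append(dprime(peak, trough))

        # Threshold: amplitude at which d' reaches 1 (linear interpolation)
        dps = np.array(dps)
        thr = np.interp(1.0, dps, amplitudes)
        print("d' values:", np.round(dps, 2), "-> detection threshold ~", round(thr, 2), "cm/s^2")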

  10. A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media

    Science.gov (United States)

    Martin, C. J.; Lee, Y. M.

    1972-01-01

    A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.

  11. HERITABILITY AND BREEDING VALUE OF SHEEP FERTILITY ESTIMATED BY MEANS OF THE GIBBS SAMPLING METHOD USING THE LINEAR AND THRESHOLD MODELS

    Directory of Open Access Journals (Sweden)

    DARIUSZ Piwczynski

    2013-03-01

    Full Text Available The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks from the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam and the flock × year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Heritability estimates of fertility, depending on the model, equalled 0.067 to 0.104, whereas the estimates of repeatability equalled 0.076 and 0.139, respectively. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method) using linear and threshold models. The obtained animal breeding value rankings for the same trait with the linear and threshold models were strongly correlated with each other (r_s = 0.972). Negative genetic trends of fertility (0.01-0.08% per year) were found.
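
    The threshold-model part of such an analysis can be illustrated with the standard Albert-Chib Gibbs sampler for a probit (liability) model: latent liabilities are drawn from truncated normals given the 0/1 records, and location effects are drawn from their conditional normal posterior. The sketch below is a deliberately simplified, generic illustration with simulated data, a flat prior, a fixed residual variance and no flock or animal (pedigree) effects; it is not the model actually fitted in the study.

        import numpy as np
        from scipy.stats import truncnorm

        rng = np.random.default_rng(2)

        # Simulated binary fertility records with two fixed-effect covariates
        n, p = 500, 2
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        beta_true = np.array([0.8, 0.5])
        y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)   # observed 0/1 records

        XtX_inv = np.linalg.inv(X.T @ X)
        beta = np.zeros(p)
        draws = []
        for it in range(2000):
            # 1) Sample latent liabilities from normals truncated at the threshold (0)
            mu = X @ beta
            lo = np.where(y == 1, -mu, -np.inf)    # standardized lower bounds
            hi = np.where(y == 1, np.inf, -mu)     # standardized upper bounds
            liab = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
            # 2) Sample location effects from their conditional normal posterior (flat prior)
            mean = XtX_inv @ X.T @ liab
            beta = rng.multivariate_normal(mean, XtX_inv)
            if it >= 500:                          # discard burn-in
                draws.append(beta)

        print("posterior means:", np.mean(draws, axis=0))   # close to beta_true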

  12. The non-linear link between electricity consumption and temperature in Europe: A threshold panel approach

    Energy Technology Data Exchange (ETDEWEB)

    Bessec, Marie [CGEMP, Universite Paris-Dauphine, Place du Marechal de Lattre de Tassigny Paris (France); Fouquau, Julien [LEO, Universite d' Orleans, Faculte de Droit, d' Economie et de Gestion, Rue de Blois, BP 6739, 45067 Orleans Cedex 2 (France)

    2008-09-15

    This paper investigates the relationship between electricity demand and temperature in the European Union. We address this issue by means of a panel threshold regression model on 15 European countries over the last two decades. Our results confirm the non-linearity of the link between electricity consumption and temperature found in more limited geographical areas in previous studies. By distinguishing between North and South countries, we also find that this non-linear pattern is more pronounced in the warm countries. Finally, rolling regressions show that the sensitivity of electricity consumption to temperature in summer has increased in the recent period. (author)
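
    The kind of non-linear, V-shaped link estimated in the paper can be illustrated by a simple piecewise-linear (threshold) regression in which the slope of consumption on temperature is allowed to change at an unknown break point chosen by grid search. The following Python sketch uses simulated data; the variable names, data-generating process and grid are all illustrative and are not the authors' panel specification.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated daily data: consumption rises when it is cold (heating) and
        # when it is hot (cooling), with a break point near 18 degrees C.
        temp = rng.uniform(-5, 35, 1000)
        cons = (100 - 1.5 * np.minimum(temp - 18, 0)
                    + 2.0 * np.maximum(temp - 18, 0)
                    + rng.normal(0, 3, 1000))

        def fit_threshold(temp, cons, grid):
            """Grid-search the temperature threshold minimizing the least-squares
            error of a two-regime (heating/cooling) linear model."""
            best = None
            for tau in grid:
                X = np.column_stack([np.ones_like(temp),
                                     np.minimum(temp - tau, 0),    # heating regime
                                     np.maximum(temp - tau, 0)])   # cooling regime
                coef, res, *_ = np.linalg.lstsq(X, cons, rcond=None)
                sse = res[0] if res.size else np.sum((cons - X @ coef) ** 2)
                if best is None or sse < best[0]:
                    best = (sse, tau, coef)
            return best

        sse, tau_hat, coef = fit_threshold(temp, cons, np.arange(10.0, 26.0, 0.5))
        print("estimated threshold:", tau_hat, "slopes:", np.round(coef[1:], 2))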

  13. Azerbaijan Technical University’s Experience in Teaching Linear Electrical Circuit Theory

    Directory of Open Access Journals (Sweden)

    G. A. Mamedov

    2006-01-01

    Experience in teaching linear electrical circuit theory at the Azerbaijan Technical University is presented in the paper. The paper describes the structure of the Linear Electrical Circuit Theory course worked out by the authors, which contains a section on electrical calculation of track circuits, information on electro-magnetic compatibility, and typical tests for better understanding of the studied subject.

  14. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    Science.gov (United States)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are some reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of the thermo-rheological complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
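
    As a point of reference for the discrete relaxation spectrum mentioned above, a common way to write a solid-like linear viscoelastic relaxation modulus is a Prony series; thermo-rheological complexity can then be expressed by letting each mode carry its own temperature shift. The generic form below is an illustration, not the specific constitutive model derived in the paper:

```latex
% Illustrative Prony-series relaxation modulus with mode-dependent temperature shifts
G(t,T) \;=\; G_\infty \;+\; \sum_{k=1}^{N} g_k\,
        \exp\!\Big(-\frac{t}{a_{T,k}\,\tau_k}\Big),
\qquad
a_{T,k} \;=\; \frac{\tau_k(T)}{\tau_k(T_{\mathrm{ref}})} .
```

    In a thermo-rheologically simple material all the shift factors a_{T,k} coincide and a single master curve exists; allowing them to differ mode by mode is what the term "thermo-rheological complexity" refers to.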

  15. Particle linear theory on a self-gravitating perturbed cubic Bravais lattice

    International Nuclear Information System (INIS)

    Marcos, B.

    2008-01-01

    Discreteness effects are a source of uncontrolled systematic errors of N-body simulations, which are used to compute the evolution of a self-gravitating fluid. We have already developed the so-called "particle linear theory" (PLT), which describes the evolution of the position of self-gravitating particles located on a perturbed simple cubic lattice. It is the discrete analogue of the well-known (Lagrangian) linear theory of a self-gravitating fluid. Comparing both theories permits us to quantify precisely discreteness effects in the linear regime. It is useful to develop the PLT also for other perturbed lattices because they represent different discretizations of the same continuous system. In this paper we detail how to implement the PLT for perturbed cubic Bravais lattices (simple, body, and face-centered) in a cubic simulation box. As an application, we will study the discreteness effects, in the linear regime, of N-body simulations for which initial conditions have been set up using these different lattices.

  16. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    Science.gov (United States)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  17. When is quasi-linear theory exact? [particle acceleration]

    Science.gov (United States)

    Jones, F. C.; Birmingham, T. J.

    1975-01-01

    We use the cumulant expansion technique of Kubo (1962, 1963) to derive an integrodifferential equation for the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the equation for this function degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory only for this limited class of fluctuations.

  18. Recent development of linear scaling quantum theories in GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)

    2003-06-01

    Linear scaling quantum theories are reviewed, especially focusing on the method adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Rudenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error ε. In addition, a new parallel FMM algorithm, requiring virtually no inter-node communication, is suggested which is suitable for the parallel construction of Fock matrices in electronic structure calculations.

  19. Wigner's little group as a gauge generator in linearized gravity theories

    International Nuclear Information System (INIS)

    Scaria, Tomy; Chakraborty, Biswajit

    2002-01-01

    We show that the translational subgroup of Wigner's little group for massless particles in 3 + 1 dimensions generates gauge transformation in linearized Einstein gravity. Similarly, a suitable representation of the one-dimensional translational group T(1) is shown to generate gauge transformation in the linearized Einstein-Chern-Simons theory in 2 + 1 dimensions. These representations are derived systematically from appropriate representations of translational groups which generate gauge transformations in gauge theories living in spacetime of one higher dimension by the technique of dimensional descent. The unified picture thus obtained is compared with a similar picture available for vector gauge theories in 3 + 1 and 2 + 1 dimensions. Finally, the polarization tensor of the Einstein-Pauli-Fierz theory in 2 + 1 dimensions is shown to split into the polarization tensors of a pair of Einstein-Chern-Simons theories with opposite helicities suggesting a doublet structure for the Einstein-Pauli-Fierz theory
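
    For reference, the gauge transformation of linearized Einstein gravity that the translational little-group generators are shown to reproduce has the standard form below (written here as a reminder in conventional notation, not quoted from the paper):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad
h_{\mu\nu} \;\longrightarrow\; h_{\mu\nu} + \partial_\mu \xi_\nu + \partial_\nu \xi_\mu .
```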

  20. A High Order Theory for Linear Thermoelastic Shells: Comparison with Classical Theories

    Directory of Open Access Journals (Sweden)

    V. V. Zozulya

    2013-01-01

    A high order theory for linear thermoelasticity and heat conductivity of shells has been developed. The proposed theory is based on expansion of the 3-D equations of the theory of thermoelasticity and heat conductivity into Fourier series in terms of Legendre polynomials. First, the physical quantities that describe the thermodynamic state have been expanded into Fourier series in terms of Legendre polynomials with respect to the thickness coordinate. Thereby all equations of elasticity and heat conductivity, including the generalized Hooke's and Fourier's laws, have been transformed into the corresponding equations for the coefficients of the polynomial expansion. Then, in the same way as in the 3-D theories, a system of differential equations in terms of displacements, together with boundary conditions for the Fourier coefficients, has been obtained. The first approximation theory is considered in more detail. The obtained equations for the first approximation theory are compared with the corresponding equations of Timoshenko's and Kirchhoff-Love's theories. The special cases of plates and cylindrical shells are also considered, and the corresponding equations in displacements are presented.
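
    The through-thickness expansion described above has the generic form shown here, with z the thickness coordinate, h the shell thickness, and P_k the Legendre polynomials; the exact normalization and truncation order used in the paper are not reproduced:

```latex
u_i(x_1,x_2,z) \;=\; \sum_{k=0}^{K} u_i^{(k)}(x_1,x_2)\,
      P_k\!\Big(\tfrac{2z}{h}\Big),
\qquad
u_i^{(k)} \;=\; \frac{2k+1}{h}\int_{-h/2}^{h/2} u_i(x_1,x_2,z)\,
      P_k\!\Big(\tfrac{2z}{h}\Big)\,dz .
```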

  1. Nonautonomous linear Hamiltonian systems oscillation, spectral theory and control

    CERN Document Server

    Johnson, Russell; Novo, Sylvia; Núñez, Carmen; Fabbri, Roberta

    2016-01-01

    This monograph contains an in-depth analysis of the dynamics given by a linear Hamiltonian system of general dimension with nonautonomous bounded and uniformly continuous coefficients, without other initial assumptions on time-recurrence. Particular attention is given to the oscillation properties of the solutions as well as to a spectral theory appropriate for such systems. The book contains extensions of results which are well known when the coefficients are autonomous or periodic, as well as in the nonautonomous two-dimensional case. However, a substantial part of the theory presented here is new even in those much simpler situations. The authors make systematic use of basic facts concerning Lagrange planes and symplectic matrices, and apply some fundamental methods of topological dynamics and ergodic theory. Among the tools used in the analysis, which include Lyapunov exponents, Weyl matrices, exponential dichotomy, and weak disconjugacy, a fundamental role is played by the rotation number for linear Hami...

  2. Linear control theory for gene network modeling.

    Science.gov (United States)

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
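
    As a minimal illustration of the frequency- and time-domain tools mentioned above, the sketch below models a two-stage cascade (for example transcription followed by translation) as a product of first-order transfer functions and computes its step response and a state-space realization with scipy.signal. The gains and time constants are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy import signal

# Two first-order stages in cascade:
#   G(s) = k1*k2 / ((tau1*s + 1) * (tau2*s + 1))
# All parameter values are invented for illustration.
k1, tau1 = 2.0, 5.0
k2, tau2 = 1.5, 20.0

num = [k1 * k2]
den = np.polymul([tau1, 1.0], [tau2, 1.0])     # (tau1*s + 1)(tau2*s + 1)
cascade = signal.TransferFunction(num, den)

# Time-domain step response and an equivalent state-space realization
t, y = signal.step(cascade, T=np.linspace(0, 150, 600))
ss = cascade.to_ss()
print("steady-state output:", round(float(y[-1]), 3))   # approaches k1*k2 = 3.0
print("state matrix A:\n", ss.A)
```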

  3. Direction detection thresholds of passive self-motion in artistic gymnasts.

    Science.gov (United States)

    Hartmann, Matthias; Haller, Katia; Moser, Ivan; Hossner, Ernst-Joachim; Mast, Fred W

    2014-04-01

    In this study, we compared direction detection thresholds of passive self-motion in the dark between artistic gymnasts and controls. Twenty-four professional female artistic gymnasts (ranging from 7 to 20 years) and age-matched controls were seated on a motion platform and asked to discriminate the direction of angular (yaw, pitch, roll) and linear (leftward-rightward) motion. Gymnasts showed lower thresholds for the linear leftward-rightward motion. Interestingly, there was no difference for the angular motions. These results show that the outstanding self-motion abilities in artistic gymnasts are not related to an overall higher sensitivity in self-motion perception. With respect to vestibular processing, our results suggest that gymnastic expertise is exclusively linked to superior interpretation of otolith signals when no change in canal signals is present. In addition, thresholds were overall lower for the older (14-20 years) than for the younger (7-13 years) participants, indicating the maturation of vestibular sensitivity from childhood to adolescence.

  4. Hydrodynamic theory for quantum plasmonics: Linear-response dynamics of the inhomogeneous electron gas

    DEFF Research Database (Denmark)

    Yan, Wei

    2015-01-01

    We investigate the hydrodynamic theory of metals, offering systematic studies of the linear-response dynamics for an inhomogeneous electron gas. We include the quantum functional terms of the Thomas-Fermi kinetic energy, the von Weizsäcker kinetic energy, and the exchange-correlation Coulomb energies under the local density approximation. The advantages, limitations, and possible improvements of the hydrodynamic theory are transparently demonstrated. The roles of various parameters in the theory are identified. We anticipate that the hydrodynamic theory can be applied to investigate the linear response of complex metallic nanostructures, including quantum effects, by adjusting theory parameters appropriately.

  5. No evidence for a critical salinity threshold for growth and reproduction in the freshwater snail Physa acuta

    International Nuclear Information System (INIS)

    Kefford, Ben J.; Nugegoda, Dayanthi

    2005-01-01

    The growth and reproduction of the freshwater snail Physa acuta (Gastropoda: Physidae) were measured at various salinity levels (growth: distilled water, 50, 100, 500, 1000 and 5000 μS/cm; reproduction: deionized water, 100, 500, 1000 and 3000 μS/cm) established using the artificial sea salt, Ocean Nature. This was done to examine the assumption that there is no direct effect of salinity on freshwater animals until a threshold, beyond which sub-lethal effects, such as reduction in growth and reproduction, will occur. Growth of P. acuta was maximal in terms of live and dry mass at salinity levels 500-1000 μS/cm. The number of eggs produced per snail per day was maximal between 100 and 1000 μS/cm. Results show that rather than a threshold response to salinity, small rises in salinity (from low levels) can produce increased growth and reproduction until a maximum is reached. Beyond this salinity, further increases result in a decrease in growth and reproduction. Studies on the growth of freshwater invertebrates and fish have generally shown a similar lack of a threshold response. The implications for assessing the effects of salinisation on freshwater organisms need to be further considered. - Responses of snails to increasing salinity were non-linear

  6. No-threshold dose-response curves for nongenotoxic chemicals: Findings and applications for risk assessment

    International Nuclear Information System (INIS)

    Sheehan, Daniel M.

    2006-01-01

    We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from ∼1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate
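
    The saturating dose-response form described above can be written down and fitted directly; the sketch below follows the Michaelis-Menten description given in the abstract (response saturation, a nonhormonal background, and an endogenous dose added to the applied dose), but the parameter names, the synthetic data, and the use of scipy's curve_fit are illustrative rather than the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_response(dose, rmax, k, d0, bg):
    """Saturating (Michaelis-Menten type) dose response with a nonhormonal
    background bg and an endogenous contribution d0 added to the applied dose;
    note that no threshold term appears anywhere in the expression."""
    return bg + (rmax - bg) * (dose + d0) / (k + dose + d0)

# Synthetic dose-response data (percent response vs applied dose, invented)
dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([6.0, 8.0, 12.0, 22.0, 42.0, 65.0, 82.0, 91.0])

popt, _ = curve_fit(mm_response, dose, resp,
                    p0=[100.0, 10.0, 0.5, 5.0], bounds=(0.0, np.inf))
print(dict(zip(["rmax", "K", "d0", "background"], np.round(popt, 2))))
```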

  7. Nonlinear modulation near the Lighthill instability threshold in 2+1 Whitham theory

    Science.gov (United States)

    Bridges, Thomas J.; Ratliff, Daniel J.

    2018-04-01

    The dispersionless Whitham modulation equations in 2+1 (two space dimensions and time) are reviewed and the instabilities identified. The modulation theory is then reformulated, near the Lighthill instability threshold, with a slow phase, moving frame and different scalings. The resulting nonlinear phase modulation equation near the Lighthill surfaces is a geometric form of the 2+1 two-way Boussinesq equation. This equation is universal in the same sense as Whitham theory. Moreover, it is dispersive, and it has a wide range of interesting multi-periodic, quasi-periodic and multi-pulse localized solutions. For illustration the theory is applied to a complex nonlinear 2+1 Klein-Gordon equation which has two Lighthill surfaces in the manifold of periodic travelling waves. This article is part of the theme issue 'Stability of nonlinear waves and patterns and related topics'.

  8. Clifford Algebras and Spinorial Representation of Linear Canonical Transformations in Quantum Theory

    International Nuclear Information System (INIS)

    Raoelina Andriambololona; Ranaivoson, R.T.R.; Rakotoson, H.

    2017-11-01

    This work is a continuation of previous works that we have done concerning linear canonical transformations and a phase space representation of quantum theory. It is mainly focused on the description of an approach which permits the establishment of a spinorial representation of linear canonical transformations. It begins with an introduction section in which the reason and context of the content are discussed. The introduction section is followed by a brief recall about Clifford algebra and spin group. The description of the approach starts with the presentation of an adequate parameterization of linear canonical transformations which permits them to be represented by special pseudo-orthogonal transformations in an operators space. The establishment of the spinorial representation is then deduced using the relation between special pseudo-orthogonal groups and spin groups. The cases of one-dimensional quantum mechanics and of the general multidimensional theory are both studied. The case of linear canonical transformations related to Minkowski space is particularly studied, and it is shown that the Lorentz transformation may be considered as a particular case of a linear canonical transformation. Some results from the spinorial representation are also exploited to define operators which may be used to establish equations for fields if one considers the possibility of envisaging a field theory which admits as its main symmetry group the group constituted by linear canonical transformations.

  9. Linear control theory for gene network modeling.

    Directory of Open Access Journals (Sweden)

    Yong-Jun Shin

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.

  10. Plane answers to complex questions the theory of linear models

    CERN Document Server

    Christensen, Ronald

    1987-01-01

    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  11. A simple theory of linear mode conversion

    International Nuclear Information System (INIS)

    Cairns, R.A.; Lashmore-Davies, C.N.; Woods, A.M.

    1984-01-01

    A summary is given of the basic theory of linear mode conversion involving the construction of differential equations for the mode amplitudes based on the properties of the dispersion relation in the neighbourhood of the mode conversion point. As an example the transmission coefficient for tunneling from the upper hybrid resonance through the evanescent region to the adjacent cut-off is treated. 7 refs, 3 figs

  12. A quantum-mechanical perspective on linear response theory within polarizable embedding

    DEFF Research Database (Denmark)

    List, Nanna Holmgaard; Norman, Patrick; Kongsted, Jacob

    2017-01-01

    We present a derivation of linear response theory within polarizable embedding starting from a rigorous quantum-mechanical treatment of a composite system. To this aim, two different subsystem decompositions (symmetric and nonsymmetric) of the linear response function are introduced and the pole...

  13. On the stimulated Raman sidescattering in inhomogeneous plasmas: revisit of linear theory and three-dimensional particle-in-cell simulations

    Science.gov (United States)

    Xiao, C. Z.; Zhuo, H. B.; Yin, Y.; Liu, Z. J.; Zheng, C. Y.; Zhao, Y.; He, X. T.

    2018-02-01

    Stimulated Raman sidescattering (SRSS) in inhomogeneous plasma is comprehensively revisited on both theoretical and numerical aspects due to the increasing concern of its detriments to inertial confinement fusion. Firstly, two linear mechanisms of finite beam width and collisional effects that could suppress SRSS are investigated theoretically. Thresholds for the eigenmode and wave packet in a finite-width beam are derived as a supplement to the theory proposed by Mostrom and Kaufman (1979 Phys. Rev. Lett. 42 644). Collisional absorption of SRSS is efficient at high-density plasma and high-Z material; otherwise, it allows emission of sidescattering. Secondly, we have performed the first three-dimensional particle-in-cell simulations in the context of SRSS to investigate its linear and nonlinear effects. Simulation results agree qualitatively with the linear theory. SRSS with the maximum growth gain is excited at various densities, grows to an amplitude that is comparable with the pump laser, and evolves to lower densities with a large angle of emergence. Competitions between SRSS and other parametric instabilities such as stimulated Raman backscattering, two-plasmon decay, and stimulated Brillouin scattering are discussed. These interaction processes are determined by the gains, occurrence sites, and scattering geometries of each instability, and they affect the subsequent evolution. Nonlinear effects of self-focusing and azimuthal magnetic field generation are observed to accompany SRSS. In addition, it is found that SRSS is insensitive to ion motion, collision (low-Z material), and electron temperature.

  14. A Qualitative Linear Utility Theory for Spohn's Theory of Epistemic Beliefs

    OpenAIRE

    Giang, Phan H.; Shenoy, Prakash P.

    2013-01-01

    In this paper, we formulate a qualitative "linear" utility theory for lotteries in which uncertainty is expressed qualitatively using a Spohnian disbelief function. We argue that a rational decision maker facing an uncertain decision problem in which the uncertainty is expressed qualitatively should behave so as to maximize "qualitative expected utility." Our axiomatization of the qualitative utility is similar to the axiomatization developed by von Neumann and Morgenstern for probabilistic l...

  15. Near-Threshold Ionization of Argon by Positron Impact

    Science.gov (United States)

    Babij, T. J.; Machacek, J. R.; Murtagh, D. J.; Buckman, S. J.; Sullivan, J. P.

    2018-03-01

    The direct single-ionization cross section for Ar by positron impact has been measured in the region above the first ionization threshold. These measurements are compared to semiclassical calculations which give rise to a power law variation of the cross section in the threshold region. The experimental results appear to be in disagreement with extensions to the Wannier theory applied to positron impact ionization, with a smaller exponent than that calculated by most previous works. In fact, in this work, we see no difference in threshold behavior between the positron and electron cases. Possible reasons for this discrepancy are discussed.

  16. Primer on theory and operation of linear accelerators in radiation therapy

    International Nuclear Information System (INIS)

    Karzmark, C.J.; Morton, R.J.

    1981-12-01

    This primer is part of an educational package that also includes a series of 3 videotapes entitled Theory and Operation of Linear Accelerators in Radiation Therapy, Parts I, II, and III. This publication provides an overview of the components of the linear accelerator and how they function and interrelate. The auxiliary systems necessary to maintain the operation of the linear accelerator are also described

  17. Linearized analysis of (2+1)-dimensional Einstein-Maxwell theory

    International Nuclear Information System (INIS)

    Soda, Jiro.

    1989-08-01

    On the basis of the previous result by Hosoya and Nakao that (2+1)-dimensional gravity reduces to geodesic motion in moduli space, we investigate the effects of matter fields on the geodesic motion using the linearized theory. It is shown that the transverse-traceless parts of the energy-momentum tensor make the deviation from the geodesic motion. This result is important for the Einstein-Maxwell theory due to the existence of global modes of Maxwell fields on the torus. (author)

  18. A non-linear theory of strong interactions

    International Nuclear Information System (INIS)

    Skyrme, T.H.R.

    1994-01-01

    A non-linear theory of mesons, nucleons and hyperons is proposed. The three independent fields of the usual symmetrical pseudo-scalar pion field are replaced by the three directions of a four-component field vector of constant length, conceived in an Euclidean four-dimensional isotopic spin space. This length provides the universal scaling factor, all other constants being dimensionless; the mass of the meson field is generated by a φ⁴ term; this destroys the continuous rotation group in the iso-space, leaving a 'cubic' symmetry group. Classification of states by this group introduces quantum numbers corresponding to isotopic spin and to 'strangeness'; one consequence is that, at least in elementary interactions, charge is only conserved modulo 4. Furthermore, particle states do not have a well-defined parity, but parity is effectively conserved for meson-nucleon interactions. A simplified model, using only two dimensions of space and iso-space, is considered further; the non-linear meson field has solutions with particle character, and an indication is given of the way in which the particle field variables might be introduced as collective co-ordinates describing the dynamics of these particular solutions of the meson field equations, suggesting a unified theory based on the meson field alone. (author). 7 refs

  19. Comparison of Linear Induction Motor Theories for the LIMRV and TLRV Motors

    Science.gov (United States)

    1978-01-01

    The Oberretl, Yamamura, and Mosebach theories of the linear induction motor are described and also applied to predict performance characteristics of the TLRV & LIMRV linear induction motors. The effect of finite motor width and length on performance ...

  20. Asymptotic solutions and spectral theory of linear wave equations

    International Nuclear Information System (INIS)

    Adam, J.A.

    1982-01-01

    This review contains two closely related strands. Firstly the asymptotic solution of systems of linear partial differential equations is discussed, with particular reference to Lighthill's method for obtaining the asymptotic functional form of the solution of a scalar wave equation with constant coefficients. Many of the applications of this technique are highlighted. Secondly, the methods and applications of the theory of the reduced (one-dimensional) wave equation - particularly spectral theory - are discussed. While the breadth of application and power of the techniques is emphasised throughout, the opportunity is taken to present to a wider readership developments of the methods which have occurred in some aspects of astrophysical (particularly solar) and geophysical fluid dynamics. It is believed that the topics contained herein may be of relevance to the applied mathematician or theoretical physicist interested in problems of linear wave propagation in these areas. (orig./HSI)

  1. Applying Threshold Concepts Theory to an Unsettled Field: An Exploratory Study in Criminal Justice Education

    Science.gov (United States)

    Wimshurst, Kerry

    2011-01-01

    Criminal justice education is a relatively new program in higher education in many countries, and its curriculum and parameters remain unsettled. An exploratory study investigated whether threshold concepts theory provided a useful lens by which to explore student understandings of this multidisciplinary field. Eight high-performing final-year…

  2. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    Science.gov (United States)

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
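
    To give a feel for the constant-temperature ingredient discussed above, the sketch below integrates the simplest possible Nosé-Hoover system: a one-dimensional harmonic oscillator coupled to a single thermostat variable. This is only a didactic stand-in; it is not a chain thermostat, it does not use the Liouville-operator splitting developed in the paper, and it has no extended-Lagrangian electronic degrees of freedom.

```python
import numpy as np

# Minimal NVT sketch: a 1-D harmonic oscillator coupled to one Nose-Hoover
# thermostat variable xi with "mass" Q.  Equations of motion:
#   dx/dt = v,   dv/dt = -omega^2 x - xi v,   dxi/dt = (m v^2 - kT) / Q
m, omega, kT, Q = 1.0, 1.0, 0.5, 1.0
dt, nsteps = 0.005, 200_000

x, v, xi = 1.0, 0.0, 0.0
v2_sum = 0.0
for _ in range(nsteps):
    # naive explicit update (not the Liouville-operator splitting of the paper)
    a = -omega**2 * x - xi * v
    x += v * dt + 0.5 * a * dt**2
    v += a * dt
    xi += dt * (m * v * v - kT) / Q
    v2_sum += v * v

# Compare the time average of m<v^2> with kT (equipartition).  A single
# thermostat on a harmonic oscillator is famously non-ergodic, which is one
# reason Nose-Hoover *chains* are preferred in practice.
print("target kT =", kT, "  time-averaged m<v^2> =", m * v2_sum / nsteps)
```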

  3. Linear and nonlinear instability theory of a noble gas MHD generator

    International Nuclear Information System (INIS)

    Mesland, A.J.

    1982-01-01

    This thesis deals with the stability of the working medium of a seeded noble gas magnetohydrodynamic generator. The aim of the study is to determine the instability mechanism which is most likely to occur in experimental MHD generators and to describe its behaviour with linear and nonlinear theories. In chapter I a general introduction is given. The pertinent macroscopic basic equations are derived in chapter II, viz. the continuity, the momentum and the energy equation for the electrons and the heavy gas particles, consisting of the seed particles and the noble gas atoms. Chapter III deals with the linear plane wave analysis of small disturbances of a homogeneous steady state. The steady state is discussed in chapter IV. The values of the steady state parameters used in the calculations, both for the linear and for the nonlinear analysis, are made plausible by comparison with the experimental values. Based on the results of the linear plane wave theory, a nonlinear plane wave model of the electrothermal instability is introduced in chapter V. (Auth.)

  4. The energy-momentum tensor for the linearized Maxwell-Vlasov and kinetic guiding center theories

    International Nuclear Information System (INIS)

    Pfirsch, D.; Morrison, P.J.; Texas Univ., Austin

    1990-02-01

    A modified Hamilton-Jacobi formalism is introduced as a tool to obtain the energy-momentum and angular-momentum tensors for any kind of nonlinear or linearized Maxwell-collisionless kinetic theories. The emphasis is on linearized theories, for which these tensors are derived for the first time. The kinetic theories treated - which need not be the same for all particle species in a plasma - are the Vlasov and kinetic guiding center theories. The Hamiltonian for the guiding center motion is taken in the form resulting from Dirac's constraint theory for non-standard Lagrangian systems. As an example of the Maxwell-kinetic guiding center theory, the second-order energy for a perturbed homogeneous magnetized plasma is calculated with initially vanishing field perturbations. The expression obtained is compared with the corresponding one of Maxwell-Vlasov theory. (orig.)

  5. The energy-momentum tensor for the linearized Maxwell-Vlasov and kinetic guiding center theories

    International Nuclear Information System (INIS)

    Pfirsch, D.; Morrison, P.J.

    1990-02-01

    A modified Hamilton-Jacobi formalism is introduced as a tool to obtain the energy-momentum and angular-momentum tensors for any kind of nonlinear or linearized Maxwell-collisionless kinetic theories. The emphasis is on linearized theories, for which these tensors are derived for the first time. The kinetic theories treated - which need not be the same for all particle species in a plasma - are the Vlasov and kinetic guiding center theories. The Hamiltonian for the guiding center motion is taken in the form resulting from Dirac's constraint theory for non-standard Lagrangian systems. As an example of the Maxwell-kinetic guiding center theory, the second-order energy for a perturbed homogeneous magnetized plasma is calculated with initially vanishing field perturbations. The expression obtained is compared with the corresponding one of Maxwell-Vlasov theory. 11 refs

  6. Quantum theory of enhanced unimolecular reaction rates below the ergodicity threshold

    International Nuclear Information System (INIS)

    Leitner, David M.; Wolynes, Peter G.

    2006-01-01

    A variety of unimolecular reactions exhibit measured rates that exceed Rice-Ramsperger-Kassel-Marcus (RRKM) predictions. We show using the local random matrix theory (LRMT) of vibrational energy flow how the quantum localization of the vibrational states of a molecule, by violating the ergodicity assumption, can give rise to such an enhancement of the apparent reaction rate. We present an illustrative calculation using LRMT for a model 12-vibrational mode organic molecule to show that below the ergodicity threshold the reaction rate may exceed many times the RRKM prediction due to quantum localization of vibrational states

  7. Linear kinetic theory and particle transport in stochastic mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Pomraning, G.C. [Univ. of California, Los Angeles, CA (United States)

    1995-12-31

    We consider the formulation of linear transport and kinetic theory describing energy and particle flow in a random mixture of two or more immiscible materials. Following an introduction, we summarize early and fundamental work in this area, and we conclude with a brief discussion of recent results.

  8. Linear response theory an analytic-algebraic approach

    CERN Document Server

    De Nittis, Giuseppe

    2017-01-01

    This book presents a modern and systematic approach to Linear Response Theory (LRT) by combining analytic and algebraic ideas. LRT is a tool to study systems that are driven out of equilibrium by external perturbations. In particular the reader is provided with a new and robust tool to implement LRT for a wide array of systems. The proposed formalism in fact applies to periodic and random systems in the discrete and the continuum. After a short introduction describing the structure of the book, its aim and motivation, the basic elements of the theory are presented in chapter 2. The mathematical framework of the theory is outlined in chapters 3–5: the relevant von Neumann algebras, noncommutative $L^p$- and Sobolev spaces are introduced; their construction is then made explicit for common physical systems; the notion of isospectral perturbations and the associated dynamics are studied. Chapter 6 is dedicated to the main results, proofs of the Kubo and Kubo-Streda formulas. The book closes with a chapter about...

  9. Flow fields in the supersonic through-flow fan. Comparison of the solutions of the linear potential theory and the numerical solution of the Euler equations; Choonsoku tsukaryu fan nai no nagareba. Senkei potential rironkai to Euler hoteishiki no suchikai no hikaku

    Energy Technology Data Exchange (ETDEWEB)

    Yamasaki, N; Nanba, M; Tashiro, K [Kyushu University, Fukuoka (Japan). Faculty of Engineering]

    1996-03-27

    A comparison study between the solution of a linear potential theory and a numerical solution of the Euler equations was made for the flow in a supersonic through-flow fan. In the numerical fluid dynamics technique, the Euler equations are solved by a finite difference method under the assumption that the working fluid is air behaving as a perfect gas, with the viscosity and thermal conductivity of the fluid neglected. As a result, in the linear potential theory the expansion wave is treated as an equipotential discontinuity surface, while in the Euler numerical solution it appears as a finite pressure gradient whose wave front fans out toward downstream. The reflection point of the shock wave on the wing lay further upstream in the latter than in the former. The shock-wave angle is governed by the Euler equations and differs from the Mach line of the linear potential theory both in angle and in the discontinuous quantities in front of and behind it. The two calculated solutions agreed well with each other up to the first reflection point of the Mach line; thereafter, the difference between them increased toward downstream. 5 refs., 5 figs., 1 tab.

  10. Gradient-driven flux-tube simulations of ion temperature gradient turbulence close to the non-linear threshold

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, A. G.; Rath, F.; Buchholz, R.; Grosshauser, S. R.; Strintzi, D.; Weikl, A. [Physics Department, University of Bayreuth, Universitätsstrasse 30, Bayreuth (Germany); Camenen, Y. [Aix Marseille Univ, CNRS, PIIM, UMR 7345, Marseille (France); Candy, J. [General Atomics, PO Box 85608, San Diego, California 92186-5608 (United States); Casson, F. J. [CCFE, Culham Science Centre, Abingdon OX14 3DB, Oxon (United Kingdom); Hornsby, W. A. [Max Planck Institut für Plasmaphysik, Boltzmannstrasse 2 85748 Garching (Germany)

    2016-08-15

    It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long wave length zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.

  11. Product state resolved excitation spectroscopy of He-, Ne-, and Ar-Br2 linear isomers: experiment and theory.

    Science.gov (United States)

    Pio, Jordan M; van der Veer, Wytze E; Bieler, Craig R; Janda, Kenneth C

    2008-04-07

    Valence excitation spectra for the linear isomers of He-, Ne-, and Ar-Br2 are reported and compared to a two-dimensional simulation using the currently available potential energy surfaces. Excitation spectra from the ground electronic state to the region of the inner turning point of the Rg-Br2 (B, ν') stretching coordinate are recorded while probing the asymptotic Br2 (B, ν') state. Each spectrum is a broad continuum extending over hundreds of wavenumbers, becoming broader and more blueshifted as the rare gas atom is changed from He to Ne to Ar. In the case of Ne-Br2, the threshold for producing the asymptotic product state reveals the X-state linear isomer bond energy to be 71±3 cm⁻¹. The qualitative agreement between experiment and theory shows that the spectra can be correctly regarded as revealing the one-atom solvent shifts and also provides new insight into the one-atom cage effect on the halogen vibrational relaxation. The measured spectra provide data to test future ab initio potential energy surfaces in the interaction of rare gas atoms with the halogen valence excited state.

  12. Spectral theory of linear operators and spectral systems in Banach algebras

    CERN Document Server

    Müller, Vladimir

    2003-01-01

    This book is dedicated to the spectral theory of linear operators on Banach spaces and of elements in Banach algebras. It presents a survey of results concerning various types of spectra, both of single and n-tuples of elements. Typical examples are the one-sided spectra, the approximate point, essential, local and Taylor spectrum, and their variants. The theory is presented in a unified, axiomatic and elementary way. Many results appear here for the first time in a monograph. The material is self-contained. Only a basic knowledge of functional analysis, topology, and complex analysis is assumed. The monograph should appeal both to students who would like to learn about spectral theory and to experts in the field. It can also serve as a reference book. The present second edition contains a number of new results, in particular, concerning orbits and their relations to the invariant subspace problem.

  13. Theory of linear operators in Hilbert space

    CERN Document Server

    Akhiezer, N I

    1993-01-01

    This classic textbook by two mathematicians from the USSR's prestigious Kharkov Mathematics Institute introduces linear operators in Hilbert space, and presents in detail the geometry of Hilbert space and the spectral theory of unitary and self-adjoint operators. It is directed to students at graduate and advanced undergraduate levels, but because of the exceptional clarity of its theoretical presentation and the inclusion of results obtained by Soviet mathematicians, it should prove invaluable for every mathematician and physicist. 1961, 1963 edition.

  14. From 6D superconformal field theories to dynamic gauged linear sigma models

    Science.gov (United States)

    Apruzzi, Fabio; Hassler, Falk; Heckman, Jonathan J.; Melnikov, Ilarion V.

    2017-09-01

    Compactifications of six-dimensional (6D) superconformal field theories (SCFTs) on four-manifolds generate a large class of novel two-dimensional (2D) quantum field theories. We consider in detail the case of the rank-one simple non-Higgsable cluster 6D SCFTs. On the tensor branch of these theories, the gauge group is simple and there are no matter fields. For compactifications on suitably chosen Kähler surfaces, we present evidence that this provides a method to realize 2D SCFTs with N = (0,2) supersymmetry. In particular, we find that reduction on the tensor branch of the 6D SCFT yields a description of the same 2D fixed point that is described in the UV by a gauged linear sigma model (GLSM) in which the parameters are promoted to dynamical fields, that is, a "dynamic GLSM" (DGLSM). Consistency of the model requires the DGLSM to be coupled to additional non-Lagrangian sectors obtained from reduction of the antichiral two-form of the 6D theory. These extra sectors include both chiral and antichiral currents, as well as spacetime filling noncritical strings of the 6D theory. For each candidate 2D SCFT, we also extract the left- and right-moving central charges in terms of data of the 6D SCFT and the compactification manifold.

  15. Turnpike theory of continuous-time linear optimal control problems

    CERN Document Server

    Zaslavski, Alexander J

    2015-01-01

    Individual turnpike results are of great interest due to their numerous applications in engineering and in economic theory; in this book the study is focused on new results of turnpike phenomenon in linear optimal control problems.  The book is intended for engineers as well as for mathematicians interested in the calculus of variations, optimal control, and in applied functional analysis. Two large classes of problems are studied in more depth. The first class studied in Chapter 2 consists of linear control problems with periodic nonsmooth convex integrands. Chapters 3-5 consist of linear control problems with autonomous nonconvex and nonsmooth integrands.  Chapter 6 discusses a turnpike property for dynamic zero-sum games with linear constraints. Chapter 7 examines genericity results. In Chapter 8, the description of structure of variational problems with extended-valued integrands is obtained. Chapter 9 ends the exposition with a study of turnpike phenomenon for dynamic games with extended value integran...

  16. Janus field theories from non-linear BF theories for multiple M2-branes

    International Nuclear Information System (INIS)

    Ryang, Shijong

    2009-01-01

    We integrate out the nonpropagating B_μ gauge field in the non-linear BF Lagrangian describing N M2-branes which includes terms with an even number of the totally antisymmetric tensor M^{IJK} in arXiv:0808.2473, and in the two types of non-linear BF Lagrangians which include terms with an odd number of M^{IJK} as well in arXiv:0809.0985. For the former Lagrangian we derive directly the DBI-type Lagrangian expressed by the SU(N) dynamical A_μ gauge field with a spacetime dependent coupling constant, while for the low-energy expansions of the latter Lagrangians the B_μ integration is performed iteratively. The derived Janus field theory Lagrangians are compared.

  17. Symmetric linear systems - An application of algebraic systems theory

    Science.gov (United States)

    Hazewinkel, M.; Martin, C.

    1983-01-01

    Dynamical systems which contain several identical subsystems occur in a variety of applications ranging from command and control systems and discretization of partial differential equations, to the stability augmentation of pairs of helicopters lifting a large mass. Linear models for such systems display certain obvious symmetries. In this paper, we discuss how these symmetries can be incorporated into a mathematical model that utilizes the modern theory of algebraic systems. Such systems are inherently related to the representation theory of algebras over fields. We will show that any control scheme which respects the dynamical structure either implicitly or explicitly uses the underlying algebra.

  18. Linear response theory of activated surface diffusion with interacting adsorbates

    Energy Technology Data Exchange (ETDEWEB)

    Martínez-Casado, R. [Department of Chemistry, Imperial College London, South Kensington, London SW7 2AZ (United Kingdom); Sanz, A.S.; Vega, J.L. [Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 123, 28006 Madrid (Spain); Rojas-Lorenzo, G. [Instituto Superior de Tecnologías y Ciencias Aplicadas, Ave. Salvador Allende, esq. Luaces, 10400 La Habana (Cuba); Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 123, 28006 Madrid (Spain); Miret-Artes, S., E-mail: s.miret@imaff.cfmac.csic.es [Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 123, 28006 Madrid (Spain)

    2010-05-12

    Activated surface diffusion with interacting adsorbates is analyzed within the Linear Response Theory framework. The so-called interacting single adsorbate model is justified by means of a two-bath model, where one harmonic bath takes into account the interaction with the surface phonons, while the other one describes the surface coverage, this leading to defining a collisional friction. Here, the corresponding theory is applied to simple systems, such as diffusion on flat surfaces and the frustrated translational motion in a harmonic potential. Classical and quantum closed formulas are obtained. Furthermore, a more realistic problem, such as atomic Na diffusion on the corrugated Cu(0 0 1) surface, is presented and discussed within the classical context as well as within the framework of Kramers' theory. Quantum corrections to the classical results are also analyzed and discussed.

  19. Linear-response theory of Coulomb drag in coupled electron systems

    DEFF Research Database (Denmark)

    Flensberg, Karsten; Hu, Ben Yu-Kuang; Jauho, Antti-Pekka

    1995-01-01

    We report a fully microscopic theory for the transconductivity, or, equivalently, the momentum transfer rate, of Coulomb coupled electron systems. We use the Kubo linear-response formalism and our main formal result expresses the transconductivity in terms of two fluctuation diagrams, which...

  20. Linear theory of equatorial spread F

    International Nuclear Information System (INIS)

    Hudson, M.K.; Kennel, C.F.

    1975-01-01

    A fluid dispersion relation for the drift and interchange (Rayleigh-Taylor) modes in a collisional plasma forms the basis for a linear theory of equatorial spread F. The collisional drift mode growth rate will exceed the growth rate of the Rayleigh-Taylor mode at short perpendicular wavelengths and density gradient scale lengths, and the drift mode can grow on top side as well as on bottom side density gradients. However, below the F peak, where spread F predominates, it is concluded that both the drift and the Rayleigh-Taylor modes contribute to the total spread F spectrum, the Rayleigh-Taylor mode dominating at long and the drift mode at short perpendicular wavelengths above the ion Larmor radius

  1. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  2. Linearly Polarized IR Spectroscopy Theory and Applications for Structural Analysis

    CERN Document Server

    Kolev, Tsonko

    2011-01-01

    A technique that is useful in the study of pharmaceutical products and biological molecules, polarization IR spectroscopy has undergone continuous development since it first emerged almost 100 years ago. Capturing the state of the science as it exists today, "Linearly Polarized IR Spectroscopy: Theory and Applications for Structural Analysis" demonstrates how the technique can be properly utilized to obtain important information about the structure and spectral properties of oriented compounds. The book starts with the theoretical basis of linear-dichroic infrared (IR-LD) spectroscop

  3. Primordial black holes in linear and non-linear regimes

    Energy Technology Data Exchange (ETDEWEB)

    Allahyari, Alireza; Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2017-06-01

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric space times in the linear regime. Thereby, we introduce two gauges. This allows to introduce a well defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition will impose an important constraint on the initial velocity perturbations δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should be larger than δ_th > 0.7.

  4. Application of linear programming and perturbation theory in optimization of fuel utilization in a nuclear reactor

    International Nuclear Information System (INIS)

    Zavaljevski, N.

    1985-01-01

    The proposed optimization procedure is fast owing to the use of linear programming. Non-linear constraints, which demand iterative application of the linear programming step, slow the calculation down. Linearization can be done by different procedures, ranging from simple empirical rules for in-core fuel management to general perturbation theory with higher-order corrections. A mathematical model was formulated for optimization of the improved fuel cycle. A detailed algorithm for determining the minimum amount of fresh fuel at the beginning of each fuel cycle is shown; the problem is linearized by first-order perturbation theory and optimized by linear programming. A numerical illustration of the proposed method was done for the experimental reactor, mostly to save computer time.
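
    A toy version of the linearized optimization step described above is sketched here: minimise the total amount of fresh fuel subject to linear inequality constraints whose coefficients would, in the real problem, come from first-order perturbation theory. The regions, coefficients, and limits below are invented, and scipy's linprog merely stands in for whatever LP solver the original work used.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: fresh-fuel fractions x_j loaded into three core regions.
# Objective: minimise the total amount of fresh fuel, sum_j x_j.
c = np.ones(3)

# Linearized constraints whose coefficients would come from first-order
# perturbation theory (all numbers below are invented):
#   end-of-cycle reactivity must stay above a target:  a . x >= rho_min
#   power peaking must stay below a limit:             p . x <= f_max
a = np.array([0.9, 1.0, 1.2])   # reactivity worth per unit fresh fuel, by region
p = np.array([1.4, 1.1, 0.8])   # peaking contribution per unit fresh fuel
rho_min, f_max = 1.0, 1.3

res = linprog(c,
              A_ub=np.vstack([-a, p]),           # -a.x <= -rho_min,  p.x <= f_max
              b_ub=np.array([-rho_min, f_max]),
              bounds=[(0.0, 1.0)] * 3,
              method="highs")
print("fresh-fuel fractions:", res.x.round(3), " total:", round(res.x.sum(), 3))
```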

  5. Linear and non-linear Phillips curve models for forecasting the inflation rate in Brazil; Modelos lineares e não lineares da curva de Phillips para previsão da taxa de inflação no Brasil

    Directory of Open Access Journals (Sweden)

    Elano Ferreira Arruda

    2011-09-01

    Full Text Available This paper compares forecasts of the Brazilian monthly inflation rate obtained from different linear and non-linear time-series models and Phillips curve specifications. In general, the non-linear models showed better predictive performance. A VAR model produced the smallest mean squared forecast error (MSE) among the linear models, while the best forecasts of all were generated by a threshold-augmented Phillips curve, whose MSE was 20% lower than that of the VAR model. This difference is significant according to the Diebold and Mariano (1995) test.

  6. A dynamical theory for linearized massive superspin 3/2

    International Nuclear Information System (INIS)

    Gates, James S. Jr.; Koutrolikos, Konstantinos

    2014-01-01

    We present a new theory of the free massive superspin Y=3/2 irreducible representation of the 4D, N=1 Super-Poincaré group, which has linearized non-minimal supergravity (superhelicity Y=3/2) as its massless limit. The new results will illuminate the underlying structure of the auxiliary superfields required for the description of higher massive superspin systems

  7. System theory as applied differential geometry. [linear system

    Science.gov (United States)

    Hermann, R.

    1979-01-01

    The invariants of input-output systems under the action of the feedback group were examined. The approach used the theory of Lie groups and concepts of modern differential geometry, and illustrated how the latter provides a basis for the discussion of the analytic structure of systems. Finite-dimensional linear systems in a single independent variable are considered. Lessons for more general situations (e.g., distributed-parameter and multidimensional systems), which are increasingly encountered as technology advances, are presented.

  8. Non-cooperative stochastic differential game theory of generalized Markov jump linear systems

    CERN Document Server

    Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning

    2017-01-01

    This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its applications in finance and insurance. It is an in-depth study of continuous-time and discrete-time linear-quadratic stochastic differential games, aimed at establishing a relatively complete framework for dynamic non-cooperative differential game theory. Using the dynamic programming principle and Riccati equations, it derives existence conditions and computational methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on game-theoretic methods, the book also studies the corresponding robust control problem, in particular the existence conditions and design methods for optimal robust control strategies. The theoretical results and their applications to risk control, option pricing, and optimal investment problems in finance and insurance are discussed, enriching the...

  9. Non-linear electrodynamics in Kaluza-Klein theory

    International Nuclear Information System (INIS)

    Kerner, R.

    1987-01-01

    The most general variational principle based on the invariants of the Riemann tensor and leading to second-order differential equations should contain, in dimensions higher than four, invariants of the Gauss-Bonnet type. In five dimensions the Lagrangian should be a linear combination of the scalar curvature and the second-order invariant. The equations of the electromagnetic field are derived in the absence of scalar and gravitational fields of the Kaluza-Klein model. They yield the unique extension of Maxwell's system in the Kaluza-Klein theory. Some properties of possible solutions are discussed [fr]

  10. Non-Linear Wave Loads and Ship responses by a time-domain Strip Theory

    DEFF Research Database (Denmark)

    Xia, Jinzhu; Wang, Zhaohui; Jensen, Jørgen Juncher

    1998-01-01

    Based on this time-domain strip theory, an efficient non-linear hydroelastic method for wave- and slamming-induced vertical motions and structural responses of ships is developed, where the structure is represented by Timoshenko beam theory. Numerical calculations are presented for the S175...

  11. Calculation of the interfacial tension of the methane-water system with the linear gradient theory

    DEFF Research Database (Denmark)

    Schmidt, Kurt A. G.; Folas, Georgios; Kvamme, Bjørn

    2007-01-01

    The linear gradient theory (LGT) combined with the Soave-Redlich-Kwong (SRK EoS) and the Peng-Robinson (PR EoS) equations of state has been used to correlate the interfacial tension data of the methane-water system. The pure-component influence parameters and the binary interaction coefficient for the mixture influence parameter have been obtained for this system. The model was successfully applied to correlate the interfacial tension data set to within 2.3% for the linear gradient theory with the SRK EoS (LGT-SRK) and 2.5% for the linear gradient theory with the PR EoS (LGT-PR). A posteriori comparisons with data not used in the parameterisation were within 3.2% for the LGT-SRK model and 2.7% for the LGT-PR model. An exhaustive literature review resulted in a large database for the investigation, covering a wide range of temperatures and pressures. The results support the success of the linear...

  12. Dynamical theory of anomalous particle transport

    International Nuclear Information System (INIS)

    Meiss, J.D.; Cary, J.R.; Escande, D.F.; MacKay, R.S.; Percival, I.C.; Tennyson, J.L.

    1985-01-01

    The quasi-linear theory of transport applies only in a restricted parameter range, which does not necessarily correspond to experimental conditions. Theories are developed which extend transport calculations to the regimes of marginal stochasticity and strong turbulence. Near the stochastic threshold the description of transport involves the leakage through destroyed invariant surfaces, and the dynamical scaling theory is used to obtain a universal form for transport coefficients. In the strong-turbulence regime, there is an adiabatic invariant which is preserved except near separatrices. Breakdown of this invariant leads to a new form for the diffusion coefficient. (author)

  13. A critical experimental study of the classical tactile threshold theory

    Directory of Open Access Journals (Sweden)

    Medina Leonel E

    2010-06-01

    Full Text Available Abstract Background The tactile sense is being used in a variety of applications involving tactile human-machine interfaces. In a significant number of publications the classical threshold concept plays a central role in modelling and explaining psychophysical experimental results such as stochastic resonance (SR) phenomena. In SR, noise enhances detection of sub-threshold stimuli, and the phenomenon is explained by stating that the amplitude required to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. We designed an experiment to test the validity of the classical vibrotactile threshold. Using a second-choice experiment, we show that individuals can order sensorial events below the level known as the classical threshold. If the observer's sensorial system were not activated by stimuli below the threshold, then a second choice could not be above the chance level. Nevertheless, our experimental results are above that chance level, contradicting the definition of the classical tactile threshold. Results We performed a three-alternative forced-choice detection experiment on 6 subjects, asking them for first and second choices. In each trial, only one of the intervals contained a stimulus and the others contained only noise. Under the classical threshold assumptions, a correct second-choice response corresponds to a guess attempt with a statistical frequency of 50%. Results show an average of 67.35% (STD = 1.41%) correct second-choice responses, which is not explained by the classical threshold definition. Additionally, for low stimulus amplitudes, second-choice correct detection is above chance level for any detectability level. Conclusions Using a second-choice experiment, we show that individuals can order sensorial events below the level known as the classical threshold. If the observer's sensorial system were not activated by stimuli below the threshold, then a second choice could not be above the chance level.
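
    As a small illustration of the statistical argument (a 50% chance level for a correct second choice between the two remaining intervals of a three-alternative task, versus the reported 67.35% average), the following sketch uses a normal approximation to the binomial; the trial count is a hypothetical placeholder, since the abstract does not state it.

```python
import math

def second_choice_test(observed_rate, n_trials, chance=0.5):
    """Two-sided z-test of an observed second-choice correct rate
    against the chance level predicted by a classical threshold."""
    se = math.sqrt(chance * (1.0 - chance) / n_trials)
    z = (observed_rate - chance) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return z, p

# 67.35% is the reported average; 500 trials is a made-up number for illustration.
z, p = second_choice_test(0.6735, n_trials=500)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
# A rate this far above 50% is inconsistent with pure guessing,
# which is the point the authors make against the classical threshold.
```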

  14. The spin polarized linear response from density functional theory: Theory and application to atoms

    Energy Technology Data Exchange (ETDEWEB)

    Fias, Stijn, E-mail: sfias@vub.ac.be; Boisdenghien, Zino; De Proft, Frank; Geerlings, Paul [General Chemistry (ALGC), Vrije Universiteit Brussel (Free University Brussels – VUB), Pleinlaan 2, 1050 Brussels (Belgium)

    2014-11-14

    Within the context of spin-polarized conceptual density functional theory, the spin-polarized linear response functions are introduced both in the [N, N_s] and [N_α, N_β] representations. The mathematical relations between the spin-polarized linear response functions in the two representations are examined, and an analytical expression for the spin-polarized linear response functions in the [N_α, N_β] representation is derived. The spin-polarized linear response functions were calculated for all atoms up to and including argon. To simplify the plotting of our results, we integrated χ(r, r′) over the angular variables to obtain a quantity χ(r, r′), circumventing the θ and φ dependence. This allows us to plot and to investigate the periodicity throughout the first three rows of the periodic table within the two different representations. For the first time, χ_αβ(r, r′), χ_βα(r, r′), and χ_SS(r, r′) plots have been calculated and discussed. By integration of the spin-polarized linear response functions, the different components of the polarisability, α_αα, α_αβ, α_βα, and α_ββ, have been calculated.

  15. Quantum optimal control theory in the linear response formalism

    International Nuclear Information System (INIS)

    Castro, Alberto; Tokatly, I. V.

    2011-01-01

    Quantum optimal control theory (QOCT) aims at finding an external field that drives a quantum system in such a way that optimally achieves some predefined target. In practice, this normally means optimizing the value of some observable, a so-called merit function. In consequence, a key part of the theory is a set of equations, which provides the gradient of the merit function with respect to parameters that control the shape of the driving field. We show that these equations can be straightforwardly derived using the standard linear response theory, only requiring a minor generalization: the unperturbed Hamiltonian is allowed to be time dependent. As a result, the aforementioned gradients are identified with certain response functions. This identification leads to a natural reformulation of QOCT in terms of the Keldysh contour formalism of the quantum many-body theory. In particular, the gradients of the merit function can be calculated using the diagrammatic technique for nonequilibrium Green's functions, which should be helpful in the application of QOCT to computationally difficult many-electron problems.

  16. The linear hypothesis - an idea whose time has passed

    International Nuclear Information System (INIS)

    Tschaeche, A.N.

    1995-01-01

    The linear no-threshold hypothesis is the basis for radiation protection standards in the United States. In the words of the National Council on Radiation Protection and Measurements (NCRP), the hypothesis is: "In the interest of estimating effects in humans conservatively, it is not unreasonable to follow the assumption of a linear relationship between dose and effect in the low dose regions for which direct observational data are not available." The International Commission on Radiological Protection (ICRP) stated the hypothesis in a slightly different manner: "One such basic assumption ... is that ... there is ... a linear relationship without threshold between dose and the probability of an effect." The hypothesis was necessary 50 yr ago when it was first enunciated because the dose-effect curve for ionizing radiation for effects in humans was not known. The ICRP and NCRP needed a model to extrapolate high-dose effects to low-dose effects. So the linear no-threshold hypothesis was born. Certain details of the history of the development and use of the linear hypothesis are presented. In particular, use of the hypothesis by the U.S. regulatory agencies is examined. Over time, the sense of the hypothesis has been corrupted. The corruption of the hypothesis into the current paradigm of "a little radiation, no matter how small, can and will harm you" is presented. The reasons the corruption occurred are proposed. The effects of the corruption are enumerated, specifically, the use of the corruption by the antinuclear forces in the United States and some of the huge costs to U.S. taxpayers due to the corruption. An alternative basis for radiation protection standards to assure public safety, based on the weight of scientific evidence on radiation health effects, is proposed

  17. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
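
    A minimal numerical sketch of the mixed continuous/discrete behaviour described above: agents integrate the usual Laplacian consensus dynamics only during communication windows and hold their states otherwise. The graph, time step, and window pattern are illustrative assumptions, not the protocol analysed in the paper.

```python
import numpy as np

# Laplacian of a fixed 3-agent path graph 1-2-3 (assumed topology).
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

x = np.array([1.0, 5.0, -2.0])   # initial states
dt, T = 0.01, 20.0
t = 0.0
while t < T:
    communicating = (t % 1.0) < 0.6   # exchange information 60% of each 1 s period (assumption)
    if communicating:
        x = x + dt * (-L @ x)         # continuous-time consensus update
    # otherwise: no relative information is available, states are simply held
    t += dt

print("final states:", x)   # should be close to the average of the initial states
```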

  18. Generation companies decision-making modeling by linear control theory

    International Nuclear Information System (INIS)

    Gutierrez-Alcaraz, G.; Sheble, Gerald B.

    2010-01-01

    This paper proposes four decision-making procedures to be employed by electric generating companies as part of their bidding strategies when competing in an oligopolistic market: naive, forward, adaptive, and moving average expectations. Decision-making is formulated in a dynamic framework by using linear control theory. The results reveal that interactions among all GENCOs affect market dynamics. Several numerical examples are reported, and conclusions are presented. (author)

  19. Statistics of Smoothed Cosmic Fields in Perturbation Theory. I. Formulation and Useful Formulae in Second-Order Perturbation Theory

    Science.gov (United States)

    Matsubara, Takahiko

    2003-02-01

    We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.

  20. Global Surrogates for the Upshift of the Critical Threshold in the Gradient for ITG Driven Turbulence

    Science.gov (United States)

    Michoski, Craig; Janhunen, Salomon; Faghihi, Danial; Carey, Varis; Moser, Robert

    2017-10-01

    The suppression of micro-turbulence and ultimately the inhibition of large-scale instabilities observed in tokamak plasmas is partially characterized by the onset of a global stationary state. This stationary attractor corresponds experimentally to a state of ``marginal stability'' in the plasma. The critical threshold that characterizes the onset in the nonlinear regime is observed, both experimentally and numerically, to exhibit an upshift relative to the linear theory. That is, the onset of the stationary state is up-shifted from that predicted by the linear theory as a function of the ion temperature gradient R_0/L_T. Because the transition to this state with enhanced transport and therefore reduced confinement times is inaccessible to the linear theory, strategies for developing nonlinear reduced-physics models to predict the upshift have been ongoing. As a complement to these efforts, the principal aim of this work is to establish low-fidelity surrogate models that can be used to predict instability-driven loss of confinement using training data from high-fidelity models. DE-SC0008454 and DE-AC02-09CH11466.

  1. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    Science.gov (United States)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternative EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has χ²/d.o.f. = 0.5, compared to χ²/d.o.f. = 29.6 from fitting next-to-leading order chiral perturbation theory. When the 0++ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  2. Why innovation theories make no sense

    OpenAIRE

    Moldaschl, Manfred

    2010-01-01

    In this paper I argue that it makes no sense to have "innovation theories", or to use the concept to describe the potential of social and economic theories to explain the phenomenon of non-equilibrium. If we wish to explain dynamics, change, evolution, revolution, etc. in socio-economic systems, then theories that are genuinely capable of doing so are indispensable. We don't need static theories of society, economy, organization, the firm, etc. which need an "additional" theory of incong...

  3. The abundance threshold for plague as a critical percolation phenomenon

    DEFF Research Database (Denmark)

    Davis, S; Trapman, P; Leirs, H

    2008-01-01

    However, no natural examples have been reported. The central question of interest in percolation theory [4], the possibility of an infinite connected cluster, corresponds in infectious disease to a positive probability of an epidemic. Archived records of plague (infection with Yersinia pestis)... Abundance thresholds are the theoretical basis for attempts to manage infectious disease by reducing the abundance of susceptibles, including vaccination and the culling of wildlife [6-8]. This first natural example of a percolation threshold in a disease system invites a re-appraisal of other invasion...
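
    As a generic illustration of the percolation-threshold idea invoked here (not the plague/host model itself), the sketch below estimates by simulation the probability that occupied sites span a square lattice as the occupancy probability p is varied; the sharp rise near p ≈ 0.59 is the classic two-dimensional site-percolation threshold.

```python
import random
from collections import deque

def spans(grid):
    """True if occupied sites connect the top row to the bottom row (4-neighbour)."""
    n = len(grid)
    seen = set()
    q = deque((0, j) for j in range(n) if grid[0][j])
    seen.update(q)
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                q.append((a, b))
    return False

def spanning_probability(p, n=40, trials=200):
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

for p in (0.50, 0.55, 0.59, 0.63, 0.70):
    print(f"p = {p:.2f}  spanning probability ≈ {spanning_probability(p):.2f}")
```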

  4. Relativistic mean-field theory for unstable nuclei with non-linear σ and ω terms

    International Nuclear Information System (INIS)

    Sugahara, Y.; Toki, H.

    1994-01-01

    We search for a new parameter set for the description of stable as well as unstable nuclei in the wide mass range within the relativistic mean-field theory. We include a non-linear ω self-coupling term in addition to the non-linear σ self-coupling terms, the necessity of which is suggested by the relativistic Brueckner-Hartree-Fock (RBHF) theory of nuclear matter. We find two parameter sets, one of which is for nuclei above Z=20 and the other for nuclei below that. The calculated results agree very well with the existing data for finite nuclei. The parameter set for the heavy nuclei provides the equation of state of nuclear matter similar to the one of the RBHF theory. ((orig.))

  5. Linear spin-wave theory of incommensurably modulated magnets

    DEFF Research Database (Denmark)

    Ziman, Timothy; Lindgård, Per-Anker

    1986-01-01

    Calculations of linearized theories of spin dynamics encounter difficulties when applied to incommensurable magnetic phases: the lack of translational invariance leads to an infinite coupled system of equations. The authors resolve this for the case of a `single-Q' structure by mapping onto the problem ...: at higher frequencies there appear bands of response sharply defined in frequency but broad in momentum transfer; at low frequencies there is a response maximum at the q vector corresponding to the modulation vector. They discuss generalizations necessary for application to rare-earth magnets...

  6. Linear canonical transforms theory and applications

    CERN Document Server

    Kutay, M; Ozaktas, Haldun; Sheridan, John

    2016-01-01

    This book provides a clear and accessible introduction to the essential mathematical foundations of linear canonical transforms from a signals and systems perspective. Substantial attention is devoted to how these transforms relate to optical systems and wave propagation. There is extensive coverage of sampling theory and fast algorithms for numerically approximating the family of transforms. Chapters on topics ranging from digital holography to speckle metrology provide a window on the wide range of applications. This volume will serve as a reference for researchers in the fields of image and signal processing, wave propagation, optical information processing and holography, optical system design and modeling, and quantum optics. It will be of use to graduate students in physics and engineering, as well as for scientists in other areas seeking to learn more about this important yet relatively unfamiliar class of integral transformations.

  7. Background field method in gauge theories and on non-linear sigma models

    International Nuclear Information System (INIS)

    van de Ven, A.E.M.

    1986-01-01

    This dissertation constitutes a study of the ultraviolet behavior of gauge theories and two-dimensional nonlinear sigma-models by means of the background field method. After a general introduction in chapter 1, chapter 2 presents algorithms which generate the divergent terms in the effective action at one loop for arbitrary quantum field theories in flat spacetime of dimension d ≤ 11. It is demonstrated that global N = 1 supersymmetric Yang-Mills theory in six dimensions is one-loop UV-finite. Chapter 3 presents an algorithm which produces the divergent terms in the effective action at two loops for renormalizable quantum field theories in a curved four-dimensional background spacetime. Chapter 4 presents a study of the two-loop UV behavior of two-dimensional bosonic and supersymmetric non-linear sigma-models which include a Wess-Zumino-Witten term. It is found that, to this order, supersymmetric models on quasi-Ricci-flat spaces are UV-finite and the β-functions for the bosonic model depend only on torsionful curvatures. Chapter 5 summarizes a superspace calculation of the four-loop β-function for two-dimensional N = 1 and N = 2 supersymmetric non-linear sigma-models. It is found that besides the one-loop contribution, which vanishes on Ricci-flat spaces, the β-function receives four-loop contributions which do not vanish in the Ricci-flat case. Implications for superstrings are discussed. Chapters 6 and 7 treat the details of these calculations

  8. Competitive inhibition can linearize dose-response and generate a linear rectifier.

    Science.gov (United States)

    Savir, Yonatan; Tu, Benjamin P; Springer, Michael

    2015-09-23

    Many biological responses require a dynamic range that is larger than standard bi-molecular interactions allow, yet also the ability to remain off at low input. Here we show mathematically that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier: a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated.
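
    The input-output shape described above can be illustrated with a toy comparison: below a threshold the response is essentially off, and above it the response grows linearly with substrate, in contrast to a standard hyperbolic dose-response. This is only a caricature of the threshold-linear (rectifier) behaviour, with made-up parameter values; it is not the SAGA model from the paper.

```python
import numpy as np

def michaelis_menten(s, vmax=1.0, km=2.0):
    """Standard hyperbolic dose-response: saturates and has no true threshold."""
    return vmax * s / (km + s)

def linear_rectifier(s, threshold=2.0, gain=0.25):
    """Threshold-linear ('rectifier') response: off below threshold,
    linearly sensitive to substrate above it."""
    return np.maximum(0.0, gain * (s - threshold))

substrate = np.linspace(0.0, 10.0, 11)
for s, mm, lr in zip(substrate, michaelis_menten(substrate), linear_rectifier(substrate)):
    print(f"S = {s:4.1f}   hyperbolic = {mm:.3f}   rectifier = {lr:.3f}")
```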

  9. Threshold temperature gradient effect on migration of brine inclusions in salt

    International Nuclear Information System (INIS)

    Pigford, T.H.

    1987-01-01

    Theories of the migration of brine inclusions in salt were interpreted as simple physical processes, and theories by Russian and US workers were shown to yield the same results. The migration theory was used to predict threshold temperature gradients below which migration of brine inclusions should not occur. The predicted threshold gradients were compared with the temperature gradients expected at the Waste Isolation Pilot Plant in New Mexico. The theory of threshold gradients helps explain the existence of brine inclusions in natural salt deposits

  10. Introduction to Special Feature on Catastrophic Thresholds, Perspectives, Definitions, and Applications

    Directory of Open Access Journals (Sweden)

    Robert A. Washington-Allen

    2010-09-01

    Full Text Available The contributions to this special feature focus on several conceptual and operational applications for understanding the non-linear behavior of complex systems with various ecological criteria at unique levels of organization. The organizing theme of the feature emphasizes alternative stable states or regimes and intervening thresholds that possess great relevance to ecology and natural resource management. The authors within this special feature address the conceptual models of catastrophe theory, self-organization, cross-scale interactions and time-scale calculus; develop operational definitions and procedures for understanding the occurrence of dynamic regimes or multiple stable states and thresholds; suggest diagnostic tools for the detection of states and thresholds and contribute to the development of scaling laws; and finally, demonstrate applications that promote both greater ecological understanding and management prescriptions for insect and disease outbreaks, resource island formation, and characterization of ecological resilience. This Special Feature concludes with a synthesis of the commonalities and disparities of concepts and interpretations among the contributed papers to identify issues and approaches that merit further research emphasis.

  11. W-pair production near threshold in unstable particle effective theory

    Energy Technology Data Exchange (ETDEWEB)

    Falgari, Pietro

    2008-11-07

    In this thesis we present a dedicated study of the four-fermion production process e⁺e⁻ → μ⁻ ν̄_μ u d̄ X near the W-pair production threshold, in view of its importance for a precise determination of the W-boson mass at the ILC. The calculation is performed in the framework of unstable-particle effective theory, which allows for a gauge-invariant inclusion of instability effects and for a systematic approximation of the full cross section with an expansion in the coupling constants, the ratio Γ_W/M_W, and the non-relativistic velocity v of the W boson. The effective-theory result, computed to next-to-leading order in the expansion parameters Γ_W/M_W ∼ α_ew ∼ v², is compared to the full numerical next-to-leading order calculation of the four-fermion production cross section, and agreement to better than 0.5% is found in the region of validity of the effective theory. Furthermore, we estimate the contributions of missing higher-order corrections to the four-fermion process, and how they translate into an error on the W-boson mass determination. We find that the dominant theoretical uncertainty on M_W is currently due to an incomplete treatment of initial-state radiation, while the remaining combined uncertainty of the two NLO calculations translates into δM_W ≈ 5 MeV. The latter error is removed by an explicit computation of the dominant missing terms, which originate from the expansion in v of next-to-next-to-leading order Standard Model diagrams. The effect of resummation of logarithmically-enhanced terms is also investigated, but found to be negligible. (orig.)

  12. Polymer Percolation Threshold in Multi-Component HPMC Matrices Tablets

    Directory of Open Access Journals (Sweden)

    Maryam Maghsoodi

    2011-06-01

    Full Text Available Introduction: Percolation theory studies the critical points, or percolation thresholds, of a system, at which one component of the system undergoes a geometrical phase transition and starts to connect the whole system. The application of this theory to the release rate of hydrophilic matrices allows one to explain the changes in release kinetics of swellable matrix-type systems and results in a clear improvement of the design of controlled-release dosage forms. Methods: In this study, percolation theory has been applied to multi-component hydroxypropylmethylcellulose (HPMC) hydrophilic matrices. Matrix tablets have been prepared using phenobarbital as the drug, magnesium stearate as a lubricant, and different amounts of lactose and HPMC K4M as the filler and matrix-forming material, respectively. Ethylcellulose (EC) as a polymeric excipient was also examined. Dissolution studies were carried out using the paddle method. In order to estimate the percolation threshold, the behaviour of the kinetic parameters with respect to the volumetric fraction of HPMC at time zero was studied. Results: In both HPMC/lactose and HPMC/EC/lactose matrices, from the point of view of percolation theory, the optimum concentration of HPMC to obtain a hydrophilic matrix system for the controlled release of phenobarbital is higher than 18.1% (v/v) HPMC. Above 18.1% (v/v) HPMC, an infinite cluster of HPMC would be formed, maintaining the integrity of the system and controlling the drug release from the matrices. According to the results, EC had no significant influence on the HPMC percolation threshold. Conclusion: This may be related to the broad functionality of swelling hydrophilic matrices.

  13. Double Photoionization Near Threshold

    Science.gov (United States)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  14. Non-linear theory of elasticity and optimal design

    CERN Document Server

    Ratner, LW

    2003-01-01

    In order to select an optimal structure among possible similar structures, one needs to compare the elastic behavior of the structures. A new criterion that describes elastic behavior is the rate of change of deformation. Using this criterion, the safe dimensions of a structure that are required by the stress distributed in a structure can be calculated. The new non-linear theory of elasticity allows one to determine the actual individual limit of elasticity/failure of a structure using a simple non-destructive method of measurement of deformation on the model of a structure while presently it

  15. Linear extended neutron diffusion theory for semi-infinite homogeneous media

    International Nuclear Information System (INIS)

    Vazquez R, R.; Vazquez R, A.; Espinosa P, G.

    2009-10-01

    Originally developed for heterogeneous media, the linear extended neutron diffusion theory is applied here to the limiting case of monoenergetic neutron diffusion in a semi-infinite homogeneous medium with a neutron source located at the coordinate origin, on the boundary of the dispersive material. The monoenergetic neutron diffusion is studied taking into account the spatial deviations of the neutron flux relative to the interfacial current caused by the neutron source, as well as the influence of the spatial deviations on the absorption rate. The resulting model is a one-dimensional, one-energy-group model obtained by applying the volume-averaged diffusion equation in the moderator. The results are compared against classical diffusion theory and, qualitatively, against neutron transport theory. (Author)

  16. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, which is a quantum version of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function while keeping each share secret inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding

  17. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostof, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size, designed to mimic surfaces, are studied to demonstrate the accuracy and scaling of onetep

  18. Linear kinetic theory and particle transport in stochastic mixtures

    International Nuclear Information System (INIS)

    Pomraning, G.C.

    1994-03-01

    The primary goal in this research is to develop a comprehensive theory of linear transport/kinetic theory in a stochastic mixture of solids and immiscible fluids. The statistics considered correspond to N-state discrete random variables for the interaction coefficients and sources, with N denoting the number of components of the mixture. The mixing statistics studied are Markovian as well as more general statistics, such as renewal processes. A further goal of this work is to demonstrate the applicability of the formalism to real world engineering problems. This three year program was initiated June 15, 1993 and has been underway nine months. Many significant results have been obtained, both in the formalism development and in representative applications. These results are summarized by listing the archival publications resulting from this grant, including the abstracts taken directly from the papers

  19. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Science.gov (United States)

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
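
    A minimal sketch of the two ingredients named above, McWeeny purification and a numerical threshold on small matrix elements. Dense NumPy arrays are used here for clarity, whereas a genuinely linear-scaling code would use sparse matrix algebra, and the starting guess below is only a schematic placeholder, not the trial density matrix of the cited method.

```python
import numpy as np

def truncate(m, tau=1e-6):
    """Drop matrix elements below a numerical threshold (the source of sparsity)."""
    out = m.copy()
    out[np.abs(out) < tau] = 0.0
    return out

def mcweeny_purify(p, steps=30, tau=1e-6):
    """Iterate P <- 3P^2 - 2P^3, driving P towards idempotency (P^2 = P)."""
    for _ in range(steps):
        p2 = truncate(p @ p, tau)
        p = truncate(3.0 * p2 - 2.0 * (p2 @ p), tau)
    return p

# Schematic starting guess: a symmetric matrix with eigenvalues near 0.5,
# standing in for a trial density matrix in an orthonormal basis.
rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6))
p0 = 0.5 * np.eye(6) + 0.02 * (a + a.T)

p = mcweeny_purify(p0)
print("idempotency error:", np.linalg.norm(p @ p - p))
```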

  20. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.

    Science.gov (United States)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J

    2017-09-29

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.

  1. Theory of soil decontamination in mixing liquid

    International Nuclear Information System (INIS)

    Polyakov, A.S.; Emets, E.P.; Poluehktov, P.P.; Rybakov, K.A.

    1997-01-01

    The theory of soil decontamination from radioactive pollution in a mixing liquid flow is described. It is shown that there exists a threshold intensity of liquid mixing below which there is no decontamination. Beyond the threshold, increasing the mixing intensity makes decontamination of large soil fractions possible; the higher the mixing intensity and the lower the soil contamination, the shorter the characteristic decontamination time. The above theory applies to cases of uniform pollution of the particle surfaces

  2. Simulation and linear stability of traffic jams; Kotsu jutai no senkei anteisei to simulation

    Energy Technology Data Exchange (ETDEWEB)

    Muramatsu, M. [Shizuoka University, Shizuoka (Japan); Nagatani, T. [Shizuoka University, Shizuoka (Japan). Faculty of Engineering

    1999-05-25

    A traffic jam induced by slowing down is investigated using simulation techniques of molecular dynamics. When cars are decelerated by the presence of hindrance, two typical traffic jams occur behind the hindrance: one is an oscillating jam and the other is a homogeneous jam. When the slowing down is small, the oscillating jam occurs. If the slowing down is large, the jam is homogeneous over space and time. Also, a backward propagating soliton-like jam is observed. The linear stability theory is applied to the traffic flow. The phase boundary between the oscillating and homogeneous jams is compared with the neutral stability line obtained by the linear stability theory. (author)
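
    For context, a standard result of this kind of linear stability analysis, assuming the optimal-velocity car-following model commonly used in such studies (the abstract does not state which model is analysed, so this is an assumption), is the neutral stability condition shown below; homogeneous flow is linearly stable for larger sensitivity and jams can grow below this line.

```latex
a = 2\,V'(\Delta x_c),
```

    where a is the driver sensitivity, V(Δx) the optimal-velocity function, and Δx_c the uniform headway.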

  3. On the non-linear scale of cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-04-15

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  4. On the non-linear scale of cosmological perturbation theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas

    2013-04-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  5. On the non-linear scale of cosmological perturbation theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas

    2013-01-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  6. Linear theory of the Rayleigh-Taylor instability in the equatorial ionosphere

    International Nuclear Information System (INIS)

    Russel, D.A.; Ott, E.

    1979-01-01

    We present a linear theory of the Rayleigh-Taylor instability in the equatorial ionosphere. For a purely exponential density profile, we find that no unstable eigenmode solutions exist. For a particular model ionosphere with an F peak, unstable eigenmode solutions exist only for sufficiently small horizontal wave numbers. In the latter case, purely exponential growth at a rate identical to that of the sharp-boundary instability is found. To clarify the situation in the case that eigenmodes do not exist, we solve the initial value problem for the linearized ion equation of motion in the long-time asymptotic limit. Ion inertia and ion-neutral collisions are included. Assuming straight magnetic field lines, we find that when eigenmodes do not exist the growth of the response to an impulse is slower than exponential, viz., t^{-1/2} exp(γt) below the F peak and t^{-3/2} exp(γt) above the peak; and we determine γ

  7. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  8. Point of no return: experimental determination of the lethal hydraulic threshold during drought for loblolly pine (Pinus taeda)

    Science.gov (United States)

    Hammond, W.; Yu, K.; Wilson, L. A.; Will, R.; Anderegg, W.; Adams, H. D.

    2017-12-01

    The strength of the terrestrial carbon sink—dominated by forests—remains one of the greatest uncertainties in climate change modelling. How forests will respond to increased variability in temperature and precipitation is poorly understood, and experimental study to better inform global vegetation models in this area is needed. Necessary for achieving this goal is an understanding of how increased temperatures and drought will affect landscape-level distributions of plant species. Quantifying physiological thresholds representing a point of no return from drought stress, including thresholds in hydraulic function, is critical to this end. Recent theoretical, observational, and modelling research has converged upon a threshold of 60 percent loss of hydraulic conductivity at mortality (PLC_lethal). However, direct experimental determination of lethal points in conductivity and cavitation during drought is lacking. We quantified thresholds in hydraulic function in loblolly pine, Pinus taeda, a commercially important timber species. In a greenhouse experiment, we exposed saplings (n = 96 total) to drought and rewatered treatment groups at variable levels of increasing water stress determined by pre-selected targets in pre-dawn water potential. Treatments also included a watered control with no drought, and drought with no rewatering. We measured physiological responses to water stress, including hydraulic conductivity, native PLC, water potential, foliar color, canopy die-back, and dark-adapted chlorophyll fluorescence. Following the rewatering treatment, we observed saplings for at least two months to determine which survived and which died. Using these data we calculated lethal physiological thresholds in water potential, directly measured PLC, and PLC inferred from water potential using a hydraulic vulnerability curve. We found that PLC_lethal inferred from water potential agreed with the 60% threshold suggested by previous research. However, directly...
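
    A small sketch of the two quantities used above: percent loss of conductivity (PLC) measured directly, and PLC inferred from water potential through a sigmoidal vulnerability curve. The sigmoid form is a common convention and the parameter values are placeholders, not the fitted curve from this experiment.

```python
import math

def plc_measured(k_native, k_max):
    """Percent loss of hydraulic conductivity relative to the maximum value."""
    return 100.0 * (1.0 - k_native / k_max)

def plc_from_water_potential(psi, psi50=-3.0, slope=1.5):
    """PLC inferred from xylem water potential psi (MPa) via a sigmoidal
    vulnerability curve; psi50 and slope are hypothetical parameters."""
    return 100.0 / (1.0 + math.exp(slope * (psi - psi50)))

print(plc_measured(k_native=0.4, k_max=1.0))  # 60.0 -> at the proposed lethal threshold
for psi in (-1.0, -2.5, -3.0, -4.0):
    print(psi, round(plc_from_water_potential(psi), 1))  # PLC rises as psi becomes more negative
```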

  9. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point operation rates as high as 50% of theoretical, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double-zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics

  10. Stochastic field-line wandering in magnetic turbulence with shear. I. Quasi-linear theory

    Energy Technology Data Exchange (ETDEWEB)

    Shalchi, A. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Negrea, M.; Petrisor, I. [Department of Physics, University of Craiova, Association Euratom-MEdC, 13A.I.Cuza Str, 200585 Craiova (Romania)

    2016-07-15

    We investigate the random walk of magnetic field lines in magnetic turbulence with shear. In the first part of the series, we develop a quasi-linear theory in order to compute the diffusion coefficient of magnetic field lines. We derive general formulas for the diffusion coefficients in the different directions of space. We emphasize that quasi-linear theory is expected to be valid only if the so-called Kubo number is small. We consider two turbulence models as examples, namely a noisy slab model as well as a Gaussian decorrelation model. For both models we compute the field line diffusion coefficients and show how they depend on the aforementioned Kubo number as well as on a shear parameter. It is demonstrated that the shear effect reduces all field line diffusion coefficients.

  11. Stochastic field-line wandering in magnetic turbulence with shear. I. Quasi-linear theory

    International Nuclear Information System (INIS)

    Shalchi, A.; Negrea, M.; Petrisor, I.

    2016-01-01

    We investigate the random walk of magnetic field lines in magnetic turbulence with shear. In the first part of the series, we develop a quasi-linear theory in order to compute the diffusion coefficient of magnetic field lines. We derive general formulas for the diffusion coefficients in the different directions of space. We emphasize that quasi-linear theory is expected to be valid only if the so-called Kubo number is small. We consider two turbulence models as examples, namely a noisy slab model as well as a Gaussian decorrelation model. For both models we compute the field line diffusion coefficients and show how they depend on the aforementioned Kubo number as well as on a shear parameter. It is demonstrated that the shear effect reduces all field line diffusion coefficients.

  12. Natural excitation orbitals from linear response theories : Time-dependent density functional theory, time-dependent Hartree-Fock, and time-dependent natural orbital functional theory

    NARCIS (Netherlands)

    Van Meer, R.; Gritsenko, O. V.; Baerends, E. J.

    2017-01-01

    Straightforward interpretation of excitations is possible if they can be described as simple single orbital-to-orbital (or double, etc.) transitions. In linear response time-dependent density functional theory (LR-TDDFT), the (ground state) Kohn-Sham orbitals prove to be such an orbital basis. In

  13. Stochastic Finite Element Analysis of Non-Linear Structures Modelled by Plasticity Theory

    DEFF Research Database (Denmark)

    Frier, Christian; Sørensen, John Dalsgaard

    2003-01-01

    A Finite Element Reliability Method (FERM) is introduced to perform reliability analyses on two-dimensional structures in plane stress, modeled by non-linear plasticity theory. FERM is a coupling between the First Order Reliability Method (FORM) and the Finite Element Method (FEM). FERM can be us...

  14. Influence of magnetic flutter on tearing growth in linear and nonlinear theory

    Science.gov (United States)

    Kreifels, L.; Hornsby, W. A.; Weikl, A.; Peeters, A. G.

    2018-06-01

    Recent simulations of tearing modes in turbulent regimes show an unexpected enhancement in the growth rate. In this paper the effect is investigated analytically. The enhancement is linked to the influence of turbulent magnetic flutter, which is modelled by diffusion terms in magnetohydrodynamics (MHD) momentum balance and Ohm’s law. Expressions for the linear growth rate as well as the island width in nonlinear theory for small amplitudes are derived. The results indicate an enhanced linear growth rate and a larger linear layer width compared with resistive MHD. Also the island width in the nonlinear regime grows faster in the diffusive model. These observations correspond well to simulations in which the effect of turbulence on the magnetic island width and tearing mode growth is analyzed.

  15. Linear theory of density perturbations in a neutrino+baryon universe

    International Nuclear Information System (INIS)

    Wasserman, I.

    1981-01-01

    Various aspects of the linear theory of density perturbations in a universe containing a significant population of massive neutrinos are calculated. Because linear perturbations in the neutrino density are subject to nonviscous damping on length scales smaller than the effective neutrino Jeans length, the fluctuation spectrum of the neutrino density perturbations just after photon decoupling is expected to peak near the maximum neutrino Jeans mass. The gravitational effects of nonneutrino species are included in calculating the maximum neutrino Jeans mass, which is found to be [M_J(t)]_max ≈ 10^17 M_⊙/[m_ν(eV)]^2, about an order of magnitude smaller than is obtained when nonneutrino species are ignored. An explicit expression for the nonviscous damping of neutrino density perturbations less massive than the maximum neutrino Jeans mass is derived. The linear evolution of density perturbations after photon decoupling is discussed. Of particular interest is the possibility that fluctuations in the neutrino density induce baryon density perturbations after photon decoupling and that the maximum neutrino Jeans mass determines the characteristic bound mass of galaxy clusters

  16. Classes and Theories of Trees Associated with a Class Of Linear Orders

    DEFF Research Database (Denmark)

    Goranko, Valentin; Kellerman, Ruaan

    2011-01-01

    Given a class of linear order types C, we identify and study several different classes of trees, naturally associated with C in terms of how the paths in those trees are related to the order types belonging to C. We investigate and completely determine the set-theoretic relationships between...... these classes of trees and between their corresponding first-order theories. We then obtain some general results about the axiomatization of the first-order theories of some of these classes of trees in terms of the first-order theory of the generating class C, and indicate the problems obstructing such general...... results for the other classes. These problems arise from the possible existence of nondefinable paths in trees, that need not satisfy the first-order theory of C, so we have started analysing first order definable and undefinable paths in trees....

  17. Thresholds of parametric instabilities near the lower hybrid frequency

    International Nuclear Information System (INIS)

    Berger, R.L.; Perkins, F.W.

    1975-06-01

    Resonant decay instabilities of a pump wave with frequency ω_0 near the lower-hybrid frequency ω_LH are analyzed with respect to the wavenumber k of the decay waves and the ratio ω_0/ω_LH to determine the decay process with the minimum threshold. It was found that the lowest thresholds are for decay into an electron plasma (lower hybrid) wave plus either a backward ion-cyclotron wave, an ion Bernstein wave, or a low-frequency sound wave. For ω_0 less than √2 ω_LH, it was found that these decay processes can occur and have faster growth than ion quasimodes provided the drift velocity (cE_0/B_0) is much less than the sound speed. In many cases of interest, electromagnetic corrections to the lower-hybrid wave rule out decay into all but short-wavelength (kρ_i > 1) waves. The experimental results are consistent with the linear theory of parametric instabilities in a homogeneous plasma. (U.S.)

  18. Linear response theory for magnetic Schrodinger operators in disordered media

    CERN Document Server

    Bouclet, J M; Klein, A; Schenker, J

    2004-01-01

    We justify the linear response theory for an ergodic Schrodinger operator with magnetic field within the non-interacting particle approximation, and derive a Kubo formula for the electric conductivity tensor. To achieve that, we construct suitable normed spaces of measurable covariant operators where the Liouville equation can be solved uniquely. If the Fermi level falls into a region of localization, we recover the well-known Kubo-Streda formula for the quantum Hall conductivity at zero temperature.

  19. Threshold velocity for environmentally-assisted cracking in low alloy steels

    International Nuclear Information System (INIS)

    Wire, G.L.; Kandra, J.T.

    1997-01-01

    Environmentally Assisted Cracking (EAC) in low alloy steels is generally believed to be activated by dissolution of MnS inclusions at the crack tip in high temperature LWR environments. EAC is the increase of fatigue crack growth rate of up to 40 to 100 times the rate in air that occurs in high temperature LWR environments. A steady state theory developed by Combrade, suggested that EAC will initiate only above a critical crack velocity and cease below this same velocity. A range of about twenty in critical crack tip velocities was invoked by Combrade, et al., to describe data available at that time. This range was attributed to exposure of additional sulfides above and below the crack plane. However, direct measurements of exposed sulfide densities on cracked specimens were performed herein and the results rule out significant additional sulfide exposure as a plausible explanation. Alternatively, it is proposed herein that localized EAC starting at large sulfide clusters reduces the calculated threshold velocity from the value predicted for a uniform distribution of sulfides. Calculations are compared with experimental results where the threshold velocity has been measured, and the predicted wide range of threshold values for steels of similar sulfur content but varying sulfide morphology is observed. The threshold velocity decreases with the increasing maximum sulfide particle size, qualitatively consistent with the theory. The calculation provides a basis for a conservative minimum velocity threshold tied directly to the steel sulfur level, in cases where no details of sulfide distribution are known

  20. Solar Wind Proton Temperature Anisotropy: Linear Theory and WIND/SWE Observations

    Science.gov (United States)

    Hellinger, P.; Travnicek, P.; Kasper, J. C.; Lazarus, A. J.

    2006-01-01

    We present a comparison between WIND/SWE observations (Kasper et al., 2006) of beta parallel to p and T perpendicular to p/T parallel to p (where beta parallel to p is the proton parallel beta, and T perpendicular to p and T parallel to p are the perpendicular and parallel proton temperatures, respectively; here parallel and perpendicular indicate directions with respect to the ambient magnetic field) and predictions of the Vlasov linear theory. In the slow solar wind, the observed proton temperature anisotropy seems to be constrained by oblique instabilities, namely the mirror instability and the oblique fire hose, contrary to the results of the linear theory, which predicts a dominance of the proton cyclotron instability and the parallel fire hose. The fast solar wind core protons exhibit an anticorrelation between beta parallel to c and T perpendicular to c/T parallel to c (where beta parallel to c is the core proton parallel beta, and T perpendicular to c and T parallel to c are the perpendicular and parallel core proton temperatures, respectively), similar to that observed in the HELIOS data (Marsch et al., 2004).

  1. Theory of linear physical systems theory of physical systems from the viewpoint of classical dynamics, including Fourier methods

    CERN Document Server

    Guillemin, Ernst A

    2013-01-01

    An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.

  2. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons.

    Science.gov (United States)

    Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong

    2018-07-01

    This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.

  3. Correlated Keldysh-Faisal-Reiss theory of above-threshold double ionization of He in intense laser fields

    International Nuclear Information System (INIS)

    Becker, A.; Faisal, F.H.M.

    1994-01-01

    We have developed a correlated Keldysh-Faisal-Reiss theory of laser-induced double ionization of a two-electron atom. The basic N-photon T matrix and the expression for N-photon triple-differential rates or cross sections (TDCS's) are derived. The theory is applied to investigate the TDCS's for very-high-order multiphoton double ionization of He with lasers of wavelength λ=248 nm and λ=617 nm. Comparison with the uncorrelated results reveals a dramatic influence of the final-state e-e correlation on the above-threshold TDCS's to be measured in coincidence experiments in intense laser fields. The limiting case of the TDCS's for weak-field double ionization of He by a synchrotron photon is also investigated; the results confirm the earlier theoretical findings and recent experimental results in that case

  4. A statistical theory of cell killing by radiation of varying linear energy transfer

    International Nuclear Information System (INIS)

    Hawkins, R.B.

    1994-01-01

    A theory is presented that provides an explanation for the observed features of the survival of cultured cells after exposure to densely ionizing high-linear energy transfer (LET) radiation. It starts from a phenomenological postulate based on the linear-quadratic form of cell survival observed for low-LET radiation and uses principles of statistics and fluctuation theory to demonstrate that the effect of varying LET on cell survival can be attributed to random variation of dose to small volumes contained within the nucleus. A simple relation is presented for surviving fraction of cells after exposure to radiation of varying LET that depends on the α and β parameters for the same cells in the limit of low-LET radiation. This relation implies that the value of β is independent of LET. Agreement of the theory with selected observations of cell survival from the literature is demonstrated. A relation is presented that gives relative biological effectiveness (RBE) as a function of the α and β parameters for low-LET radiation. Measurements from microdosimetry are used to estimate the size of the subnuclear volume to which the fluctuation pertains. 11 refs., 4 figs., 2 tabs

  5. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
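
    As an illustration of the segmented-regression step described above, here is a minimal sketch with invented data and a simple grid search over breakpoints; it is not the study's actual fitting procedure.

```python
# Illustrative sketch only: grid-search fit of a three-segment piecewise-linear
# model to a made-up DPOAE I/O function; the lower breakpoint is read off as a
# "compression threshold" estimate.
import numpy as np

rng = np.random.default_rng(0)
L2 = np.arange(45.0, 71.0, 2.5)                 # stimulus level L2, dB SPL

def true_io(level):
    """Hypothetical I/O curve: steep below 52.5, compressed above, flat above 65."""
    if level < 52.5:
        return 1.0 * (level - 45.0)
    if level < 65.0:
        return 7.5 + 0.3 * (level - 52.5)
    return 7.5 + 0.3 * 12.5 + 0.05 * (level - 65.0)

dpoae = np.array([true_io(l) for l in L2]) + rng.normal(0.0, 0.2, L2.size)

def segment_sse(x, y):
    """Residual sum of squares of an ordinary least-squares straight-line fit."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

best = None
for i in range(2, L2.size - 3):                 # first breakpoint index
    for j in range(i + 2, L2.size - 1):         # second breakpoint index
        sse = (segment_sse(L2[:i], dpoae[:i])
               + segment_sse(L2[i:j], dpoae[i:j])
               + segment_sse(L2[j:], dpoae[j:]))
        if best is None or sse < best[0]:
            best = (sse, L2[i], L2[j])

print(f"estimated compression threshold ~ {best[1]:.1f} dB SPL "
      f"(breakpoints at {best[1]:.1f} and {best[2]:.1f} dB SPL)")
```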

  6. Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

    NARCIS (Netherlands)

    Verhoeven, A.; Beelen, T.G.J.; Hautus, M.L.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep

  7. Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

    NARCIS (Netherlands)

    Verhoeven, A.; Beelen, T.G.J.; Hautus, M.L.J.; Maten, ter E.J.W.

    2005-01-01

    Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep

  8. Extended Hartree-Fock-Bogoliubov theory for degenerate Bose systems

    International Nuclear Information System (INIS)

    Tommasini, Paolo; Passos, E J V de; Pires, M O C; Piza, A F R de Toledo

    2005-01-01

    An extension of the Hartree-Fock-Bogoliubov (HFB) theory of degenerate Bose systems in which the coupling between one and two quasi-particles is taken into account is developed. The excitation operators are written as linear combinations of one and two HFB quasi-particles. Excitation energies and quasi-particle amplitudes are given by generalized Bogoliubov equations. The excitation spectrum has two branches. The first one is a discrete branch which is gapless, has a phonon character at large wavelengths and, contrary to HFB, is always stable. This branch is detached from a second, continuum branch whose threshold, at fixed total momentum, coincides with the two quasi-particle threshold of the HFB theory. The gap between the two branches at P = 0 is twice the HFB gap, which thus provides the relevant energy scale. Numerical results for a specific case are given

  9. The influence of thresholds on the risk assessment of carcinogens in food.

    Science.gov (United States)

    Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian

    2009-08-01

    The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low As Reasonably Achievable) principle; however, ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment. However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer

  10. Synthetic Domain Theory and Models of Linear Abadi & Plotkin Logic

    DEFF Research Database (Denmark)

    Møgelberg, Rasmus Ejlers; Birkedal, Lars; Rosolini, Guiseppe

    2008-01-01

    Plotkin suggested using a polymorphic dual intuitionistic/linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L. Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can...... reason using parametricity and, for example, solve a large class of domain equations, as suggested by Plotkin.In this paper, we show how an interpretation of a strict version of Bierman, Pitts and Russo's language Lily into synthetic domain theory presented by Simpson and Rosolini gives rise...... to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion, suitable for treating many different parametric models, and it provides formal proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results...

  11. Non-linear wave loads and ship responses by a time-domain strip theory

    DEFF Research Database (Denmark)

    Xia, Jinzhu; Wang, Zhaohui; Jensen, Jørgen Juncher

    1998-01-01

    . Based on this time-domain strip theory, an efficient non-linear hydroelastic method of wave- and slamming-induced vertical motions and structural responses of ships is developed, where the structure is represented as a Timoshenko beam. Numerical calculations are presented for the S175 Containership...

  12. Application of linear and higher perturbation theory in reactor physics

    International Nuclear Information System (INIS)

    Woerner, D.

    1978-01-01

    For small perturbations in the material composition of a reactor, the first approximation of perturbation theory gives an eigenvalue perturbation proportional to the perturbation of the system; this holds as long as the neutron flux is not influenced by the perturbation. The two-dimensional code LINESTO, developed for such problems in this paper on the basis of diffusion theory, determines the relative change of the multiplication constant. For perturbations that change the neutron flux in energy and position, the eigenvalue perturbation is also influenced by this changed flux, and in such cases linear perturbation theory yields larger errors. Starting from the calculus of variations, a perturbation method is additionally developed in this paper that permits a quick and simple assessment of the influence of the flux perturbation on the eigenvalue perturbation. While the source of perturbations is evaluated in the isotropic approximation of diffusion theory, the associated inhomogeneous equation may be used to determine the flux perturbation by means of diffusion or transport theory. Possibilities of application and limitations of this method are studied in further systematic investigations of local perturbations. It is shown that with the integrated code system developed in this paper a number of local perturbations may be checked with little computing time. With it, flux perturbations can be evaluated in first approximation and perturbations of the multiplication constant in second approximation. (orig./RW) [de

  13. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    excellent agreement between the predicted and experimental IFTs at high and moderate levels of IFTs, while the agreement is reasonably accurate in the near-critical region since the equations of state used exhibit classical scaling behavior. To accurately predict low IFTs (sigma ... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile....

  14. Linear algebraic theory of partial coherence: discrete fields and measures of partial coherence.

    Science.gov (United States)

    Ozaktas, Haldun M; Yüksel, Serdar; Kutay, M Alper

    2002-08-01

    A linear algebraic theory of partial coherence is presented that allows precise mathematical definitions of concepts such as coherence and incoherence. This not only provides new perspectives and insights but also allows us to employ the conceptual and algebraic tools of linear algebra in applications. We define several scalar measures of the degree of partial coherence of an optical field that are zero for full incoherence and unity for full coherence. The mathematical definitions are related to our physical understanding of the corresponding concepts by considering them in the context of Young's experiment.
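
    A minimal sketch of the kind of scalar coherence measure described above, assuming a discrete field with a Hermitian, positive semidefinite mutual intensity matrix J; the particular normalization is an illustrative choice, not necessarily one of the paper's measures.

```python
# Hedged sketch: one simple scalar "degree of coherence" built from the
# eigenvalues of the mutual intensity matrix J.  The normalization below is 0
# for a fully incoherent field (J proportional to the identity) and 1 for a
# fully coherent one (rank-1 J).
import numpy as np

def degree_of_coherence(J: np.ndarray) -> float:
    lam = np.linalg.eigvalsh(J)           # real eigenvalues of Hermitian J
    lam = lam / lam.sum()                 # normalize the eigenvalue distribution
    n = lam.size
    return float((n * np.sum(lam**2) - 1.0) / (n - 1.0))

n = 4
coherent = np.ones((n, n), dtype=complex)      # rank-1: fully coherent
incoherent = np.eye(n, dtype=complex)          # diagonal: fully incoherent
print(degree_of_coherence(coherent))    # -> 1.0
print(degree_of_coherence(incoherent))  # -> 0.0
```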

  15. Shear-transformation-zone theory of linear glassy dynamics.

    Science.gov (United States)

    Bouchbinder, Eran; Langer, J S

    2011-06-01

    We present a linearized shear-transformation-zone (STZ) theory of glassy dynamics in which the internal STZ transition rates are characterized by a broad distribution of activation barriers. For slowly aging or fully aged systems, the main features of the barrier-height distribution are determined by the effective temperature and other near-equilibrium properties of the configurational degrees of freedom. Our theory accounts for the wide range of relaxation rates observed in both metallic glasses and soft glassy materials such as colloidal suspensions. We find that the frequency-dependent loss modulus is not just a superposition of Maxwell modes. Rather, it exhibits an α peak that rises near the viscous relaxation rate and, for nearly jammed, glassy systems, extends to much higher frequencies in accord with experimental observations. We also use this theory to compute strain recovery following a period of large, persistent deformation and then abrupt unloading. We find that strain recovery is determined in part by the initial barrier-height distribution, but that true structural aging also occurs during this process and determines the system's response to subsequent perturbations. In particular, we find by comparison with experimental data that the initial deformation produces a highly disordered state with a large population of low activation barriers, and that this state relaxes quickly toward one in which the distribution is dominated by the high barriers predicted by the near-equilibrium analysis. The nonequilibrium dynamics of the barrier-height distribution is the most important of the issues raised and left unresolved in this paper.

  16. Using system theory and energy methods to prove existence of non-linear PDE's

    NARCIS (Netherlands)

    Zwart, H.J.

    2015-01-01

    In this discussion paper we present an idea of combining techniques known from systems theory with energy estimates to show existence for a class of non-linear partial differential equations (PDE's). At the end of the paper a list of research questions with possible approaches is given.

  17. Two-fluid static spherical configurations with linear mass function in the Einstein-Cartan theory

    International Nuclear Information System (INIS)

    Gallakhmetov, A.M.

    2002-01-01

    In the framework of the Einstein-Cartan theory, two-fluid static spherical configurations with a linear mass function are considered. One fluid models anisotropic matter distributions within the star, and the other is a perfect fluid representing a source of torsion. It is shown that the solutions of the Einstein equations for anisotropic relativistic spheres in General Relativity may generate solutions in the Einstein-Cartan theory. Some exact solutions are obtained

  18. A comparison of signal detection theory to the objective threshold/strategic model of unconscious perception.

    Science.gov (United States)

    Haase, Steven J; Fisk, Gary D

    2011-08-01

    A key problem in unconscious perception research is ruling out the possibility that weak conscious awareness of stimuli might explain the results. In the present study, signal detection theory was compared with the objective threshold/strategic model as explanations of results for detection and identification sensitivity in a commonly used unconscious perception task. In the task, 64 undergraduate participants detected and identified one of four briefly displayed, visually masked letters. Identification was significantly above baseline (i.e., proportion correct > .25) at the highest detection confidence rating. This result is most consistent with signal detection theory's continuum of sensory states and serves as a possible index of conscious perception. However, there was limited support for the other model in the form of a predicted "looker's inhibition" effect, which produced identification performance that was significantly below baseline. One additional result, an interaction between the target stimulus and type of mask, raised concerns for the generality of unconscious perception effects.

  19. Double linearization theory applied to three-dimensional cascades oscillating under supersonic axial flow condition. Choonsoku jikuryu sokudo de sadosuru sanjigen shindo yokuretsu no niju senkei riron ni yoru hiteijo kukiryoku kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, K; Nanba, M [Kyushu University, Fukuoka (Japan). Faculty of Engineering]; Iwai, S [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)]

    1993-11-25

    In order to examine the aerodynamic characteristics of a supersonic axial flow turbofan realizing flight at Mach numbers of 2-5, the double linearization theory was applied to a three-dimensional oscillating cascade carrying a steady load in a supersonic axial flow condition, and the unsteady aerodynamic force and the aerodynamic instability of the oscillation were studied. Moreover, the values based on the strip theory and the three-dimensional theory were comparatively evaluated. The fundamental assumption was that the order of the steady and unsteady perturbations satisfies the validity condition of the double linearization theory in a supersonic, isentropic flow of non-viscous perfect gas. The numerical calculation assumed parabolic distributions of camber and thickness in the blade shape. As a result, the strip theory prediction agreed well with the value given by the three-dimensional theory for the steady blade-plane pressure difference and for the work of the unsteady aerodynamic force, showing its validity. Among the steady load components of angle of attack, camber and thickness, the camber component, whose absolute value is large, has the strongest effect on the total work. A distribution in which the angle of attack and camber decrease from hub toward tip gives a large and stable flutter margin. 5 refs., 13 figs., 2 tabs.

  20. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non linear analysis). Part II. The oscillations thresholds (linearized analysis)

    International Nuclear Information System (INIS)

    Boure, J.

    1967-01-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay times, and a density effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between enthalpy and density of the fluid. In the first part non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can be pointed out. In the second part the start of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived, after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield: (1) a relation between the describing parameters at the onset of oscillations, and (2) the frequency of the oscillations. By comparing the results predicted by the model to the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author) [fr

  1. Multiscalar production amplitudes beyond threshold

    CERN Document Server

    Argyres, E N; Kleiss, R H

    1993-01-01

    We present exact tree-order amplitudes for $H^* \to nH$, for final states containing one or two particles with non-zero three-momentum, for various interaction potentials. We show that there are potentials leading to tree amplitudes that satisfy unitarity, not only at threshold but also in the above kinematical configurations and probably beyond. As a by-product, we also calculate $2 \to n$ tree amplitudes at threshold and show that for the unbroken $\phi^4$ theory they vanish for $n > 4$, for the Standard Model Higgs they vanish for $n \ge 3$, and for a model potential, respecting tree-order unitarity, for $n$ even and $n > 4$. Finally, we calculate the imaginary part of the one-loop $1 \to n$ amplitude in both the symmetric and the spontaneously broken $\phi^4$ theory.

  2. Top quark threshold scan and study of detectors for highly granular hadron calorimeters at future linear colliders

    International Nuclear Information System (INIS)

    Tesar, Michal

    2014-01-01

    Two major projects for future linear electron-positron colliders, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC), are currently under development. These projects can be seen as complementary machines to the Large Hadron Collider (LHC) which permit further progress in high energy physics research. They overlap considerably and share the same technological approaches. To meet the ambitious goals of precise measurements, new detector concepts such as very finely segmented calorimeters are required. We study the precision of the top quark mass measurement achievable at CLIC and the ILC. The method employed was a t anti-t pair production threshold scan. In this technique, simulated measurement points of the t anti-t production cross section around the threshold are fitted with theoretical curves calculated at next-to-next-to-leading order. Detector effects, the influence of the beam energy spectrum and initial state radiation of the colliding particles are taken into account. Assuming a total integrated luminosity of 100 fb^-1, our results show that the top quark mass in a theoretically well-defined 1S mass scheme can be extracted with a combined statistical and systematic uncertainty of less than 50 MeV. The other part of this work concerns experimental studies of highly granular hadron calorimeter (HCAL) elements. To meet the required high jet energy resolution at the future linear colliders, a large and finely segmented detector is needed. One option is to assemble a sandwich calorimeter out of many low-cost scintillators read out by silicon photomultipliers (SiPM). We characterize the areal homogeneity of the SiPM response with the help of a highly collimated beam of pulsed visible light. The spatial resolution of the experiment reaches the order of 1 μm and allows the study of the active-area structures within single SiPM microcells. Several SiPM models are characterized in terms of relative photon detection efficiency and crosstalk probability

  3. Microscopic theory of ultrafast spin linear reversal

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, G P, E-mail: gpzhang@indstate.edu [Department of Physics, Indiana State University, Terre Haute, IN 47809 (United States)

    2011-05-25

    A recent experiment (Vahaplar et al 2009 Phys. Rev. Lett. 103 117201) showed that a single femtosecond laser can reverse the spin direction without spin precession, or spin linear reversal (SLR), but its microscopic theory has been missing. Here we show that SLR does not occur naturally. Two generic spin models, the Heisenberg and Hubbard models, are employed to describe magnetic insulators and metals, respectively. We find analytically that the spin change is always accompanied by a simultaneous excitation of at least two spin components. The only model that has prospects for SLR is the Stoner single-electron band model. However, under the influence of the laser field, the orbital angular momenta are excited and are coupled to each other. If a circularly polarized light is used, then all three components of the orbital angular momenta are excited, and so are their spins. The generic spin commutation relation further reveals that if SLR exists, it must involve a complicated multiple state excitation.

  4. Detecting fatigue thresholds from electromyographic signals: A systematic review on approaches and methodologies.

    Science.gov (United States)

    Ertl, Peter; Kruse, Annika; Tilp, Markus

    2016-10-01

    The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts within the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality, 21 were of good quality and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases of EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one-minute protocols with increments between 10 and 25 W appear most appropriate to detect muscular thresholds, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is a great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMGTs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Effect of threshold quantization in opportunistic splitting algorithm

    KAUST Repository

    Nam, Haewoon

    2011-12-01

    This paper discusses algorithms to find the optimal threshold and also investigates the impact of threshold quantization on the scheduling outage performance of the opportunistic splitting scheduling algorithm. Since this algorithm aims at finding the user with the highest channel quality within the minimal number of mini-slots by adjusting the threshold every mini-slot, optimizing the threshold is of paramount importance. Hence, in this paper we first discuss how to compute the optimal threshold along with two tight approximations for the optimal threshold. Closed-form expressions are provided for those approximations for simple calculations. Then, we consider linear quantization of the threshold to take the limited number of bits for signaling messages in practical systems into consideration. Due to the limited granularity for the quantized threshold value, an irreducible scheduling outage floor is observed. The numerical results show that the two approximations offer lower scheduling outage probability floors compared to the conventional algorithm when the threshold is quantized. © 2006 IEEE.
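
    A minimal simulation sketch of the idea (a splitting scheduler whose threshold is linearly quantized); the update rule, parameter values and channel model below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged simulation sketch of an opportunistic splitting scheduler with a
# linearly quantized threshold; channel gains are i.i.d. exponential (Rayleigh
# fading).  Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, lo, hi, bits):
    """Linear (uniform) quantization of x to 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return lo + step * np.clip(np.round((x - lo) / step), 0, levels - 1)

def splitting_round(gains, max_slots=8, bits=4, lo=0.0, hi=5.0):
    """Return True if exactly one user is isolated within max_slots mini-slots."""
    low, high = 0.0, float(np.inf)
    thr = quantize(np.log(len(gains)), lo, hi, bits)   # rough initial guess
    for _ in range(max_slots):
        above = gains > thr
        k = int(above.sum())
        if k == 1:
            return True
        if k == 0:
            high = thr                                  # nobody answered: lower it
            thr = quantize(0.5 * (low + thr), lo, hi, bits)
        else:
            low = thr                                   # collision: raise it
            nxt = 0.5 * (thr + (high if np.isfinite(high) else thr + 1.0))
            thr = quantize(nxt, lo, hi, bits)
        if high - low < (hi - lo) / (2 ** bits):        # quantizer can no longer split
            return False
    return False

n_users, n_trials = 16, 2000
outage = np.mean([not splitting_round(rng.exponential(1.0, n_users))
                  for _ in range(n_trials)])
print(f"scheduling outage probability ~ {outage:.3f}")
```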

  6. Conformal field theory with two kinds of Bosonic fields and two linear dilatons

    International Nuclear Information System (INIS)

    Kamani, Davoud

    2010-01-01

    We consider a two-dimensional conformal field theory which contains two kinds of bosonic degrees of freedom. Two linear dilaton fields enable the study of a more general case. Various properties of the model such as OPEs, the central charge, the conformal properties of the fields and the associated algebras will be studied. (author)

  7. Reaction πN → ππN near threshold

    International Nuclear Information System (INIS)

    Frlez, E.

    1993-11-01

    The LAMPF E1179 experiment used the π⁰ spectrometer and an array of charged particle range counters to detect and record π⁺π⁰, π⁰p, and π⁺π⁰p coincidences following the reaction π⁺p → π⁰π⁺p near threshold. The total cross sections for single pion production were measured at the incident pion kinetic energies 190, 200, 220, 240, and 260 MeV. Absolute normalizations were fixed by measuring π⁺p elastic scattering at 260 MeV. A detailed analysis of the π⁰ detection efficiency was performed using cosmic ray calibrations and pion single charge exchange measurements with a 30 MeV π⁻ beam. All published data on πN → ππN, including our results, are simultaneously fitted to yield a common chiral symmetry breaking parameter ξ = -0.25 ± 0.10. The threshold matrix element |α₀(π⁰π⁺p)| determined by linear extrapolation yields the value of the s-wave isospin-2 ππ scattering length α₀²(ππ) = -0.041 ± 0.003 m_π⁻¹, within the framework of soft-pion theory

  8. Higgs-Stoponium Mixing Near the Stop-Antistop Threshold

    CERN Document Server

    Bodwin, Geoffrey T; Wagner, Carlos E M

    2016-01-01

    Supersymmetric extensions of the standard model contain additional heavy neutral Higgs bosons that are coupled to heavy scalar top quarks (stops). This system exhibits interesting field theoretic phenomena when the Higgs mass is close to the stop-antistop production threshold. Existing work in the literature has examined the digluon-to-diphoton cross section near threshold and has focused on enhancements in the cross section that might arise either from the perturbative contributions to the Higgs-to-digluon and Higgs-to-diphoton form factors or from mixing of the Higgs boson with stoponium states. Near threshold, enhancements in the relevant amplitudes that go as inverse powers of the stop-antistop relative velocity require resummations of perturbation theory and/or nonperturbative treatments. We present a complete formulation of threshold effects at leading order in the stop-antistop relative velocity in terms of nonrelativistic effective field theory. We give detailed numerical calculations for the case in ...

  9. Scaling of the low-energy structure in above-threshold ionization in the tunneling regime: theory and experiment.

    Science.gov (United States)

    Guo, L; Han, S S; Liu, X; Cheng, Y; Xu, Z Z; Fan, J; Chen, J; Chen, S G; Becker, W; Blaga, C I; DiChiara, A D; Sistrunk, E; Agostini, P; DiMauro, L F

    2013-01-04

    A calculation of the second-order (rescattering) term in the S-matrix expansion of above-threshold ionization is presented for the case when the binding potential is the unscreened Coulomb potential. Technical problems related to the divergence of the Coulomb scattering amplitude are avoided in the theory by considering the depletion of the atomic ground state due to the applied laser field, which is well defined and does not require the introduction of a screening constant. We focus on the low-energy structure, which was observed in recent experiments with a midinfrared wavelength laser field. Both the spectra and, in particular, the observed scaling versus the Keldysh parameter and the ponderomotive energy are reproduced. The theory provides evidence that the origin of the structure lies in the long-range Coulomb interaction.

  10. Non-linearities in Theory-of-Mind Development.

    Science.gov (United States)

    Blijd-Hoogewys, Els M A; van Geert, Paul L C

    2016-01-01

    Research on Theory-of-Mind (ToM) has mainly focused on ages of core ToM development. This article follows a quantitative approach focusing on the level of ToM understanding on a measurement scale, the ToM Storybooks, in 324 typically developing children between 3 and 11 years of age. It deals with the possible occurrence of developmental non-linearities in ToM functioning, using smoothing techniques, dynamic growth model building and additional indicators, namely moving skewness, moving growth rate changes and moving variability. The ToM sum-scores showed an overall developmental trend that leveled off toward the age of 10 years. Within this overall trend two non-linearities in the group-based change pattern were found: a plateau at the age of around 56 months and a dip at the age of 72-78 months. These temporary regressions in ToM sum-score were accompanied by a decrease in growth rate and variability, and a change in skewness of the ToM data, all suggesting a developmental shift in ToM understanding. The temporary decreases also occurred in the different ToM sub-scores, most clearly in the core ToM component of beliefs. It was also found that girls had an earlier growth spurt than boys and that the underlying developmental path was more salient in girls than in boys. The consequences of these findings are discussed from various theoretical points of view, with an emphasis on a dynamic systems interpretation of the underlying developmental paths.
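
    A minimal sketch of the "moving indicator" idea mentioned above (moving skewness, moving growth rate and moving variability) on an invented age-ordered score series; window size and data are arbitrary choices.

```python
# Hedged sketch: rolling-window indicators over a made-up developmental series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
age = np.arange(36, 132)                                   # age in months
score = np.clip(0.25 * (age - 36) + rng.normal(0, 2, age.size), 0, 30)

s = pd.Series(score, index=age)
window = 12                                                # 12-month moving window
moving_skew = s.rolling(window).skew()
moving_growth = s.rolling(window).mean().diff()            # crude local growth rate
moving_var = s.rolling(window).var()

summary = pd.DataFrame({"skew": moving_skew, "growth": moving_growth, "var": moving_var})
print(summary.dropna().head())
```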

  11. Modern linear control design a time-domain approach

    CERN Document Server

    Caravani, Paolo

    2013-01-01

    This book offers a compact introduction to modern linear control design.  The simplified overview presented of linear time-domain methodology paves the road for the study of more advanced non-linear techniques. Only rudimentary knowledge of linear systems theory is assumed - no use of Laplace transforms or frequency design tools is required. Emphasis is placed on assumptions and logical implications, rather than abstract completeness; on interpretation and physical meaning, rather than theoretical formalism; on results and solutions, rather than derivation or solvability.  The topics covered include transient performance and stabilization via state or output feedback; disturbance attenuation and robust control; regional eigenvalue assignment and constraints on input or output variables; asymptotic regulation and disturbance rejection. Lyapunov theory and Linear Matrix Inequalities (LMI) are discussed as key design methods. All methods are demonstrated with MATLAB to promote practical use and comprehension. ...

  12. Linear methods in band theory

    DEFF Research Database (Denmark)

    Andersen, O. Krogh

    1975-01-01

    of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...

  13. No-signaling quantum key distribution: solution by linear programming

    Science.gov (United States)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  14. Boundary value problems of the circular cylinders in the strain-gradient theory of linear elasticity

    International Nuclear Information System (INIS)

    Kao, B.G.

    1979-11-01

    Three boundary value problems in the strain-gradient theory of linear elasticity are solved for circular cylinders. They are the twisting of a circular cylinder, the uniform pressuring of a concentric circular cylinder, and the pure bending of a simply connected cylinder. The comparisons of these solutions with the solutions in classical elasticity and in couple-stress theory reveal the differences in the stress fields as well as the apparent stress fields due to the influence of the strain gradient. These aspects of the strain-gradient theory could be important in modeling the failure behavior of structural materials

  15. Quantitative verification of ab initio self-consistent laser theory.

    Science.gov (United States)

    Ge, Li; Tandy, Robert J; Stone, A D; Türeci, Hakan E

    2008-10-13

    We generalize and test the recent "ab initio" self-consistent (AISC) time-independent semiclassical laser theory. This self-consistent formalism generates all the stationary lasing properties in the multimode regime (frequencies, thresholds, internal and external fields, output power and emission pattern) from simple inputs: the dielectric function of the passive cavity, the atomic transition frequency, and the transverse relaxation time of the lasing transition. We find that the theory gives excellent quantitative agreement with full time-dependent simulations of the Maxwell-Bloch equations after it has been generalized to drop the slowly-varying envelope approximation. The theory is of infinite order in the non-linear hole-burning interaction; the widely used third-order approximation is shown to fail badly.

  16. Three-dimensional linear peeling-ballooning theory in magnetic fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Weyens, T., E-mail: tweyens@fis.uc3m.es; Sánchez, R.; García, L. [Departamento de Física, Universidad Carlos III de Madrid, Madrid 28911 (Spain); Loarte, A.; Huijsmans, G. [ITER Organization, Route de Vinon sur Verdon, 13067 Saint Paul Lez Durance (France)

    2014-04-15

    Ideal magnetohydrodynamics theory is extended to fully 3D magnetic configurations to investigate the linear stability of intermediate to high n peeling-ballooning modes, with n the toroidal mode number. These are thought to be important for the behavior of edge localized modes and for the limit of the size of the pedestal that governs the high confinement H-mode. The end point of the derivation is a set of coupled second order ordinary differential equations with appropriate boundary conditions that minimize the perturbed energy and that can be solved to find the growth rate of the perturbations. This theory allows the evaluation of 3D effects on edge plasma stability in tokamaks, such as those associated with the toroidal ripple due to the finite number of toroidal field coils, the application of external 3D fields for ELM control, local modification of the magnetic field in the vicinity of ferromagnetic components such as the test blanket modules in ITER, etc.

  17. On the derivation of the ionisation threshold law

    International Nuclear Information System (INIS)

    Peterkop, R.

    1983-01-01

    The different procedures for derivation of the electron-atom ionisation threshold law have been analysed and the reasons for discrepancies in the results are pointed out. It is shown that if the wavefunction has a linear node at equal electron distances (r_1 = r_2), then the threshold law for the total cross section has the form σ ∝ E^(3m), where σ ∝ E^m is the Wannier law. The distribution of energy between the escaping electrons is non-uniform and has a parabolic node at equal energies (ε_1 = ε_2). The linear node at opposite directions of the electrons (θ = π) does not change the Wannier law but leads to a parabolic node in the angular distribution at θ = π. The existence of both nodes leads to the threshold law σ ∝ E^(3m) and to parabolic nodes in the energy and angular distributions. (author)

  18. Combining linear polarization spectroscopy and the Representative Layer Theory to measure the Beer-Lambert law absorbance of highly scattering materials.

    Science.gov (United States)

    Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique

    2015-01-01

    Visible and Near Infrared (Vis-NIR) Spectroscopy is a powerful non-destructive analytical method used to analyze major compounds in bulk materials and products, requiring no sample preparation. It is widely used in routine analysis and also in-line in industries, in-vivo with biomedical applications or in-field for agricultural and environmental applications. However, highly scattering samples subvert Beer-Lambert law's linear relationship between spectral absorbance and concentration. Instead of spectral pre-processing, which is commonly used by Vis-NIR spectroscopists to mitigate the scattering effect, we put forward an optical method, based on Polarized Light Spectroscopy, to improve the absorbance signal measurement on highly scattering samples. This method selects the part of the signal which is less impacted by scattering. The resulting signal is combined in the Absorption/Remission function defined in Dahm's Representative Layer Theory to compute an absorbance signal fulfilling Beer-Lambert's law, i.e. being linearly related to the concentration of the chemicals composing the sample. The underpinning theories have been experimentally evaluated on scattering samples in liquid form and in powdered form. The method produced more accurate spectra, and the Pearson's coefficient assessing the linearity between the absorbance spectra and the concentration of the added dye improved from 0.94 to 0.99 for liquid samples and from 0.84 to 0.97 for powdered samples. Copyright © 2014 Elsevier B.V. All rights reserved.
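
    For orientation, the Absorption/Remission function referred to above is usually written A(R, T) = ((1 - R)^2 - T^2)/R; the sketch below applies that form to made-up remission and transmission values as an illustration of the linearity claim, not the paper's own data or code.

```python
# Hedged sketch: Dahm Absorption/Remission function in its commonly quoted form,
# applied to hypothetical remission (R) and transmission (T) fractions measured
# for increasing dye concentration (all numbers invented).
def dahm_absorption_remission(R: float, T: float) -> float:
    """Absorption/Remission function of the Representative Layer Theory."""
    return ((1.0 - R) ** 2 - T ** 2) / R

for conc, (R, T) in zip((0.1, 0.2, 0.3), ((0.40, 0.45), (0.42, 0.38), (0.44, 0.31))):
    print(f"concentration {conc:.1f}  ->  A(R, T) = {dahm_absorption_remission(R, T):.3f}")
```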

  19. Optical breakdown threshold investigation of 1064 nm laser induced air plasmas

    International Nuclear Information System (INIS)

    Thiyagarajan, Magesh; Thompson, Shane

    2012-01-01

    We present the theoretical and experimental measurements and analysis of the optical breakdown threshold for dry air by 1064 nm infrared laser radiation and the significance of the multiphoton and collisional cascade ionization processes on the breakdown threshold measurements over the pressure range from 10 to 2000 Torr. Theoretical estimates of the breakdown threshold laser intensities and electric fields are obtained using two distinct theories, namely multiphoton and collisional cascade ionization theories. The theoretical estimates are validated by experimental measurements and analysis of laser-induced breakdown processes in dry air at a wavelength of 1064 nm by focusing 450 mJ max, 6 ns, 75 MW max high-power 1064 nm IR laser radiation onto a 20 μm radius spot size that produces laser intensities up to 3-6 TW/cm², sufficient for air ionization over the pressures of interest ranging from 10 to 2000 Torr. Analysis of the measured breakdown threshold laser intensities and electric fields is carried out in relation to classical and quantum theoretical ionization processes and operating pressures. A comparative analysis is made of the laser air breakdown results at 1064 nm with corresponding results at a shorter laser wavelength (193 nm) [M. Thiyagarajan and J. E. Scharer, IEEE Trans. Plasma Sci. 36, 2512 (2008)] and a longer microwave wavelength (10^8 nm) [A. D. MacDonald, Microwave Breakdown in Gases (Wiley, New York, 1966)]. A universal scaling analysis of the breakdown threshold measurements provided a direct comparison of breakdown threshold values over a wide range of frequencies, from microwave to ultraviolet. Comparison of the 1064 nm laser-induced effective field intensities for air breakdown with data calculated based on the collisional cascade and multiphoton breakdown theories is used successfully to determine the scaled collisional microwave portion. The measured breakdown threshold 1064 nm laser intensities are then scaled to
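
    The universal scaling mentioned here relies on an effective-field reduction; for orientation, the standard collisional-cascade form following MacDonald is given below (ν_c the electron-neutral collision frequency, ω the angular frequency of the applied field). This is a textbook relation, not a formula quoted in this record.

```latex
% Standard effective-field reduction used in microwave/laser breakdown scaling:
E_{\mathrm{eff}}^{2} \;=\; E_{\mathrm{rms}}^{2}\,\frac{\nu_c^{2}}{\nu_c^{2}+\omega^{2}}
```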

  20. Thresholds of ion turbulence in tokamaks

    International Nuclear Information System (INIS)

    Garbet, X.; Laurent, L.; Mourgues, F.; Roubin, J.P.; Samain, A.; Zou, X.L.

    1991-01-01

    The linear thresholds of ionic turbulence are numerically calculated for the tokamaks JET and TORE SUPRA. It is proved that the stability domain at η_i > 0 is determined by trapped ion modes and is characterized by η_i ≥ 1 and a threshold L_Ti/R of order (0.2-0.3)/(1 + T_i/T_e). The latter value is significantly smaller than what has been previously predicted. Experimental temperature profiles in heated discharges are usually marginal with respect to this criterion. It is also shown that the eigenmodes are low-frequency, low-wavenumber ballooned modes, which may produce a very large transport once the threshold ion temperature gradient is reached
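
    A quick numeric reading of the marginality criterion quoted above, using only the numbers given in this record; the coefficient value and the T_i/T_e ratios below are arbitrary illustrative choices.

```python
# Hedged numeric illustration of the quoted trapped-ion-mode threshold:
# L_Ti / R ~ (0.2 .. 0.3) / (1 + Ti/Te).
def lti_over_r_threshold(ti_over_te: float, coeff: float = 0.25) -> float:
    """Threshold value of L_Ti/R for a given Ti/Te (coeff taken in 0.2..0.3)."""
    return coeff / (1.0 + ti_over_te)

for ratio in (0.5, 1.0, 2.0):
    print(f"Ti/Te = {ratio:3.1f}  ->  (L_Ti/R)_crit ~ {lti_over_r_threshold(ratio):.3f}")
```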

  1. Numerical investigation of the inertial cavitation threshold under multi-frequency ultrasound.

    Science.gov (United States)

    Suo, Dingjie; Govind, Bala; Zhang, Shengqi; Jing, Yun

    2018-03-01

    Through the introduction of multi-frequency sonication in High Intensity Focused Ultrasound (HIFU), enhancement of efficiency has been noted in several applications including thrombolysis, tissue ablation, sonochemistry, and sonoluminescence. One key experimental observation is that multi-frequency ultrasound can help lower the inertial cavitation threshold, thereby improving the power efficiency. However, this has not been well corroborated by theory. In this paper, a numerical investigation on the inertial cavitation threshold of microbubbles (MBs) under multi-frequency ultrasound irradiation is conducted. The relationships between the cavitation threshold and MB size at various frequencies and in different media are investigated. The results of single-, dual- and triple-frequency sonication show that introducing additional frequencies reduces the inertial cavitation threshold, consistent with previous experimental work. In addition, no significant difference is observed between dual-frequency sonications with various frequency differences. This study not only reaffirms the benefit of using multi-frequency ultrasound for various applications, but also provides a possible route for optimizing ultrasound excitations for initiating inertial cavitation. Copyright © 2017 Elsevier B.V. All rights reserved.
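
    A hedged sketch of such a numerical study: the abstract does not name its bubble model, so a plain Rayleigh-Plesset equation with a two-frequency drive is used here as a stand-in; the material parameters, drive levels and the R_max/R_0 > 2 "inertial" rule of thumb are illustrative assumptions, not the paper's settings.

```python
# Illustrative Rayleigh-Plesset sketch with a multi-frequency acoustic drive.
import numpy as np
from scipy.integrate import solve_ivp

rho, p0, sigma, mu, kappa = 998.0, 101325.0, 0.072, 1.0e-3, 1.4   # water, SI units
R0 = 1.0e-6                                                       # 1 um equilibrium radius

def rp_rhs(t, y, amps, freqs):
    """Rayleigh-Plesset right-hand side with a sum-of-sines pressure drive."""
    R, Rdot = y
    p_drive = sum(A * np.sin(2.0 * np.pi * f * t) for A, f in zip(amps, freqs))
    p_gas = (p0 + 2.0 * sigma / R0) * (R0 / R) ** (3.0 * kappa)
    acc = ((p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R - p0 - p_drive) / (rho * R)
           - 1.5 * Rdot ** 2 / R)
    return [Rdot, acc]

def max_expansion(amps, freqs, cycles=10):
    t_end = cycles / min(freqs)
    sol = solve_ivp(rp_rhs, (0.0, t_end), [R0, 0.0], args=(amps, freqs),
                    max_step=1.0e-9, rtol=1e-6, atol=[1e-12, 1e-3])
    return sol.y[0].max() / R0

print("single frequency, 50 kPa @ 1 MHz        :", round(max_expansion([50e3], [1.0e6]), 3))
print("dual frequency, 25 kPa @ 1 MHz + 3 MHz  :", round(max_expansion([25e3, 25e3], [1.0e6, 3.0e6]), 3))
# A threshold study would sweep the drive amplitude(s) and record where
# max R/R0 first exceeds ~2 (a common inertial-cavitation criterion).
```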

  2. Classical Noether theory with application to the linearly damped particle

    International Nuclear Information System (INIS)

    Leone, Raphaël; Gourieux, Thierry

    2015-01-01

    This paper provides a modern presentation of Noether’s theory in the realm of classical dynamics, with application to the problem of a particle submitted to both a potential and a linear dissipation. After a review of the close relationships between Noether symmetries and first integrals, we investigate the variational point symmetries of the Lagrangian introduced by Bateman, Caldirola and Kanai. This analysis leads to the determination of all the time-independent potentials allowing such symmetries, in the one-dimensional and the radial cases. Then we develop a symmetry-based transformation of Lagrangians into autonomous others, and apply it to our problem. To be complete, we enlarge the study to Lie point symmetries which we associate logically to the Noether ones. Finally, we succinctly address the issue of a ‘weakened’ Noether’s theory, in connection with ‘on-flows’ symmetries and non-local constants of motion, because it has a direct physical interpretation in our specific problem. Since the Lagrangian we use gives rise to simple calculations, we hope that this work will be of didactic interest to graduate students, and give teaching material as well as food for thought for physicists regarding Noether’s theory and the recent developments around the idea of symmetry in classical mechanics. (paper)
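
    For orientation, the damped-particle Lagrangian referred to above is usually written in the standard form below (mass m, damping rate γ, potential V); the paper's own conventions may differ.

```latex
% Standard Bateman-Caldirola-Kanai Lagrangian for a linearly damped particle:
L(x,\dot{x},t) \;=\; e^{\gamma t}\left[\tfrac{1}{2}\,m\,\dot{x}^{2} - V(x)\right]
```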

  3. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    International Nuclear Information System (INIS)

    Little, M.P.

    2010-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported

  4. Threshold current for fireball generation

    Science.gov (United States)

    Dijkhuis, Geert C.

    1982-05-01

    Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.

  5. Detecting fragmentation extinction thresholds for forest understory plant species in peninsular Spain.

    Science.gov (United States)

    Rueda, Marta; Moreno Saiz, Juan Carlos; Morales-Castilla, Ignacio; Albuquerque, Fabio S; Ferrero, Mila; Rodríguez, Miguel Á

    2015-01-01

    Ecological theory predicts that fragmentation aggravates the effects of habitat loss, yet empirical results show mixed evidence, failing to support the theory and instead reinforcing the primary importance of habitat loss. Fragmentation hypotheses have received much attention due to their potential implications for biodiversity conservation; however, animal studies have traditionally been their main focus. Here we assess variation in species sensitivity to forest amount and fragmentation and evaluate whether fragmentation is related to extinction thresholds in forest understory herbs and ferns. Our expectation was that forest herbs would be more sensitive to fragmentation than ferns due to their lower dispersal capabilities. Using forest cover percentage and the proportion of this percentage occurring in the largest patch within UTM cells of 10-km resolution covering Peninsular Spain, we partitioned the effects of forest amount versus fragmentation and applied logistic regression to model occurrences of 16 species. For nine models showing robustness according to a set of quality criteria we subsequently defined two empirical fragmentation scenarios, minimum and maximum, and quantified species' sensitivity to forest contraction with no fragmentation, and to fragmentation under constant forest cover. We finally assessed how the extinction threshold of each species (the habitat amount below which it cannot persist) varies under no and maximum fragmentation. Consistent with their preference for forest habitats, occurrence probabilities of all species decreased as forest cover contracted. On average, herbs did not show significant sensitivity to fragmentation whereas ferns were favored. In line with theory, fragmentation yielded higher extinction thresholds for two species. For the remaining species, fragmentation had either positive or non-significant effects. We interpret these differences as reflecting species-specific traits and conclude that although forest amount is of
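
    A minimal sketch of the modelling strategy described above, assuming synthetic presence/absence data: `forest` is the percentage of forest cover in a cell and `frag` an index of how little of that cover sits in the largest patch, and the extinction threshold is read off as the cover at which the predicted occurrence probability drops below a chosen cut-off. All names and numbers are illustrative, not the study's data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        forest = rng.uniform(0, 100, n)            # % forest cover per 10-km cell
        frag = rng.uniform(0, 1, n)                # 0 = one large patch, 1 = highly fragmented

        # synthetic species responding to cover and (negatively) to fragmentation
        logit = -4.0 + 0.08 * forest - 1.5 * frag
        presence = rng.random(n) < 1 / (1 + np.exp(-logit))

        model = LogisticRegression().fit(np.column_stack([forest, frag]), presence)

        def extinction_threshold(frag_level, cutoff=0.5):
            """Smallest forest cover at which predicted occurrence probability >= cutoff."""
            cover = np.linspace(0, 100, 1001)
            X = np.column_stack([cover, np.full_like(cover, frag_level)])
            p = model.predict_proba(X)[:, 1]
            above = cover[p >= cutoff]
            return above[0] if above.size else np.nan

        print("threshold, no fragmentation:      ", extinction_threshold(0.0))
        print("threshold, maximum fragmentation: ", extinction_threshold(1.0))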

  6. Detecting fragmentation extinction thresholds for forest understory plant species in peninsular Spain.

    Directory of Open Access Journals (Sweden)

    Marta Rueda

    Full Text Available Ecological theory predicts that fragmentation aggravates the effects of habitat loss, yet empirical results show mixed evidence, failing to support the theory and instead reinforcing the primary importance of habitat loss. Fragmentation hypotheses have received much attention due to their potential implications for biodiversity conservation; however, animal studies have traditionally been their main focus. Here we assess variation in species sensitivity to forest amount and fragmentation and evaluate whether fragmentation is related to extinction thresholds in forest understory herbs and ferns. Our expectation was that forest herbs would be more sensitive to fragmentation than ferns due to their lower dispersal capabilities. Using forest cover percentage and the proportion of this percentage occurring in the largest patch within UTM cells of 10-km resolution covering Peninsular Spain, we partitioned the effects of forest amount versus fragmentation and applied logistic regression to model occurrences of 16 species. For nine models showing robustness according to a set of quality criteria we subsequently defined two empirical fragmentation scenarios, minimum and maximum, and quantified species' sensitivity to forest contraction with no fragmentation, and to fragmentation under constant forest cover. We finally assessed how the extinction threshold of each species (the habitat amount below which it cannot persist) varies under no and maximum fragmentation. Consistent with their preference for forest habitats, occurrence probabilities of all species decreased as forest cover contracted. On average, herbs did not show significant sensitivity to fragmentation whereas ferns were favored. In line with theory, fragmentation yielded higher extinction thresholds for two species. For the remaining species, fragmentation had either positive or non-significant effects. We interpret these differences as reflecting species-specific traits and conclude that although

  7. Recirculating beam-breakup thresholds for polarized higher-order modes with optical coupling

    Directory of Open Access Journals (Sweden)

    Georg H. Hoffstaetter

    2007-04-01

    Full Text Available Here we will derive the general theory of the beam-breakup (BBU) instability in recirculating linear accelerators with coupled beam optics and with polarized higher-order dipole modes. The bunches do not have to be at the same radio-frequency phase during each recirculation turn. This is important for the description of energy recovery linacs (ERLs) where beam currents become very large and coupled optics are used on purpose to increase the threshold current. This theory can be used for the analysis of phase errors of recirculated bunches, and of errors in the optical coupling arrangement. It is shown how the threshold current for a given linac can be computed and a remarkable agreement with tracking data is demonstrated. General formulas are then analyzed for several analytically solvable problems: (a) Why can different higher order modes (HOM) in one cavity couple and why can they then not be considered individually, even when their frequencies are separated by much more than the resonance widths of the HOMs? For the Cornell ERL as an example, it is noted that optimum advantage is taken of coupled optics when the cavities are designed with an x-y HOM frequency splitting of above 50 MHz. The simulated threshold current is then far above the design current of this accelerator. To justify that the simulation can represent an actual accelerator, we simulate cavities with 1 to 8 modes and show that using a limited number of modes is reasonable. (b) How does the x-y coupling in the particle optics determine when modes can be considered separately? (c) How much of an increase in threshold current can be obtained by coupled optics and why does the threshold current for polarized modes diminish roughly with the square root of the HOMs’ quality factors? Because of this square root scaling, polarized modes with coupled optics increase the threshold current more effectively for cavities that have rather large HOM quality factors, e.g. those without very

  8. Linear theory period ratios for surface helium enhanced double-mode Cepheids

    International Nuclear Information System (INIS)

    Cox, A.N.; Hodson, S.W.; King, D.S.

    1979-01-01

    Linear nonadiabatic theory period ratios for models of double-mode Cepheids with their two periods between 1 and 7 days have been computed, assuming differing amounts and depths of surface helium enhancement. Evolution theory masses and luminosities are found to be consistent with the observed periods. All models give Π_1/Π_0 ≈ 0.70 as observed for the 11 known variables, contrary to previous theoretical conclusions. The composition structure that best fits the period ratios has the helium mass fraction in the outer 10^-3 of the stellar mass (T ≤ 250,000 K) as 0.65, similar to a previous model for the triple-mode pulsator AC And. This enrichment can be established by a Cepheid wind and downward inverted μ gradient instability mixing in the lifetime of these low-mass classical Cepheids

  9. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
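
    A small illustration of the check-splitting step described above, assuming a protograph is represented by its base (proto)matrix of edge multiplicities; splitting one check row into two rows joined by a new degree-2 variable node lowers the design rate while preserving the other connections. This is a schematic of the construction idea, not the authors' code.

        import numpy as np

        def split_check(B, row, left_cols):
            """Split check `row` of base matrix B into two checks connected by a new
            degree-2 variable node; `left_cols` keeps its edges on the first of the
            two new checks, the rest move to the second."""
            m, n = B.shape
            top = np.zeros(n, dtype=int)
            bot = B[row].copy()
            top[left_cols] = B[row, left_cols]
            bot[left_cols] = 0
            B2 = np.vstack([B[:row], top, bot, B[row + 1:]])
            # new degree-2 variable node tying the two halves together
            v = np.zeros((B2.shape[0], 1), dtype=int)
            v[row], v[row + 1] = 1, 1
            return np.hstack([B2, v])

        # rate-1/2 toy protograph with all variable nodes of degree >= 3
        B = np.array([[2, 1, 2, 1],
                      [1, 2, 1, 2]])
        B_low = split_check(B, row=0, left_cols=[0, 1])
        print(B_low)                                   # 3 checks, 5 variable nodes
        print("design rate:", 1 - B_low.shape[0] / B_low.shape[1])
        # one degree-2 node, which stays below the (checks - 1) bound mentioned above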

  10. A non-linear field theory

    International Nuclear Information System (INIS)

    Skyrme, T.H.R.

    1994-01-01

    A unified field theory of mesons and their particle sources is proposed and considered in its classical aspects. The theory has static solutions of a singular nature, but finite energy, characterized by spin directions; the number of such entities is a rigorously conserved constant of motion; they interact with an external meson field through a derivative-type coupling with the spins, akin to the formalism of strong-coupling meson theory. There is a conserved current identifiable with isobaric spin, and another that may be related to hypercharge. The postulates include one constant of the dimensions of length, and another that is conjectured necessarily to have the value ℏc, or perhaps ℏc/2, in the quantized theory. (author). 5 refs

  11. Effects of polarization and absorption on laser induced optical breakdown threshold for skin rejuvenation

    Science.gov (United States)

    Varghese, Babu; Bonito, Valentina; Turco, Simona; Verhagen, Rieko

    2016-03-01

    Laser induced optical breakdown (LIOB) is a non-linear absorption process leading to plasma formation at locations where the threshold irradiance for breakdown is surpassed. In this paper we experimentally demonstrate the influence of polarization and absorption on the laser-induced breakdown threshold in transparent, absorbing and scattering phantoms made from water suspensions of polystyrene microspheres. We demonstrate that radially polarized light yields a lower irradiance threshold for creating optical breakdown compared to linearly polarized light. We also demonstrate that the thermal initiation pathway used for generating seed electrons results in a lower irradiance threshold compared to the multiphoton initiation pathway used for optical breakdown.

  12. Optimization Problems on Threshold Graphs

    Directory of Open Access Journals (Sweden)

    Elena Nechita

    2010-06-01

    Full Text Available During the last three decades, different types of decompositions have been studied in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristics of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weakly decomposition, and determine the density, the stability number, the Wiener index and the Wiener polynomial of threshold graphs.
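
    As a quick illustration of the objects involved, a threshold graph can be grown from a creation sequence in which each new vertex is added either as an isolated vertex or as a dominating vertex (joined to everything already present). The sketch below builds one such graph and computes two of the invariants mentioned, the density and the Wiener index, using networkx; the creation sequence is arbitrary.

        import networkx as nx

        def threshold_graph(sequence):
            """Build a threshold graph from a creation sequence of
            'i' (isolated vertex) and 'd' (dominating vertex) symbols."""
            G = nx.Graph()
            for v, kind in enumerate(sequence):
                existing = list(G.nodes)
                G.add_node(v)
                if kind == 'd':                      # dominating: join to all earlier vertices
                    G.add_edges_from((v, u) for u in existing)
            return G

        G = threshold_graph("iididd")                # arbitrary creation sequence
        print("density:     ", nx.density(G))
        print("Wiener index:", nx.wiener_index(G))   # sum of pairwise distances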

  13. Mirror structures above and below the linear instability threshold: Cluster observations, fluid model and hybrid simulations

    Directory of Open Access Journals (Sweden)

    V. Génot

    2009-02-01

    Full Text Available Using 5 years of Cluster data, we present a detailed statistical analysis of magnetic fluctuations associated with mirror structures in the magnetosheath. We especially focus on the shape of these fluctuations which, in addition to quasi-sinusoidal forms, also display deep holes and high peaks. The occurrence frequency and the most probable location of the various types of structures are discussed, together with their relation to local plasma parameters. While these properties have previously been correlated to the β of the plasma, we emphasize here the influence of the distance to the linear mirror instability threshold. This enables us to interpret the observations of mirror structures in a stable plasma in terms of bistability and subcritical bifurcation. The data analysis is supplemented by the prediction of a quasi-static anisotropic MHD model and hybrid numerical simulations in an expanding box aimed at mimicking the magnetosheath plasma. This leads us to suggest a scenario for the formation and evolution of mirror structures.
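
    A minimal sketch of the "distance to the linear mirror instability threshold" used as the organizing parameter above, assuming the standard bi-Maxwellian threshold condition β_⊥(T_⊥/T_∥ − 1) > 1; the plasma parameters below are made-up examples, not Cluster measurements.

        def mirror_stability_parameter(beta_perp, t_perp_over_t_par):
            """Positive above the linear mirror threshold (unstable),
            negative below it (stable), for a bi-Maxwellian plasma."""
            return beta_perp * (t_perp_over_t_par - 1.0) - 1.0

        for beta_perp, anisotropy in [(1.5, 1.4), (2.0, 1.8), (0.8, 1.3)]:
            gamma = mirror_stability_parameter(beta_perp, anisotropy)
            state = "above threshold (linearly unstable)" if gamma > 0 else "below threshold (linearly stable)"
            print(f"beta_perp={beta_perp:.1f}, T_perp/T_par={anisotropy:.1f}: "
                  f"distance to threshold = {gamma:+.2f} ({state})")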

  14. Linear algebra and group theory

    CERN Document Server

    Smirnov, VI

    2011-01-01

    This accessible text by a Soviet mathematician features material not otherwise available to English-language readers. Its three-part treatment covers determinants and systems of equations, matrix theory, and group theory. 1961 edition.

  15. Common misinterpretations of the 'linear, no-threshold' relationship used in radiation protection

    International Nuclear Information System (INIS)

    Bond, V.P.; Sondhaus, C.A.

    1987-01-01

    Absorbed dose D is shown to be a composite variable, the product of the fraction of cells hit (I_H) and the mean 'dose' (hit size) z̄ to those cells. D is suitable for use with high level exposure (HLE) to radiation and its resulting acute organ effects because, since I_H = 1.0, it approximates closely enough the mean energy density in the cell as well as in the organ. However, with low level exposure (LLE) to radiation and its consequent probability of cancer induction from a single cell, the stochastic delivery of energy to cells results in a wide distribution of hit sizes z, and the expected mean value z̄ is constant with exposure. Thus, with LLE, only I_H varies with D, so that the apparent proportionality between 'dose' and the fraction of cells transformed is misleading. This proportionality therefore does not mean that any (cell) dose, no matter how small, can be lethal. Rather, it means that, in the exposure of a population of individual organisms consisting of the constituent relevant cells, there is a small probability of particle-cell interactions which transfer energy. The probability of a cell transforming and initiating a cancer can only be greater than zero if the hit size ('dose') to the cell is large enough. Otherwise stated, if the 'dose' is defined at the proper level of biological organization, namely the cell and not the organ, only a large dose z to that cell is effective. (orig.)
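
    A toy Monte Carlo sketch of the point being made: at low macroscopic dose the fraction of traversed cells grows with dose while the mean hit size per hit cell stays fixed, so D is proportional to the hit fraction I_H rather than to a 'dose' received by every cell. The numbers are arbitrary illustration values.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cells = 100_000
        mean_hits_per_unit_dose = 0.02   # arbitrary: expected particle traversals per cell per unit dose
        mean_hit_size = 1.0              # arbitrary specific-energy unit per traversal

        for dose in [0.01, 0.1, 1.0]:
            hits = rng.poisson(mean_hits_per_unit_dose * dose, n_cells)
            hit_fraction = np.mean(hits > 0)                      # I_H grows ~linearly with dose
            z_bar = mean_hit_size * hits[hits > 0].mean()         # mean hit size of hit cells: ~constant
            print(f"D={dose:5.2f}  I_H={hit_fraction:.5f}  z_bar={z_bar:.3f}")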

  16. Sequential double excitations from linear-response time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Mosquera, Martín A.; Ratner, Mark A.; Schatz, George C., E-mail: g-schatz@northwestern.edu [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Chen, Lin X. [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Chemical Sciences and Engineering Division, Argonne National Laboratory, 9700 South Cass Ave., Lemont, Illinois 60439 (United States)

    2016-05-28

    Traditional UV/vis and X-ray spectroscopies focus mainly on the study of excitations starting exclusively from electronic ground states. However there are many experiments where transitions from excited states, both absorption and emission, are probed. In this work we develop a formalism based on linear-response time-dependent density functional theory to investigate spectroscopic properties of excited states. We apply our model to study the excited-state absorption of a diplatinum(II) complex under X-rays, and transient vis/UV absorption of pyrene and azobenzene.

  17. The flow analysis of supercavitating cascade by linear theory

    Energy Technology Data Exchange (ETDEWEB)

    Park, E.T. [Sung Kyun Kwan Univ., Seoul (Korea, Republic of); Hwang, Y. [Seoul National Univ., Seoul (Korea, Republic of)

    1996-06-01

    In order to reduce damage due to cavitation and to improve the performance of fluid machinery, supercavitation around the cascade and the hydraulic characteristics of the supercavitating cascade must be analyzed accurately; studying the effects of cavitation on fluid machinery and the performance of supercavitating hydrofoils through the various elements governing the flow field is therefore critically important. In this study, experimental results were compared with results computed from linear theory using the singularity method. Specifically, singularities such as sources and vortices were distributed on the hydrofoil and the free streamline to analyze the two-dimensional flow field of the supercavitating cascade; the governing equations of the flow field were derived, and the hydraulic characteristics of the cascade were calculated by numerical analysis of these equations. 7 refs., 6 figs.

  18. Neuronal spike-train responses in the presence of threshold noise.

    Science.gov (United States)

    Coombes, S; Thul, R; Laudanski, J; Palmer, A R; Sumner, C J

    2011-03-01

    The variability of neuronal firing has been an intense topic of study for many years. From a modelling perspective it has often been studied in conductance based spiking models with the use of additive or multiplicative noise terms to represent channel fluctuations or the stochastic nature of neurotransmitter release. Here we propose an alternative approach using a simple leaky integrate-and-fire model with a noisy threshold. Initially, we develop a mathematical treatment of the neuronal response to periodic forcing using tools from linear response theory and use this to highlight how a noisy threshold can enhance downstream signal reconstruction. We further develop a more general framework for understanding the responses to large amplitude forcing based on a calculation of first passage times. This is ideally suited to understanding stochastic mode-locking, for which we numerically determine the Arnol'd tongue structure. An examination of data from regularly firing stellate neurons within the ventral cochlear nucleus, responding to sinusoidally amplitude modulated pure tones, shows tongue structures consistent with these predictions and highlights that stochastic, as opposed to deterministic, mode-locking is utilised at the level of the single stellate cell to faithfully encode periodic stimuli.
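
    A minimal sketch of the kind of model described: a leaky integrate-and-fire neuron driven by a sinusoid, with the firing threshold redrawn from a Gaussian after each spike. Parameter values are illustrative only; the paper's analytical machinery (linear response, first-passage times, Arnol'd tongues) is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, T = 1e-4, 2.0                 # time step (s), duration (s)
        tau, v_reset = 0.02, 0.0          # membrane time constant, reset potential
        theta_mean, theta_sd = 1.0, 0.05  # noisy threshold: mean and spread
        drive = lambda t: 1.05 + 0.2 * np.sin(2 * np.pi * 10.0 * t)   # periodic forcing

        v, theta = 0.0, rng.normal(theta_mean, theta_sd)
        spike_times = []
        for step in range(int(T / dt)):
            t = step * dt
            v += dt * (-v + drive(t)) / tau               # leaky integration
            if v >= theta:                                # threshold crossing -> spike
                spike_times.append(t)
                v = v_reset
                theta = rng.normal(theta_mean, theta_sd)  # redraw the threshold after each spike

        isi = np.diff(spike_times)
        print(f"{len(spike_times)} spikes, mean ISI = {isi.mean()*1e3:.1f} ms, CV = {isi.std()/isi.mean():.2f}")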

  19. Threshold law for positron-atom impact ionisation

    International Nuclear Information System (INIS)

    Temkin, A.

    1982-01-01

    The threshold law for ionisation of atoms by positron impact is adduced in analogy with the author's approach to the electron-atom ionisation. It is concluded the Coulomb-dipole region of potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process which enters positron ionisation is positronium formation in the continuum, but that will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law as recently derived by Klar (J. Phys. B.; 14:4165 (1981)) on the basis of a Wannier-type (Phys. Rev.; 90:817 (1953)) analysis. (author)

  20. High-damage-threshold static laser beam shaping using optically patterned liquid-crystal devices.

    Science.gov (United States)

    Dorrer, C; Wei, S K-H; Leung, P; Vargas, M; Wegman, K; Boulé, J; Zhao, Z; Marshall, K L; Chen, S H

    2011-10-15

    Beam shaping of coherent laser beams is demonstrated using liquid crystal (LC) cells with optically patterned pixels. The twist angle of a nematic LC is locally set to either 0 or 90° by an alignment layer prepared via exposure to polarized UV light. The two distinct pixel types induce either no polarization rotation or a 90° polarization rotation, respectively, on a linearly polarized optical field. An LC device placed between polarizers functions as a binary transmission beam shaper with a highly improved damage threshold compared to metal beam shapers. Using a coumarin-based photoalignment layer, various devices have been fabricated and tested, with a measured single-shot nanosecond damage threshold higher than 30 J/cm2.

  1. Linear systems formulation of scattering theory for rough surfaces with arbitrary incident and scattering angles.

    Science.gov (United States)

    Krywonos, Andrey; Harvey, James E; Choi, Narak

    2011-06-01

    Scattering effects from microtopographic surface roughness are merely nonparaxial diffraction phenomena resulting from random phase variations in the reflected or transmitted wavefront. Rayleigh-Rice, Beckmann-Kirchhoff, or Harvey-Shack surface scatter theories are commonly used to predict surface scatter effects. Smooth-surface and/or paraxial approximations have severely limited the range of applicability of each of the above theoretical treatments. A recent linear systems formulation of nonparaxial scalar diffraction theory applied to surface scatter phenomena resulted first in an empirically modified Beckmann-Kirchhoff surface scatter model, then in a generalized Harvey-Shack theory that produces accurate results for rougher surfaces than the Rayleigh-Rice theory and for larger incident and scattered angles than the classical Beckmann-Kirchhoff and the original Harvey-Shack theories. These new developments simplify the analysis and understanding of nonintuitive scattering behavior from rough surfaces illuminated at arbitrary incident angles.

  2. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions
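
    A bare-bones sketch of the regression step that methods of this kind build on (in the spirit of csSAM-style deconvolution, not the LRCDE package itself): bulk expression of one gene is regressed on measured cell-type proportions within each group, and a per-gene statistic compares the cell-type-specific coefficients between cases and controls. The data are simulated, and the pooled-variance z-like statistic is a simplification of the package's per-gene t-statistic.

        import numpy as np

        rng = np.random.default_rng(3)
        n, k = 60, 3                              # samples per group, cell types

        def simulate_group(cell_expr):
            props = rng.dirichlet(np.ones(k), n)              # measured cell proportions
            bulk = props @ cell_expr + rng.normal(0, 0.3, n)  # heterogeneous bulk expression
            return props, bulk

        def fit(props, bulk):
            """Least-squares estimate of cell-type-specific expression and its standard errors."""
            beta, *_ = np.linalg.lstsq(props, bulk, rcond=None)
            resid = bulk - props @ beta
            sigma2 = resid @ resid / (n - k)
            cov = sigma2 * np.linalg.inv(props.T @ props)
            return beta, np.sqrt(np.diag(cov))

        # cell type 1 is simulated as differentially expressed between controls and cases
        beta_c, se_c = fit(*simulate_group(np.array([5.0, 2.0, 1.0])))
        beta_d, se_d = fit(*simulate_group(np.array([5.0, 3.0, 1.0])))

        z = (beta_d - beta_c) / np.sqrt(se_c**2 + se_d**2)
        for ct in range(k):
            print(f"cell type {ct}: control={beta_c[ct]:.2f} case={beta_d[ct]:.2f} z={z[ct]:+.2f}")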

  3. Linear theory of the tearing instability in axisymmetric toroidal devices

    International Nuclear Information System (INIS)

    Rogister, A.; Singh, R.

    1988-08-01

    We derive a very general kinetic equation describing the linear evolution of low m/l modes in axisymmetric toroidal plasmas with arbitrary cross sections. Included are: ion sound, inertia, diamagnetic drifts, finite poloidal beta, and finite ion Larmor radius effects. Assuming the magnetic surfaces to form a set of nested tori with circular cross sections of shifted centers, and introducing adequate simplifications justified by our knowledge of experimental tokamak plasmas, we then obtain explicitly the sets of equations describing the coupling of the quasimodes 0/1, 1/1, 2/1, and, for m≥2, m/1, (m+1)/1. By taking finite aspect ratio effects into account when calculating the jump of the derivative of the eigenfunction, it is shown that the theory can explain the rapid evolution, within one sawtooth period, of the growth rate of the sawtooth precursors from resistive values to magnetohydrodynamic ones. The characteristics thus theoretically required from current profiles in sawtoothing discharges have clearly been observed. Other aspects of the full theory could be relevant to the phenomenon of major disruptions. (orig.)

  4. T-Cell Activation: A Queuing Theory Analysis at Low Agonist Density

    OpenAIRE

    Wedagedera, J. R.; Burroughs, N. J.

    2006-01-01

    We analyze a simple linear triggering model of the T-cell receptor (TCR) within the framework of queuing theory, in which TCRs enter the queue upon full activation and exit by downregulation. We fit our model to four experimentally characterized threshold activation criteria and analyze their specificity and sensitivity: the initial calcium spike, cytotoxicity, immunological synapse formation, and cytokine secretion. Specificity characteristics improve as the time window for detection increas...
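
    A rough sketch of the queueing picture, assuming fully triggered TCRs arrive as a Poisson stream (rate proportional to agonist density) and each departs independently after an exponential downregulation time (an M/M/infinity queue); activation is scored if the queue length reaches a threshold N within a time window. Rates, threshold, and window are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        def p_activation(arrival_rate, depart_rate=0.1, threshold=10, window=300.0, trials=2000):
            """Probability that the number of triggered TCRs in service
            reaches `threshold` within `window` (Gillespie simulation)."""
            hits = 0
            for _ in range(trials):
                t, q = 0.0, 0
                while t < window and q < threshold:
                    total = arrival_rate + depart_rate * q
                    t += rng.exponential(1.0 / total)
                    q += 1 if rng.random() < arrival_rate / total else -1
                hits += q >= threshold
            return hits / trials

        for lam in [0.2, 0.5, 1.0, 2.0]:     # arrival rate ~ agonist density (arbitrary units)
            print(f"arrival rate {lam:4.1f}: P(activation) = {p_activation(lam):.3f}")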

  5. Selected topics in the quantum theory of solids: collective excitations and linear response

    International Nuclear Information System (INIS)

    Balakrishnan, V.

    1977-08-01

    This report is based on the lecture notes of a course given at the Department of Physics, Indian Institute of Technology, Madras, during the period January-April 1976 for M.Sc. students. The emphasis is on the concept of elementary excitations in many-body systems, and on the technique of linear response theory. Various topics are covered in 7 sections. The second section following the introductory section is on 'second quantization' and includes discussion on creation and destruction operators, multiparticle states, time-dependent operators etc. Section 3 deals with the 'electron gas' and includes discussion on non-interacting Fermi gas, Coulomb interaction and exchange energy, the two-electron correlation function etc. Section 4 deals with the dielectric response analysis of the electron gas and includes discussion on Coulomb interaction in terms of density fluctuations, self-consistent field dielectric function etc. In section 5 the 'linear response theory' is explained. The Liouville operator, Boltzmann's superposition integral, dispersion relations etc. are explained. Quasiparticles and plasmous are discussed in the Section 6. Section 7 deals with 'lattice dynamics and phonons'. In the last section 8, spin waves are explained. The Heisenberg exchange hamiltonian, Green Function for noninteracting magnons etc. are discussed. (author)

  6. Analysis of the non-linear effect of net asset value on the pricing of equity investment funds

    Directory of Open Access Journals (Sweden)

    Paulo Rogério Faustino Matos

    2012-01-01

    Full Text Available This article applies the Capital Asset Pricing Model (CAPM), in its canonical version and with non-linear extensions, to price a panel of 75 equity investment funds in Brazil over the last 11 years. The results suggest that the linear version of this framework is not able to price, or to predict the real returns of, funds with high net asset value (NAV) and high outperformance relative to the São Paulo Stock Exchange index (Ibovespa), corroborating previous evidence. The non-linear version with thresholds based on NAV appears to deal better with the issue of significant Jensen's alphas, despite being statistically indicated only for a few funds with high NAV but low outperformance. This is evidence that, although size influences the management and possibly the performance of a fund, the pricing of this effect should be modelled linearly.
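
    A compact sketch of the threshold idea in the abstract, assuming a simple two-regime CAPM in which alpha and beta are allowed to differ for funds above and below a net-asset-value (NAV) threshold; the data are simulated and the estimator is plain OLS with regime dummies, not the authors' specification.

        import numpy as np

        rng = np.random.default_rng(5)
        n_obs, nav_threshold = 600, 1.0e9          # observations, NAV cut-off (illustrative)

        market = rng.normal(0.01, 0.05, n_obs)     # market excess return
        nav = rng.lognormal(mean=20.5, sigma=1.0, size=n_obs)
        large = (nav > nav_threshold).astype(float)

        # simulated fund excess returns: large funds get extra alpha and a higher beta
        fund = 0.002 * large + (0.9 + 0.3 * large) * market + rng.normal(0, 0.02, n_obs)

        # two-regime CAPM: r = a0 + a1*large + (b0 + b1*large)*market + error
        X = np.column_stack([np.ones(n_obs), large, market, large * market])
        coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
        a0, a1, b0, b1 = coef
        print(f"small-fund regime: alpha={a0:.4f}, beta={b0:.2f}")
        print(f"large-fund regime: alpha={a0+a1:.4f}, beta={b0+b1:.2f}")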

  7. A threshold for dissipative fission

    International Nuclear Information System (INIS)

    Thoennessen, M.; Bertsch, G.F.

    1993-01-01

    The empirical domain of validity of statistical theory is examined as applied to fission data on pre-fission neutron, charged particle, and γ-ray multiplicities. Systematics of the threshold excitation energy for the appearance of nonstatistical fission are found. From the data on systems with not too high fissility, the relevant phenomenological parameter is the ratio of the threshold temperature T_thresh to the (temperature-dependent) fission barrier height E_Bar(T). The statistical model reproduces the data for T_thresh/E_Bar(T) below a critical value, with this ratio found to be roughly independent of the mass and fissility of the systems

  8. Threshold Law For Positron Impact Ionization of Atoms

    International Nuclear Information System (INIS)

    Ihra, W.; Mota-Furtado, F.; OMahony, P.F.; Macek, J.H.

    1997-01-01

    We demonstrate that recent experiments for positron impact ionization of He and H_2 can be interpreted by extending Wannier theory to higher energies. Anharmonicities in the expansion of the three-particle potential around the Wannier configuration give rise to corrections in the threshold behavior of the breakup cross section. These corrections are taken into account perturbatively by employing the hidden crossing theory. The resulting threshold law is σ(E) ∝ E^2.640 exp[-0.73√E]. The actual energy range for which the Wannier law is valid is found to be smaller for positron impact ionization than for electron impact ionization.

  9. Multipactor threshold calculation of coaxial transmission lines in microwave applications with nonstationary statistical theory

    International Nuclear Information System (INIS)

    Lin, S.; Li, Y.; Liu, C.; Wang, H.; Zhang, N.; Cui, W.; Neuber, A.

    2015-01-01

    This paper presents a statistical theory for the initial onset of multipactor breakdown in coaxial transmission lines, taking both the nonuniform electric field and random electron emission velocity into account. A general numerical method is first developed to construct the joint probability density function based on the approximate equation of the electron trajectory. The nonstationary dynamics of the multipactor process on both surfaces of coaxial lines are modelled based on the probability of various impacts and their corresponding secondary emission. The resonant assumption of the classical theory on the independent double-sided and single-sided impacts is replaced by the consideration of their interaction. As a result, the time evolutions of the electron population for exponential growth and absorption on both inner and outer conductor, in response to the applied voltage above and below the multipactor breakdown level, are obtained to investigate the exact mechanism of multipactor discharge in coaxial lines. Furthermore, the multipactor threshold predictions of the presented model are compared with experimental results using measured secondary emission yield of the tested samples which shows reasonable agreement. Finally, the detailed impact scenario reveals that single-surface multipactor is more likely to occur with a higher outer to inner conductor radius ratio

  10. Linear versus non-linear: a perspective from health physics and radiobiology

    International Nuclear Information System (INIS)

    Gentner, N.E.; Osborne, R.V.

    1998-01-01

    There is a vigorous debate about whether or not there may be a 'threshold' for radiation-induced adverse health effects. A linear-no threshold (LNT) model allows radiation protection practitioners to manage putative risk consistently, because different types of exposure, exposures at different times, and exposures to different organs may be summed. If we are to argue to regulators and the public that low doses are less dangerous than we presently assume, it is incumbent on us to prove this. The question is, therefore, whether any consonant body of evidence exists that the risk of low doses has been over-estimated. From the perspectives of both health physics and radiobiology, we conclude that the evidence for linearity at high doses (and arguably of fairly small total doses if delivered at high dose rate) is strong. For low doses (or in fact, even for fairly high doses) delivered at low dose rate, the evidence is much less compelling. Since statistical limitations at low doses are almost always going to prevent a definitive answer, one way or the other, from human data, we need a way out of this epistemological dilemma of 'LNT or not LNT, that is the question'. To our minds, the path forward is to exploit (1) radiobiological studies which address directly the question of what the dose and dose rate effectiveness factor is in actual human bodies exposed to low-level radiation, in concert with (2) epidemiological studies of human populations exposed to fairly high doses (to obtain statistical power) but where exposure was protracted over some years. (author)

  11. Hadronic equation of state in the statistical bootstrap model and linear graph theory

    International Nuclear Information System (INIS)

    Fre, P.; Page, R.

    1976-01-01

    Taking a statistical mechanical point of view, the statistical bootstrap model is discussed and, from a critical analysis of the bootstrap volume concept, a physical hypothesis is reached which leads immediately to the hadronic equation of state provided by the bootstrap integral equation. In this context the connection between the statistical bootstrap and the linear graph theory approach to interacting gases is also analyzed

  12. Linearizing feedforward/feedback attitude control

    Science.gov (United States)

    Paielli, Russell A.; Bach, Ralph E.

    1991-01-01

    An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, then the exact control law required to realize it is derived. The nonminimal (four-component) quaternion form is used for attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized; the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.

  13. Melanin microcavitation threshold in the near infrared

    Science.gov (United States)

    Schmidt, Morgan S.; Kennedy, Paul K.; Vincelette, Rebecca L.; Schuster, Kurt J.; Noojin, Gary D.; Wharmby, Andrew W.; Thomas, Robert J.; Rockwell, Benjamin A.

    2014-02-01

    Thresholds for microcavitation of isolated bovine and porcine melanosomes were determined using single nanosecond (ns) laser pulses in the NIR (1000 - 1319 nm) wavelength regime. Average fluence thresholds for microcavitation increased non-linearly with increasing wavelength. Average fluence thresholds were also measured for 10-ns pulses at 532 nm, and found to be comparable to visible ns pulse values published in previous reports. Fluence thresholds were used to calculate melanosome absorption coefficients, which decreased with increasing wavelength. This trend was found to be comparable to the decrease in retinal pigmented epithelial (RPE) layer absorption coefficients reported over the same wavelength region. Estimated corneal total intraocular energy (TIE) values were determined and compared to the current and proposed maximum permissible exposure (MPE) safe exposure levels. Results from this study support the proposed changes to the MPE levels.

  14. Non-linear time series analysis on flow instability of natural circulation under rolling motion condition

    International Nuclear Information System (INIS)

    Zhang, Wenchao; Tan, Sichao; Gao, Puzhen; Wang, Zhanwei; Zhang, Liansheng; Zhang, Hong

    2014-01-01

    Highlights: • Natural circulation flow instabilities in rolling motion are studied. • The method of non-linear time series analysis is used. • The non-linear evolution characteristics of the flow instability are analyzed. • Irregular complex flow oscillations are chaotic oscillations. • The effect of the rolling parameters on the threshold of chaotic oscillation is studied. - Abstract: Non-linear characteristics of natural circulation flow instabilities under rolling motion conditions were studied by the method of non-linear time series analysis. Experimental flow time series for different dimensionless powers and rolling parameters were analyzed based on phase space reconstruction theory. Attractors were reconstructed in phase space and the geometric invariants, including the correlation dimension, Kolmogorov entropy and largest Lyapunov exponent, were determined. The non-linear characteristics of natural circulation flow instabilities under rolling motion conditions were then assessed from this geometric invariant analysis. The results indicated that the values of the geometric invariants first increase and then decrease as the dimensionless power increases, indicating that the non-linear characteristics of the system first strengthen and then weaken. The irregular complex flow oscillation is a typical chaotic oscillation because the values of the geometric invariants are at their maximum there. The threshold of chaotic oscillation becomes larger as the rolling frequency or rolling amplitude increases. The main factors that influence the non-linear characteristics of the natural circulation system under rolling motion are the thermal driving force, the flow resistance and the additional forces caused by rolling motion. The non-linear characteristics of the natural circulation system under rolling motion change when the dimensionless power or the rolling parameters change, through the resulting change in the feedback and in the degree of coupling among these influencing factors
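
    A small numpy sketch of the phase-space-reconstruction workflow mentioned above: a scalar time series is delay-embedded and the Grassberger-Procaccia correlation sum is evaluated, whose log-log slope estimates the correlation dimension. The delay, embedding dimension, and test signal are arbitrary choices for illustration; Kolmogorov entropy and Lyapunov exponents would need further steps.

        import numpy as np

        def delay_embed(x, dim, tau):
            """Time-delay embedding of a scalar series into dim-dimensional state vectors."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

        def correlation_sum(Y, r):
            """Grassberger-Procaccia C(r): fraction of point pairs closer than r."""
            d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
            i, j = np.triu_indices(len(Y), k=1)
            return np.mean(d[i, j] < r)

        # stand-in signal for a measured flow oscillation (quasi-periodic plus weak noise)
        t = np.linspace(0.0, 60.0, 1200)
        x = np.sin(t) + 0.5 * np.sin(2.2 * t) + 0.01 * np.random.default_rng(6).normal(size=t.size)

        Y = delay_embed(x, dim=5, tau=15)
        radii = np.logspace(-0.7, 0.3, 8)      # chosen by eye; a real analysis would check the scaling region
        C = np.array([correlation_sum(Y, r) for r in radii])
        slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
        print(f"estimated correlation dimension ~ {slope:.2f}")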

  15. Threshold Characteristics of Slow-Light Photonic Crystal Lasers

    DEFF Research Database (Denmark)

    Xue, Weiqi; Yu, Yi; Ottaviano, Luisa

    2016-01-01

    The threshold properties of photonic crystal quantum dot lasers operating in the slow-light regime are investigated experimentally and theoretically. Measurements show that, in contrast to conventional lasers, the threshold gain attains a minimum value for a specific cavity length. The experimental results are explained by an analytical theory for the laser threshold that takes into account the effects of slow light and random disorder due to unavoidable fabrication imperfections. Longer lasers are found to operate deeper into the slow-light region, leading to a trade-off between slow-light induced...

  16. Phi-value analysis of a linear, sequential reaction mechanism: theory and application to ion channel gating.

    Science.gov (United States)

    Zhou, Yu; Pearson, John E; Auerbach, Anthony

    2005-12-01

    We derive the analytical form of a rate-equilibrium free-energy relationship (with slope Phi) for a bounded, linear chain of coupled reactions having arbitrary connecting rate constants. The results confirm previous simulation studies showing that Phi-values reflect the position of the perturbed reaction within the chain, with reactions occurring earlier in the sequence producing higher Phi-values than those occurring later in the sequence. The derivation includes an expression for the transmission coefficients of the overall reaction based on the rate constants of an arbitrary, discrete, finite Markov chain. The results indicate that experimental Phi-values can be used to calculate the relative heights of the energy barriers between intermediate states of the chain but provide no information about the energies of the wells along the reaction path. Application of the equations to the case of diliganded acetylcholine receptor channel gating suggests that the transition-state ensemble for this reaction is nearly flat. Although this mechanism accounts for many of the basic features of diliganded and unliganded acetylcholine receptor channel gating, the experimental rate-equilibrium free-energy relationships appear to be more linear than those predicted by the theory.

  17. Linear Programming (LP)

    International Nuclear Information System (INIS)

    Rogner, H.H.

    1989-01-01

    The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They provide a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig
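
    A tiny worked example of the kind of problem the simplex method solves, using scipy's linprog (which minimizes by default, so the objective is negated to maximize); the production-planning numbers are invented for illustration.

        from scipy.optimize import linprog

        # maximize 3*x1 + 2*x2  subject to resource constraints and x1, x2 >= 0
        #   1*x1 + 1*x2 <= 40    (labour hours)
        #   2*x1 + 1*x2 <= 60    (machine hours)
        res = linprog(c=[-3, -2],                      # negate to turn maximization into minimization
                      A_ub=[[1, 1], [2, 1]],
                      b_ub=[40, 60],
                      bounds=[(0, None), (0, None)])

        print("optimal plan:", res.x)                  # expected: x1 = 20, x2 = 20
        print("maximum objective value:", -res.fun)    # expected: 100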

  18. An analogue of Morse theory for planar linear networks and the generalized Steiner problem

    International Nuclear Information System (INIS)

    Karpunin, G A

    2000-01-01

    A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set

  19. Analytic theory of the gyrotron

    International Nuclear Information System (INIS)

    Lentini, P.J.

    1989-06-01

    An analytic theory is derived for a gyrotron operating in the linear gain regime. The gyrotron is a coherent source of microwave and millimeter wave radiation based on an electron beam emitting at the cyclotron resonance Ω in a strong, uniform magnetic field. Relativistic equations of motion and first-order perturbation theory are used. Results are obtained in both laboratory and normalized variables. An expression for cavity threshold gain is derived in the linear regime. An analytic expression for the electron phase angle in momentum space shows that the effect of the RF field is to form bunches, with phases equal to the unperturbed transit phase plus a correction term which varies as the sine of the input phase angle. The expression for the phase angle is plotted, and bunching effects in and out of phase (0 and -π) with respect to the RF field are evident for detunings leading to gain and absorption, respectively. For exact resonance, field frequency ω = Ω, a bunch also forms at a phase of -π/2. This beam yields the same energy exchange with the RF field as an unbunched (nonrelativistic) beam. 6 refs., 10 figs

  20. Schubert calculus and threshold polynomials of affine fusion

    International Nuclear Information System (INIS)

    Irvine, S.E.; Walton, M.A.

    2000-01-01

    We show how the threshold level of affine fusion, the fusion of Wess-Zumino-Witten (WZW) conformal field theories, fits into the Schubert calculus introduced by Gepner. The Pieri rule can be modified in a simple way to include the threshold level, so that calculations may be done for all (non-negative integer) levels at once. With the usual Giambelli formula, the modified Pieri formula deforms the tensor product coefficients (and the fusion coefficients) into what we call threshold polynomials. We compare them with the q-deformed tensor product coefficients and fusion coefficients that are related to q-deformed weight multiplicities. We also discuss the meaning of the threshold level in the context of paths on graphs

  1. Linear optical response of finite systems using multishift linear system solvers

    Energy Technology Data Exchange (ETDEWEB)

    Hübener, Hannes; Giustino, Feliciano [Department of Materials, University of Oxford, Oxford OX1 3PH (United Kingdom)

    2014-07-28

    We discuss the application of multishift linear system solvers to linear-response time-dependent density functional theory. Using this technique the complete frequency-dependent electronic density response of finite systems to an external perturbation can be calculated at the cost of a single solution of a linear system via conjugate gradients. We show that multishift time-dependent density functional theory yields excitation energies and oscillator strengths in perfect agreement with the standard diagonalization of the response matrix (Casida's method), while being computationally advantageous. We present test calculations for benzene, porphin, and chlorophyll molecules. We argue that multishift solvers may find broad applicability in the context of excited-state calculations within density-functional theory and beyond.

  2. Airfoil wake and linear theory gust response including sub and superresonant flow conditions

    Science.gov (United States)

    Henderson, Gregory H.; Fleeter, Sanford

    1992-01-01

    The unsteady aerodynamic gust response of a high solidity stator vane row is examined in terms of the fundamental gust modeling assumptions with particular attention given to the effects near an acoustic resonance. A series of experiments was performed with gusts generated by rotors comprised of perforated plates and airfoils. It is concluded that, for both the perforated plate and airfoil wake generated gusts, the unsteady pressure responses do not agree with the linear-theory gust predictions near an acoustic resonance. The effects of the acoustic resonance phenomena are clearly evident on the airfoil surface unsteady pressure responses. The transition of the measured lift coefficients across the acoustic resonance from the subresonant regime to the superresonant regime occurs in a simple linear fashion.

  3. Introducing Linear Functions: An Alternative Statistical Approach

    Science.gov (United States)

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  4. Double photoionization of helium near threshold

    International Nuclear Information System (INIS)

    Levin, J.C.; Armen, G.B.; Sellin, I.A.

    1996-01-01

    There has been substantial recent experimental interest in the ratio of double-to-single photoionization of He near threshold, following several theoretical observations that earlier measurements appear to overestimate the ratio, perhaps by as much as 25%, in the first several hundred eV above threshold. The authors' recent measurements are 10%-15% below these earlier results, and more recent results of Doerner et al. and Samson et al. are yet another 10% lower. The authors will compare these measurements with new data, not yet analyzed, and with available theory

  5. Cosmic no-hair in Brans-Dicke theory

    International Nuclear Information System (INIS)

    Guzman, E.; Alam, S.

    1993-08-01

    In this short note we report our finding that, within the context of an alternative version of the Brans-Dicke theory (for ω ≥ -3/2, where ω is the Brans-Dicke parameter), the anisotropic Bianchi-type cosmological models evolve towards the de Sitter isotropic universe. In short, it is shown that during inflation there is no difference between the Brans-Dicke theory and General Relativity. Our result can thus be viewed as a generalization of Wald's theorem for General Relativity. (author). 5 refs

  6. ‘Soglitude’- introducing a method of thinking thresholds

    Directory of Open Access Journals (Sweden)

    Tatjana Barazon

    2010-04-01

    Full Text Available ‘Soglitude’ is an invitation to acknowledge the existence of thresholds in thought. A threshold in thought designates the indetermination, the passage, the evolution of every state the world is in. The creation we add to it, and the objectivity we suppose: on the border of those two ideas lies our perceptive threshold. No state will ever be permanent, and in order to stress the temporary, fluent character of the world and our perception of it, we want to introduce a new suitable method to think change and transformation, when we acknowledge our own threshold nature. The contributions gathered in this special issue come from various disciplines: anthropology, philosophy, critical theory, film studies, political science, literature and history. The variety of these insights shows the resonance of the idea of threshold in every category of thought. We hope to enlarge the notion in further issues on physics and chemistry, as well as mathematics. The articles in this issue introduce the method of threshold thinking by showing the importance of the in-between, of the changing of perspective in their respective domain. The ‘Documents’ section, named INTERSTICES, includes a selection of poems, two essays, a philosophical-artistic project called ‘infraphysique’, a performance on thresholds in the soul, and a dialogue with Israel Rosenfield. This issue presents a kaleidoscope of possible threshold thinking and hopes to initiate new ways of looking at things. For every change that occurs in reality there is a subjective counterpart in our perception, and this needs to be acknowledged as such. What we name objective is reflected in our own personal perception in its own personal manner, in such a way that the objectivity of an event might altogether be questioned. The absolute point of view, the view from “nowhere”, could well be the projection that causes dogmatism. By introducing the method of thinking thresholds into a system, be it

  7. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth is much higher than economic growth for Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test developed with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan

  8. Nuclear thermodynamics below particle threshold

    International Nuclear Information System (INIS)

    Schiller, A.; Agvaanluvsan, U.; Algin, E.; Bagheri, A.; Chankova, R.; Guttormsen, M.; Hjorth-Jensen, M.; Rekstad, J.; Siem, S.; Sunde, A. C.; Voinov, A.

    2005-01-01

    From a starting point of experimentally measured nuclear level densities, we discuss thermodynamical properties of nuclei below the particle emission threshold. Since nuclei are essentially mesoscopic systems, a straightforward generalization of macroscopic ensemble theory often yields unphysical results. A careful critique of traditional thermodynamical concepts reveals problems commonly encountered in mesoscopic systems. One of these is the fact that microcanonical and canonical ensemble theory yield different results; another concerns the introduction of temperature for small, closed systems. Finally, the concept of phase transitions is investigated for mesoscopic systems

  9. The H-mode power threshold in JET

    Energy Technology Data Exchange (ETDEWEB)

    Start, D F.H.; Bhatnagar, V P; Campbell, D J; Cordey, J G; Esch, H P.L. de; Gormezano, C; Hawkes, N; Horton, L; Jones, T T.C.; Lomas, P J; Lowry, C; Righi, E; Rimini, F G; Saibene, G; Sartori, R; Sips, G; Stork, D; Thomas, P; Thomsen, K; Tubbing, B J.D.; Von Hellermann, M; Ward, D J [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking

    1994-07-01

    New H-mode threshold data over a range of toroidal field and density values have been obtained from the present campaign. The scaling with n_e B_t is almost identical to that of the 91/92 period for the same discharge conditions. The scaling with toroidal field alone gives somewhat higher thresholds than the older data. The 1991/2 database shows a scaling of P_th (power threshold) with n_e B_t which is approximately linear and agrees well with that observed on other tokamaks. For NBI and carbon target tiles the threshold power is a factor of two higher with the ion ∇B drift away from the target compared with the value found with the drift towards the target. The combination of ICRH and beryllium tiles appears to be beneficial for reducing P_th. The power threshold is largely insensitive to plasma current, X-point height and distance between the last closed flux surface and the limiter, at least for values greater than 2 cm. (authors). 3 refs., 6 figs.

  10. Two-point theory of current-driven ion-cyclotron turbulence

    International Nuclear Information System (INIS)

    Chiueh, T.; Diamond, P.H.

    1985-02-01

An analytical theory of current-driven ion-cyclotron turbulence which treats incoherent phase space density granulations (clumps) is presented. In contrast to previous investigations, attention is focused on the physically relevant regime of weak collective dissipation, where waves and clumps coexist. The threshold current for nonlinear instability is calculated, and is found to deviate from the linear threshold. A necessary condition for the existence of stationary wave-clump turbulence is derived, and shown to be analogous to the test particle model fluctuation-dissipation theorem result. The structure of three-dimensional magnetized clumps is characterized. It is proposed that instability is saturated by collective dissipation due to ion-wave scattering. For this wave-clump turbulence regime, it is found that the fluctuation level (eψ/T_e)_rms is less than or equal to 0.1, and that the modification of anomalous resistivity relative to levels predicted by conventional nonlinear wave theories is moderate. It is also shown that, in marked contrast to the quasilinear prediction, ion heating significantly exceeds electron heating

  11. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
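
As an illustration of the protograph ("copy-and-permute") construction mentioned in the abstract, the sketch below lifts a small base matrix into a larger binary parity-check matrix by replacing each base edge with a random circulant permutation block. The base matrix, lift size, and random shift selection are assumptions for demonstration; actual code designs choose the protograph and shifts carefully to control thresholds and minimum distance.

```python
import numpy as np

def lift_protograph(base, Z, rng):
    """Expand a protograph base matrix into a (rows*Z) x (cols*Z) parity-check
    matrix: each base entry b is replaced by the GF(2) sum of b random ZxZ
    circulant permutation matrices (toy lifting for illustration only)."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            for _ in range(base[i, j]):
                shift = rng.integers(Z)
                P = np.roll(np.eye(Z, dtype=int), shift, axis=1)  # circulant permutation
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= P                  # XOR over GF(2)
    return H

rng = np.random.default_rng(1)
base = np.array([[1, 2, 1, 1],       # illustrative protograph, incl. a degree-2 column
                 [2, 1, 1, 1]])
H = lift_protograph(base, Z=8, rng=rng)
print(H.shape, "first column weights:", H.sum(axis=0)[:8])
```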

  12. Non-linear σ-models and string theories

    International Nuclear Information System (INIS)

    Sen, A.

    1986-10-01

    The connection between σ-models and string theories is discussed, as well as how the σ-models can be used as tools to prove various results in string theories. Closed bosonic string theory in the light cone gauge is very briefly introduced. Then, closed bosonic string theory in the presence of massless background fields is discussed. The light cone gauge is used, and it is shown that in order to obtain a Lorentz invariant theory, the string theory in the presence of background fields must be described by a two-dimensional conformally invariant theory. The resulting constraints on the background fields are found to be the equations of motion of the string theory. The analysis is extended to the case of the heterotic string theory and the superstring theory in the presence of the massless background fields. It is then shown how to use these results to obtain nontrivial solutions to the string field equations. Another application of these results is shown, namely to prove that the effective cosmological constant after compactification vanishes as a consequence of the classical equations of motion of the string theory. 34 refs

  13. Relevance of sampling schemes in light of Ruelle's linear response theory

    International Nuclear Information System (INIS)

    Lucarini, Valerio; Wouters, Jeroen; Faranda, Davide; Kuna, Tobias

    2012-01-01

    We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space–time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick

  14. Neoclassical viscous stress tensor for non-linear MHD simulations with XTOR-2F

    International Nuclear Information System (INIS)

    Mellet, N.; Maget, P.; Meshcheriakov, D.; Lütjens, H.

    2013-01-01

    The neoclassical viscous stress tensor is implemented in the non-linear MHD code XTOR-2F (Lütjens and Luciani 2010 J. Comput. Phys. 229 8130–43), allowing consistent bi-fluid simulations of MHD modes, including the metastable branch of neoclassical tearing modes (NTMs) (Carrera et al 1986 Phys. Fluids 29 899–902). Equilibrium flows and bootstrap current from the neoclassical theory are formally recovered in this Chew–Goldberger–Low formulation. The non-linear behaviour of the new model is verified on a test case coming from a Tore Supra non-inductive discharge. A NTM threshold that is larger than with the previous model is obtained. This is due to the fact that the velocity is now part of the bootstrap current and that it differs from the theoretical neoclassical value. (paper)

  15. Bioclimatic Thresholds, Thermal Constants and Survival of Mealybug, Phenacoccus solenopsis (Hemiptera: Pseudococcidae) in Response to Constant Temperatures on Hibiscus

    Science.gov (United States)

    Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi

    2013-01-01

Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae) were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai’s linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. Thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explained the field abundance of P. solenopsis on its host plants. PMID:24086597
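
A minimal sketch of the classical linear degree-day calculation underlying the thermal constants and lower thresholds reported above: regress developmental rate (1/development time) on temperature, then take Tmin = -a/b and the thermal constant K = 1/b degree-days. The data points below are invented for illustration and are not the published measurements.

```python
import numpy as np

# Hypothetical developmental-rate data (rate = 1 / development time in days)
temps = np.array([15.0, 20.0, 25.0, 27.0, 30.0, 32.0])        # deg C (illustrative)
rates = np.array([0.018, 0.040, 0.062, 0.071, 0.085, 0.093])  # 1/days (illustrative)

# Ordinary linear regression: rate = a + b * T
b, a = np.polyfit(temps, rates, 1)   # slope, intercept
t_min = -a / b                       # lower developmental threshold (rate extrapolates to zero)
K = 1.0 / b                          # thermal constant in degree-days

print(f"Tmin ~ {t_min:.1f} C, thermal constant K ~ {K:.0f} degree-days")
```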

  16. Bioclimatic thresholds, thermal constants and survival of mealybug, Phenacoccus solenopsis (hemiptera: pseudococcidae) in response to constant temperatures on hibiscus.

    Science.gov (United States)

    Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi

    2013-01-01

Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae) were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. Thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explained the field abundance of P. solenopsis on its host plants.

  17. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  18. Non-linear gauge transformations in D=10 SYM theory and the BCJ duality

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungjin [Max-Planck-Institut für Gravitationsphysik Albert-Einstein-Institut,14476 Potsdam (Germany); Mafra, Carlos R. [Institute for Advanced Study, School of Natural Sciences,Einstein Drive, Princeton, NJ 08540 (United States); DAMTP, University of Cambridge,Wilberforce Road, Cambridge, CB3 0WA (United Kingdom); Schlotterer, Oliver [Max-Planck-Institut für Gravitationsphysik Albert-Einstein-Institut,14476 Potsdam (Germany)

    2016-03-14

    Recent progress on scattering amplitudes in super Yang-Mills and superstring theory benefitted from the use of multiparticle superfields. They universally capture tree-level subdiagrams, and their generating series solve the non-linear equations of ten-dimensional super Yang-Mills. We provide simplified recursions for multiparticle superfields and relate them to earlier representations through non-linear gauge transformations of their generating series. Moreover, we discuss the gauge transformations which enforce their Lie symmetries as suggested by the Bern-Carrasco-Johansson duality between color and kinematics. Another gauge transformation due to Harnad and Shnider is shown to streamline the theta-expansion of multiparticle superfields, bypassing the need to use their recursion relations beyond the lowest components. The findings of this work tremendously simplify the component extraction from kinematic factors in pure spinor superspace.

  19. Scalar-tensor linear inflation

    Energy Technology Data Exchange (ETDEWEB)

    Artymowski, Michał [Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Racioppi, Antonio, E-mail: Michal.Artymowski@uj.edu.pl, E-mail: Antonio.Racioppi@kbfi.ee [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)

    2017-04-01

    We investigate two approaches to non-minimally coupled gravity theories which present linear inflation as attractor solution: a) the scalar-tensor theory approach, where we look for a scalar-tensor theory that would restore results of linear inflation in the strong coupling limit for a non-minimal coupling to gravity of the form of f (φ) R /2; b) the particle physics approach, where we motivate the form of the Jordan frame potential by loop corrections to the inflaton field. In both cases the Jordan frame potentials are modifications of the induced gravity inflationary scenario, but instead of the Starobinsky attractor they lead to linear inflation in the strong coupling limit.

  20. Performance Evaluation of Linear (ARMA) and Threshold Nonlinear (TAR) Time Series Models in Daily River Flow Modeling (Case Study: Upstream Basin Rivers of Zarrineh Roud Dam)

    Directory of Open Access Journals (Sweden)

    Farshad Fathian

    2017-01-01

Full Text Available Introduction: Time series models are generally categorized as data-driven or mathematically-based methods. These models are known as one of the most important tools in modeling and forecasting of hydrological processes, and are used for the design and scientific management of water resources projects. On the other hand, a better understanding of the river flow process is vital for appropriate streamflow modeling and forecasting. One of the main concerns of hydrological time series modeling is whether the hydrologic variable is governed by linear or nonlinear models through time. Although linear time series models have been widely applied in hydrology research, there has been some recent increasing interest in the application of nonlinear time series approaches. The threshold autoregressive (TAR) method is frequently applied in modeling the mean (first order moment) of financial and economic time series. This type of model has not yet received considerable attention from the hydrological community. The main purposes of this paper are to analyze and to discuss stochastic modeling of daily river flow time series of the study area using linear (such as ARMA: autoregressive moving average) and non-linear (such as two- and three-regime TAR) models. Material and Methods: The study area consists of four sub-basins, namely Saghez Chai, Jighato Chai, Khorkhoreh Chai and Sarogh Chai from west to east, respectively, which discharge water into the Zarrineh Roud dam reservoir. River flow time series of 6 hydro-gauge stations located on upstream basin rivers of Zarrineh Roud dam (located in the southern part of Urmia Lake basin) were considered for modeling purposes. All the data series used here start from January 1, 1997, and end on December 31, 2011. In this study, the daily river flow data from January 01 1997 to December 31 2009 (13 years) were chosen for calibration and data for January 01 2010 to December 31 2011
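
To make the two-regime idea concrete, the following sketch simulates and crudely re-estimates a self-exciting threshold autoregressive (TAR) process in which the autoregressive coefficient switches according to whether the previous value exceeds a threshold. The coefficients and threshold are illustrative assumptions, not values fitted to the Zarrineh Roud data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-regime TAR(1): x_t = phi_low  * x_{t-1} + e_t  if x_{t-1} <= r
#                    x_t = phi_high * x_{t-1} + e_t  otherwise
phi_low, phi_high, r = 0.9, 0.4, 1.0   # illustrative coefficients and threshold
n = 500
x = np.zeros(n)
for t in range(1, n):
    phi = phi_low if x[t - 1] <= r else phi_high
    x[t] = phi * x[t - 1] + rng.normal(0, 0.5)

# Crude regime-wise least-squares estimates of the two AR coefficients
low = x[:-1] <= r
for name, mask in (("low regime", low), ("high regime", ~low)):
    phi_hat = np.sum(x[1:][mask] * x[:-1][mask]) / np.sum(x[:-1][mask] ** 2)
    print(name, "phi_hat =", round(phi_hat, 3))
```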

  1. Simplified non-linear time-history analysis based on the Theory of Plasticity

    DEFF Research Database (Denmark)

    Costa, Joao Domingues

    2005-01-01

This paper aims at contributing to the problem of developing simplified non-linear time-history (NLTH) analysis of structures whose dynamical response is mainly governed by plastic deformations, able to provide designers with sufficiently accurate results. The method to be presented is based on the Theory of Plasticity. Firstly, the formulation and the computational procedure to perform time-history analysis of a rigid-plastic single degree of freedom (SDOF) system are presented. The necessary conditions for the method to incorporate pinching as well as strength degradation...

  2. A modification to linearized theory for prediction of pressure loadings on lifting surfaces at high supersonic Mach numbers and large angles of attack

    Science.gov (United States)

    Carlson, H. W.

    1979-01-01

    A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.

  3. The fully relativistic foundation of linear transfer theory in electron optics based on the Dirac equation

    NARCIS (Netherlands)

    Ferwerda, H.A.; Hoenders, B.J.; Slump, C.H.

    The fully relativistic quantum mechanical treatment of paraxial electron-optical image formation initiated in the previous paper (this issue) is worked out and leads to a rigorous foundation of the linear transfer theory. Moreover, the status of the relativistic scaling laws for mass and wavelength,

  4. Visuo-manual tracking: does intermittent control with aperiodic sampling explain linear power and non-linear remnant without sensorimotor noise?

    Science.gov (United States)

    Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D

    2017-11-01

    A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed

  5. Application of supersonic linear theory and hypersonic impact methods to three nonslender hypersonic airplane concepts at Mach numbers from 1.10 to 2.86

    Science.gov (United States)

    Pittman, J. L.

    1979-01-01

Aerodynamic predictions from supersonic linear theory and hypersonic impact theory were compared with experimental data for three hypersonic research airplane concepts over a Mach number range from 1.10 to 2.86. The linear theory gave good lift prediction and fair to good pitching-moment prediction over the Mach number (M) range. The tangent-cone theory predictions were good for lift and fair to good for pitching moment for M more than or equal to 2.0. The combined tangent-cone/tangent-wedge method gave the least accurate prediction of lift and pitching moment. The zero-lift drag was overestimated, especially for M less than 2.0. The linear theory drag prediction was generally poor, with areas of good agreement only for M less than or equal to 1.2. For M more than or equal to 2.0, the tangent-cone method predicted the zero-lift drag most accurately.

  6. Connection between perturbation theory, projection-operator techniques, and statistical linearization for nonlinear systems

    International Nuclear Information System (INIS)

    Budgor, A.B.; West, B.J.

    1978-01-01

    We employ the equivalence between Zwanzig's projection-operator formalism and perturbation theory to demonstrate that the approximate-solution technique of statistical linearization for nonlinear stochastic differential equations corresponds to the lowest-order β truncation in both the consolidated perturbation expansions and in the ''mass operator'' of a renormalized Green's function equation. Other consolidated equations can be obtained by selectively modifying this mass operator. We particularize the results of this paper to the Duffing anharmonic oscillator equation

  7. Absorption line profiles in a moving atmosphere - A single scattering linear perturbation theory

    Science.gov (United States)

    Hays, P. B.; Abreu, V. J.

    1989-01-01

    An integral equation is derived which linearly relates Doppler perturbations in the spectrum of atmospheric absorption features to the wind system which creates them. The perturbation theory is developed using a single scattering model, which is validated against a multiple scattering calculation. The nature and basic properties of the kernels in the integral equation are examined. It is concluded that the kernels are well behaved and that wind velocity profiles can be recovered using standard inversion techniques.

  8. Reaction πN → ππN near threshold

    Energy Technology Data Exchange (ETDEWEB)

    Frlez, Emil [Univ. of Virginia, Charlottesville, VA (United States)

    1993-11-01

The LAMPF E1179 experiment used the π0 spectrometer and an array of charged particle range counters to detect and record π+π0, π0p, and π+π0p coincidences following the reaction π+p → π0π+p near threshold. The total cross sections for single pion production were measured at the incident pion kinetic energies 190, 200, 220, 240, and 260 MeV. Absolute normalizations were fixed by measuring π+p elastic scattering at 260 MeV. A detailed analysis of the π0 detection efficiency was performed using cosmic ray calibrations and pion single charge exchange measurements with a 30 MeV π- beam. All published data on πN → ππN, including our results, are simultaneously fitted to yield a common chiral symmetry breaking parameter ξ = -0.25±0.10. The threshold matrix element |a{sub 0}(π0π+p)| determined by linear extrapolation yields the value of the s-wave isospin-2 ππ scattering length a{sub 0}{sup 2}(ππ) = -0.041±0.003 m{sub π}{sup -1}, within the framework of soft-pion theory.

  9. A Stream Function Theory Based Calculation of Wave Kinematics for Very Steep Waves Using a Novel Non-linear Stretching Technique

    DEFF Research Database (Denmark)

    Stroescu, Ionut Emanuel; Sørensen, Lasse; Frigaard, Peter Bak

    2016-01-01

    A non-linear stretching method was implemented for stream function theory to solve wave kinematics for physical conditions close to breaking waves in shallow waters, with wave heights limited by the water depth. The non-linear stretching method proves itself robust, efficient and fast, showing good...

  10. Non-Markovian linear response theory for quantum open systems and its applications.

    Science.gov (United States)

    Shen, H Z; Li, D X; Yi, X X

    2017-01-01

    The Kubo formula is an equation that expresses the linear response of an observable due to a time-dependent perturbation. It has been extended from closed systems to open systems in recent years under the Markovian approximation, but is barely explored for open systems in non-Markovian regimes. In this paper, we derive a formula for the linear response of an open system to a time-independent external field. This response formula is available for both Markovian and non-Markovian dynamics depending on parameters in the spectral density of the environment. As an illustration of the theory, the Hall conductance of a two-band system subjected to environments is derived and discussed. With the tight-binding model, we point out the Hall conductance changes from Markovian to non-Markovian dynamics by modulating the spectral density of the environment. Our results suggest a way to the controlling of the system response, which has potential applications for quantum statistical mechanics and condensed matter physics.

  11. Effects of programming threshold and maplaw settings on acoustic thresholds and speech discrimination with the MED-EL COMBI 40+ cochlear implant.

    Science.gov (United States)

    Boyd, Paul J

    2006-12-01

The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by the setting of the lower limit of the output ("Programming threshold" or "PT"), in order to better understand the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation" which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical

  12. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
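
A minimal sketch of the iterative thresholding recursion that ISTA-type CS reconstruction is built on, for a generic dense measurement operator: each iteration takes a gradient step toward data consistency and then applies a shrinkage (thresholding) operator. The snippet uses the classical soft threshold; the paper's p-thresholding variant would replace that operator. The operator, step size, and regularization weight are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, lam):
    """Classical soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.05, step=None, iters=200):
    """Generic ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 (toy dense version)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Toy compressed-sensing problem: recover a sparse vector from few measurements
rng = np.random.default_rng(0)
n, m, k = 128, 48, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
x_rec = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```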

  13. The crack-initiation threshold in ceramic materials subject to elastic/plastic indentation

    International Nuclear Information System (INIS)

    Lankford, J.; Davidson, D.L.

    1979-01-01

    The threshold for indentation cracking is established for a range of ceramic materials, using the techniques of scanning electron microscopy and acoustic emission. It is found that by taking into account indentation plasticity, current theories may be successfully combined to predict threshold indentation loads and crack sizes. Threshold cracking is seen to relate to radial rather than median cracking. (author)

  14. Axiomatic field theory and quantum electrodynamics: the massive case

    International Nuclear Information System (INIS)

    Steinmann, O.

    1975-01-01

Massive quantum electrodynamics of the electron is formulated as an LSZ theory of the electromagnetic field F_μν and the electron-positron fields ψ. The interaction is introduced with the help of mathematically well defined subsidiary conditions. These are: 1) gauge invariance of the first kind, assumed to be generated by a conserved current j_μ; 2) the homogeneous Maxwell equations and a massive version of the inhomogeneous Maxwell equations; 3) a minimality condition concerning the high momentum behaviour of the theory. The inhomogeneous Maxwell equation is a linear differential equation connecting F_μν with the current j_μ. No Lagrangian, no non-linear field equations, and no explicit expression of j_μ in terms of ψ, anti-ψ are needed. It is shown in perturbation theory that the proposed conditions fix the physically relevant (i.e. observable) quantities of the theory uniquely

  15. Genotoxic thresholds, DNA repair, and susceptibility in human populations

    International Nuclear Information System (INIS)

    Jenkins, Gareth J.S.; Zair, Zoulikha; Johnson, George E.; Doak, Shareen H.

    2010-01-01

It has been long assumed that DNA damage is induced in a linear manner with respect to the dose of a direct acting genotoxin. Thus, it is implied that direct acting genotoxic agents induce DNA damage at even the lowest of concentrations and that no 'safe' dose range exists. The linear (non-threshold) paradigm has led to the one-hit model being developed. This 'one hit' scenario can be interpreted such that a single DNA damaging event in a cell has the capability to induce a single point mutation in that cell which could (if positioned in a key growth controlling gene) lead to increased proliferation, leading ultimately to the formation of a tumour. There are many groups (including our own) who, for a decade or more, have argued that low dose exposures to direct acting genotoxins may be tolerated by cells through homeostatic mechanisms such as DNA repair. This argument stems from the existence of evolutionary adaptive mechanisms that allow organisms to adapt to low levels of exogenous sources of genotoxins. We have been particularly interested in the genotoxic effects of known mutagens at low dose exposures in human cells and have identified for the first time, in vitro genotoxic thresholds for several mutagenic alkylating agents (Doak et al., 2007). Our working hypothesis is that DNA repair is primarily responsible for these thresholded effects at low doses by removing low levels of DNA damage but becoming saturated at higher doses. We are currently assessing the roles of base excision repair (BER) and methylguanine-DNA methyltransferase (MGMT) in the identified thresholds (Doak et al., 2008). This research area is currently important as it assesses whether 'safe' exposure levels to mutagenic chemicals can exist and allows risk assessment using appropriate safety factors to define such exposure levels. Given human variation, the mechanistic basis for genotoxic thresholds (e.g. DNA repair) has to be well defined in order that susceptible individuals are

  16. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Pilipchuk, L. A., E-mail: pilipchik@bsu.by [Belarussian State University, 220030 Minsk, 4, Nezavisimosti avenue, Republic of Belarus (Belarus); Pilipchuk, A. S., E-mail: an.pilipchuk@gmail.com [The Natural Resources and Environmental Protection Ministry of the Republic of Belarus, 220004 Minsk, 10 Kollektornaya Street, Republic of Belarus (Belarus)

    2015-11-30

In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for the symmetric graph in the case when split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and study its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  17. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    International Nuclear Information System (INIS)

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2015-01-01

In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for the symmetric graph in the case when split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and study its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure

  18. Generalized multivariate Fokker-Planck equations derived from kinetic transport theory and linear nonequilibrium thermodynamics

    International Nuclear Information System (INIS)

    Frank, T.D.

    2002-01-01

    We study many particle systems in the context of mean field forces, concentration-dependent diffusion coefficients, generalized equilibrium distributions, and quantum statistics. Using kinetic transport theory and linear nonequilibrium thermodynamics we derive for these systems a generalized multivariate Fokker-Planck equation. It is shown that this Fokker-Planck equation describes relaxation processes, has stationary maximum entropy distributions, can have multiple stationary solutions and stationary solutions that differ from Boltzmann distributions

  19. Regional rainfall thresholds for landslide occurrence using a centenary database

    Science.gov (United States)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia

    2017-04-01

considered as the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included. This criterion reduces the likelihood of including events whose triggering factor was other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Considering that this threshold allows the existence of false negatives (i.e. events below the threshold), the lower-limit and upper-limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positive occurrence (upper limit). The zone between the lower-limit and upper-limit rainfall thresholds was analysed using a probabilistic approach, defining the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].

  20. The Glare Effect Test and the Impact of Age on Luminosity Thresholds

    Directory of Open Access Journals (Sweden)

    Alessio Facchin

    2017-06-01

Full Text Available The glare effect (GE) is an illusion in which a white region appears self-luminous when surrounded by linearly decreasing luminance ramps. It has been shown that the magnitude of the luminosity effect can be modulated by manipulating the luminance range of the gradients. In the present study we tested the thresholds for the GE on two groups of adults: young (20–30 years old) and elderly (60–75 years old). The purpose of our study was to test the possibility of transforming the GE into a test that could easily measure thresholds for luminosity and discomfort glare. The Glare Effect Test (GET) consisted of 101 printed cards that differed from each other in the range of luminance ramps. Participants were assessed with GET and a battery of visual tests: visual acuity, contrast sensitivity, illusion of length perception, and Ishihara test. Specifically in the GET, participants were required to classify cards on the basis of two reference cards (solid black, no gradient; full-range black-to-white gradient). PSEs of the GE show no correlation with the other visual tests, revealing a divergent validity. A significant difference between young and elderly was found: contrary to our original expectations, luminosity thresholds of GE for elderly were higher than those for young, suggesting a non-direct relationship between luminosity perception and discomfort glare.

  1. Wavelet-based linear-response time-dependent density-functional theory

    Science.gov (United States)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y.

    2012-06-01

    Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1, 2-a]pyridin-3-amine.

  2. Extreme Value Theory Approach to Simultaneous Monitoring and Thresholding of Multiple Risk Indicators

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Li, J.; Liu, R.Y.

    2006-01-01

Risk assessments often encounter extreme settings with very few or no occurrences in reality. Inferences about risk indicators in such settings face the problem of insufficient data. Extreme value theory is particularly well suited for handling this type of problem. This paper uses a multivariate

  3. Psychophysical thresholds of face visibility during infancy

    DEFF Research Database (Denmark)

    Gelskov, Sofie; Kouider, Sid

    2010-01-01

The ability to detect and focus on faces is a fundamental prerequisite for developing social skills. But how well can infants detect faces? Here, we address this question by studying the minimum duration at which faces must appear to trigger a behavioral response in infants. We used a preferential looking method in conjunction with masking and brief presentations (300 ms and below) to establish the temporal thresholds of visibility at different stages of development. We found that 5 and 10 month-old infants have remarkably similar visibility thresholds, about three times higher than those of adults. By contrast, 15 month-olds not only revealed adult-like thresholds, but also improved their performance through memory-based strategies. Our results imply that the development of face visibility follows a non-linear course and is determined by a radical improvement occurring between 10 and 15 months.

  4. Simulation study on single event burnout in linear doping buffer layer engineered power VDMOSFET

    Science.gov (United States)

    Yunpeng, Jia; Hongyuan, Su; Rui, Jin; Dongqing, Hu; Yu, Wu

    2016-02-01

The addition of a buffer layer can improve the device's secondary breakdown voltage, thus improving the single event burnout (SEB) threshold voltage. In this paper, an N-type linear doping buffer layer is proposed. Quasi-stationary avalanche simulation and heavy ion beam simulation show that an optimized linear doping buffer layer is critical. When SEB is induced by heavy-ion impact, the electric field of a device with an optimized linear doping buffer layer is much lower than that with an optimized constant doping buffer layer at a given buffer layer thickness and the same biasing voltages. The secondary breakdown voltage and the parasitic bipolar turn-on current are much higher than those with the optimized constant doping buffer layer. The linear buffer layer is therefore more advantageous for improving the device's SEB performance. Project supported by the National Natural Science Foundation of China (No. 61176071), the Doctoral Fund of Ministry of Education of China (No. 20111103120016), and the Science and Technology Program of State Grid Corporation of China (No. SGRI-WD-71-13-006).

  5. Recent results on the linearity of the dose-response relationship for radiation-induced mutations in human cells by low dose levels

    International Nuclear Information System (INIS)

    Traut, H.

    1987-01-01

Five studies made by various authors in recent years are discussed, which are significant in that the response of human cells to low-dose irradiation is determined directly and not by extrapolation, and which also provide information on the mutagenic effects of low radiation doses. The results of these studies do not indicate any other than a linear response for the induction of mutations by low-dose irradiation, nor is there any observable reason for assuming the existence of a threshold dose. It is therefore very likely that cancer initiation at the low-dose level is also characterized by a linear relationship. Although threshold dose levels cannot generally be excluded, and may simply be too low to be detected by experiment, there is no plausible biophysical argument for assuming the existence of such a threshold in the microdose range. (orig./MG) [de

  6. Existence and control of Go/No-Go decision transition threshold in the striatum.

    Directory of Open Access Journals (Sweden)

    Jyotika Bahuguna

    2015-04-01

Full Text Available A typical Go/No-Go decision is suggested to be implemented in the brain via the activation of the direct or indirect pathway in the basal ganglia. Medium spiny neurons (MSNs) in the striatum, receiving input from cortex and projecting to the direct and indirect pathways, express D1 and D2 type dopamine receptors, respectively. Recently, it has become clear that the two types of MSNs markedly differ in their mutual and recurrent connectivities as well as feedforward inhibition from FSIs. Therefore, to understand striatal function in action selection, it is of key importance to identify the role of the distinct connectivities within and between the two types of MSNs on the balance of their activity. Here, we used both a reduced firing rate model and numerical simulations of a spiking network model of the striatum to analyze the dynamic balance of spiking activities in D1 and D2 MSNs. We show that the asymmetric connectivity of the two types of MSNs renders the striatum into a threshold device, indicating the state of cortical input rates and correlations by the relative activity rates of D1 and D2 MSNs. Next, we describe how this striatal threshold can be effectively modulated by the activity of fast spiking interneurons, by the dopamine level, and by the activity of the GPe via pallidostriatal backprojections. We show that multiple mechanisms exist in the basal ganglia for biasing striatal output in favour of either the 'Go' or the 'No-Go' pathway. This new understanding of striatal network dynamics provides novel insights into the putative role of the striatum in various behavioral deficits in patients with Parkinson's disease, including increased reaction times, L-Dopa-induced dyskinesia, and deep brain stimulation-induced impulsivity.

  7. Existence and control of Go/No-Go decision transition threshold in the striatum.

    Science.gov (United States)

    Bahuguna, Jyotika; Aertsen, Ad; Kumar, Arvind

    2015-04-01

    A typical Go/No-Go decision is suggested to be implemented in the brain via the activation of the direct or indirect pathway in the basal ganglia. Medium spiny neurons (MSNs) in the striatum, receiving input from cortex and projecting to the direct and indirect pathways express D1 and D2 type dopamine receptors, respectively. Recently, it has become clear that the two types of MSNs markedly differ in their mutual and recurrent connectivities as well as feedforward inhibition from FSIs. Therefore, to understand striatal function in action selection, it is of key importance to identify the role of the distinct connectivities within and between the two types of MSNs on the balance of their activity. Here, we used both a reduced firing rate model and numerical simulations of a spiking network model of the striatum to analyze the dynamic balance of spiking activities in D1 and D2 MSNs. We show that the asymmetric connectivity of the two types of MSNs renders the striatum into a threshold device, indicating the state of cortical input rates and correlations by the relative activity rates of D1 and D2 MSNs. Next, we describe how this striatal threshold can be effectively modulated by the activity of fast spiking interneurons, by the dopamine level, and by the activity of the GPe via pallidostriatal backprojections. We show that multiple mechanisms exist in the basal ganglia for biasing striatal output in favour of either the `Go' or the `No-Go' pathway. This new understanding of striatal network dynamics provides novel insights into the putative role of the striatum in various behavioral deficits in patients with Parkinson's disease, including increased reaction times, L-Dopa-induced dyskinesia, and deep brain stimulation-induced impulsivity.

  8. No evidence for a critical salinity threshold for growth and reproduction in the freshwater snail Physa acuta.

    Science.gov (United States)

    Kefford, Ben J; Nugegoda, Dayanthi

    2005-04-01

    The growth and reproduction of the freshwater snail Physa acuta (Gastropoda: Physidae) were measured at various salinity levels (growth: distilled water, 50, 100, 500, 1000 and 5000 microS/cm; reproduction: deionized water, 100, 500, 1000 and 3000 microS/cm) established using the artificial sea salt, Ocean Nature. This was done to examine the assumption that there is no direct effect of salinity on freshwater animals until a threshold, beyond which sub-lethal effects, such as reduction in growth and reproduction, will occur. Growth of P. acuta was maximal in terms of live and dry mass at salinity levels 500-1000 microS/cm. The number of eggs produced per snail per day was maximal between 100 and 1000 microS/cm. Results show that rather than a threshold response to salinity, small rises in salinity (from low levels) can produce increased growth and reproduction until a maximum is reached. Beyond this salinity, further increases result in a decrease in growth and reproduction. Studies on the growth of freshwater invertebrates and fish have generally shown a similar lack of a threshold response. The implications for assessing the effects of salinisation on freshwater organisms need to be further considered.

  9. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53 %) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of the 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  10. Regional rainfall thresholds for landslide occurrence using a centenary database

    Science.gov (United States)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
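
As a hedged illustration of the regression-based quantity-duration thresholds and ROC-style checking described above, the sketch below fits a power-law threshold E = a·D^b through hypothetical critical rainfall combinations and counts hits and false alarms against it. The event data are invented for illustration and are not the Lisbon database.

```python
import numpy as np

# Hypothetical critical rainfall combinations: duration (days) and cumulated rain (mm)
dur = np.array([1, 3, 5, 10, 15, 30, 40, 60, 75, 90])
rain_landslide = np.array([60, 95, 120, 170, 200, 280, 310, 380, 410, 450])  # events
rain_no_slide = np.array([20, 35, 50, 70, 90, 130, 150, 190, 210, 230])      # non-events

# Power-law threshold E = a * D**b fitted through the landslide points (log-log OLS)
b, log_a = np.polyfit(np.log(dur), np.log(rain_landslide), 1)
a = np.exp(log_a)
threshold = a * dur ** b

# Simple ROC-style counts for this single threshold
tp = np.sum(rain_landslide >= threshold)   # landslide events at or above the threshold
fp = np.sum(rain_no_slide >= threshold)    # non-events incorrectly flagged
print(f"E = {a:.1f} * D^{b:.2f}; hits {tp}/{len(dur)}, false alarms {fp}/{len(dur)}")
```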

  11. Linearization instability for generic gravity in AdS spacetime

    Science.gov (United States)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed as linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces, the situation is different and for example, Minkowski space having a non-compact Cauchy surface, is linearization stable. Here we study, the linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, for modified theories even in the non-compact Cauchy surface cases, there are some theories which show linearization instability about their anti-de Sitter backgrounds. Recent D dimensional critical and three dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  12. Topological characterizations of S-Linearity

    Directory of Open Access Journals (Sweden)

    Carfi', David

    2007-10-01

    Full Text Available We give several characterizations of basic concepts of S-linear algebra in terms of weak duality on topological vector spaces. Along the way, some classic results of Functional Analysis are reinterpreted in terms of S-linear algebra, in an application-oriented fashion. The results are required in the S-linear algebra formulation of infinite-dimensional Decision Theory and in the study of abstract evolution equations in economic and physical theories.

  13. Small-threshold behaviour of two-loop self-energy diagrams: two-particle thresholds

    International Nuclear Information System (INIS)

    Berends, F.A.; Davydychev, A.I.; Moskovskij Gosudarstvennyj Univ., Moscow; Smirnov, V.A.; Moskovskij Gosudarstvennyj Univ., Moscow

    1996-01-01

    The behaviour of two-loop two-point diagrams at non-zero thresholds corresponding to two-particle cuts is analyzed. The masses involved in a cut and the external momentum are assumed to be small compared with some of the other masses of the diagram. By employing general formulae of asymptotic expansions of Feynman diagrams in momenta and masses, we construct an algorithm to derive analytic approximations to the diagrams. In this way, we calculate the first several coefficients of the expansion. Since no conditions on the relative values of the small masses and the external momentum are imposed, the threshold irregularities are described analytically. Numerical examples, using diagrams occurring in the standard model, illustrate the convergence of the expansion below the first large threshold. (orig.)

  14. Theoretical study of near-threshold electron-molecule scattering

    International Nuclear Information System (INIS)

    Morrison, M.A.

    1989-01-01

    We have been engaged in carrying out a foundation study on problems pertaining to near-threshold nuclear excitations in e-H2 scattering. The primary goals of this study are: to investigate the severity and nature of the anticipated breakdown of the adiabatic-nuclei (AN) approximation, first for rotation only (in the rigid-rotator approximation), and then for vibration; to determine a data base of accurate ab initio cross sections for this important system; to implement and test accurate, computationally-tractable model potentials for exchange and polarization effects; and to begin the exploration of alternative scattering theories for near-threshold collisions. This study has provided a well-defined theoretical context for our future investigations. Second, it has enabled us to identify and quantify several serious problems in the theory of near-threshold electron-molecule scattering that demand attention. And finally, it has led to the development of some of the theoretical and computational apparatus that will form the foundation of future work. In this report, we shall review our progress to date, emphasizing work completed during the current contract year. 17 refs., 5 figs., 1 tab

  15. Quantum no-scale regimes in string theory

    Science.gov (United States)

    Coudarchet, Thibaut; Fleming, Claude; Partouche, Hervé

    2018-05-01

    We show that in generic no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a "quantum no-scale regime", where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the no-scale modulus and dilaton. We find that this natural preservation of the classical no-scale structure at the quantum level occurs when the initial conditions of the evolutions sit in a subcritical region of their space. On the contrary, supercritical initial conditions yield solutions that have no analogue at the classical level. The associated intrinsically quantum universes are sentenced to collapse and their histories last finite cosmic times. Our analysis is done at 1-loop, in perturbative heterotic string compactified on tori, with spontaneous supersymmetry breaking implemented by a stringy version of the Scherk-Schwarz mechanism.

  16. Synchronization of low- and high-threshold motor units.

    Science.gov (United States)

    Defreitas, Jason M; Beck, Travis W; Ye, Xin; Stock, Matt S

    2014-04-01

    We examined the degree of synchronization for both low- and high-threshold motor unit (MU) pairs at high force levels. MU spike trains were recorded from the quadriceps during high-force isometric leg extensions. Short-term synchronization (between -6 and 6 ms) was calculated for every unique MU pair for each contraction. At high force levels, earlier recruited (low-threshold) motor unit pairs demonstrated relatively low levels of short-term synchronization (approximately 7.3% more firings than would have been expected by chance). However, the magnitude of synchronization increased significantly and linearly with mean recruitment threshold (reaching 22.1% extra firings for motor unit pairs recruited above 70% MVC). Three potential mechanisms that could explain the observed differences in synchronization across motor unit types are proposed and discussed. Copyright © 2013 Wiley Periodicals, Inc.
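    The "extra firings" measure of short-term synchronization can be illustrated with a small cross-correlogram sketch. This is a simplified, hypothetical construction (synthetic spike trains, a flat chance level taken from the off-peak bins), not the decomposition-based analysis used in the study.

        # Illustrative sketch: percent extra firings in the +/-6 ms cross-correlogram peak.
        import numpy as np

        def extra_firings_percent(ref, test, peak=0.006, span=0.05, binw=0.001):
            lags = (test[None, :] - ref[:, None]).ravel()          # all pairwise latencies [s]
            lags = lags[np.abs(lags) <= span]
            counts, edges = np.histogram(lags, np.arange(-span, span + binw, binw))
            centers = 0.5 * (edges[:-1] + edges[1:])
            in_peak = np.abs(centers) <= peak
            chance = counts[~in_peak].mean() * in_peak.sum()       # expected count by chance
            return 100.0 * (counts[in_peak].sum() - chance) / ref.size

        rng = np.random.default_rng(1)
        common = np.cumsum(rng.exponential(0.1, 600))              # shared input, about 10 Hz
        mu1 = common + rng.normal(0, 0.004, common.size)           # motor unit 1 spike times
        mu2 = common + rng.normal(0, 0.004, common.size)           # motor unit 2 spike times
        print(f"extra firings: {extra_firings_percent(mu1, mu2):.1f}%")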

  17. Einstein’s quadrupole formula from the kinetic-conformal Hořava theory

    Science.gov (United States)

    Bellorín, Jorge; Restuccia, Alvaro

    We analyze the radiative and nonradiative linearized variables in a gravity theory within the family of the nonprojectable Hořava theories, the Hořava theory at the kinetic-conformal point. There is no extra mode in this formulation; the theory shares the same number of degrees of freedom with general relativity. The large-distance effective action, which is the one we consider, can be given a generally covariant form under asymptotically flat boundary conditions: the Einstein-aether theory under the condition of hypersurface orthogonality on the aether vector. In the linearized theory, we find that only the transverse-traceless tensorial modes obey a sourced wave equation, as in general relativity. The rest of the variables are nonradiative. The result is gauge-independent at the level of the linearized theory. For the case of a weak source, we find that the leading mode in the far zone is exactly Einstein's quadrupole formula of general relativity, if some coupling constants are properly identified. There are no monopoles or dipoles in this formulation, in contrast to the nonprojectable Hořava theory outside the kinetic-conformal point. We also discuss some constraints on the theory arising from the observational bounds on Lorentz-violating theories.

  18. Linear and non-linear optics of condensed matter

    International Nuclear Information System (INIS)

    McLean, T.P.

    1977-01-01

    Part I - Linear optics: 1. General introduction. 2. Frequency dependence of epsilon(ω, k vector). 3. Wave-vector dependence of epsilon(ω, k vector). 4. Tensor character of epsilon(ω, k vector). Part II - Non-linear optics: 5. Introduction. 6. A classical theory of non-linear response in one dimension. 7. The generalization to three dimensions. 8. General properties of the polarizability tensors. 9. The phase-matching condition. 10. Propagation in a non-linear dielectric. 11. Second harmonic generation. 12. Coupling of three waves. 13. Materials and their non-linearities. 14. Processes involving energy exchange with the medium. 15. Two-photon absorption. 16. Stimulated Raman effect. 17. Electro-optic effects. 18. Limitations of the approach presented here. (author)

  19. DNA repair by MGMT, but not AAG, causes a threshold in alkylation-induced colorectal carcinogenesis.

    Science.gov (United States)

    Fahrer, Jörg; Frisch, Janina; Nagel, Georg; Kraus, Alexander; Dörsam, Bastian; Thomas, Adam D; Reißig, Sonja; Waisman, Ari; Kaina, Bernd

    2015-10-01

    Epidemiological studies indicate that N-nitroso compounds (NOC) are causally linked to colorectal cancer (CRC). NOC induce DNA alkylations, including O6-methylguanine (O6-MeG) and N-methylated purines, which are repaired by O6-MeG-DNA methyltransferase (MGMT) and N-alkyladenine-DNA glycosylase (AAG)-initiated base excision repair, respectively. In view of recent evidence of nonlinear mutagenicity for NOC-like compounds, the question arises as to the existence of threshold doses in CRC formation. Here, we set out to determine the impact of DNA repair on the dose-response of alkylation-induced CRC. DNA repair proficient (WT) and deficient (Mgmt-/-, Aag-/- and Mgmt-/-/Aag-/-) mice were treated with azoxymethane (AOM) and dextran sodium sulfate to trigger CRC. Tumors were quantified by non-invasive mini-endoscopy. A non-linear increase in CRC formation was observed in WT and Aag-/- mice. In contrast, a linear dose-dependent increase in tumor frequency was found in Mgmt-/- and Mgmt-/-/Aag-/- mice. The data were corroborated by hockey stick modeling, yielding similar carcinogenic thresholds for WT and Aag-/- mice and no threshold for MGMT-lacking mice. O6-MeG levels and depletion of MGMT correlated well with the observed dose-response in CRC formation. AOM dose-dependently induced DNA double-strand breaks in colon crypts, including Lgr5-positive colon stem cells, which coincided with ATR-Chk1-p53 signaling. Intriguingly, Mgmt-/- mice displayed significantly enhanced levels of γ-H2AX, suggesting the usefulness of γ-H2AX as an early genotoxicity marker in the colorectum. This study demonstrates for the first time a non-linear dose-response for alkylation-induced colorectal carcinogenesis and reveals DNA repair by MGMT, but not AAG, as a key node in determining a carcinogenic threshold. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. A no-hair theorem for stars in Horndeski theories

    Energy Technology Data Exchange (ETDEWEB)

    Lehébel, A.; Babichev, E.; Charmousis, C., E-mail: antoine.lehebel@th.u-psud.fr, E-mail: eugeny.babichev@th.u-psud.fr, E-mail: christos.charmousis@th.u-psud.fr [Laboratoire de Physique Théorique, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay (France)

    2017-07-01

    We consider a generic scalar-tensor theory involving a shift-symmetric scalar field and minimally coupled matter fields. We prove that the Noether current associated with shift-symmetry vanishes in regular, spherically symmetric and static spacetimes. We use this fact to prove the absence of scalar hair for spherically symmetric and static stars in Horndeski and beyond theories. We carefully detail the validity of this no-hair theorem.

  1. Near-threshold deuteron photodisintegration: An indirect determination of the Gerasimov-Drell-Hearn sum rule and forward spin polarizability (γ0) for the deuteron at low energies

    International Nuclear Information System (INIS)

    Ahmed, M. W.; Blackston, M. A.; Perdue, B. A.; Tornow, W.; Weller, H. R.; Norum, B.; Sawatzky, B.; Prior, R. M.; Spraker, M. C.

    2008-01-01

    It is shown that a measurement of the analyzing power obtained with linearly polarized γ-rays and an unpolarized target can provide an indirect determination of two physical quantities. These are the Gerasimov-Drell-Hearn (GDH) sum rule integrand for the deuteron and the sum rule integrand for the forward spin polarizability (γ0) near photodisintegration threshold. An analysis of data for the d(γ⃗,n)p reaction and other experiments is presented. A fit to the world data analyzed in this manner gives a GDH integral value of -603 ± 43 μb between the photodisintegration threshold and 6 MeV. This result is the first confirmation of the large contribution of the 1S0 (M1) transition predicted for the deuteron near photodisintegration threshold. In addition, a sum rule value of 3.75 ± 0.18 fm^4 for γ0 is obtained between photodisintegration threshold and 6 MeV. This is a first indirect confirmation of the leading-order effective field theory prediction for the forward spin polarizability of the deuteron

  2. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
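    A minimal version of the DCT-plus-thresholding idea mentioned in the abstract can be written with an orthonormal DCT-II matrix built by hand; the 8 x 8 block, the threshold of 20, and the random data below are illustrative assumptions, not material from the presentation.

        # Illustrative sketch: 2-D DCT of an 8x8 block, discard small coefficients, invert.
        import numpy as np

        def dct_matrix(n=8):
            k = np.arange(n)
            C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            C[0, :] /= np.sqrt(2)
            return C * np.sqrt(2 / n)                        # orthonormal: C @ C.T = identity

        rng = np.random.default_rng(2)
        block = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in for an image block

        C = dct_matrix()
        coeffs = C @ block @ C.T                             # forward 2-D DCT
        kept = np.where(np.abs(coeffs) > 20.0, coeffs, 0.0)  # thresholding step
        restored = C.T @ kept @ C                            # inverse 2-D DCT

        print("coefficients kept:", int(np.count_nonzero(kept)), "of 64")
        print("max abs reconstruction error:", round(float(np.abs(block - restored).max()), 2))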

  3. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  4. A primer on Hilbert space theory linear spaces, topological spaces, metric spaces, normed spaces, and topological groups

    CERN Document Server

    Alabiso, Carlo

    2015-01-01

    This book is an introduction to the theory of Hilbert space, a fundamental tool for non-relativistic quantum mechanics. Linear, topological, metric, and normed spaces are all addressed in detail, in a rigorous but reader-friendly fashion. The rationale for an introduction to the theory of Hilbert space, rather than a detailed study of Hilbert space theory itself, resides in the very high mathematical difficulty of even the simplest physical case. Within an ordinary graduate course in physics there is insufficient time to cover the theory of Hilbert spaces and operators, as well as distribution theory, with sufficient mathematical rigor. Compromises must be found between full rigor and practical use of the instruments. The book is based on the author's lessons on functional analysis for graduate students in physics. It will equip the reader to approach Hilbert space and, subsequently, rigged Hilbert space, with a more practical attitude. With respect to the original lectures, the mathematical flavor in all sub...

  5. Phi photoproduction near threshold with Okubo-Zweig-Iizuka evading phi NN interactions

    CERN Document Server

    William, R A

    1998-01-01

    Existing intermediate- and high-energy phi-photoproduction data are consistent with purely diffractive production (i.e., Pomeron exchange). However, near threshold (1.574 GeV K+K- decay angular distribution. We stress the importance of measurements with linearly polarized photons near the phi threshold to separate natural and unnatural parity exchange mechanisms. Approved and planned phi photoproduction and electroproduction experiments at Jefferson Lab will help establish the relative dynamical contributions near threshold and clarify outstanding theoretical issues related to apparent Okubo-Zweig-Iizuka violations.

  6. Stimulated Brillouin scattering threshold in fiber amplifiers

    International Nuclear Information System (INIS)

    Liang Liping; Chang Liping

    2011-01-01

    Based on the wave coupling theory and an evolution model of the critical pump power (Brillouin threshold) for stimulated Brillouin scattering (SBS) in double-clad fiber amplifiers, the influence of signal bandwidth, fiber-core diameter and amplifier gain on the SBS threshold is simulated theoretically. Experimental measurements of SBS are also presented for ytterbium-doped double-clad fiber amplifiers with single-frequency, hundred-nanosecond pulse amplification. For input pulses with a duration of 200 ns and a repetition rate of 1 Hz, distortion of the forward amplified pulse is observed when the pulse energy reaches 660 nJ and the peak power reaches 3.3 W, and a narrow backward SBS pulse appears; at this point the pulse peak power equals the SBS threshold. Good agreement is shown between the modeled and experimental data. (authors)
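    For orientation, the order of magnitude of an SBS threshold can be estimated with the commonly quoted small-signal criterion (critical gain of about 21) and a simple linewidth-broadening factor. The parameter values below are generic assumptions for a silica fiber, not the coupled-wave model or the data of this paper.

        # Back-of-the-envelope SBS threshold estimate (assumed parameters, not the paper's).
        import numpy as np

        g_B     = 5e-11            # peak Brillouin gain [m/W], typical for silica
        A_eff   = 4e-10            # effective mode area [m^2]
        alpha   = 1.5e-3 / 4.343   # fiber loss [1/m] (about 1.5 dB/km)
        L       = 10.0             # fiber length [m]
        dnu_B   = 50e6             # Brillouin gain bandwidth [Hz]
        dnu_sig = 5e6              # signal linewidth [Hz]

        L_eff = (1 - np.exp(-alpha * L)) / alpha          # effective interaction length
        broadening = 1 + dnu_sig / dnu_B                  # gain reduction for broad signals
        P_th = 21 * A_eff / (g_B * L_eff) * broadening    # Smith-type threshold criterion
        print(f"estimated SBS threshold: {P_th:.1f} W")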

  7. Cosmological large-scale structures beyond linear theory in modified gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)

    2011-06-01

    We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.

  8. Threshold stoichiometry for beam induced nitrogen depletion of SiN

    International Nuclear Information System (INIS)

    Timmers, H.; Weijers, T.D.M.; Elliman, R.G.; Uribasterra, J.; Whitlow, H.J.; Sarwe, E.-L.

    2002-01-01

    Measurements of the stoichiometry of silicon nitride films as a function of the number of incident ions using heavy ion elastic recoil detection (ERD) show that beam-induced nitrogen depletion depends on the projectile species, the beam energy, and the initial stoichiometry. A threshold stoichiometry exists in the range 1.3 > N/Si ≥ 1, below which the films are stable against nitrogen depletion. Above this threshold, depletion is essentially linear with incident fluence. The depletion rate correlates non-linearly with the electronic energy loss of the projectile ion in the film. Sufficiently long exposure of nitrogen-rich films renders ineffective the mechanism that prevents depletion of nitrogen-poor films. At the cost of depth resolution, nitrogen depletion from SiN films during ERD analysis can be reduced significantly by using projectile beams with low atomic numbers.

  9. Chaos theory for clinical manifestations in multiple sclerosis.

    Science.gov (United States)

    Akaishi, Tetsuya; Takahashi, Toshiyuki; Nakashima, Ichiro

    2018-06-01

    Multiple sclerosis (MS) is a demyelinating disease which characteristically shows repeated relapses and remissions occurring irregularly in the central nervous system. At present, the pathological mechanism of MS is unknown and we have no theories or mathematical models to explain its disseminated patterns in time and space. In this paper, we present a new theoretical model, from the viewpoint of complex systems, that uses a chaos model to reproduce and explain the non-linear clinical and pathological manifestations of MS. First, we adopted a discrete logistic equation with non-linear dynamics to provide a scalar quantity for the strength of the pathogenic factor at a specific location of the central nervous system at a specific time, reflecting the negative feedback in immunity. Then, we set distinct minimum thresholds on this scalar quantity for demyelination strong enough to cause clinical relapses and for cerebral atrophy. With this simple model, we could theoretically reproduce all the subtypes of relapsing-remitting MS, primary progressive MS, and secondary progressive MS. With the sensitivity to initial conditions and to minute changes in parameters characteristic of chaos theory, we could also reproduce the spatial dissemination. Such chaotic behavior could be reproduced with other similar upward-convex functions given an appropriate set of initial conditions and parameters. In conclusion, by applying chaos theory to a three-dimensional scalar field representing the central nervous system, we can reproduce the non-linear clinical course and explain the hitherto unexplained dissemination in time and space seen in MS patients. Copyright © 2018 Elsevier Ltd. All rights reserved.
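    The kind of model sketched in this abstract can be mimicked with a few lines of code: a logistic map for the "pathogenic strength" plus two fixed thresholds. The parameter values and thresholds below are purely illustrative assumptions, not the ones used by the authors.

        # Illustrative sketch: logistic map with separate relapse and atrophy thresholds.
        r, x = 3.9, 0.21                       # growth parameter (chaotic regime), initial value
        relapse_thr, atrophy_thr = 0.85, 0.60  # hypothetical thresholds

        relapses, atrophy_load = [], 0.0
        for t in range(200):
            x = r * x * (1 - x)                # logistic update (negative feedback of immunity)
            if x > relapse_thr:
                relapses.append(t)             # strong enough to appear as a clinical relapse
            if x > atrophy_thr:
                atrophy_load += x - atrophy_thr

        print("relapse time steps:", relapses[:10], "...")
        print(f"cumulative atrophy load: {atrophy_load:.2f}")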

  10. Comparison of Classical and Robust Estimates of Threshold Auto-regression Parameters

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2017-01-01

    Full Text Available The study object is the first-order threshold autoregression model with a single threshold located at zero. The model describes a stochastic temporal series with discrete time by means of a piecewise linear equation consisting of two classical linear first-order autoregressive equations. One of these equations is used to calculate the running value of the temporal series; the control variable that determines the choice between the two equations is the sign of the previous value of the same series. The first-order threshold autoregressive model with a single threshold depends on two real parameters that coincide with the coefficients of the piecewise linear threshold equation. These parameters are assumed to be unknown. The paper studies the least-squares estimate, the least-modules estimate, and M-estimates of these parameters. The aim of the paper is a comparative study of the accuracy of these estimates for the main probability distributions of the updating process of the threshold autoregressive equation: the normal, contaminated normal, logistic and double-exponential distributions, Student's distributions with different numbers of degrees of freedom, and the Cauchy distribution. As the measure of accuracy of each estimate, its variance was chosen, which measures the scattering of the estimate around the estimated parameter; of two estimates, the one with the smaller variance was considered the better. The variance was estimated by computer simulation. The least-modules estimate was computed by an iteratively reweighted least-squares method, and the M-estimates by the deformable-polyhedron (Nelder-Mead) method; for the least-squares estimate, an explicit analytic expression was used. It turned out that the least-squares estimate is best only for the normal distribution of the updating process. For the logistic distribution and the Student's distribution with the
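    The model and the simplest of the compared estimates can be illustrated as follows. This sketch simulates a first-order threshold autoregression with the threshold at zero and recovers the two regime coefficients by ordinary least squares, regime by regime; the coefficients, sample size, and normal innovations are illustrative choices, not those of the study.

        # Illustrative sketch: simulate a TAR(1) series and estimate its two coefficients.
        import numpy as np

        def simulate_tar(a, b, n, rng):
            x, eps = np.zeros(n), rng.standard_normal(n)
            for t in range(1, n):
                coef = a if x[t - 1] >= 0 else b      # regime chosen by sign of previous value
                x[t] = coef * x[t - 1] + eps[t]
            return x

        def least_squares_estimates(x):
            prev, curr = x[:-1], x[1:]
            pos = prev >= 0
            a_hat = np.sum(prev[pos] * curr[pos]) / np.sum(prev[pos] ** 2)
            b_hat = np.sum(prev[~pos] * curr[~pos]) / np.sum(prev[~pos] ** 2)
            return a_hat, b_hat

        rng = np.random.default_rng(3)
        series = simulate_tar(0.6, -0.4, 5000, rng)
        print("least-squares estimates (true values 0.6, -0.4):", least_squares_estimates(series))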

  11. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    Science.gov (United States)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  12. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  13. Non linear system become linear system

    Directory of Open Access Journals (Sweden)

    Petre Bucur

    2007-01-01

    Full Text Available The present paper addresses the theory and practice of systems, with a focus on non-linear systems and their applications. We aimed to integrate these systems in order to elaborate their response, as well as to highlight some outstanding features.

  14. Applicability of linear and non-linear potential flow models on a Wavestar float

    DEFF Research Database (Denmark)

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion, non-linear hydrostatic and Froude-Krylov forces, as well as a model based on non-linear potential flow theory and the weak-scatterer hypothesis, are successively considered. Simple tests, such as dip tests, decay tests and captive tests, highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave action and without control action, limited to small-amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.

  15. On the theory of the two-photon linear photovoltaic effect in n-GaP

    Energy Technology Data Exchange (ETDEWEB)

    Rasulov, V. R.; Rasulov, R. Ya., E-mail: r-rasulov51@mail.ru [Fergana State University (Uzbekistan)

    2016-02-15

    A quantitative theory of the diagonal (ballistic) and nondiagonal (shift) band index contributions to the two-photon current of the linear photovoltaic effect in a semiconductor with a complex band due to the asymmetry of events of electron scattering at phonons and photons is developed. It is shown that processes caused by the simultaneous absorption of two photons do not contribute to the ballistic photocurrent in n-GaP. This is due to the fact that, in this case, there is no asymmetric distribution of the momentum of electrons excited with photons; this distribution arises upon the sequential absorption of two photons with the involvement of LO phonons. It is demonstrated that the temperature dependence of the shift contribution to the two-photon photocurrent in n-GaP is determined by the temperature dependence of the light-absorption coefficient caused by direct optical transitions of electrons between subbands X_1 and X_3. It is shown that the spectral dependence of the photocurrent has a feature in the light frequency range ω → Δ/2ℏ, which is related to the hump-like shape of subband X_1 in n-GaP and the root-type singularity of the state density determined as k_ω^(-1) = (2ℏω - Δ)^(-1/2), where Δ is the energy gap between subbands X_1 and X_3. The spectral and temperature dependences of the coefficient of absorption of linearly polarized light in n-GaP are obtained with regard to the cone-shaped lower subband of the conduction band.

  16. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Yu.

    1994-01-01

    The Wannier threshold law for three-particle fragmentation is reviewed. By integrating the Schroedinger equation along a path where the reaction coordinate R is complex, anharmonic corrections to the simple power law are obtained. These corrections are found to be non-analytic in the energy E, in contrast to the expected analytic dependence upon E

  17. Thresholding of auditory cortical representation by background noise

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029

  18. Thresholding of auditory cortical representation by background noise.

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W; Zhang, Li I; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.

  19. Effects of fatigue on motor unit firing rate versus recruitment threshold relationships.

    Science.gov (United States)

    Stock, Matt S; Beck, Travis W; Defreitas, Jason M

    2012-01-01

    The purpose of this study was to examine the influence of fatigue on the average firing rate versus recruitment threshold relationships for the vastus lateralis (VL) and vastus medialis. Nineteen subjects performed ten maximum voluntary contractions of the dominant leg extensors. Before and after this fatiguing protocol, the subjects performed a trapezoid isometric muscle action of the leg extensors, and bipolar surface electromyographic signals were detected from both muscles. These signals were then decomposed into individual motor unit action potential trains. For each subject and muscle, the relationship between average firing rate and recruitment threshold was examined using linear regression analyses. For the VL, the linear slope coefficients and y-intercepts for these relationships increased and decreased, respectively, after fatigue. For both muscles, many of the motor units decreased their firing rates. With fatigue, recruitment of higher threshold motor units resulted in an increase in slope for the VL. Copyright © 2011 Wiley Periodicals, Inc.

  20. Discovery of the Linear Region of Near Infrared Diffuse Reflectance Spectra Using the Kubelka-Munk Theory

    Directory of Open Access Journals (Sweden)

    Shengyun Dai

    2018-05-01

    Full Text Available Particle size is of great importance for quantitative modeling of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as the reference method for constructing the quantitative models for each particle size. Several spectral preprocessing methods were compared, and partial least-squares (PLS) models for harpagoside were established for the particle size fractions obtained with the different preprocessing methods. The data showed that the 125-150 μm particle size fraction of Radix Scrophulariae exhibited the best prediction ability, with R²pre = 0.9513, RMSEP = 0.1029 mg·g−1, and RPD = 4.78. For the mixed-granularity calibration model, the 90-180 μm particle size fraction exhibited the best prediction ability, with R²pre = 0.8919, RMSEP = 0.1632 mg·g−1, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scatter coefficient s (particle size-dependent). The scatter coefficient s was calculated based on the Kubelka-Munk theory to study how s changes after the spectra are mathematically preprocessed. A linear relationship was observed between k/s and the absorbance A within a certain range, where the value of k/s was >4. According to this relationship, the model was more accurately constructed for the 90-180 μm particle size fraction when s was kept constant or within a small linear region. This region provides a good reference for linear modeling of diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, this region should be assessed accurately in advance to obtain a precise linear model.
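    The Kubelka-Munk relation referred to above, F(R) = (1 - R)^2 / (2R) = k/s for an optically thick layer, can be evaluated directly; the reflectance values below are arbitrary examples, not the Radix Scrophulariae data.

        # Kubelka-Munk remission function for a few example diffuse reflectance values.
        def kubelka_munk(R):
            """k/s for diffuse reflectance R of an optically thick sample."""
            return (1.0 - R) ** 2 / (2.0 * R)

        for R in (0.9, 0.5, 0.2, 0.1):
            print(f"R = {R:.2f}  ->  k/s = {kubelka_munk(R):.2f}")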

  1. Summary report of a workshop on establishing cumulative effects thresholds : a suggested approach for establishing cumulative effects thresholds in a Yukon context

    International Nuclear Information System (INIS)

    2003-01-01

    Increasingly, thresholds are being used as a land and cumulative effects assessment and management tool. To assist in the management of wildlife species such as woodland caribou, the Department of Indian and Northern Affairs (DIAND) Environment Directorate, Yukon sponsored a workshop to develop and use cumulative thresholds in the Yukon. The approximately 30 participants reviewed recent initiatives in the Yukon and other jurisdictions. The workshop is expected to help formulate a strategic vision for implementing cumulative effects thresholds in the Yukon. The key to success resides in building relationships with Umbrella Final Agreement (UFA) Boards, the Development Assessment Process (DAP), and the Yukon Environmental and Socio-Economic Assessment Act (YESAA). Broad support is required within an integrated resource management framework. The workshop featured discussions on current science and theory of cumulative effects thresholds. Potential data and implementation issues were also discussed. It was concluded that thresholds are useful and scientifically defensible. The threshold research results obtained in Alberta, British Columbia and the Northwest Territories are applicable to the Yukon. One of the best tools for establishing and tracking thresholds is habitat effectiveness. Effects must be monitored and tracked. Biologists must share their information with decision makers. Interagency coordination and assistance should be facilitated through the establishment of working groups. Regional land use plans should include thresholds. 7 refs.

  2. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    KAUST Repository

    Naveau, Philippe

    2016-04-09

    In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.

  3. Effective-medium theory for nonlinear magneto-optics in magnetic granular alloys: cubic nonlinearity

    International Nuclear Information System (INIS)

    Granovsky, Alexander B.; Kuzmichov, Michail V.; Clerc, J.-P.; Inoue, Mitsuteru

    2003-01-01

    We propose a simple effective-medium approach for calculating the effective dielectric function of a magnetic metal-insulator granular alloy in which there is a weakly nonlinear relation between the electric displacement D and the electric field E for both constituent materials, of the form D_i = ε_i^(0) E_i + χ_i^(3) |E_i|^2 E_i. We assume that the linear ε_i^(0) and cubic nonlinear χ_i^(3) dielectric functions have diagonal components, with the non-diagonal components being linear in the magnetization. For such a metal-insulator composite, magneto-optical effects depend on the light intensity, and the effective cubic dielectric function χ_eff^(3) can be significantly greater (up to 10^3 times) than that of the constituent materials. The calculation scheme is based on the Bergman and Stroud-Hui theory of nonlinear optical properties of granular matter. A giant cubic magneto-optical nonlinearity is found for composites with a metallic volume fraction close to the percolation threshold and at a resonance of the optical conductivity. It is shown that a composite may exhibit nonlinear magneto-optics even when neither constituent material has a cubic magneto-optical nonlinearity.

  4. Linear parameter-varying control for engineering applications

    CERN Document Server

    White, Andrew P; Choi, Jongeun

    2013-01-01

    The objective of this brief is to carefully illustrate a procedure of applying linear parameter-varying (LPV) control to a class of dynamic systems via a systematic synthesis of gain-scheduling controllers with guaranteed stability and performance. The existing LPV control theories rely on the use of either H-infinity or H2 norm to specify the performance of the LPV system.  The challenge that arises with LPV control for engineers is twofold. First, there is no systematic procedure for applying existing LPV control system theory to solve practical engineering problems from modeling to control design. Second, there exists no LPV control synthesis theory to design LPV controllers with hard constraints. For example, physical systems usually have hard constraints on their required performance outputs along with their sensors and actuators. Furthermore, the H-infinity and H2 performance criteria cannot provide hard constraints on system outputs. As a result, engineers in industry could find it difficult to utiliz...

  5. Linear theory for filtering nonlinear multiscale systems with model error.

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2014-07-08

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering

  6. Microscopic theory of linear and nonlinear terahertz spectroscopy of semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Steiner, Johannes

    2008-12-09

    This Thesis presents a fully microscopic theory to describe terahertz (THz)-induced processes in optically-excited semiconductors. The formation process of excitons and other quasi-particles after optical excitation has been studied in great detail for a variety of conditions. Here, the formation process is not modelled but a realistic initial many-body state is assumed. In particular, the linear THz response is reviewed and it is demonstrated that correlated quasi-particles such as excitons and plasmons can be unambiguously detected via THz spectroscopy. The focus of the investigations, however, is on situations where the optically-excited many-body state is excited by intense THz fields. While weak pulses detect the many-body state, strong THz pulses control and manipulate the quasi-particles in a way that is not accessible via conventional techniques. The nonlinear THz dynamics of exciton populations is especially interesting because similarities and differences to optics with atomic systems can be studied. (orig.)

  7. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  8. Lie algebras and linear differential equations.

    Science.gov (United States)

    Brockett, R. W.; Rahimi, A.

    1972-01-01

    Certain symmetry properties possessed by the solutions of linear differential equations are examined. For this purpose, some basic ideas from the theory of finite dimensional linear systems are used together with the work of Wei and Norman on the use of Lie algebraic methods in differential equation theory.

  9. Teaching Prospect Theory with the "Deal or No Deal" Game Show

    Science.gov (United States)

    Baker, Ardith; Bittner, Teresa; Makrigeorgis, Christos; Johnson, Gloria; Haefner, Joseph

    2010-01-01

    Recent evidence indicates that decision makers are more sensitive to potential losses than gains. Loss aversion psychology has led behavioural economists to look beyond expected utility by developing "prospect theory." We demonstrate this theory using the "Deal or No Deal" game show.

  10. Ecological thresholds: The key to successful environmental management or an important concept with no practical application?

    Science.gov (United States)

    Groffman, P.M.; Baron, Jill S.; Blett, T.; Gold, A.J.; Goodman, I.; Gunderson, L.H.; Levinson, B.M.; Palmer, Margaret A.; Paerl, H.W.; Peterson, G.D.; Poff, N.L.; Rejeski, D.W.; Reynolds, J.F.; Turner, M.G.; Weathers, K.C.; Wiens, J.

    2006-01-01

    An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

  11. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the

  12. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    Science.gov (United States)

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
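    One simple way to capture a firing threshold that co-varies with the membrane potential, as described above, is a leaky integrate-and-fire neuron whose threshold relaxes toward a potential-dependent target. The sketch below is a generic illustration with invented parameter values; it is not the Generalized Integrate-and-Fire model fitted in the study.

        # Illustrative leaky integrate-and-fire neuron with an adaptive (moving) threshold.
        import numpy as np

        dt, T = 1e-4, 1.0                                 # time step and duration [s]
        tau_m, tau_th = 0.02, 0.05                        # membrane and threshold time constants [s]
        v_rest, v_reset, th0 = -70e-3, -65e-3, -50e-3     # resting, reset, baseline threshold [V]
        coupling = 0.6                                    # how strongly the threshold follows v
        R = 1e7                                           # membrane resistance [ohm]

        rng = np.random.default_rng(4)
        I = 6e-9 + 2e-9 * rng.standard_normal(int(T / dt))   # fluctuating input current [A]

        v, th, spikes = v_rest, th0, []
        for i, current in enumerate(I):
            v += dt / tau_m * (v_rest - v + R * current)               # membrane integration
            th += dt / tau_th * (th0 + coupling * (v - v_rest) - th)   # threshold tracks v
            if v >= th:
                spikes.append(i * dt)
                v = v_reset
        print(f"{len(spikes)} spikes in {T:.1f} s ({len(spikes) / T:.0f} Hz)")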

  13. Two percolation thresholds due to geometrical effects: experimental and simulated results

    International Nuclear Information System (INIS)

    Nettelblad, B; Martensson, E; Oenneby, C; Gaefvert, U; Gustafsson, A

    2003-01-01

    The electrical properties of a mixture of ethylene-propylene-diene monomer rubber and silicon carbide (SiC) have been measured as a function of filler concentration. It was found that mixtures containing angular SiC grains have a conductivity that displays not one, but two percolation thresholds. Different types of contacts between the conducting particles, being represented by edge and face connections, respectively, can explain the phenomenon. The two percolation thresholds are obtained at volume fractions of about 0.25 and 0.40, respectively. These values are higher than those predicted by theory, which can be explained by dispersion effects with only one phase being granular and the other being continuous. The value of the conductivity at the central plateau was found to be close to the geometric mean of the limiting conductivities at low and high concentrations. This is in good agreement with theory. With rounded SiC grains only one threshold is obtained, which is consistent with only one type of contact. The concentration dependence of the conductivity was simulated using a three-dimensional impedance network model that incorporates both edge and face contacts. The double-threshold behaviour also appears in the calculations. By dispersing the conducting particles more evenly than random, the thresholds are shifted towards higher concentrations as observed in the experiments

  14. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude.

    Science.gov (United States)

    McDowell, J J; Wood, H M

    1984-03-01

    Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.
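    The two relations tested in this study can be stated compactly (notation assumed here for illustration: R is response rate, r reinforcement rate, k the asymptote, r_e the background reinforcement parameter, M reinforcer magnitude, a and b empirical constants):

        R = k r / (r + r_e)          (Herrnstein's hyperbola)
        1/k = a + b/M                (reciprocal of the asymptote linear in the reciprocal of magnitude)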

  15. On the Generalization of the Timoshenko Beam Model Based on the Micropolar Linear Theory: Static Case

    Directory of Open Access Journals (Sweden)

    Andrea Nobili

    2015-01-01

    Full Text Available Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton's principle. It is shown that both the Timoshenko and the (possibly modified) couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable) terms in the Lagrangian. Despite their differences, in a beam-plane theory all models bring in only one microstructural material parameter. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.

  16. Matrix theory from generalized inverses to Jordan form

    CERN Document Server

    Piziak, Robert

    2007-01-01

    Each chapter ends with a list of references for further reading. Undoubtedly, these will be useful for anyone who wishes to pursue the topics deeper. … the book has many MATLAB examples and problems presented at appropriate places. … the book will become a widely used classroom text for a second course on linear algebra. It can be used profitably by graduate and advanced level undergraduate students. It can also serve as an intermediate course for more advanced texts in matrix theory. This is a lucidly written book by two authors who have made many contributions to linear and multilinear algebra. -K.C. Sivakumar, IMAGE, No. 47, Fall 2011. Always mathematically constructive, this book helps readers delve into elementary linear algebra ideas at a deeper level and prepare for further study in matrix theory and abstract algebra. -L'enseignement Mathématique, January-June 2007, Vol. 53, No. 1-2.

  17. A generic double-curvature piezoelectric shell energy harvester: Linear/nonlinear theory and applications

    Science.gov (United States)

    Zhang, X. F.; Hu, S. D.; Tzou, H. S.

    2014-12-01

    Converting vibration energy to useful electric energy has attracted much attention in recent years. Based on the electromechanical coupling of piezoelectricity, distributed piezoelectric zero-curvature type (e.g., beams and plates) energy harvesters have been proposed and evaluated. The objective of this study is to develop a generic linear and nonlinear piezoelectric shell energy harvesting theory based on a double-curvature shell. The generic piezoelectric shell energy harvester consists of an elastic double-curvature shell and piezoelectric patches laminated on its surface(s). With a current model in the closed-circuit condition, output voltages and energies across a resistive load are evaluated when the shell is subjected to harmonic excitations. Steady-state voltage and power outputs across the resistive load are calculated at resonance for each shell mode. The piezoelectric shell energy harvesting mechanism can be simplified to shell (e.g., cylindrical, conical, spherical, paraboloidal, etc.) and non-shell (beam, plate, ring, arch, etc.) distributed harvesters using two Lamé parameters and two curvature radii of the selected harvester geometry. To demonstrate the utility and simplification procedures, the generic linear/nonlinear shell energy harvester mechanism is simplified to three specific structures, i.e., a cantilever beam case, a circular ring case and a conical shell case. Results show the versatility of the generic linear/nonlinear shell energy harvesting mechanism and the validity of the simplification procedures.

  18. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Y.

    1995-01-01

    The Wannier threshold law for three-particle fragmentation is reviewed. By integrating the Schroedinger equation along a path where the reaction coordinate R is complex, anharmonic corrections to the simple power law are obtained. These corrections are found to be non-analytic in the energy E, in contrast to the expected analytic dependence upon E. copyright 1995 American Institute of Physics

  19. Time-dependent density functional theory of open quantum systems in the linear-response regime.

    Science.gov (United States)

    Tempel, David G; Watson, Mark A; Olivares-Amaya, Roberto; Aspuru-Guzik, Alán

    2011-02-21

    Time-dependent density functional theory (TDDFT) has recently been extended to describe many-body open quantum systems evolving under nonunitary dynamics according to a quantum master equation. In the master equation approach, electronic excitation spectra are broadened and shifted due to relaxation and dephasing of the electronic degrees of freedom by the surrounding environment. In this paper, we develop a formulation of TDDFT linear-response theory (LR-TDDFT) for many-body electronic systems evolving under a master equation, yielding broadened excitation spectra. This is done by mapping an interacting open quantum system onto a noninteracting open Kohn-Sham system yielding the correct nonequilibrium density evolution. A pseudoeigenvalue equation analogous to the Casida equations of the usual LR-TDDFT is derived for the Redfield master equation, yielding complex energies and Lamb shifts. As a simple demonstration, we calculate the spectrum of a C(2 +) atom including natural linewidths, by treating the electromagnetic field vacuum as a photon bath. The performance of an adiabatic exchange-correlation kernel is analyzed and a first-order frequency-dependent correction to the bare Kohn-Sham linewidth based on the Görling-Levy perturbation theory is calculated.

  20. Phantom solution in a non-linear Israel-Stewart theory

    Science.gov (United States)

    Cruz, Miguel; Cruz, Norman; Lepe, Samuel

    2017-06-01

    In this paper we present a phantom solution with a big rip singularity in a non-linear regime of the Israel-Stewart formalism. In this framework it is possible to extend this causal formalism in order to describe accelerated expansion, where the assumption of near equilibrium is no longer valid. We assume a flat universe filled with a single viscous fluid ruled by a barotropic EoS, p = ωρ, which can represent a late time accelerated phase of the cosmic evolution. The solution allows the phantom divide to be crossed without invoking an exotic matter fluid, and the effective EoS parameter is always less than -1 and constant in time.

  1. Linear algebra

    CERN Document Server

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  2. Multipole surface solitons supported by the interface between linear media and nonlocal nonlinear media

    International Nuclear Information System (INIS)

    Shi, Zhiwei; Li, Huagang; Guo, Qi

    2012-01-01

    We address multipole surface solitons occurring at the interface between a linear medium and a nonlocal nonlinear medium. We show the impact of nonlocality, the propagation constant, and the linear index difference of two media on the properties of the surface solitons. We find that there exists a threshold value of the degree of nonlocality for a given linear index difference of the two media; only when the degree of nonlocality exceeds this value can the multipole surface solitons be stable. -- Highlights: ► We show the impact of nonlocality and the linear index difference of two media on the properties of the surface solitons. ► For the surface solitons, only when the degree of the nonlocality goes beyond a threshold value can they be stable. ► Both the number of poles and the index difference of two media influence the threshold value.

  3. Fault tolerance in parity-state linear optical quantum computing

    International Nuclear Information System (INIS)

    Hayes, A. J. F.; Ralph, T. C.; Haselgrove, H. L.; Gilchrist, Alexei

    2010-01-01

    We use a combination of analytical and numerical techniques to calculate the noise threshold and resource requirements for a linear optical quantum computing scheme based on parity-state encoding. Parity-state encoding is used at the lowest level of code concatenation in order to efficiently correct errors arising from the inherent nondeterminism of two-qubit linear-optical gates. When combined with teleported error-correction (using either a Steane or Golay code) at higher levels of concatenation, the parity-state scheme is found to achieve a saving of approximately three orders of magnitude in resources when compared to the cluster state scheme, at a cost of a somewhat reduced noise threshold.

  4. Interlocking-induced stiffness in stochastically microcracked materials beyond the transport percolation threshold

    Science.gov (United States)

    Picu, R. C.; Pal, A.; Lupulescu, M. V.

    2016-04-01

    We study the mechanical behavior of two-dimensional, stochastically microcracked continua in the range of crack densities close to, and above, the transport percolation threshold. We show that these materials retain stiffness up to crack densities much larger than the transport percolation threshold due to topological interlocking of sample subdomains. Even with a linear constitutive law for the continuum, the mechanical behavior becomes nonlinear in the range of crack densities bounded by the transport and stiffness percolation thresholds. The effect is due to the fractal nature of the fragmentation process and is not linked to the roughness of individual cracks.

  5. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  6. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  7. Photocatalytic NO_x abatement. Theory, applications, current research, and limitations

    International Nuclear Information System (INIS)

    Bloh, Jonathan Z.

    2017-01-01

    Nitrogen oxides are one of the major air pollutants that threaten our air quality and health. As a consequence, increasingly stricter regulations are in place forcing action to reduce the concentration of these dangerous compounds. Conventional methods of reducing the NO_x pollution level are reducing the emission directly at the source or restrictive measures such as low emission zones. However, there are recent reports questioning the efficacy of the strategy to reduce ambient NO_x levels solely by reducing their emissions and existing threshold values are still frequently exceeded in many European cities. Semiconductor photocatalysis presents an appealing alternative capable of removing NO_x and other air pollutants from the air once they have already been released and dispersed. Recent field tests have shown that a reduction of a few percent in NO_x values is possible with available photocatalysts. Current research focuses on further increasing the catalysts' efficacy as well as their selectivity to suppress the formation of undesired by-products. Especially using these improved materials, photocatalytic NO_x abatement could prove a very valuable contributor to better air quality.

  8. Hyper-arousal decreases human visual thresholds.

    Directory of Open Access Journals (Sweden)

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants participated in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2°C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  9. Analysis of ecological thresholds in a temperate forest undergoing dieback.

    Directory of Open Access Journals (Sweden)

    Philip Martin

    Full Text Available Positive feedbacks in drivers of degradation can cause threshold responses in natural ecosystems. Though threshold responses have received much attention in studies of aquatic ecosystems, they have been neglected in terrestrial systems, such as forests, where the long time-scales required for monitoring have impeded research. In this study we explored the role of positive feedbacks in a temperate forest that has been monitored for 50 years and is undergoing dieback, largely as a result of death of the canopy dominant species (Fagus sylvatica, beech). Statistical analyses showed strong non-linear losses in basal area for some plots, while others showed relatively gradual change. Beech seedling density was positively related to canopy openness, but a similar relationship was not observed for saplings, suggesting a feedback whereby mortality in areas with high canopy openness was elevated. We combined this observation with empirical data on size- and growth-mediated mortality of trees to produce an individual-based model of forest dynamics. We used this model to simulate changes in the structure of the forest over 100 years under scenarios with different juvenile and mature mortality probabilities, as well as a positive feedback between seedling and mature tree mortality. This model produced declines in forest basal area when critical juvenile and mature mortality probabilities were exceeded. Feedbacks in juvenile mortality caused a greater reduction in basal area relative to scenarios with no feedback. Non-linear, concave declines of basal area occurred only when mature tree mortality was 3-5 times higher than rates observed in the field. Our results indicate that the longevity of trees may help to buffer forests against environmental change and that the maintenance of old, large trees may aid the resilience of forest stands. In addition, our work suggests that dieback of forests may be avoidable providing pressures on mature and juvenile trees do
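
    A heavily simplified, non-spatial sketch of the kind of individual-based simulation described above is given here. The mortality probabilities, recruitment rate and feedback strength are invented for illustration and are not the parameters fitted in the study.

    import random

    def simulate_stand(years=100, n_mature=500, p_mature=0.01,
                       p_juvenile=0.05, recruits_per_year=30, feedback=0.5):
        """Toy stand model in which canopy openness raises juvenile mortality."""
        initial = n_mature
        juveniles = 200
        history = []
        for _ in range(years):
            openness = 1.0 - n_mature / initial                 # crude canopy-openness proxy
            # mature trees die with a fixed annual probability
            n_mature -= sum(random.random() < p_mature for _ in range(n_mature))
            # juvenile mortality increases with openness (the positive feedback)
            p_juv = min(1.0, max(0.0, p_juvenile + feedback * openness))
            juveniles -= sum(random.random() < p_juv for _ in range(juveniles))
            # a small fraction of juveniles is promoted to the canopy, new recruits arrive
            promoted = int(0.02 * juveniles)
            juveniles += recruits_per_year - promoted
            n_mature += promoted
            history.append(n_mature)
        return history

    print(simulate_stand()[-1])   # mature trees remaining after 100 years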

  10. Lattice cluster theory of associating polymers. I. Solutions of linear telechelic polymer chains.

    Science.gov (United States)

    Dudowicz, Jacek; Freed, Karl F

    2012-02-14

    The lattice cluster theory (LCT) for the thermodynamics of a wide array of polymer systems has been developed by using an analogy to Mayer's virial expansions for non-ideal gases. However, the high-temperature expansion inherent to the LCT has heretofore precluded its application to systems exhibiting strong, specific "sticky" interactions. The present paper describes a reformulation of the LCT necessary to treat systems with both weak and strong, "sticky" interactions. This initial study concerns solutions of linear telechelic chains (with stickers at the chain ends) as the self-assembling system. The main idea behind this extension of the LCT lies in the extraction of terms associated with the strong interactions from the cluster expansion. The generalized LCT for sticky systems reduces to the quasi-chemical theory of hydrogen bonding of Panyioutou and Sanchez when correlation corrections are neglected in the LCT. A diagrammatic representation is employed to facilitate the evaluation of the corrections to the zeroth-order approximation from short range correlations. © 2012 American Institute of Physics

  11. Applicability of linearized-theory attached-flow methods to design and analysis of flap systems at low speeds for thin swept wings with sharp leading edges

    Science.gov (United States)

    Carlson, Harry W.; Darden, Christine M.

    1987-01-01

    Low-speed experimental force and data on a series of thin swept wings with sharp leading edges and leading and trailing-edge flaps are compared with predictions made using a linearized-theory method which includes estimates of vortex forces. These comparisons were made to assess the effectiveness of linearized-theory methods for use in the design and analysis of flap systems in subsonic flow. Results demonstrate that linearized-theory, attached-flow methods (with approximate representation of vortex forces) can form the basis of a rational system for flap design and analysis. Even attached-flow methods that do not take vortex forces into account can be used for the selection of optimized flap-system geometry, but design-point performance levels tend to be underestimated unless vortex forces are included. Illustrative examples of the use of these methods in the design of efficient low-speed flap systems are included.

  12. State space and input-output linear systems

    CERN Document Server

    Delchamps, David F

    1988-01-01

    It is difficult for me to forget the mild sense of betrayal I felt some ten years ago when I discovered, with considerable dismay, that my two favorite books on linear system theory - Desoer's Notes for a Second Course on Linear Systems and Brockett's Finite Dimensional Linear Systems - were both out of print. Since that time, of course, linear system theory has undergone a transformation of the sort which always attends the maturation of a theory whose range of applicability is expanding in a fashion governed by technological developments and by the rate at which such advances become a part of engineering practice. The growth of the field has inspired the publication of some excellent books; the encyclopedic treatises by Kailath and Chen, in particular, come immediately to mind. Nonetheless, I was inspired to write this book primarily by my practical needs as a teacher and researcher in the field. For the past five years, I have taught a one semester first year graduate level linear system theory course i...

  13. Computational gestalts and perception thresholds.

    Science.gov (United States)

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt" from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.

  14. Threshold Studies of the Microwave Instability in Electron Storage Rings

    International Nuclear Information System (INIS)

    Bane, Karl

    2010-01-01

    We use a Vlasov-Fokker-Planck program and a linearized Vlasov solver to study the microwave instability threshold of impedance models: (1) a Q = 1 resonator and (2) shielded coherent synchrotron radiation (CSR), and find the results of the two programs agree well. For shielded CSR we show that only two dimensionless parameters, the shielding parameter Π and the strength parameter S_csr, are needed to describe the system. We further show that there is a strong instability associated with CSR, and that the threshold, to good approximation, is given by (S_csr)_th = 0.5 + 0.12Π. In particular, this means that shielding has little effect in stabilizing the beam for Π ∼ -3/2. We, in addition, find another instability in the vicinity of Π = 0.7 with a lower threshold, (S_csr)_th ∼ 0.2. We find that the threshold to this instability depends strongly on damping time, (S_csr)_th ∼ τ_p^(-1/2), and that the tune spread at threshold is small - both hallmarks of a weak instability.
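
    The fitted threshold quoted above can be wrapped in a small helper; the formula is taken directly from the abstract, while the example numbers are arbitrary.

    def csr_threshold(shielding_pi):
        """Approximate strong-instability threshold (S_csr)_th = 0.5 + 0.12*Pi."""
        return 0.5 + 0.12 * shielding_pi

    def is_unstable(s_csr, shielding_pi):
        """True if the CSR strength parameter exceeds the fitted threshold."""
        return s_csr > csr_threshold(shielding_pi)

    # Example with arbitrary parameter values
    print(csr_threshold(2.0))        # 0.74
    print(is_unstable(0.9, 2.0))     # True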

  15. Justifying threshold voltage definition for undoped body transistors through 'crossover point' concept

    International Nuclear Information System (INIS)

    Baruah, Ratul Kumar; Mahapatra, Santanu

    2009-01-01

    Two different definitions, one potential based and the other charge based, are used in the literature to define the threshold voltage of undoped body symmetric double gate transistors. This paper, by introducing a novel concept of crossover point, proves that the charge based definition is more accurate than the potential based definition. It is shown that for a given channel length the potential based definition predicts an anomalous change in threshold voltage with body thickness variation, while the charge based definition results in a monotonic change. The threshold voltage is then extracted from drain current versus gate voltage characteristics using linear extrapolation, transconductance and match-point methods. In all three cases it is found that the trend of threshold voltage variation supports the charge based definition.
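
    Of the three extraction methods mentioned, linear extrapolation is the simplest to sketch: locate the gate voltage of maximum transconductance and extrapolate the tangent of the Id-Vg curve to zero drain current. The code below is a generic illustration with synthetic data, not the authors' implementation.

    import numpy as np

    def vt_linear_extrapolation(vg, id_):
        """Threshold voltage from the Id-Vg tangent at the point of maximum gm."""
        gm = np.gradient(id_, vg)            # transconductance dId/dVg
        i = np.argmax(gm)                    # bias point of maximum gm
        return vg[i] - id_[i] / gm[i]        # intercept of the tangent with Id = 0

    # Synthetic Id-Vg curve: essentially linear above ~0.4 V with a smooth turn-on
    vg = np.linspace(0.0, 1.5, 151)
    id_ = 1e-4 * 0.02 * np.log1p(np.exp((vg - 0.4) / 0.02))
    print(f"extracted Vt ~ {vt_linear_extrapolation(vg, id_):.2f} V")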

  16. Testing for a Debt-Threshold Effect on Output Growth.

    Science.gov (United States)

    Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki

    2017-12-01

    Using the Reinhart-Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post-war sample suggest that the debt threshold for economic growth may exist around a relatively small debt-to-GDP ratio of 30 per cent. Furthermore, countries with debt-to-GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median.
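
    The threshold test described above can be illustrated with a simple grid search that splits the sample at each candidate debt ratio and keeps the split minimizing the residual sum of squares. The data below are simulated with a built-in break at 30 per cent; they are not the Reinhart-Rogoff data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated country-year data: growth drops by 1 point above a 30% debt ratio
    debt = rng.uniform(0, 150, 2000)                        # debt-to-GDP ratio (%)
    growth = 3.0 - 1.0 * (debt > 30) + rng.normal(0, 1.5, debt.size)

    def ssr_with_break(threshold):
        """Sum of squared residuals when each regime is fitted by its own mean."""
        lo, hi = growth[debt <= threshold], growth[debt > threshold]
        return ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()

    candidates = np.arange(10.0, 140.0, 1.0)
    best = candidates[np.argmin([ssr_with_break(c) for c in candidates])]
    print(f"estimated debt threshold ~ {best:.0f}% of GDP")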

  17. Repair and dose-response at low doses

    International Nuclear Information System (INIS)

    Totter, J.R.; Weinberg, A.M.

    1977-04-01

    The DNA of each individual is subject to formation of some 2-4 x 10^14 ion pairs during the first 30 years of life from background radiation. If a single hit is sufficient to cause cancer, as is implicit in the linear, no-threshold theories, it is unclear why all individuals do not succumb to cancer, unless repair mechanisms operate to remove the damage. We describe a simple model in which the exposed population displays a distribution of repair thresholds. The dose-response at low dose is shown to depend on the shape of the threshold distribution at low thresholds. If the probability of zero threshold is zero, the response at low dose is quadratic. The model is used to resolve a longstanding discrepancy between observed incidence of leukemia at Nagasaki and the predictions of the usual linear hypothesis.
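
    A minimal numerical illustration of the argument, under the assumption that the population response at dose D is the fraction of individuals whose repair threshold lies below D: if the threshold density vanishes at zero, that fraction grows quadratically at low dose. The density used below is an arbitrary choice, not the distribution analysed in the report.

    import numpy as np

    def f(t):
        """Assumed repair-threshold density f(t) = t*exp(-t); note f(0) = 0."""
        return t * np.exp(-t)

    def response(dose, n=10000):
        """Fraction of the population whose repair threshold lies below the dose."""
        t = np.linspace(0.0, dose, n)
        return float(np.sum(f(t)) * (t[1] - t[0]))     # simple Riemann sum

    for d in (0.01, 0.02, 0.04):
        print(d, response(d))    # doubling the dose roughly quadruples the response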

  18. Universal squash model for optical communications using linear optics and threshold detectors

    International Nuclear Information System (INIS)

    Fung, Chi-Hang Fred; Chau, H. F.; Lo, Hoi-Kwong

    2011-01-01

    Transmission of photons through open-air or optical fibers is an important primitive in quantum-information processing. Theoretical descriptions of this process often consider single photons as information carriers and thus fail to accurately describe experimental implementations where any number of photons may enter a detector. It has been a great challenge to bridge this big gap between theory and experiments. One powerful method for achieving this goal is by conceptually squashing the received multiphoton states to single-photon states. However, until now, only a few protocols admit a squash model; furthermore, a recently proven no-go theorem appears to rule out the existence of a universal squash model. Here we show that a necessary condition presumed by all existing squash models is in fact too stringent. By relaxing this condition, we find that, rather surprisingly, a universal squash model actually exists for many protocols, including quantum key distribution, quantum state tomography, Bell's inequality testing, and entanglement verification.

  19. Threshold law for electron impact ionization in the model of Temkin and Poet

    International Nuclear Information System (INIS)

    Macek, J.H.

    1996-01-01

    The angle-Sturmian theory is used to derive the threshold law for ionization of atomic hydrogen by electron impact in the model of Temkin and Poet. In this model, the exact electron-electron interaction is replaced by its monopole term. As for Wannier's theory with the real interaction, ionization occurs only for electrons that start out nearly equidistant from the proton. Because there is a high propensity for one electron to be captured into a bound state, ionization is strongly suppressed, giving rise to a threshold law of the form σ ∝ exp[-aE^(-1/6) + bE^(1/6)], where a and b are constants. The exponential law appears to be the quantal counterpart of the classical offset of the ionization threshold. Relative energy distributions are computed and found to favor configurations with unequal energy sharing.
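
    The quoted threshold law is easy to evaluate numerically. The constants a, b and the prefactor below are arbitrary placeholders (the abstract does not give their values), so only the qualitative exponential suppression near threshold is meaningful.

    import math

    def tp_cross_section(E, a=1.0, b=1.0, C=1.0):
        """Temkin-Poet threshold law: sigma ~ C*exp(-a*E**(-1/6) + b*E**(1/6))."""
        return C * math.exp(-a * E ** (-1.0 / 6.0) + b * E ** (1.0 / 6.0))

    for E in (1e-6, 1e-4, 1e-2):
        print(E, tp_cross_section(E))    # strong suppression as E -> 0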

  20. Scaling relation for determining the critical threshold for continuum ...

    Indian Academy of Sciences (India)

    E-mail: cb.ajit@iiserpune.ac.in (Ajit C Balram); ddhar@theory.tifr.res.in ... recent accurate Monte Carlo estimates of critical threshold by Quintanilla and Ziff [Phys ... probability that a given small areal element dA contains the centre of a dropped.

  1. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  2. Bedding material affects mechanical thresholds, heat thresholds and texture preference

    Science.gov (United States)

    Moehring, Francie; O’Hara, Crystal L.; Stucky, Cheryl L.

    2015-01-01

    It has long been known that the bedding type animals are housed on can affect breeding behavior and cage environment. Yet little is known about its effects on evoked behavior responses or non-reflexive behaviors. C57BL/6 mice were housed for two weeks on one of five bedding types: Aspen Sani Chips® (standard bedding for our institute), ALPHA-Dri®, Cellu-Dri™, Pure-o’Cel™ or TEK-Fresh. Mice housed on Aspen exhibited the lowest (most sensitive) mechanical thresholds while those on TEK-Fresh exhibited 3-fold higher thresholds. While bedding type had no effect on responses to punctate or dynamic light touch stimuli, TEK-Fresh housed animals exhibited greater responsiveness in a noxious needle assay, than those housed on the other bedding types. Heat sensitivity was also affected by bedding as animals housed on Aspen exhibited the shortest (most sensitive) latencies to withdrawal whereas those housed on TEK-Fresh had the longest (least sensitive) latencies to response. Slight differences between bedding types were also seen in a moderate cold temperature preference assay. A modified tactile conditioned place preference chamber assay revealed that animals preferred TEK-Fresh to Aspen bedding. Bedding type had no effect in a non-reflexive wheel running assay. In both acute (two day) and chronic (5 week) inflammation induced by injection of Complete Freund’s Adjuvant in the hindpaw, mechanical thresholds were reduced in all groups regardless of bedding type, but TEK-Fresh and Pure-o’Cel™ groups exhibited a greater dynamic range between controls and inflamed cohorts than Aspen housed mice. PMID:26456764

  3. Cosmic no-hair conjecture in scalar–tensor theories

    Indian Academy of Sciences (India)

    We have shown that, within the context of scalar–tensor theories, the anisotropic Bianchi-type cosmological models evolve towards the de Sitter Universe. A similar result holds in the case of cosmology in the Lyra manifold. Thus the analogue of the cosmic no-hair theorem of Wald [1] holds in both cases. In fact, during inflation there ...

  4. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two of these non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.
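
    A minimal simulation of a two-regime TAR process with a plain (symmetric) ARCH(1) error term is sketched below; the asymmetric AARCH variant used in the paper would add terms that distinguish positive from negative shocks. All parameter values are illustrative.

    import numpy as np

    def simulate_tar_arch(n=1000, r=0.0, phi_low=0.6, phi_high=-0.3,
                          a0=0.1, a1=0.4, seed=0):
        """Two-regime TAR(1) process with ARCH(1) innovations (illustrative only)."""
        rng = np.random.default_rng(seed)
        y = np.zeros(n)
        e = np.zeros(n)
        for t in range(1, n):
            sigma2 = a0 + a1 * e[t - 1] ** 2                 # ARCH(1) conditional variance
            e[t] = np.sqrt(sigma2) * rng.standard_normal()
            phi = phi_low if y[t - 1] <= r else phi_high     # threshold switch
            y[t] = phi * y[t - 1] + e[t]
        return y

    print(simulate_tar_arch()[:5])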

  5. Scattering theory

    CERN Document Server

    Friedrich, Harald

    2016-01-01

    This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed, which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is k...

  6. Study of the Hearing Threshold of Dance Teachers

    Directory of Open Access Journals (Sweden)

    Nehring, Cristiane

    2015-03-01

    Full Text Available Introduction High sound pressure levels can cause hearing loss, beginning at high frequencies. Objective To analyze the hearing thresholds of dance teachers. Methods This study had a cross-sectional, observational, prospective, and descriptive design. Conventional and high-frequency hearing evaluations were performed with dance teachers and subjects in the control group. Results In all, 64 individuals were assessed, 32 in the research group and 32 in the control group. Results showed that individuals in the research group had hearing loss at frequencies between 4 and 8 kHz, but no significant difference was found between groups. Frequency analysis showed that individuals in the control group had higher thresholds than individuals in the research group at the frequency of 0.25 kHz. In the control group, men showed higher thresholds than women at the frequency of 9 kHz. Conclusion A low prevalence of hearing loss was found, with no difference between teachers and subjects from the control group. No difference was found for hearing thresholds at high frequencies between groups. Results have been partially affected by sex.

  7. Research on linear driving of wave maker; Zoha sochi no linear drive ka kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, I; Taniguchi, S; Nohara, T [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)]

    1997-10-01

    The water tank test of marine structures or submarine structures uses a wave maker to generate waves. A typical flap wave maker uses a wave-making flap that penetrates the water surface; its bottom is fixed to the tank bottom through a hinge, and its top is connected to a rod driven by a rotating servomotor for reciprocating motion of the flap. However, this driving gear using a rotating servomotor and a ball screw has some defects such as noise caused by ball rotation, backlash due to wear and limited driving speed. A linear motor with fewer friction mechanisms was thus applied to the driving gear. The performance test result of the prototype driving gear using a linear motor showed the feasibility of the linear driven wave maker. The linear driven wave maker could also achieve low noise and a simple mechanism. The sufficient durability and applicability of the linear driven wave maker mechanism were confirmed through strength calculations necessary for improving the prototype wave maker. 1 ref., 5 figs., 2 tabs.

  8. Linear dose response curves in fungi and tradescantia

    International Nuclear Information System (INIS)

    Unrau, P.

    1999-07-01

    heterozygosity (LOH) events occur because Clone 02 repairs both DSB and LCD by recombination. Clone 02 has a linear dose response for high LET radiation. Starting from the same initial yieId frequency, wild-types have a sublinear response. The sublinear response reflects a smoothly decreasing probability that 'pinks' are generated as a function of increasing high LET dose for wild-type but not Clone 02. This smoothly decreasing response would be expected for LOH in 'wild-type' humans. It reflects an increasing proportion of DNA damage being repaired by non-recombinational pathways and/or an increasing probability of cell death with increasing dose. Clone 02 at low doses and low dose rates of low LET radiation has a linear dose response, reflecting a 1/16 probability of a lesion leading to LOH, relative to high LET lesions. This differential is held to reflect: microdosimetric differences in energy deposition and, therefore, DNA damage by low and high LET radiations; the effects of lesion clustering after high LET on the probability of generating the end wild-types. While no observations have been made at very low doses and dose rates in wild-types, there is no reason to suppose that the low LET linear non-threshold dose response of Clone 02 is abnormal. The importance of the LOH somatic genetic end-point is that it reflects cancer risk in humans. The linear non-threshold low dose low LET response curves reflects either the probability that recombinational Holliday junctions are occasionally cleaved in a rare orientation to generate LOH, or the probability that low LET lesions include a small proportion of clustered events similar to high LET ionization or both. Calculations of the Poisson probability that two or more low LET lesions will be induced in the same target suggest that dose rate effects depend upon the coincidence of DNA lesions in the same target, and that the probability of LOH depends upon lesion and repair factors. But the slope of LOH in Clone 02 and all other

  9. Linear dose response curves in fungi and tradescantia

    Energy Technology Data Exchange (ETDEWEB)

    Unrau, P. [Atomic Energy of Canada Ltd., Chalk River, Ontario (Canada)]

    1999-07-15

    ;pink' loss of heterozygosity (LOH) events occur because Clone 02 repairs both DSB and LCD by recombination. Clone 02 has a linear dose response for high LET radiation. Starting from the same initial yieId frequency, wild-types have a sublinear response. The sublinear response reflects a smoothly decreasing probability that 'pinks' are generated as a function of increasing high LET dose for wild-type but not Clone 02. This smoothly decreasing response would be expected for LOH in 'wild-type' humans. It reflects an increasing proportion of DNA damage being repaired by non-recombinational pathways and/or an increasing probability of cell death with increasing dose. Clone 02 at low doses and low dose rates of low LET radiation has a linear dose response, reflecting a 1/16 probability of a lesion leading to LOH, relative to high LET lesions. This differential is held to reflect: microdosimetric differences in energy deposition and, therefore, DNA damage by low and high LET radiations; the effects of lesion clustering after high LET on the probability of generating the end wild-types. While no observations have been made at very low doses and dose rates in wild-types, there is no reason to suppose that the low LET linear non-threshold dose response of Clone 02 is abnormal. The importance of the LOH somatic genetic end-point is that it reflects cancer risk in humans. The linear non-threshold low dose low LET response curves reflects either the probability that recombinational Holliday junctions are occasionally cleaved in a rare orientation to generate LOH, or the probability that low LET lesions include a small proportion of clustered events similar to high LET ionization or both. Calculations of the Poisson probability that two or more low LET lesions will be induced in the same target suggest that dose rate effects depend upon the coincidence of DNA lesions in the same target, and that the probability of LOH depends upon lesion and repair factors. But the

  10. Many-body effects in photoreactions of light nuclei below pion threshold

    International Nuclear Information System (INIS)

    Cavinato, M.; Marangoni, M.; Saruis, A.M.

    1983-01-01

    In the present paper the reaction mechanism in photoabsorption of light nuclei below pion threshold is discussed in the framework of a self-consistent RPA theory with a Skyrme force. The role of both exchange currents in electromagnetic operators and two-body correlations in the nuclear wave function has been studied in the RPA formalism. Exchange currents in RPA calculations are related to the effective mass in the Hartree-Fock field. Comparison is made between the RPA formalism and the Gari and Hebach theory. The relative contribution of exchange currents and nuclear correlations to the photoreaction of ¹⁶O is evaluated from proton threshold up to 80 MeV. E1 and E2 multipoles are included in the calculation.

  11. Linear and Nonlinear Theories of Cosmic Ray Transport

    International Nuclear Information System (INIS)

    Shalchi, A.

    2005-01-01

    The transport of charged cosmic rays in plasma wave turbulence is a modern and interesting field of research. We are mainly interested in spatial diffusion parallel and perpendicular to a large scale magnetic field. During the last decades quasilinear theory was the standard tool for the calculation of diffusion coefficients. Through comparison with numerical simulations we found several cases where quasilinear theory is invalid. One could define three major problems of transport theory. I will demonstrate that new nonlinear theories which were proposed recently can solve at least some of these problems.

  12. Holographic collisions in confining theories

    International Nuclear Information System (INIS)

    Cardoso, Vitor; Emparan, Roberto; Mateos, David; Pani, Paolo; Rocha, Jorge V.

    2014-01-01

    We study the gravitational dual of a high-energy collision in a confining gauge theory. We consider a linearized approach in which two point particles traveling in an AdS-soliton background suddenly collide to form an object at rest (presumably a black hole for large enough center-of-mass energies). The resulting radiation exhibits the features expected in a theory with a mass gap: late-time power law tails of the form t^(-3/2), the failure of Huygens’ principle and distortion of the wave pattern as it propagates. The energy spectrum is exponentially suppressed for frequencies smaller than the gauge theory mass gap. Consequently, we observe no memory effect in the gravitational waveforms. At larger frequencies the spectrum has an upward-stairway structure, which corresponds to the excitation of the tower of massive states in the confining gauge theory. We discuss the importance of phenomenological cutoffs to regularize the divergent spectrum, and the aspects of the full non-linear collision that are expected to be captured by our approach.

  13. Gauge threshold corrections for local orientifolds

    International Nuclear Information System (INIS)

    Conlon, Joseph P.; Palti, Eran

    2009-01-01

    We study gauge threshold corrections for systems of fractional branes at local orientifold singularities and compare with the general Kaplunovsky-Louis expression for locally supersymmetric N = 1 gauge theories. We focus on branes at orientifolds of the C^3/Z_4, C^3/Z_6 and C^3/Z_6' singularities. We provide a CFT construction of these theories and compute the threshold corrections. Gauge coupling running undergoes two phases: one phase running from the bulk winding scale to the string scale, and a second phase running from the string scale to the infrared. The first phase is associated to the contribution of N = 2 sectors to the IR β functions and the second phase to the contribution of both N = 1 and N = 2 sectors. In contrast, naive application of the Kaplunovsky-Louis formula gives single running from the bulk winding mode scale. The discrepancy is resolved through 1-loop non-universality of the holomorphic gauge couplings at the singularity, induced by a 1-loop redefinition of the twisted blow-up moduli which couple differently to different gauge nodes. We also study the physics of anomalous and non-anomalous U(1)s and give a CFT description of how masses for non-anomalous U(1)s depend on the global properties of cycles.

  14. Alternative method for determining anaerobic threshold in rowers

    Directory of Open Access Journals (Sweden)

    Giovani Dos Santos Cunha

    2008-01-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2008v10n4p367 In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test what value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2max and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At maximal exercise intensity the rowers reached VO2 (58.2±4.4 ml·kg⁻¹·min⁻¹), lactate (8.2±2.1 mmol·L⁻¹), power (384±54.3 W) and RER (1.26±0.1). At the anaerobic threshold they reached VO2 (46.9±7.5 ml·kg⁻¹·min⁻¹), lactate (4.6±1.3 mmol·L⁻¹), power (300±37.8 W) and RER (0.99±0.1). Conclusions - the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.
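
    The Dmax procedure referred to above is commonly implemented by fitting a smooth curve to the lactate-versus-workload data and locating the fitted point farthest, in the perpendicular sense, from the straight line joining the first and last points. The sketch below follows that generic recipe with made-up numbers; it is not the authors' code.

    import numpy as np

    def dmax_threshold(power, lactate):
        """Workload at maximal perpendicular distance from the end-to-end chord."""
        coeffs = np.polyfit(power, lactate, 3)               # smooth third-order fit
        x = np.linspace(power[0], power[-1], 500)
        y = np.polyval(coeffs, x)
        p1 = np.array([x[0], y[0]])
        p2 = np.array([x[-1], y[-1]])
        chord = p2 - p1
        pts = np.stack([x, y], axis=1) - p1
        # perpendicular distance of each fitted point from the chord (2-D cross product)
        dist = np.abs(chord[0] * pts[:, 1] - chord[1] * pts[:, 0]) / np.linalg.norm(chord)
        return x[np.argmax(dist)]

    # Illustrative lactate curve (workload in W, lactate in mmol/L)
    power = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0])
    lactate = np.array([1.2, 1.4, 1.9, 2.8, 4.6, 8.0])
    print(f"Dmax threshold ~ {dmax_threshold(power, lactate):.0f} W")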

  15. Threshold condition for nonlinear tearing modes in tokamaks

    International Nuclear Information System (INIS)

    Zabiego, M.F.; Callen, J.D.

    1996-03-01

    Low-mode-number tearing mode nonlinear evolution is analyzed, emphasizing the need for a threshold condition to account for observations in tokamaks. The discussion is illustrated by two models recently introduced in the literature. The models can be compared with the available data and/or serve as a basis for planning some experiments in order to either test theory (by means of beta-limit scaling laws, as proposed in this paper) or attempt to control undesirable tearing modes. Introducing a threshold condition in the tearing mode stability analysis is found to reveal some bifurcation points and thus domains of intrinsic stability in the island dynamics operational space.

  16. Linear-stability theory of thermocapillary convection in a model of float-zone crystal growth

    Science.gov (United States)

    Neitzel, G. P.; Chang, K.-T.; Jankowski, D. F.; Mittelmann, H. D.

    1992-01-01

    Linear-stability theory has been applied to a basic state of thermocapillary convection in a model half-zone to determine values of the Marangoni number above which instability is guaranteed. The basic state must be determined numerically since the half-zone is of finite, O(1) aspect ratio with two-dimensional flow and temperature fields. This, in turn, means that the governing equations for disturbance quantities will remain partial differential equations. The disturbance equations are treated by a staggered-grid discretization scheme. Results are presented for a variety of parameters of interest in the problem, including both terrestrial and microgravity cases.

  17. Relativistic Multichannel Treatment of Krypton Spectra across the First Ionization Threshold

    Institute of Scientific and Technical Information of China (English)

    QU Yi-Zhi; PENG Yong-Lun

    2005-01-01

    The relativistic multichannel theory has been extended to calculate the eigen quantum defects μ_α, the transformation matrix U_iα, and the eigen dipole matrix elements D_α of krypton. The Rydberg and autoionization spectra of krypton across the first ionization threshold are calculated within the framework of multichannel quantum defect theory. Our calculated spectra are in agreement with the absolute measurement data.

  18. Basic operator theory

    CERN Document Server

    Gohberg, Israel

    2001-01-01

    application of linear operators on a Hilbert space. We begin with a chapter on the geometry of Hilbert space and then proceed to the spectral theory of compact self-adjoint operators; operational calculus is next presented as a natural outgrowth of the spectral theory. The second part of the text concentrates on Banach spaces and linear operators acting on these spaces. It includes, for example, the three basic principles of linear analysis and the Riesz-Fredholm theory of compact operators. Both parts contain plenty of applications. All chapters deal exclusively with linear problems, except for the last chapter which is an introduction to the theory of nonlinear operators. In addition to the standard topics in functional analysis, we have presented relatively recent results which appear, for example, in Chapter VII. In general, in writing this book, the authors were strongly influenced by recent developments in operator theory which affected the choice of topics, proofs and exercises. One ...

  19. Pathway to a paradigm: the linear nonthreshold dose-response model in historical context. The American Academy of Health Physics 1995 Radiology Centennial Hartman Oration.

    Science.gov (United States)

    Kathren, R L

    1996-05-01

    This paper traces the evolution of the linear nonthreshold dose-response model and its acceptance as a paradigm in radiation protection practice and risk analysis. Deterministic effects such as skin burns and even deep tissue trauma were associated with excessive exposure to x rays shortly after their discovery, and carcinogenicity was observed as early as 1902. Still, it was not until 1925 that the first protective limits were suggested. For three decades these limits were based on the concept of a tolerance dose which, if not exceeded, would result in no demonstrable harm to the individual and implicitly assumed a threshold dose below which radiation effects would be absent. After World War II, largely because of genetic concerns related to atmospheric weapons testing, radiation protection dose limits were expressed in terms of a risk based maximum permissible dose which clearly implied no threshold. The 1927 discovery by Muller of x-ray induced genetic mutations in fruit flies, linear with dose and with no apparent threshold, was an important underpinning of the standards. The linear nonthreshold dose-response model was originally used to provide an upper limit estimate of the risk, with zero being the lower limit, of low level irradiation since the dose-response curve could not be determined at low dose levels. Evidence to the contrary such as hormesis and the classic studies of the radium dial painters notwithstanding, the linear nonthreshold model gained greater acceptance and in the centennial year of the discovery of x rays stands as a paradigm although serious questions are beginning to be raised regarding its general applicability. The work includes a brief digression describing the work of x-ray protection pioneer William Rollins and concludes with a recommendation for application of a de minimis dose level in radiation protection.

  20. Preferential access to emotion under attentional blink: evidence for threshold phenomenon

    Directory of Open Access Journals (Sweden)

    Szczepanowski Remigiusz

    2015-03-01

    Full Text Available The present study provides evidence that the activation strength produced by emotional stimuli must pass a threshold level in order to be consciously perceived, contrary to the assumption of continuous quality of representation. An analysis of receiver operating characteristics (ROC) for attentional blink performance was used to distinguish between two (continuous vs. threshold) models of emotion perception by inspecting their different ROC shapes. Across all conditions, the results showed that performance in the attentional blink task was better described by the two-limb ROC predicted by the Krantz threshold model than by the curvilinear ROC implied by signal-detection theory.

  1. A factorization approach to next-to-leading-power threshold logarithms

    Energy Technology Data Exchange (ETDEWEB)

    Bonocore, D. [Nikhef,Science Park 105, NL-1098 XG Amsterdam (Netherlands); Laenen, E. [Nikhef,Science Park 105, NL-1098 XG Amsterdam (Netherlands); ITFA, University of Amsterdam,Science Park 904, Amsterdam (Netherlands); ITF, Utrecht University,Leuvenlaan 4, Utrecht (Netherlands); Magnea, L. [Dipartimento di Fisica, Università di Torino and INFN, Sezione di Torino,Via P. Giuria 1, I-10125, Torino (Italy); Melville, S. [School of Physics and Astronomy, University of Glasgow,Glasgow, G12 8QQ (United Kingdom); Vernazza, L. [Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh,Edinburgh, EH9 3JZ, Scotland (United Kingdom); White, C.D. [School of Physics and Astronomy, University of Glasgow,Glasgow, G12 8QQ (United Kingdom)

    2015-06-03

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  2. Sparse signals recovered by non-convex penalty in quasi-linear systems.

    Science.gov (United States)

    Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen

    2018-01-01

    The goal of compressed sensing is to reconstruct a sparse signal from a number of linear measurements far smaller than the dimension of the ambient space of the signal. However, many real-life applications in physics and biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing under the linear circumstance, this nonlinear compressed sensing is much more difficult; it is, in fact, an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the [Formula: see text]-norm and the nonlinearity. To make sparse signal recovery tractable, we assume in this paper that the nonlinear models have a smooth quasi-linear structure, and we study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. By varying the parameter [Formula: see text], our algorithm can obtain promising results, which is one of its advantages compared with some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
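
    For readers unfamiliar with thresholding algorithms of this kind, the sketch below shows the standard linear, convex special case: iterative soft thresholding for an l1 penalty. The paper's algorithm replaces the soft-thresholding step with one derived from the non-convex fraction function and works in the quasi-linear setting, neither of which is reproduced here.

    import numpy as np

    def ista(A, b, lam=0.05, steps=500):
        """Iterative soft thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1 (linear case)."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            z = x - A.T @ (A @ x - b) / L        # gradient step on the data-fit term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[[3, 40, 120]] = [1.5, -2.0, 1.0]
    x_hat = ista(A, A @ x_true)
    print(np.nonzero(np.abs(x_hat) > 0.5)[0])    # large entries should sit on the true support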

  3. Linear analysis near a steady-state of biochemical networks: control analysis, correlation metrics and circuit theory

    Directory of Open Access Journals (Sweden)

    Qian Hong

    2008-05-01

    Full Text Available Abstract Background: Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results: In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrices for the fluxes and chemical potentials in MCA, respectively; the control coefficients for the fluxes and chemical potentials can be written in terms of R^T BS and S^T BS respectively, where the matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. Conclusion: One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether the metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with the smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).
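
    The matrix relationships summarized above can be checked mechanically on a small random example. The numbers below are arbitrary and carry no biochemical meaning, and sign and normalization conventions may differ from those in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    n_species, n_reactions = 4, 6
    S = rng.integers(-2, 3, size=(n_species, n_reactions)).astype(float)   # stoichiometric matrix
    R = rng.standard_normal((n_species, n_reactions))                      # elasticity-related matrix

    A = S @ R.T               # linear-stability matrix of the steady state
    B = np.linalg.inv(A)      # assumes A is non-singular for this random draw

    flux_control = R.T @ B @ S         # control coefficients for the fluxes
    potential_control = S.T @ B @ S    # control coefficients for the chemical potentials
    print(flux_control.shape, potential_control.shape)    # both (6, 6)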

  4. Importance of the alignment of polar π conjugated molecules inside carbon nanotubes in determining second-order non-linear optical properties.

    Science.gov (United States)

    Yumura, Takashi; Yamamoto, Wataru

    2017-09-20

    We employed density functional theory (DFT) calculations with dispersion corrections to investigate energetically preferred alignments of certain p,p'-dimethylaminonitrostilbene (DANS) molecules inside an armchair (m,m) carbon nanotube (n × DANS@(m,m)), where the number of inner molecules (n) is no greater than 3. Here, three types of alignments of DANS are considered: a linear alignment in a parallel fashion and stacking alignments in parallel and antiparallel fashions. According to DFT calculations, a threshold tube diameter for containing DANS molecules in linear or stacking alignments was found to be approximately 1.0 nm. Nanotubes with diameters smaller than 1.0 nm result in the selective formation of linearly aligned DANS molecules due to strong confinement effects within the nanotubes. By contrast, larger diameter nanotubes allow DANS molecules to align in a stacking and linear fashion. The type of alignment adopted by the DANS molecules inside a nanotube is responsible for their second-order non-linear optical properties, represented by their static hyperpolarizability (β0 values). In fact, we computed β0 values of DANS assemblies taken from optimized n × DANS@(m,m) structures, and their values were compared with those of a single DANS molecule. DFT calculations showed that the β0 values of DANS molecules depend on their alignment, decreasing in the following order: linear alignment > parallel stacking alignment > antiparallel stacking alignment. In particular, a linear alignment has a β0 value larger than that of the same number of isolated molecules. Therefore, the linear alignment of DANS molecules, which is only allowed inside smaller diameter nanotubes, can strongly enhance their second-order non-linear optical properties. Since the nanotube confinement determines the alignment of DANS molecules, a restricted nanospace can be utilized to control their second-order non-linear optical properties. These DFT findings can assist in the

  5. Confirmation of linear system theory prediction: Rate of change of Herrnstein's κ as a function of response-force requirement

    Science.gov (United States)

    McDowell, J. J; Wood, Helena M.

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408

  6. Confirmation of linear system theory prediction: Rate of change of Herrnstein's kappa as a function of response-force requirement.

    Science.gov (United States)

    McDowell, J J; Wood, H M

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's kappa were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) kappa increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of kappa was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of kappa was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's kappa.
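
    As a toy illustration of the linear-system-theory prediction tested here (the third finding), the sketch below fits the reciprocal of kappa against the reciprocal of reinforcer magnitude; the magnitudes and kappa estimates are hypothetical, not the study's data.

```python
# Hypothetical data illustrating the prediction that 1/kappa is a linear
# function of 1/(reinforcer magnitude); not the values reported in the paper.
import numpy as np

magnitude = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # ¢ per reinforcer (hypothetical)
kappa = np.array([55.0, 80.0, 105.0, 122.0, 133.0])  # estimated kappa at each magnitude (hypothetical)

slope, intercept = np.polyfit(1.0 / magnitude, 1.0 / kappa, deg=1)
residuals = 1.0 / kappa - (intercept + slope / magnitude)
print(f"1/kappa ~ {intercept:.4f} + {slope:.4f} * (1/magnitude); max residual {np.abs(residuals).max():.2e}")
```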

  7. Methods in half-linear asymptotic theory

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel

    2016-01-01

    Roč. 2016, Č. 267 (2016), s. 1-27 ISSN 1072-6691 Institutional support: RVO:67985840 Keywords : half-linear differential equation * nonoscillatory solution * regular variation Subject RIV: BA - General Mathematics Impact factor: 0.954, year: 2016 http://ejde.math.txstate.edu/Volumes/2016/267/abstr.html

  8. Reconciling threshold and subthreshold expansions for pion-nucleon scattering

    Science.gov (United States)

    Siemens, D.; Ruiz de Elvira, J.; Epelbaum, E.; Hoferichter, M.; Krebs, H.; Kubis, B.; Meißner, U.-G.

    2017-07-01

    Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion-nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of 1/m_N corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  9. Reconciling threshold and subthreshold expansions for pion–nucleon scattering

    Directory of Open Access Journals (Sweden)

    D. Siemens

    2017-07-01

    Full Text Available Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion–nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of 1/m_N corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  10. Efficiency using computer simulation of Reverse Threshold Model Theory on assessing a “One Laptop Per Child” computer versus desktop computer

    Directory of Open Access Journals (Sweden)

    Supat Faarungsang

    2017-04-01

    Full Text Available The Reverse Threshold Model Theory (RTMT) model was introduced based on limiting factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT compared to CM using computer simulation on the “One Laptop Per Child” computer and a desktop computer. Based on probability values, it was found that RTMT was more efficient than CM among eight treatment combinations and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed to be applied to most research data.

  11. How well can we reconstruct the t anti-t system near its threshold at future e+e- linear colliders?

    CERN Document Server

    Ikematsu, K; Hioki, Z; Sumino, Y; Takahashi, T

    2003-01-01

    We developed a new method for the full kinematical reconstruction of the t anti-t system near its threshold at future linear e+e- colliders. In the core of the method lies likelihood fitting which is designed to improve measurement accuracies of the kinematical variables that specify the final states resulting from t anti-t decays. The improvement is demonstrated by applying this method to a Monte Carlo t anti-t sample generated with various experimental effects including beamstrahlung, finite acceptance and resolution of the detector system, etc. In most cases the fit takes a broad non-Gaussian distribution of a given kinematical variable to a nearly Gaussian shape, thereby justifying phenomenological analyses based on simple Gaussian smearing of the parton-level momenta. The standard deviations of the resultant distributions of various kinematical variables are given in order to facilitate such phenomenological analyses. A possible application of the kinematical fitting method and its expected im...

  12. Social Thresholds and their Translation into Social-ecological Management Practices

    Directory of Open Access Journals (Sweden)

    Lisa Christensen

    2012-03-01

    Full Text Available The objective of this paper is to provide a preliminary discussion of how to improve our conceptualization of social thresholds using (1) a more sociological analysis of social resilience, and (2) results from research carried out in collaboration with the Champagne and Aishihik First Nations of the Yukon Territory, Canada. Our sociological analysis of the concept of resilience begins with a review of the literature, followed by placement of the concept in the domain of sociological theory to gain insight into its strengths and limitations. A new notion of social thresholds is proposed and case study research discussed to support the proposition. Our findings suggest that rather than viewing social thresholds as breakpoints between two regimes, as thresholds are typically conceived in the resilience literature, they should be viewed in terms of collectively recognized points that signify new experiences. Some examples of thresholds identified in our case study include power in decision making, level of healing from historical events, and a preference for small-scale development over large capital-intensive projects.

  13. Investigation of excimer laser ablation threshold of polymers using a microphone

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, Joerg; Niino, Hiroyuki; Yabe, Akira

    2002-09-30

    KrF excimer laser ablation of polyethylene terephthalate (PET), polyimide (PI) and polycarbonate (PC) in air was studied by an in situ monitoring technique using a microphone. The microphone signal generated by a short acoustic pulse represented the etch rate of laser ablation depending on the laser fluence, i.e., the ablation 'strength'. From a linear relationship between the microphone output voltage and the laser fluence, the single-pulse ablation thresholds were found to be 30 mJ cm⁻² for PET, 37 mJ cm⁻² for PI and 51 mJ cm⁻² for PC (20-pulse threshold). The ablation thresholds of PET and PI were not influenced by the number of pulses per spot, while PC showed an incubation phenomenon. A microphone technique provides a simple method to determine the excimer laser ablation threshold of polymer films.
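
    A minimal sketch of the threshold-determination step described above: fit the linear part of the microphone signal versus fluence and take the intercept with the fluence axis as the ablation threshold. The fluence and signal values are hypothetical, not the measured data.

```python
# Linear extrapolation of the (hypothetical) microphone signal to zero signal;
# the fluence at the x-intercept is taken as the ablation threshold.
import numpy as np

fluence = np.array([40.0, 60.0, 80.0, 100.0, 120.0])   # laser fluence, mJ/cm² (hypothetical)
signal = np.array([5.0, 16.0, 26.0, 38.0, 48.0])       # microphone output, mV (hypothetical)

slope, intercept = np.polyfit(fluence, signal, deg=1)
threshold = -intercept / slope                           # signal extrapolates to zero here
print(f"estimated single-pulse ablation threshold ~ {threshold:.0f} mJ/cm²")
```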

  14. Linear algebraic groups

    CERN Document Server

    Springer, T A

    1998-01-01

    "[The first] ten chapters...are an efficient, accessible, and self-contained introduction to affine algebraic groups over an algebraically closed field. The author includes exercises and the book is certainly usable by graduate students as a text or for self-study...the author [has a] student-friendly style… [The following] seven chapters... would also be a good introduction to rationality issues for algebraic groups. A number of results from the literature…appear for the first time in a text." –Mathematical Reviews (Review of the Second Edition) "This book is a completely new version of the first edition. The aim of the old book was to present the theory of linear algebraic groups over an algebraically closed field. Reading that book, many people entered the research field of linear algebraic groups. The present book has a wider scope. Its aim is to treat the theory of linear algebraic groups over arbitrary fields. Again, the author keeps the treatment of prerequisites self-contained. The material of t...

  15. Spectral analysis of linear relations and degenerate operator semigroups

    International Nuclear Information System (INIS)

    Baskakov, A G; Chernyshov, K I

    2002-01-01

    Several problems of the spectral theory of linear relations in Banach spaces are considered. Linear differential inclusions in a Banach space are studied. The construction of the phase space and solutions is carried out with the help of the spectral theory of linear relations, ergodic theorems, and degenerate operator semigroups

  16. Removing Malmquist bias from linear regressions

    Science.gov (United States)

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented.
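
    The paper's four-step correction is not reproduced here; the sketch below is only a Monte Carlo illustration of the bias being corrected, showing how a brightness-threshold selection skews an ordinary least-squares fit relative to the full sample (all numbers are hypothetical).

```python
# Not the paper's correction; a Monte Carlo illustration of the bias it
# addresses: selecting sources above an apparent-brightness threshold skews an
# ordinary least-squares fit of a luminosity-like y against a distance-like x.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.uniform(0.0, 1.0, n)                          # hypothetical predictor (e.g., log distance)
true_slope, true_intercept, scatter = 1.0, 0.0, 0.5
y = true_intercept + true_slope * x + rng.normal(0.0, scatter, n)   # intrinsic relation

full_fit = np.polyfit(x, y, 1)

# Flux-limited selection: only intrinsically bright sources survive at large x.
detected = y > 0.8 * x + 0.3                          # hypothetical detection threshold
biased_fit = np.polyfit(x[detected], y[detected], 1)

print("slope, intercept (full sample):  ", np.round(full_fit, 3))
print("slope, intercept (flux-limited): ", np.round(biased_fit, 3))
```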

  17. No-go theorem for passive single-rail linear optical quantum computing.

    Science.gov (United States)

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  18. Mass spectrometric determination of partial electron impact ionization cross sections of NO, NO2, and N2O from threshold up to 180 eV

    International Nuclear Information System (INIS)

    Kim, Y. B.

    1982-01-01

    Electron impact ionization of nitric oxide (NO), nitrogen dioxide (NO2) and nitrous oxide (N2O) has been studied as a function of electron energy up to 180 eV with a double focussing mass spectrometer Varian MAT CH5 and an improved Nier-type electron impact ion source. Relative partial ionization cross sections were measured for the processes NO + e -> NO+ + 2e, NO++ + 3e, and NO2 + e -> NO2+ + 2e, NO2++ + 3e, and N2O + e -> N2O+ + 2e. An accurate measurement of the cross section ratios q(NO++/NO)/q(NO+/NO) and q(NO2++/NO2)/q(NO2+/NO2) has been made. Relative cross section functions were calibrated absolutely with two different normalization methods. Moreover, both metastable and collision-induced dissociations of N2O+ were studied quantitatively using the technique of decoupling the acceleration and deflection electric fields. Using the n-th root extrapolation, the following ionization potentials have been derived from the cross section functions near threshold: NO+ (X 1Σ+); NO++; NO2+; NO2++; N2O+ (X 2Π). These results are compared with previous measurements and theoretical calculations, where available. Part of the results presented have been already published in seven papers by the author. (Author)

  19. No-Impact Threshold Values for NRAP's Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Last, George V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Murray, Christopher J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brown, Christopher F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jordan, Preston D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sharma, Maneesh [West Virginia Univ., and National Energy Technlogy Lab., Morgantown, WV (United States)

    2013-02-01

    The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP's Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow-unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers), and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA's "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.
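
    Not the study's exact protocol, but as a hedged sketch of the kind of background-versus-prediction comparison the EPA Unified Guidance describes, one can compute an upper prediction limit from background data and flag predicted plume concentrations that exceed it (all concentrations below are hypothetical):

```python
# Hedged sketch, not the study's protocol: flag predicted concentrations that
# exceed a parametric upper prediction limit (UPL) computed from background
# data, one of the standard comparisons in the EPA Unified Guidance.
import numpy as np
from scipy import stats

background = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2, 0.95])  # hypothetical background values, µg/L
n = background.size
mean, sd = background.mean(), background.std(ddof=1)

alpha = 0.05
upl = mean + stats.t.ppf(1 - alpha, df=n - 1) * sd * np.sqrt(1 + 1 / n)  # 95% UPL for one new value

predicted = np.array([1.0, 1.4, 2.6])        # hypothetical model-predicted plume concentrations
print(f"background UPL ~ {upl:.2f} µg/L")
print("exceeds background:", predicted > upl)
```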

  20. Basic concepts of control theory

    International Nuclear Information System (INIS)

    Markus, L.

    1976-01-01

    After a philosophical introduction on control theory and its position among various branches of science, mathematical control theory and its connection with functional analysis are discussed. A chapter on system theory concepts follows. After a summary of results and notations in the general theory of ordinary differential equations, a qualitative theory of control dynamical systems and chapters on the topological dynamics, and the controllability of linear systems are presented. As examples of autonomous linear systems, the switching locus for the synthesis of optimal controllers and linear dynamics with quadratic cost optimization are considered. (author)

  1. Effective-medium theory for nonlinear magneto-optics in magnetic granular alloys: cubic nonlinearity

    Energy Technology Data Exchange (ETDEWEB)

    Granovsky, Alexander B. E-mail: granov@magn.ru; Kuzmichov, Michail V.; Clerc, J.-P.; Inoue, Mitsuteru

    2003-03-01

    We propose a simple effective-medium approach for calculating the effective dielectric function of a magnetic metal-insulator granular alloy in which there is a weakly nonlinear relation between electric displacement D and electric field E for both constituent materials, of the form D_i = ε_i^(0) E_i + χ_i^(3) |E_i|² E_i. We assume that the linear ε_i^(0) and cubic nonlinear χ_i^(3) dielectric functions are diagonal, with non-diagonal components linear in the magnetization. For such a metal-insulator composite, magneto-optical effects depend on the light intensity, and the effective cubic dielectric function χ_eff^(3) can be significantly greater (up to 10³ times) than that of the constituent materials. The calculation scheme is based on the Bergman and Stroud-Hui theory of nonlinear optical properties of granular matter. The giant cubic magneto-optical nonlinearity is found for composites with metallic volume fraction close to the percolation threshold and at a resonance of the optical conductivity. It is shown that a composite may exhibit nonlinear magneto-optics even when both constituent materials have no cubic magneto-optical nonlinearity.

  2. White Light Generation and Anisotropic Damage in Gold Films near Percolation Threshold

    DEFF Research Database (Denmark)

    Novikov, Sergey M.; Frydendahl, Christian; Beermann, Jonas

    2017-01-01

    in vanishingly small gaps between gold islands in thin films near the electrically determined percolation threshold. Optical explorations using two-photon luminescence (TPL) and near-field microscopies reveal supercubic TPL power dependencies with white-light spectra, establishing unequivocally ... that the strongest TPL signals are generated close to the percolation threshold films, and the occurrence of extremely confined (~30 nm) and strongly enhanced (~100 times) fields at the illumination wavelength. For linearly polarized and sufficiently powerful light, we observe pronounced optical...

  3. Enriching an effect calculus with linear types

    DEFF Research Database (Denmark)

    Egger, Jeff; Møgelberg, Rasmus Ejlers; Simpson, Alex

    2009-01-01

    We define an "enriched effect calculus" by conservatively extending a type theory for computational effects with primitives from linear logic. By doing so, we obtain a generalisation of linear type theory, intended as a formalism for expressing linear aspects of effects. As a worked example, we formulate linearly-used continuations in the enriched effect calculus. These are captured by a fundamental translation of the enriched effect calculus into itself, which extends existing call-by-value and call-by-name linearly-used CPS translations. We show that our translation is involutive. Full completeness results for the various linearly-used CPS translations follow. Our main results, the conservativity of enriching the effect calculus with linear primitives, and the involution property of the fundamental translation, are proved using a category-theoretic semantics for the enriched effect calculus...

  4. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  5. Linearizing W-algebras

    International Nuclear Information System (INIS)

    Krivonos, S.O.; Sorin, A.S.

    1994-06-01

    We show that the Zamolodchikov and Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into some linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs

  6. The Effect of High-Frequency Stimulation on Sensory Thresholds in Chronic Pain Patients.

    Science.gov (United States)

    Youn, Youngwon; Smith, Heather; Morris, Brian; Argoff, Charles; Pilitsis, Julie G

    2015-01-01

    High-frequency stimulation (HFS) has recently gained attention as an alternative to parameters used in traditional spinal cord stimulation (SCS). Because HFS is paresthesia free, the gate theory of pain control as a basis of SCS has been called into question. The mechanism of action of HFS remains unclear. We compare the effects of HFS and traditional SCS on quantitative sensory testing parameters to provide insight into how HFS modulates the nervous system. Using quantitative sensory testing, we measured thermal detection and pain thresholds and mechanical detection and pressure pain thresholds, as well as vibratory detection, in 20 SCS patients off stimulation (OFF), on traditional stimulation (ON) and on HFS in a randomized order. HFS significantly increased the mechanical detection threshold compared to OFF stimulation (p < 0.001) and traditional SCS (p = 0.01). Pressure pain detection and vibratory detection thresholds also significantly increased with HFS compared to ON states (p = 0.04 and p = 0.01, respectively). In addition, HFS significantly decreased 10- and 40-gram pinprick detection compared to OFF states (both p = 0.01). No significant differences between OFF, ON and HFS states were seen in thermal and thermal pain detection. HFS is a new means of modulating chronic pain. The mechanism by which HFS works seems to differ from that of traditional SCS, offering a new platform for innovative advancements in treatment and a greater potential to treat patients by customizing waveforms. © 2015 S. Karger AG, Basel.

  7. Compartmentalization in environmental science and the perversion of multiple thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Burkart, W. [Institute of Radiation Hygiene of the Federal Office for Radiation Protection, Ingolstaedter Landstr. 1, D 85716 Oberschleissheim, Muenchen (Germany)

    2000-04-17

    Nature and living organisms are separated into compartments. The self-assembly of phospholipid micelles was as fundamental to the emergence of life and evolution as the formation of DNA precursors and their self-replication. Also, modern science owes much of its success to the study of single compartments, the dissection of complex structures and event chains into smaller study objects which can be manipulated with a set of more and more sophisticated equipment. However, in environmental science, these insights are obtained at a price: firstly, it is difficult to recognize, let alone to take into account what is lost during fragmentation and dissection; and secondly, artificial compartments such as scientific disciplines become self-sustaining, leading to new and unnecessary boundaries, subtly framing scientific culture and impeding progress in holistic understanding. The long-standing but fruitless quest to define dose-effect relationships and thresholds for single toxic agents in our environment is a central part of the problem. Debating single-agent toxicity in splendid isolation is deeply flawed in view of a modern world where people are exposed to low levels of a multitude of genotoxic and non-genotoxic agents. Its potential danger lies in the unwarranted postulation of separate thresholds for agents with similar action. A unifying concept involving toxicology and radiation biology is needed for a full mechanistic assessment of environmental health risks. The threat of synergism may be less than expected, but this may also hold for the safety margin commonly thought to be a consequence of linear no-threshold dose-effect relationship assumptions.

  8. Oscillation theory of linear differential equations

    Czech Academy of Sciences Publication Activity Database

    Došlý, Ondřej

    2000-01-01

    Roč. 36, č. 5 (2000), s. 329-343 ISSN 0044-8753 R&D Projects: GA ČR GA201/98/0677 Keywords : discrete oscillation theory * Sturm-Liouville equation * Riccati equation Subject RIV: BA - General Mathematics

  9. Calibration of the neutron scintillation counter threshold

    International Nuclear Information System (INIS)

    Noga, V.I.; Ranyuk, Yu.N.; Telegin, Yu.N.

    1978-01-01

    A method for calibrating the threshold of a neutron counter in the form of a 10x10x40 cm plastic scintillator is described. The method is based on the evaluation of the Compton boundary of the γ-spectrum from the discrimination curve of counter loading. The results of calibration using 60Co and 24Na γ-sources are given. In order to evaluate the Compton edge rapidly, linear extrapolation of the linear part of the discrimination curve towards its intersection with the X axis is recommended. Special measurements have shown that the calibration results do not practically depend on the distance between the cathode of a photomultiplier and the place where collimated γ-radiation of the calibration source reaches the scintillator

  10. Linear theory of a cold relativistic beam in a strongly magnetized finite-geometry plasma

    International Nuclear Information System (INIS)

    Gagne, R.R.J.; Shoucri, M.M.

    1976-01-01

    The linear theory of a finite-geometry cold relativistic beam propagating in a cold homogeneous finite-geometry plasma is investigated in the case of a strongly magnetized plasma. The beam is assumed to propagate parallel to the external magnetic field. It is shown that the instability which takes place at the Cherenkov resonance ω ≈ k_z v_b is of the convective type. The effect of the finite geometry on the instability growth rate is studied and is shown to decrease the growth rate, with respect to the infinite geometry, by a factor depending on the ratio of the beam-to-plasma radius

  11. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    International Nuclear Information System (INIS)

    Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M

    2016-01-01

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)

  12. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the technology of the Big Bang–Big Crunch Optimization (BBBCO) is proposed. The new proposed thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions for maximizing the entropy of the fuzzy 2-partition. BBBCO is inspired by the theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three different approaches, including Genetic Algorithm (GA)-based, Biogeography-based Optimization (BBO)-based and recursive approaches, are also implemented. From experimental results, it is observed that the performance of the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
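
    As a simplified stand-in for the approach just described (a brute-force grid search replaces BBBCO, and the ramp-shaped membership functions and entropy definition are deliberately minimal), a threshold-selection sketch might look like:

```python
# Simplified illustration of fuzzy 2-partition entropy thresholding: grid
# search (instead of Big Bang-Big Crunch optimization) over the parameters of
# two ramp-shaped membership functions, maximizing the entropy of the fuzzy
# 2-partition defined from the fuzzy-event probabilities.
import numpy as np

def fuzzy_2partition_entropy(hist, a, c):
    """hist: normalized 256-bin gray-level histogram; a < c: ramp parameters."""
    g = np.arange(256, dtype=float)
    mu_dark = np.clip((c - g) / (c - a), 0.0, 1.0)    # 1 below a, 0 above c, ramp in between
    p_dark = np.sum(hist * mu_dark)                   # probability of the fuzzy "dark" event
    p_bright = 1.0 - p_dark
    eps = 1e-12
    return -(p_dark * np.log(p_dark + eps) + p_bright * np.log(p_bright + eps))

def select_threshold(hist):
    best_entropy, best_threshold = -np.inf, None
    for a in range(0, 254, 4):
        for c in range(a + 8, 256, 4):
            h = fuzzy_2partition_entropy(hist, a, c)
            if h > best_entropy:
                best_entropy, best_threshold = h, (a + c) // 2   # membership crossover point
    return best_threshold

# Hypothetical bimodal histogram standing in for a real test image.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(70, 12, 25000), rng.normal(180, 15, 25000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
hist = hist / hist.sum()
print("selected threshold:", select_threshold(hist))
```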

  13. Optimal Policies for Random and Periodic Garbage Collections with Tenuring Threshold

    Science.gov (United States)

    Zhao, Xufeng; Nakamura, Syouji; Nakagawa, Toshio

    It is an important problem to determine the tenuring threshold to meet the pause time goal for a generational garbage collector. From such a viewpoint, this paper proposes two stochastic models based on the working schemes of a generational garbage collector: one is random collection, which occurs according to a nonhomogeneous Poisson process, and the other is periodic collection, which occurs at periodic times. Since the cost suffered for a minor collection increases as the amount of surviving objects accumulates, tenuring minor collection should be made at some tenuring threshold. Using the techniques of cumulative processes and reliability theory, expected cost rates with a tenuring threshold are obtained, and optimal policies which minimize them are discussed analytically and computed numerically.

  14. Molecular and vibrational structure of diphenylether and its 4,4'-dibromo derivative. Infrared linear dichroism spectroscopy and density functional theory calculations

    DEFF Research Database (Denmark)

    Eriksen, Troels K; Karlsen, Eva; Spanget-Larsen, Jens

    2015-01-01

    The title compounds were investigated by means of Linear Dichroism (LD) IR spectroscopy on samples partially aligned in uniaxially stretched low-density polyethylene and by density functional theory calculations. Satisfactory overall agreement between observed and calculated vibrational wavenumbers...

  15. Extending the precision and efficiency of the all-electron full-potential linearized augmented plane-wave density-functional theory method

    International Nuclear Information System (INIS)

    Michalicek, Gregor

    2015-01-01

    Density functional theory (DFT) is the most widely-used first-principles theory for analyzing, describing and predicting the properties of solids based on the fundamental laws of quantum mechanics. The success of the theory is a consequence of powerful approximations to the unknown exchange and correlation energy of the interacting electrons and of sophisticated electronic structure methods that enable the computation of the density functional equations on a computer. A widely used electronic structure method is the full-potential linearized augmented plane-wave (FLAPW) method, that is considered to be one of the most precise methods of its kind and often referred to as a standard. Challenged by the demand of treating chemically and structurally increasingly more complex solids, in this thesis this method is revisited and extended along two different directions: (i) precision and (ii) efficiency. In the full-potential linearized augmented plane-wave method the space of a solid is partitioned into nearly touching spheres, centered at each atom, and the remaining interstitial region between the spheres. The Kohn-Sham orbitals, which are used to construct the electron density, the essential quantity in DFT, are expanded into a linearized augmented plane-wave basis, which consists of plane waves in the interstitial region and angular momentum dependent radial functions in the spheres. In this thesis it is shown that for certain types of materials, e.g., materials with very broad electron bands or large band gaps, or materials that allow the usage of large space-filling spheres, the variational freedom of the basis in the spheres has to be extended in order to represent the Kohn-Sham orbitals with high precision over a large energy spread. Two kinds of additional radial functions confined to the spheres, so-called local orbitals, are evaluated and found to successfully eliminate this error. A new efficient basis set is developed, named linearized augmented lattice

  16. Evaluation of Wall Interference Effects in a Two-Dimensional Transonic Wind Tunnel by Subsonic Linear Theory,

    Science.gov (United States)

    1979-02-01

    tests were conducted on two geometrically similar models of each of two aerofoil sections - the NACA 0012 and the BGK-1 sections - and covered a... and slotted-wall test sections are corrected for wind tunnel wall interference effects by the application of classical linearized theory. For the... solid wall results, these corrections appear to produce data which are very close to being free of the effects of interference. In the case of

  17. Linear and nonlinear instability in vertical counter-current laminar gas-liquid flows

    Science.gov (United States)

    Schmidt, Patrick; Ó Náraigh, Lennon; Lucquiaud, Mathieu; Valluri, Prashant

    2016-04-01

    We consider the genesis and dynamics of interfacial instability in vertical gas-liquid flows, using as a model the two-dimensional channel flow of a thin falling film sheared by counter-current gas. The methodology is linear stability theory (Orr-Sommerfeld analysis) together with direct numerical simulation of the two-phase flow in the case of nonlinear disturbances. We investigate the influence of two main flow parameters on the interfacial dynamics, namely the film thickness and pressure drop applied to drive the gas stream. To make contact with existing studies in the literature, the effect of various density contrasts is also examined. Energy budget analyses based on the Orr-Sommerfeld theory reveal various coexisting unstable modes (interfacial, shear, internal) in the case of high density contrasts, which results in mode coalescence and mode competition, but only one dynamically relevant unstable interfacial mode for low density contrast. A study of absolute and convective instability for low density contrast shows that the system is absolutely unstable for all but two narrow regions of the investigated parameter space. Direct numerical simulations of the same system (low density contrast) show that linear theory holds up remarkably well upon the onset of large-amplitude waves as well as the existence of weakly nonlinear waves. For high density contrasts, corresponding more closely to an air-water-type system, linear stability theory is also successful at determining the most-dominant features in the interfacial wave dynamics at early-to-intermediate times. Nevertheless, the short waves selected by the linear theory undergo secondary instability and the wave train is no longer regular but rather exhibits chaotic motion. The same linear stability theory predicts when the direction of travel of the waves changes — from downwards to upwards. We outline the practical implications of this change in terms of loading and flooding. The change in direction of the

  18. Linear and nonlinear instability in vertical counter-current laminar gas-liquid flows

    International Nuclear Information System (INIS)

    Schmidt, Patrick; Lucquiaud, Mathieu; Valluri, Prashant; Ó Náraigh, Lennon

    2016-01-01

    We consider the genesis and dynamics of interfacial instability in vertical gas-liquid flows, using as a model the two-dimensional channel flow of a thin falling film sheared by counter-current gas. The methodology is linear stability theory (Orr-Sommerfeld analysis) together with direct numerical simulation of the two-phase flow in the case of nonlinear disturbances. We investigate the influence of two main flow parameters on the interfacial dynamics, namely the film thickness and pressure drop applied to drive the gas stream. To make contact with existing studies in the literature, the effect of various density contrasts is also examined. Energy budget analyses based on the Orr-Sommerfeld theory reveal various coexisting unstable modes (interfacial, shear, internal) in the case of high density contrasts, which results in mode coalescence and mode competition, but only one dynamically relevant unstable interfacial mode for low density contrast. A study of absolute and convective instability for low density contrast shows that the system is absolutely unstable for all but two narrow regions of the investigated parameter space. Direct numerical simulations of the same system (low density contrast) show that linear theory holds up remarkably well upon the onset of large-amplitude waves as well as the existence of weakly nonlinear waves. For high density contrasts, corresponding more closely to an air-water-type system, linear stability theory is also successful at determining the most-dominant features in the interfacial wave dynamics at early-to-intermediate times. Nevertheless, the short waves selected by the linear theory undergo secondary instability and the wave train is no longer regular but rather exhibits chaotic motion. The same linear stability theory predicts when the direction of travel of the waves changes — from downwards to upwards. We outline the practical implications of this change in terms of loading and flooding. The change in direction of the

  19. Threshold resummation for Higgs production in effective field theory

    International Nuclear Information System (INIS)

    Idilbi, Ahmad; Ji Xiangdong; Ma Jianping; Yuan Feng

    2006-01-01

    We present an effective field theory approach to resum the large double logarithms originating from soft-gluon radiation at small final-state hadron invariant masses in Higgs and vector boson (γ*, W, Z) production at hadron colliders. The approach is conceptually simple, independent of the details of an effective field theory formulation, and valid to all orders in subleading logarithms. As an example, we show that the result of summing the next-to-next-to-next-to-leading logarithms is identical to that of the standard pQCD factorization method

  20. Bound-Electron Nonlinearity Beyond the Ionization Threshold

    Science.gov (United States)

    Wahlstrand, J. K.; Zahedpour, S.; Bahl, A.; Kolesik, M.; Milchberg, H. M.

    2018-05-01

    We present absolute space- and time-resolved measurements of the ultrafast laser-driven nonlinear polarizability in argon, krypton, xenon, nitrogen, and oxygen up to ionization fractions of a few percent. These measurements enable determination of the strongly nonperturbative bound-electron nonlinear polarizability well beyond the ionization threshold, where it is found to remain approximately quadratic in the laser field, a result normally expected at much lower intensities where perturbation theory applies.

  1. Is the local linearity of space-time inherited from the linearity of probabilities?

    Science.gov (United States)

    Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.

    2017-02-01

    The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.

  2. Is the local linearity of space-time inherited from the linearity of probabilities?

    International Nuclear Information System (INIS)

    Müller, Markus P; Carrozza, Sylvain; Höhn, Philipp A

    2017-01-01

    The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics. (paper)

  3. Influence of arousal threshold and depth of sleep on respiratory stability in man: analysis using a mathematical model.

    Science.gov (United States)

    Longobardo, G S; Evangelisti, C J; Cherniack, N S

    2009-12-01

    We examined the effect of arousals (shifts from sleep to wakefulness) on breathing during sleep using a mathematical model. The model consisted of a description of the fluid dynamics and mechanical properties of the upper airways and lungs, as well as a controller sensitive to arterial and brain changes in CO(2), changes in arterial oxygen, and a neural input, alertness. The body was divided into multiple gas store compartments connected by the circulation. Cardiac output was constant, and cerebral blood flows were sensitive to changes in O(2) and CO(2) levels. Arousal was considered to occur instantaneously when afferent respiratory chemical and neural stimulation reached a threshold value, while sleep occurred when stimulation fell below that value. In the case of rigid and nearly incompressible upper airways, lowering arousal threshold decreased the stability of breathing and led to the occurrence of repeated apnoeas. In more compressible upper airways, to maintain stability, increasing arousal thresholds and decreasing elasticity were linked approximately linearly, until at low elastances arousal thresholds had no effect on stability. Increased controller gain promoted instability. The architecture of apnoeas during unstable sleep changed with the arousal threshold and decreases in elasticity. With rigid airways, apnoeas were central. With lower elastances, apnoeas were mixed even with higher arousal thresholds. With very low elastances and still higher arousal thresholds, sleep consisted totally of obstructed apnoeas. Cycle lengths shortened as the sleep architecture changed from mixed apnoeas to total obstruction. Deeper sleep also tended to promote instability by increasing plant gain. These instabilities could be countered by arousal threshold increases which were tied to deeper sleep or accumulated aroused time, or by decreased controller gains.

  4. Strings and superstrings. Electron linear colliders

    International Nuclear Information System (INIS)

    Alessandrini, V.; Bambade, P.; Binetruy, P.; Kounnas, C.; Le Duff, J.; Schwimmer, A.

    1989-01-01

    Basic string theory; strings in interaction; construction of strings and superstrings in arbitrary space-time dimensions; compactification and phenomenology; linear e+e- colliders; and the Stanford linear collider were discussed [fr

  5. The Schrödinger representation and its relation to the holomorphic representation in linear and affine field theory

    International Nuclear Information System (INIS)

    Oeckl, Robert

    2012-01-01

    We establish a precise isomorphism between the Schrödinger representation and the holomorphic representation in linear and affine field theory. In the linear case, this isomorphism is induced by a one-to-one correspondence between complex structures and Schrödinger vacua. In the affine case we obtain similar results, with the role of the vacuum now taken by a whole family of coherent states. In order to establish these results we exhibit a rigorous construction of the Schrödinger representation and use a suitable generalization of the Segal-Bargmann transform. Our construction is based on geometric quantization and applies to any real polarization and its pairing with any Kähler polarization.

  6. Effects of epidemic threshold definition on disease spread statistics

    Science.gov (United States)

    Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.

    2009-03-01

    We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size s_c. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound to the average fractional epidemic size. For small values of s_c, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of s_c is generally (but not always) guided by the network structure and the value of the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose s_c to be the typical size of the largest percolation cluster at the critical percolation threshold for the transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass s_c, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of s_c on predictions of average outcome sizes of computer failure epidemics.
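
    A hedged simulation sketch of the setup described above (not the paper's code): SIR spread with transmissibility T is mapped to bond percolation on a hypothetical random graph, and the average fractional outbreak size is computed counting only outbreaks that reach the minimum size s_c.

```python
# Illustrative sketch: SIR with constant transmissibility T treated as bond
# percolation; the outbreak from a random seed is the seed's connected
# component in the percolated graph, and only outbreaks >= s_c are counted.
import networkx as nx
import numpy as np

def average_epidemic_size(G, T, s_c, n_trials=500, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    N = G.number_of_nodes()
    sizes = []
    for _ in range(n_trials):
        H = nx.Graph([(u, v) for u, v in G.edges() if rng.random() < T])  # percolated edges
        H.add_nodes_from(G.nodes())
        seed = int(rng.integers(N))
        outbreak = len(nx.node_connected_component(H, seed))
        if outbreak >= s_c:                       # keep only "epidemics" of size >= s_c
            sizes.append(outbreak / N)
    return np.mean(sizes) if sizes else 0.0

G = nx.erdos_renyi_graph(n=2000, p=3.0 / 2000, seed=1)   # hypothetical contact network
for s_c in (2, 10, 100):
    print(s_c, round(average_epidemic_size(G, T=0.5, s_c=s_c), 3))
```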

  7. Evapotranspiration patterns in complex upland forests reveal contrasting topographic thresholds of non-linearity

    Science.gov (United States)

    Metzen, D.; Sheridan, G. J.; Benyon, R. G.; Bolstad, P. V.; Nyman, P.; Lane, P. N. J.

    2017-12-01

    Large areas of forest are often treated as being homogeneous just because they fall in a single climate category. However, we observe strong vegetation patterns in relation to topography in SE Australian forests and thus hypothesise that ET will vary spatially as well. Spatial heterogeneity evolves over different temporal scales in response to climatic forcing with increasing time lag from soil moisture (sub-yearly), to vegetation (10s -100s of years) to soil properties and topography (>100s of years). Most importantly, these processes and time scales are not independent, creating feedbacks that result in "co-evolved stable states" which yield the current spatial terrain, vegetation and ET patterns. We used up-scaled sap flux and understory ET measurements from water-balance plots, as well as LiDAR derived terrain and vegetation information, to infer links between spatio-temporal energy and water fluxes, topography and vegetation patterns at small catchment scale. Topography caused variations in aridity index between polar and equatorial-facing slopes (1.3 vs 1.8), which in turn manifested in significant differences in sapwood area index (6.9 vs 5.8), overstory LAI (3.0 vs 2.3), understory LAI (0.5 vs 0.4), sub-canopy radiation load (4.6 vs 6.8 MJ m-2 d-1), overstory transpiration (501 vs 347 mm a-1) and understory ET (79 vs 155 mm a-1). Large spatial variation in overstory transpiration (195 to 891 mm a-1) was observed over very short distances (100s m); a range representative of diverse forests such as arid open woodlands and wet mountain ash forests. Contrasting, non-linear overstory and understory ET patterns were unveiled between aspects, and topographic thresholds were lower for overstory than understory ET. While ET partitioning remained stable on polar-facing slopes regardless of slope position, overstory contribution gradually decreased with increasing slope inclination on equatorial aspects. Further, we show that ET patterns and controls underlie strong

  8. Linear positivity and virtual probability

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

  9. Non-linear leak currents affect mammalian neuron physiology

    Directory of Open Access Journals (Sweden)

    Shiwei eHuang

    2015-11-01

    Full Text Available In their seminal works on squid giant axons, Hodgkin and Huxley approximated the membrane leak current as Ohmic, i.e. linear, since in their preparation sub-threshold current rectification due to the influence of ionic concentration is negligible. Most studies on mammalian neurons have made the same, largely untested, assumption. Here we show that the membrane time constant and input resistance of mammalian neurons (when other major voltage-sensitive and ligand-gated ionic currents are discounted) vary non-linearly with membrane voltage, following the prediction of a Goldman-Hodgkin-Katz-based passive membrane model. The model predicts that under such conditions, the time constant/input resistance-voltage relationship will linearize if the concentration differences across the cell membrane are reduced. These properties were observed in patch-clamp recordings of cerebellar Purkinje neurons (in the presence of pharmacological blockers of other background ionic currents) and were more prominent in the sub-threshold region of the membrane potential. Model simulations showed that the non-linear leak affects voltage-clamp recordings and reduces temporal summation of excitatory synaptic input. Together, our results demonstrate the importance of trans-membrane ionic concentration in defining the functional properties of the passive membrane in mammalian neurons as well as other excitable cells.
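
    To illustrate why a Goldman-Hodgkin-Katz-type leak is non-Ohmic, the sketch below evaluates the standard GHK flux equation for a single ion species and shows that the local conductance changes with voltage when the inside and outside concentrations differ; the permeability and concentrations are generic textbook-style values, not the paper's parameters.

```python
# Standard Goldman-Hodgkin-Katz (GHK) flux equation for one ion species, used
# here only to illustrate a non-Ohmic leak; permeability and concentrations are
# generic K+-like values, not parameters taken from the paper.
import numpy as np

F, R, T = 96485.0, 8.314, 310.0            # C/mol, J/(mol K), K (about 37 °C)

def ghk_current(v, p, z, c_in, c_out):
    """GHK current density (A/m^2); v in volts, concentrations in mol/m^3."""
    xi = z * F * np.asarray(v, dtype=float) / (R * T)
    xi = np.where(np.abs(xi) < 1e-9, 1e-9, xi)     # avoid 0/0 at v = 0 (the limit is smooth)
    return p * z * F * xi * (c_in - c_out * np.exp(-xi)) / (1.0 - np.exp(-xi))

v = np.linspace(-0.10, 0.05, 8)                    # membrane voltage, volts
i_k = ghk_current(v, p=1e-8, z=+1, c_in=140.0, c_out=5.0)
conductance = np.gradient(i_k, v)                  # local slope of the I-V relation
print(np.round(conductance / conductance[0], 2))   # varies with voltage, i.e. non-Ohmic
```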

  10. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  11. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    Science.gov (United States)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider parameterized Kalman filters (c.f. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to
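
    For background, the sigma-point machinery that such filters build on can be sketched in a few lines; the scaled unscented transform below uses the common (alpha, beta, kappa) weighting and a generic nonlinear map f, and is an illustration rather than code from the cited work.

      import numpy as np

      def unscented_transform(f, x_mean, P, alpha=1e-3, beta=2.0, kappa=0.0):
          """Propagate mean x_mean and covariance P through a nonlinear map f via sigma points."""
          n = x_mean.size
          lam = alpha**2 * (n + kappa) - n
          S = np.linalg.cholesky((n + lam) * P)        # columns are the sigma-point offsets
          sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])   # 2n+1 points

          wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))            # mean weights
          wc = wm.copy()                                            # covariance weights
          wm[0] = lam / (n + lam)
          wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

          Y = np.array([f(s) for s in sigma])          # propagate each sigma point
          y_mean = wm @ Y
          dY = Y - y_mean
          P_y = (wc[:, None] * dY).T @ dY
          return y_mean, P_y

      # Example: push a Gaussian through a mildly nonlinear polar-to-Cartesian map.
      f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
      m, C = unscented_transform(f, np.array([1.0, 0.1]), np.diag([0.01, 0.04]))
      print(np.round(m, 4), np.round(C, 5))

    No Jacobians are required, which is the attribute the abstract highlights.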

  12. e - 2e Collisions near ionization threshold - electron correlations

    International Nuclear Information System (INIS)

    Mazeau, J.; Huetz, A.; Selles, P.

    1986-01-01

    The results presented in this report constitute the first direct experimental proof that a few (LSΠ) states definitely contribute to the near-threshold ionization cross section. The Wannier-Peterkop-Rau theory is a useful tool for their understanding, and a more precise determination of the angular correlation width is still needed. It has been shown that the values of the a_LSΠ coefficients can be extracted from the observations. These are physically interesting quantities as they are directly related to the probability of forming Wannier ridge-riding states above the double escape threshold, and considerable theoretical effort is presently in progress to investigate such states. (Auth.)

  13. A Robust Threshold for Iterative Channel Estimation in OFDM Systems

    Directory of Open Access Journals (Sweden)

    A. Kalaycioglu

    2010-04-01

    Full Text Available A novel threshold computation method for pilot symbol assisted iterative channel estimation in OFDM systems is considered. As the bits are transmitted in packets, the proposed technique is based on calculating a particular threshold for each data packet in order to select the reliable decoder output symbols and so improve the channel estimation performance. Iteratively, additional pilot symbols are established according to the threshold and the channel is re-estimated with the new pilots inserted into the known channel estimation pilot set. The proposed threshold calculation method for selecting additional pilots performs better than non-iterative channel estimation and than no-threshold and fixed-threshold techniques in simulations of poor HF channels.
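
    The selection-and-re-estimation loop described above can be sketched as follows; the specific per-packet threshold rule used here (a fraction of the packet's peak decoder-output magnitude), the LS-plus-interpolation channel estimate, and all variable names are placeholders for illustration, not the formulas of the paper.

      import numpy as np

      def reestimate_channel(rx, known_pilots, pilot_idx, soft_mag, hard_syms, frac=0.8):
          """One iteration of threshold-based pilot augmentation (illustrative only).

          rx          : received frequency-domain symbols of one OFDM packet
          known_pilots: transmitted pilot values at pilot_idx
          soft_mag    : decoder soft-output magnitudes, one per data subcarrier
          hard_syms   : re-modulated hard decisions, one per data subcarrier
          frac        : per-packet threshold as a fraction of the peak reliability
          """
          n = rx.size
          data_idx = np.setdiff1d(np.arange(n), pilot_idx)
          thr = frac * soft_mag.max()              # per-packet threshold
          mask = soft_mag >= thr                   # decisions reliable enough to reuse

          # Treat reliable decisions as extra pilots: LS estimate at each reference tone,
          # then interpolate the channel over the remaining subcarriers.
          ref_idx = np.concatenate([pilot_idx, data_idx[mask]])
          ref_tx = np.concatenate([known_pilots, hard_syms[mask]])
          h_ls = rx[ref_idx] / ref_tx
          order = np.argsort(ref_idx)
          grid = np.arange(n)
          h_hat = (np.interp(grid, ref_idx[order], h_ls[order].real)
                   + 1j * np.interp(grid, ref_idx[order], h_ls[order].imag))
          return h_hat

    In a full receiver this function would sit inside the decode/re-estimate iteration, with the threshold recomputed for every packet.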

  14. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  15. Coherent versus Measurement Feedback: Linear Systems Theory for Quantum Information

    Directory of Open Access Journals (Sweden)

    Naoki Yamamoto

    2014-11-01

    Full Text Available To control a quantum system via feedback, we generally have two options in choosing a control scheme. One is the coherent feedback, which feeds the output field of the system, through a fully quantum device, back to manipulate the system without involving any measurement process. The other one is measurement-based feedback, which measures the output field and performs a real-time manipulation on the system based on the measurement results. Both schemes have advantages and disadvantages, depending on the system and the control goal; hence, their comparison in several situations is important. This paper considers a general open linear quantum system with the following specific control goals: backaction evasion, generation of a quantum nondemolished variable, and generation of a decoherence-free subsystem, all of which have important roles in quantum information science. Some no-go theorems are proven, clarifying that those goals cannot be achieved by any measurement-based feedback control. On the other hand, it is shown that, for each control goal there exists a coherent feedback controller accomplishing the task. The key idea to obtain all the results is system theoretic characterizations of the above three notions in terms of controllability and observability properties or transfer functions of linear systems, which are consistent with their standard definitions.
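
    The controllability and observability characterizations mentioned above come from standard linear systems theory; a minimal numerical check of the Kalman rank conditions (generic, not specific to the quantum setting of the paper) is sketched below.

      import numpy as np

      def ctrb_rank(A, B):
          """Rank of the controllability matrix [B, AB, ..., A^(n-1) B]."""
          n = A.shape[0]
          blocks = [B]
          for _ in range(n - 1):
              blocks.append(A @ blocks[-1])
          return np.linalg.matrix_rank(np.hstack(blocks))

      def obsv_rank(A, C):
          """Rank of the observability matrix [C; CA; ...; CA^(n-1)] via duality."""
          return ctrb_rank(A.T, C.T)

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      B = np.array([[0.0], [1.0]])
      C = np.array([[1.0, 0.0]])
      n = A.shape[0]
      print("controllable:", ctrb_rank(A, B) == n, " observable:", obsv_rank(A, C) == n)

    Full rank in both tests means every mode can be steered by the input and seen at the output, which is the kind of property the paper translates into its quantum no-go statements.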

  16. Formal scattering theory approach to S-matrix relations in supersymmetric quantum mechanics

    International Nuclear Information System (INIS)

    Amado, R.D.; Cannata, F.; Dedonder, J.P.

    1988-01-01

    Combining the methods of scattering theory and supersymmetric quantum mechanics we obtain relations between the S matrix and its supersymmetric partner. These relations involve only asymptotic quantities and do not require knowledge of the dynamical details. For example, for coupled channels with no threshold differences the relations involve the asymptotic normalization constant of the bound state removed by supersymmetry

  17. Thresholds of Toxicological Concern - Setting a threshold for testing below which there is little concern.

    Science.gov (United States)

    Hartung, Thomas

    2017-01-01

    Low dose, low risk; very low dose, no real risk. Setting a pragmatic threshold below which concerns become negligible is the purpose of thresholds of toxicological concern (TTC). The idea is that such threshold values do not need to be established for each and every chemical based on experimental data, but that by analyzing the distribution of lowest or no-effect doses of many chemicals, a TTC can be defined - typically using the 5th percentile of this distribution and lowering it by an uncertainty factor of, e.g., 100. In doing so, TTC aims to compare exposure information (dose) with a threshold below which any hazard manifestation is very unlikely to occur. The history and current developments of this concept are reviewed and the application of TTC for different regulated products and their hazards is discussed. TTC lends itself as a pragmatic filter to deprioritize testing needs whenever real-life exposures are much lower than levels where hazard manifestation would be expected, a situation that is called "negligible exposure" in the REACH legislation, though the TTC concept has not been fully incorporated in its implementation (yet). Other areas and regulations - especially in the food sector and for pharmaceutical impurities - are more proactive. Large, curated databases on toxic effects of chemicals provide us with the opportunity to set TTC for many hazards and substance classes and thus offer a precautionary second tier for risk assessments if hazard cannot be excluded. This allows focusing testing efforts better on relevant exposures to chemicals.
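
    The percentile-plus-uncertainty-factor construction described above is easy to make concrete; the sketch below derives a TTC from a small, invented set of no-effect levels, using the 5th percentile and an assumed factor of 100 as in the text.

      import numpy as np

      def ttc_from_noels(noel_mg_per_kg_day, percentile=5, uncertainty_factor=100):
          """Threshold of toxicological concern from a distribution of no-effect levels."""
          p = np.percentile(noel_mg_per_kg_day, percentile)
          return p / uncertainty_factor

      # Hypothetical NOEL values (mg/kg bw/day) for a class of chemicals.
      noels = np.array([0.5, 1.2, 3.0, 7.5, 15.0, 40.0, 90.0, 250.0])
      print(f"TTC = {ttc_from_noels(noels):.4f} mg/kg bw/day")

    Exposures well below the resulting value would be deprioritized for further testing, which is the filtering role the abstract describes.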

  18. Periodic feedback stabilization for linear periodic evolution equations

    CERN Document Server

    Wang, Gengsheng

    2016-01-01

    This book introduces a number of recent advances regarding periodic feedback stabilization for linear and time periodic evolution equations. First, it presents selected connections between linear quadratic optimal control theory and feedback stabilization theory for linear periodic evolution equations. Secondly, it identifies several criteria for periodic feedback stabilization from the perspectives of geometry, algebra and analysis, respectively. Next, it describes several ways to design periodic feedback laws. Lastly, the book introduces readers to key methods for designing the control machines. Given its coverage and scope, it offers a helpful guide for graduate students and researchers in the areas of control theory and applied mathematics.

  19. Applied linear algebra and matrix analysis

    CERN Document Server

    Shores, Thomas S

    2018-01-01

    In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition’s revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google’s PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M...

  20. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    Science.gov (United States)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  1. Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum

    Science.gov (United States)

    Rips, Ilya

    2017-01-01

    Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of the thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] has been extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Account of the secular terms reduces the EOM for the stable modes to those of a forced oscillator with time-dependent frequency (TDF oscillator). An analytic expression for the characteristic function of the energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of a continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b …), as are results for the escape rate for the quantum and for the classical Kramers models. Results for the classical escape rate are in very good agreement with the numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.

  2. Axiomatic field theory and quantum electrodynamics: the massive case. [Gauge invariance, Maxwell equations, high momentum behavior]

    Energy Technology Data Exchange (ETDEWEB)

    Steinmann, O [Bielefeld Univ. (F.R. Germany). Fakultaet fuer Physik]

    1975-01-01

    Massive quantum electrodynamics of the electron is formulated as an LSZ theory of the electromagnetic field F_μν and the electron-positron fields Ψ. The interaction is introduced with the help of mathematically well defined subsidiary conditions. These are: 1) gauge invariance of the first kind, assumed to be generated by a conserved current j_μ; 2) the homogeneous Maxwell equations and a massive version of the inhomogeneous Maxwell equations; 3) a minimality condition concerning the high momentum behaviour of the theory. The inhomogeneous Maxwell equation is a linear differential equation connecting F_μν with the current j_μ. No Lagrangian, no non-linear field equations, and no explicit expression of j_μ in terms of Ψ, anti-Ψ are needed. It is shown in perturbation theory that the proposed conditions fix the physically relevant (i.e. observable) quantities of the theory uniquely.

  3. Threshold responses of Amazonian stream fishes to timing and extent of deforestation.

    Science.gov (United States)

    Brejão, Gabriel L; Hoeinghaus, David J; Pérez-Mayorga, María Angélica; Ferraz, Silvio F B; Casatti, Lilian

    2017-12-06

    Deforestation is a primary driver of biodiversity change through habitat loss and fragmentation. Stream biodiversity may not respond to deforestation in a simple linear relationship. Rather, threshold responses to extent and timing of deforestation may occur. Identification of critical deforestation thresholds is needed for effective conservation and management. We tested for threshold responses of fish species and functional groups to degree of watershed and riparian zone deforestation and time since impact in 75 streams in the western Brazilian Amazon. We used remote sensing to assess deforestation from 1984 to 2011. Fish assemblages were sampled with seines and dip nets in a standardized manner. Fish species (n = 84) were classified into 20 functional groups based on ecomorphological traits associated with habitat use, feeding, and locomotion. Threshold responses were quantified using threshold indicator taxa analysis. Negative threshold responses to deforestation were common and consistently occurred at very low levels of deforestation and soon after impact, whereas delayed positive threshold responses of a few species occurred only at >70% deforestation and >10 years after impact. Findings were similar at the community level for both taxonomic and functional analyses. Because most negative threshold responses occurred at low levels of deforestation and soon after impact, even minimal change is expected to negatively affect biodiversity. Delayed positive threshold responses to extreme deforestation by a few species do not offset the loss of sensitive taxa and likely contribute to biotic homogenization. © 2017 Society for Conservation Biology.
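
    Threshold indicator taxa analysis itself takes more machinery than fits here, but the basic idea of locating an abrupt change in a response along a deforestation gradient can be illustrated with a simple two-segment change-point fit on synthetic data; this sketch illustrates the concept only and is not the method used in the paper.

      import numpy as np

      def best_changepoint(x, y):
          """Split point of x that minimizes the pooled squared error of two segment means."""
          order = np.argsort(x)
          xs, ys = x[order], y[order]
          best_x, best_sse = None, np.inf
          for i in range(2, len(xs) - 2):          # keep at least two points per segment
              sse = ((ys[:i] - ys[:i].mean())**2).sum() + ((ys[i:] - ys[i:].mean())**2).sum()
              if sse < best_sse:
                  best_x, best_sse = xs[i], sse
          return best_x

      # Hypothetical occurrence scores of a sensitive species along a % deforestation gradient.
      rng = np.random.default_rng(0)
      defor = rng.uniform(0, 100, 75)
      occur = np.where(defor < 20, 0.8, 0.2) + rng.normal(0, 0.1, defor.size)
      print("estimated threshold (% deforestation):", round(best_changepoint(defor, occur), 1))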

  4. Linear gate with prescaled window

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J; Bissem, H H; Krause, H; Scobel, W [Hamburg Univ. (Germany, F.R.). 1. Inst. fuer Experimentalphysik]

    1978-07-15

    An electronic circuit is described that combines the features of a linear gate, a single channel analyzer and a prescaler. It allows selection of a pulse-height region between two adjustable thresholds and scales the intensity of the spectrum within this window down by a factor of 2^N (0 ≤ N ≤ 9), whereas the complementary part of the spectrum is transmitted without being affected.
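
    A software analogue of the circuit's behaviour is easy to sketch: pulses whose height falls inside the window are transmitted only once per 2^N occurrences, while pulses outside the window pass unchanged. The window limits and prescale factor below are arbitrary illustration values.

      import random

      def prescaled_window(pulses, lower, upper, n_prescale):
          """Pass pulses outside [lower, upper] unchanged; inside the window,
          transmit only every 2**n_prescale-th pulse (0 <= n_prescale <= 9)."""
          assert 0 <= n_prescale <= 9
          out, in_window_count = [], 0
          for height in pulses:
              if lower <= height <= upper:
                  in_window_count += 1
                  if in_window_count % (1 << n_prescale) == 0:
                      out.append(height)
              else:
                  out.append(height)
          return out

      # Example: suppress an intense peak between channels 400 and 600 by 2**6 = 64.
      random.seed(1)
      spectrum = [random.gauss(500, 30) for _ in range(10000)] + \
                 [random.uniform(0, 1000) for _ in range(500)]
      print(len(prescaled_window(spectrum, 400, 600, 6)))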

  5. Non-linearity consideration when analyzing reactor noise statistical characteristics. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Kebadze, B V; Adamovski, L A

    1975-06-01

    Statistical characteristics of boiling water reactor noise in the vicinity of the stability threshold are studied. The reactor is considered as a non-linear system affected by random perturbations. To solve the non-linear problem, the principle of statistical linearization is used. It is shown that the half-width of the resonance peak in the neutron power noise spectral density, as well as the reciprocal of the noise dispersion, both of which are used in predicting the stable operation threshold, differ from zero both within and beyond the stability boundary whose determination was based on linear criteria.
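
    For reference, statistical linearization replaces a memoryless nonlinearity f(x) driven by an approximately zero-mean random signal x with the equivalent gain that minimizes the mean-square error; in standard notation (not taken from the report):

      k_{\mathrm{eq}} \;=\; \frac{\mathbb{E}\!\left[\,x\,f(x)\,\right]}{\mathbb{E}\!\left[\,x^{2}\,\right]}, \qquad f(x) \;\approx\; k_{\mathrm{eq}}\,x ,

    so that spectral densities and dispersions can be computed with linear methods, while k_eq itself depends on the noise variance, which is how the non-linearity can shift the apparent stability threshold relative to a purely linear criterion.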

  6. Causal role of prefrontal cortex in the threshold for access to consciousness.

    Science.gov (United States)

    Del Cul, A; Dehaene, S; Reyes, P; Bravo, E; Slachevsky, A

    2009-09-01

    What neural mechanisms support our conscious perception of briefly presented stimuli? Some theories of conscious access postulate a key role of top-down amplification loops involving prefrontal cortex (PFC). To test this issue, we measured the visual backward masking threshold in patients with focal prefrontal lesions, using both objective and subjective measures while controlling for putative attention deficits. In all conditions of temporal or spatial attention cueing, the threshold for access to consciousness was systematically shifted in patients, particularly after a lesion of the left anterior PFC. The deficit affected subjective reports more than objective performance, and objective performance conditioned on subjective visibility was essentially normal. We conclude that PFC makes a causal contribution to conscious visual perception of masked stimuli, and outline a dual-route signal detection theory of objective and subjective decision making.

  7. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate using the output sequence, the initial value of the PRNG influences the resultant value of linear complexity. Therefore, a linear complexity is generally given as an estimated value. On the other hand, because a linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of the linear complexity.
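
    Since the comparison above is against the Berlekamp-Massey algorithm, a compact GF(2) implementation that computes the linear complexity of a binary output sequence is sketched below; the feedback taps in the example are an arbitrary choice of a primitive 4-stage LFSR, not anything from the paper.

      def berlekamp_massey_gf2(bits):
          """Linear complexity of a binary sequence (list of 0/1) over GF(2)."""
          n = len(bits)
          c = [1] + [0] * n      # current connection polynomial
          b = [1] + [0] * n      # connection polynomial at the last length change
          L, m = 0, -1
          for i in range(n):
              # discrepancy d = s_i + sum_{j=1..L} c_j * s_{i-j}  (mod 2)
              d = bits[i]
              for j in range(1, L + 1):
                  d ^= c[j] & bits[i - j]
              if d:
                  t = c[:]
                  shift = i - m
                  for j in range(n + 1 - shift):
                      c[j + shift] ^= b[j]        # c(x) <- c(x) + x^shift * b(x)
                  if 2 * L <= i:
                      L, m, b = i + 1 - L, i, t
          return L

      # Example: 20 output bits of a 4-stage LFSR with s[k+4] = s[k] ^ s[k+1].
      state, seq = [0, 0, 0, 1], []
      for _ in range(20):
          seq.append(state[0])
          state = state[1:] + [state[0] ^ state[1]]
      print(berlekamp_massey_gf2(seq))   # expected linear complexity: 4

    Note that this route needs the output bits themselves, which is exactly the dependence on the initial value that the linearization method avoids.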

  8. Non-linear absorption for concentrated solar energy transport

    Energy Technology Data Exchange (ETDEWEB)

    Jaramillo, O. A; Del Rio, J.A; Huelsz, G [Centro de Investigacion de Energia, UNAM, Temixco, Morelos (Mexico)]

    2000-07-01

    In order to determine the maximum solar energy that can be transported using SiO2 optical fibers, analysis of non-linear absorption is required. In this work, we model the interaction between solar radiation and the SiO2 optical fiber core to determine the dependence of the absorption on the radiative intensity. Using Maxwell's equations we obtain the relation between the refractive index and the electric susceptibility up to second order in terms of the electric field intensity. This is not enough to obtain an explicit expression for the non-linear absorption. Thus, to obtain the non-linear optical response, we develop a microscopic model of damped, driven harmonic oscillators based on the Drude-Lorentz theory. We solve this model using experimental information for the SiO2 optical fiber, and we determine the frequency dependence of the non-linear absorption and the non-linear extinction of SiO2 optical fibers. Our results estimate that the average value over the solar spectrum of the non-linear extinction coefficient for SiO2 is k_2 = 10^-29 m^2 V^-2. With this result we conclude that the non-linear part of the absorption coefficient of SiO2 optical fibers during the transport of concentrated solar energy achieved by a circular concentrator is negligible, and therefore the use of optical fibers for solar applications is a realistic option.

  9. Threshold double photoionization of atoms with synchrotron radiation

    International Nuclear Information System (INIS)

    Armen, G.B.

    1985-01-01

    In this dissertation, probabilities of M-shell excitation accompanying K-shell photoionization in argon are examined from both an experimental and theoretical standpoint. In the limit of high excitation energy, the conventional sudden approximation is applied to the problem. Threshold behavior of these probabilities is examined in the central-field dipole approximation, which is seen to reduce to the sudden approximation at larger excitation energies. Auger satellites were measured to determine these double-excitation probabilities as a function of incident photon energy. The theoretically predicted difference between the dependence of shake-up and shake-off probabilities on the photon energy near threshold is demonstrated. The present theory is seen to provide adequate predictions for shake-up probabilities, but to underestimate shake-off

  10. Using linear time-invariant system theory to estimate kinetic parameters directly from projection measurements

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1995-01-01

    It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time-activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest (or voxels) with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than estimation from the reconstructed images.
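
    One classical way to estimate the exponential decay parameters first and the multiplicative coefficients afterwards, using only linear steps, is a Prony-type procedure on uniformly sampled data; the sketch below, with invented sample values, illustrates that idea and is not the estimator derived in the paper.

      import numpy as np

      def prony_two_exponentials(y, dt):
          """Fit y(t) ~ a1*exp(-k1*t) + a2*exp(-k2*t) on uniform samples via Prony's method."""
          n = len(y)
          # Step 1 (linear): linear-prediction coefficients for y[i+2] = c1*y[i+1] + c0*y[i].
          A = np.column_stack([y[1:n-1], y[0:n-2]])
          c1, c0 = np.linalg.lstsq(A, y[2:n], rcond=None)[0]
          # Step 2: roots of z^2 - c1*z - c0 are the per-sample decay factors.
          z = np.roots([1.0, -c1, -c0])
          k = (-np.log(z.astype(complex)) / dt).real      # decay constants
          # Step 3 (linear): amplitudes by least squares against the known exponentials.
          t = np.arange(n) * dt
          E = np.exp(-np.outer(t, k))
          a = np.linalg.lstsq(E, y, rcond=None)[0]
          return a, k

      # Synthetic check with assumed values.
      t = np.arange(0, 60, 1.0)
      y = 5.0 * np.exp(-0.05 * t) + 2.0 * np.exp(-0.30 * t)
      a, k = prony_two_exponentials(y, 1.0)
      print(np.round(a, 3), np.round(k, 3))

    The decay constants come out of a linear prediction plus root finding, and the coefficients out of an ordinary least-squares fit, mirroring the two-stage structure described in the abstract.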

  11. Consistent deformations of dual formulations of linearized gravity: A no-go result

    International Nuclear Information System (INIS)

    Bekaert, Xavier; Boulanger, Nicolas; Henneaux, Marc

    2003-01-01

    The consistent, local, smooth deformations of the dual formulation of linearized gravity involving a tensor field in the exotic representation of the Lorentz group with Young symmetry type (D-3,1) (one column of length D-3 and one column of length 1) are systematically investigated. The rigidity of the Abelian gauge algebra is first established. We next prove a no-go theorem for interactions involving at most two derivatives of the fields

  12. Analysis of e-e angular correlations in near-threshold electron impact ionisation of helium

    International Nuclear Information System (INIS)

    Selles, P.; Huetz, A.; Mazeau, J.

    1987-01-01

    Using a coincidence technique in a coplanar geometry, triple differential cross sections (TDCS) for electron impact ionisation of helium are measured in the 0.5-2 eV energy range above threshold. As a few states (0 ≤ L ≤ 2) of the two outgoing electrons are obviously involved in the process, their respective intensities appear as unknown parameters in the theoretical TDCS as deduced in the frame of the Wannier theory. The authors show that almost all these parameters can be determined through normalisation to the measured TDCS in two specific geometries: in the first one the two electrons are kept in opposite directions while in the second one they remain symmetrical with respect to the incident beam. A comparison with the complete set of data is then performed. The measured TDCS are in agreement with the Wannier theory for the lowest energies (0.5 and 1 eV). At 2 eV the overall agreement becomes poorer, although some predictions of the Wannier theory still apply. Finally specific measurements at 8 eV clearly show from consideration of symmetry that the Wannier theory no longer applies at this energy. (author)

  13. Above threshold ionization of atomic hydrogen in ns states with up to four excess photons

    Energy Technology Data Exchange (ETDEWEB)

    Karule, E [Institute of Physics and Spectroscopy, University of Latvia, Raina blvd. 19, Riga, LV-1586 (Latvia); Gailitis, A, E-mail: karule@latnet.l [Institute of Physics, University of Latvia, Salaspils-1, LV-2169 (Latvia)]

    2010-03-28

    In a high-intensity laser field an atom can absorb more photons than the minimum necessary for ionization. It is known as above threshold ionization (ATI). Theoretically it is the most difficult case to handle as we have to consider transitions in continuum. To study ATI we use the perturbation theory and Green's function formalism. We have derived the modified two-term Coulomb Green's function (CGF) Sturmian expansion. In each term explicit summation over all intermediate states is carried out. The transition amplitude may be obtained in a closed form. The generalized cross sections are evaluated for the photoionization of atomic hydrogen in ns states with up to four excess photons. Calculations are performed in a wide range of wavelengths for linear and circular polarization. In the cases for which data are available, our results agree very well with the previous ones.

  14. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed a better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability than the linear models in all cases (h^2 > 0.23 and r > 0.24 for the non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
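
    As a toy version of the linear-versus-threshold comparison made for calving success, the sketch below fits a Gaussian linear model and a probit model to the same simulated binary outcome; the data-generating values and the statsmodels calls are illustrative only and do not reproduce the genetic evaluation models of the paper.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 500
      x = rng.normal(size=n)                       # stand-in covariate / fixed effect
      latent = 0.8 * x + rng.normal(size=n)        # liability, the threshold-model view
      cs = (latent > 0).astype(float)              # calving success: 1 if liability exceeds threshold

      X = sm.add_constant(x)
      linear = sm.OLS(cs, X).fit()                 # linear (Gaussian) model on the 0/1 outcome
      probit = sm.Probit(cs, X).fit(disp=False)    # threshold (probit) model

      print("linear slope :", round(linear.params[1], 3))
      print("probit slope :", round(probit.params[1], 3))

    The probit slope is interpretable on the liability scale, which is one reason threshold models are often preferred for binary fertility endpoints such as CS.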

  15. Non-linear theory of elasticity

    CERN Document Server

    Lurie, AI

    2012-01-01

    This book examines in detail the Theory of Elasticity which is a branch of the mechanics of a deformable solid. Special emphasis is placed on the investigation of the process of deformation within the framework of the generally accepted model of a medium which, in this case, is an elastic body. A comprehensive list of Appendices is included providing a wealth of references for more in depth coverage. The work will provide both a stimulus for future research in this field as well as useful reference material for many years to come.

  16. Signals and transforms in linear systems analysis

    CERN Document Server

    Wasylkiwskyj, Wasyl

    2013-01-01

    Signals and Transforms in Linear Systems Analysis covers the subject of signals and transforms, particularly in the context of linear systems theory. Chapter 2 provides the theoretical background for the remainder of the text. Chapter 3 treats Fourier series and integrals. Particular attention is paid to convergence properties at step discontinuities. This includes the Gibbs phenomenon and its amelioration via the Fejer summation techniques. Special topics include modulation and analytic signal representation, Fourier transforms and analytic function theory, time-frequency analysis and frequency dispersion. Fundamentals of linear system theory for LTI analogue systems, with a brief account of time-varying systems, are covered in Chapter 4. Discrete systems are covered in Chapters 6 and 7. The Laplace transform treatment in Chapter 5 relies heavily on analytic function theory as does Chapter 8 on Z-transforms. The necessary background on complex variables is provided in Appendix A. This book is intended to...

  17. Linear dose-response of acentric chromosome fragments down to 1 R of x-rays in grasshopper neuroblasts, a potential mutagen-test system

    International Nuclear Information System (INIS)

    Gaulden, M.E.; Read, C.B.

    1978-01-01

    Grasshopper-embryo neuroblasts have no spontaneous chromosome breakage; therefore they permit easy detection of agents that break chromosomes. An X-ray exposure of 1 R induces in them a detectable number of chromosome fragments. The dose-response of acentric fragment frequency fits a linear model between 0 and 128 R. Thus another cell type is added to those previously demonstrated to have no threshold dose for the induction of chromosome or gene mutations

  18. The semiclassical S-matrix theory of three body Coulomb break-up

    International Nuclear Information System (INIS)

    Chocian, P.

    1999-01-01

    Using semiclassical methods we investigate the threshold behaviour for 3-particle break-up of a system with one particle of charge Z and two other particles of charge -q. For the particular case where the ratio of the charge of the third particle to that of the wing particles is Z/q = 1/4, the Wannier exponent for break-up diverges and it is found that the threshold law changes from a power law to an exponential law of the form exp(-λ/√E), which is in agreement with other results. Wannier's threshold theory is extended analytically to above-threshold energies and it is found that the classical law for the divergent case is identical to an analytical result from the quantal hidden crossing theory. Corrections to the threshold behaviour for hydrogen from the above-threshold derivation are compared with those predicted by a calculation from hidden crossing theory. Excellent agreement is found, which confirms the success of our classical derivation. The threshold behaviour is tested using semiclassical S-matrix theory above the region of divergence and it is found that for Z/q […] the distribution of the initial states in S-matrix theory translates to a uniform distribution of outgoing trajectories on the boundary of the reaction zone. Observations of classical trajectories suggest that the radius of the reaction zone (R_b) is dependent on the total energy of the system. R_b is determined numerically from ionization trajectories. When the dependence on R_b is included in half-collision calculations, cross sections are produced which are in excellent agreement with full-collision S-matrix results for all values of Z > 0.25. (author)
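
    For orientation, the divergence at Z/q = 1/4 quoted above can be read off from the standard Wannier exponent for the double escape of two equal particles of charge -q from a charge Z; in its usual textbook form (quoted for context, not derived from the thesis):

      \sigma(E) \;\propto\; E^{\,\zeta}, \qquad
      \zeta \;=\; \frac{1}{4}\left[\sqrt{\frac{100\,(Z/q) - 9}{4\,(Z/q) - 1}} \; - \; 1\right],

    which reproduces the familiar value ζ ≈ 1.127 for Z/q = 1 (electron impact ionization of hydrogen) and diverges as Z/q → 1/4, where the power law goes over into the exponential form exp(-λ/√E) noted above.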

  19. Superstring threshold corrections to Yukawa couplings

    International Nuclear Information System (INIS)

    Antoniadis, I.; Taylor, T.R.

    1992-12-01

    A general method of computing string corrections to the Kaehler metric and Yukawa couplings is developed at the one-loop level for a general compactification of the heterotic superstring theory. It also provides a direct determination of the so-called Green-Schwarz term. The matter metric has an infrared divergent part which reproduces the field-theoretical anomalous dimensions, and a moduli-dependent part which gives rise to threshold corrections in the physical Yukawa couplings. Explicit expressions are derived for symmetric orbifold compactifications. (author). 20 refs

  20. Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.

    Science.gov (United States)

    Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew

    2018-01-01

    Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold (defined as the level of ventilatory drive triggering arousal from sleep) using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold-standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017.