WorldWideScience

Sample records for linear threshold nextgeneration

  1. Thresholding projection estimators in functional linear models

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimator which combines dimension reduction and thresholding. The introduction of a threshold rule allows us to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which makes it easy to obtain mean squ...
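
    A minimal numerical sketch of the kind of estimator described above, assuming a simulated functional linear model, a trigonometric basis and a hard-thresholding rule on directions with weak empirical variance; the basis size, the threshold constant and the data-generating model are all illustrative assumptions, not the authors' construction.

      # Sketch: projection onto a trigonometric basis plus hard thresholding
      # in a simulated functional linear regression (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      n, m, K = 400, 200, 15                      # curves, grid points, basis size
      t = np.linspace(0.0, 1.0, m, endpoint=False)
      dt = t[1] - t[0]

      def basis(k):                               # orthonormal trigonometric basis
          if k == 0:
              return np.ones_like(t)
          freq = (k + 1) // 2
          return np.sqrt(2.0) * (np.cos if k % 2 else np.sin)(2 * np.pi * freq * t)

      B = np.array([basis(k) for k in range(K)])                  # (K, m)

      beta_coef = np.zeros(K); beta_coef[[1, 2, 3]] = [1.5, -0.8, 0.4]
      beta = beta_coef @ B                                        # true slope function

      scores = rng.normal(size=(n, K)) / np.sqrt(1.0 + np.arange(K))
      X = scores @ B                                              # random curves X_i(t)
      Y = (X * beta).sum(axis=1) * dt + 0.1 * rng.normal(size=n)  # scalar responses

      g = X @ B.T * dt                            # empirical basis scores of each X_i
      lam = (g ** 2).mean(axis=0)                 # empirical variance per direction
      cross = g.T @ Y / n                         # empirical cross-covariances

      tau = 2.0 * np.sqrt(np.log(n) / n)          # illustrative threshold rule
      keep = lam > tau
      est_coef = np.where(keep, cross / np.maximum(lam, 1e-12), 0.0)

      print("basis directions kept:", np.flatnonzero(keep))
      print("L2 error of the estimated slope:",
            np.sqrt(((est_coef @ B - beta) ** 2).sum() * dt))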

  2. Prospects for next-generation e+e- linear colliders

    Ruth, R.D.

    1990-02-01

    The purpose of this paper is to review progress in the US towards a next-generation linear collider. During 1988, there were three workshops held on linear colliders: "Physics of Linear Colliders," in Capri, Italy, June 14--18, 1988; Snowmass 88 (Linear Collider subsection), June 27--July 15, 1988; and the SLAC International Workshop on Next Generation Linear Colliders, November 28--December 9, 1988. In this paper, I focus on reviewing the issues and progress on a next generation linear collider. The energy range is dictated by physics with a mass reach well beyond LEP, although somewhat short of the SSC. The luminosity is that required to obtain 10³--10⁴ units of R₀ per year. The length is consistent with a site on Stanford land with collisions occurring on the SLAC site; the power was determined by economic considerations. Finally, the technology was limited by the desire to have a next generation linear collider by the next century. 37 refs., 3 figs., 6 tabs

  3. Proceedings of the international workshop on next-generation linear colliders

    Riordan, M.

    1988-12-01

    This report contains papers on the next-generation of linear colliders. The particular areas of discussion are: parameters; beam dynamics and wakefields; damping rings and sources; rf power sources; accelerator structures; instrumentation; final focus; and review of beam-beam interaction

  4. Proceedings of the international workshop on next-generation linear colliders

    Riordan, M. (ed.)

    1988-12-01

    This report contains papers on the next-generation of linear colliders. The particular areas of discussion are: parameters; beam dynamics and wakefields; damping rings and sources; rf power sources; accelerator structures; instrumentation; final focus; and review of beam-beam interaction.

  5. Permitted and forbidden sets in symmetric threshold-linear networks.

    Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques

    2003-03-01

    The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
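
    A small numerical sketch of the dichotomy summarised above. It simulates symmetric threshold-linear rate dynamics and checks positive semidefiniteness and copositivity of a small matrix numerically; the dynamics dx/dt = -x + [Wx + b]_+, the example matrices, and the identification of the "network matrix" with (I - W) are illustrative assumptions rather than the paper's exact formulation.

      # Toy symmetric threshold-linear network: one interaction matrix gives a
      # single attractive fixed point, the other (copositive but not positive
      # semidefinite) gives bounded, multistable winner-take-all behaviour.
      import numpy as np

      def simulate(W, b, x0, dt=0.01, steps=20000):
          x = np.array(x0, dtype=float)
          for _ in range(steps):
              x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
          return x

      def is_psd(M, tol=1e-9):
          return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

      def is_copositive(M, samples=200000, tol=1e-9, seed=0):
          # Brute-force check of x^T M x >= 0 over random nonnegative directions.
          rng = np.random.default_rng(seed)
          X = rng.random((samples, M.shape[0]))
          X /= X.sum(axis=1, keepdims=True)
          return bool(np.all(np.einsum('ij,jk,ik->i', X, M, X) >= -tol))

      # Weak mutual excitation vs. self-excitation with strong mutual inhibition.
      W1 = np.array([[0.0, 0.4], [0.4, 0.0]])
      W2 = np.array([[0.5, -1.5], [-1.5, 0.5]])
      b = np.array([1.0, 1.0])

      for name, W in (("W1", W1), ("W2", W2)):
          M = np.eye(2) - W                       # illustrative "network matrix"
          ends = [np.round(simulate(W, b, x0), 3) for x0 in ([2.0, 0.1], [0.1, 2.0])]
          print(name, "| PSD:", is_psd(M), "| copositive:", is_copositive(M),
                "| fixed points reached:", ends)

    In the second case the two single-neuron sets behave as permitted sets while stable coactivation of both neurons never occurs, echoing the permitted/forbidden distinction in the abstract.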

  6. Radiation hormesis and the linear-no-threshold assumption

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  7. Stepwise threshold clustering: a new method for genotyping MHC loci using next-generation sequencing technology.

    William E Stutz

    Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next-generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: (1) a "gray zone" where low-frequency alleles and high-frequency artifacts can be difficult to disentangle, and (2) a similar-sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci--Stepwise Threshold Clustering (STC)--that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.

  8. An experimental test of the linear no-threshold theory of radiation carcinogenesis

    Cohen, B.L.

    1990-01-01

    There is a substantial body of quantitative information on radiation-induced cancer at high dose, but there are no data at low dose. The usual method for estimating effects of low-level radiation is to assume a linear no-threshold dependence. If this linear no-threshold assumption were not used, essentially all fears about radiation would disappear. Since these fears are costing tens of billions of dollars, it is most important that the linear no-threshold theory be tested at low dose. An opportunity for possibly testing the linear no-threshold concept is now available at low dose due to radon in homes. The purpose of this paper is to attempt to use these data to test the linear no-threshold theory

  9. Proceedings of the Fifth International Workshop on Next-Generation Linear Colliders. Addendum

    Paterson, J.M.; Asher, K.

    1993-01-01

    This report contains viewgraphs on the following topics: Electron and positron sources and injectors; damping rings, bunch compressors and pre-accelerators; RF sources and structures for normal and superconducting linacs; beam dynamics of the main accelerator; instrumentation for linear colliders; final focus and interaction regions; and overall parameters and construction techniques

  10. Low-level radiation: how the linear no-threshold model protects the safety of Canadians

    Anon.

    2010-01-01

    The linear no-threshold model is a risk model used worldwide by most nuclear regulatory health organizations to establish dose limits for workers and the public. It is at the heart of the approach adopted by the Canadian Nuclear Safety Commission (C.C.S.N.) in matters of radiation protection. The linear no-threshold model reasonably assumes that there is a direct link between radiation exposure and cancer rate. There is no scientific evidence that chronic exposure to radiation doses below 100 millisievert (mSv) leads to harmful health effects. Several scientific reports have highlighted evidence suggesting that low-level radiation is less harmful than the linear no-threshold model predicts. Since the linear no-threshold model assumes that any radiation exposure carries some risk, the ALARA principle obliges licensees to keep radiation exposure at the lowest level reasonably achievable, social and economic factors being taken into account. The ALARA principle is a cornerstone of the C.C.S.N. approach to radiation protection. With respect to radiation protection, the C.C.S.N. takes a prudent approach that protects the health and safety of Canadians and the protection of their environment. (N.C.)

  11. The non-linear link between electricity consumption and temperature in Europe: A threshold panel approach

    Bessec, Marie [CGEMP, Universite Paris-Dauphine, Place du Marechal de Lattre de Tassigny Paris (France); Fouquau, Julien [LEO, Universite d' Orleans, Faculte de Droit, d' Economie et de Gestion, Rue de Blois, BP 6739, 45067 Orleans Cedex 2 (France)

    2008-09-15

    This paper investigates the relationship between electricity demand and temperature in the European Union. We address this issue by means of a panel threshold regression model on 15 European countries over the last two decades. Our results confirm the non-linearity of the link between electricity consumption and temperature found in more limited geographical areas in previous studies. By distinguishing between North and South countries, we also find that this non-linear pattern is more pronounced in the warm countries. Finally, rolling regressions show that the sensitivity of electricity consumption to temperature in summer has increased in the recent period. (author)
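
    A minimal sketch of a threshold regression in the spirit of the study above: electricity demand responds to temperature with different slopes below and above an unknown threshold, and the threshold is chosen by a grid search minimising the residual sum of squares. The data are synthetic and the model is a single-series toy, not the authors' panel threshold specification.

      # Grid-search threshold regression on synthetic demand/temperature data.
      import numpy as np

      rng = np.random.default_rng(1)
      temp = rng.uniform(-5.0, 35.0, size=2000)            # degrees Celsius
      true_thr = 18.0
      demand = (100.0
                - 1.2 * np.minimum(temp - true_thr, 0.0)   # heating below threshold
                + 0.8 * np.maximum(temp - true_thr, 0.0)   # cooling above threshold
                + rng.normal(scale=2.0, size=temp.size))

      def fit_for_threshold(thr):
          X = np.column_stack([np.ones_like(temp),
                               np.minimum(temp - thr, 0.0),
                               np.maximum(temp - thr, 0.0)])
          coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
          rss = np.sum((demand - X @ coef) ** 2)
          return rss, coef

      grid = np.arange(5.0, 30.0, 0.25)
      best_thr = min(grid, key=lambda thr: fit_for_threshold(thr)[0])
      rss, coef = fit_for_threshold(best_thr)

      print(f"estimated threshold: {best_thr:.2f} C (true value {true_thr})")
      print("intercept, cold-side slope, warm-side slope:", np.round(coef, 2))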

  12. Checking the foundation: recent radiobiology and the linear no-threshold theory.

    Ulsh, Brant A

    2010-12-01

    The linear no-threshold (LNT) theory has been adopted as the foundation of radiation protection standards and risk estimation for several decades. The "microdosimetric argument" has been offered in support of the LNT theory. This argument postulates that energy is deposited in critical cellular targets by radiation in a linear fashion across all doses down to zero, and that this in turn implies a linear relationship between dose and biological effect across all doses. This paper examines whether the microdosimetric argument holds at the lowest levels of biological organization following low dose, low dose-rate exposures to ionizing radiation. The assumptions of the microdosimetric argument are evaluated in light of recent radiobiological studies on radiation damage in biological molecules and cellular and tissue level responses to radiation damage. There is strong evidence that radiation initially deposits energy in biological molecules (e.g., DNA) in a linear fashion, and that this energy deposition results in various forms of prompt DNA damage that may be produced in a pattern that is distinct from endogenous (e.g., oxidative) damage. However, a large and rapidly growing body of radiobiological evidence indicates that cell and tissue level responses to this damage, particularly at low doses and/or dose-rates, are nonlinear and may exhibit thresholds. To the extent that responses observed at lower levels of biological organization in vitro are predictive of carcinogenesis observed in vivo, this evidence directly contradicts the assumptions upon which the microdosimetric argument is based.

  13. A Near-Threshold Shape Resonance in the Valence-Shell Photoabsorption of Linear Alkynes

    Jacovella, U.; Holland, D. M. P.; Boyé-Péronne, S.; Gans, Bérenger; de Oliveira, N.; Ito, K.; Joyeux, D.; Archer, L. E.; Lucchese, R. R.; Xu, Hong; Pratt, S. T.

    2015-12-17

    The room-temperature photoabsorption spectra of a number of linear alkynes with internal triple bonds (e.g., 2-butyne, 2-pentyne, and 2- and 3-hexyne) show similar resonances just above the lowest ionization threshold of the neutral molecules. These features result in a substantial enhancement of the photoabsorption cross sections relative to the cross sections of alkynes with terminal triple bonds (e.g., propyne, 1-butyne, 1-pentyne,...). Based on earlier work on 2-butyne [Xu et al., J. Chem. Phys. 2012, 136, 154303], these features are assigned to excitation from the neutral highest occupied molecular orbital (HOMO) to a shape resonance with g (l = 4) character and approximate pi symmetry. This generic behavior results from the similarity of the HOMOs in all internal alkynes, as well as the similarity of the corresponding g pi virtual orbital in the continuum. Theoretical calculations of the absorption spectrum above the ionization threshold for the 2- and 3-alkynes show the presence of a shape resonance when the coupling between the two degenerate or nearly degenerate pi channels is included, with a dominant contribution from l = 4. These calculations thus confirm the qualitative arguments for the importance of the l = 4 continuum near threshold for internal alkynes, which should also apply to other linear internal alkynes and alkynyl radicals. The 1-alkynes do not have such high partial waves present in the shape resonance. The lower l partial waves in these systems are consistent with the broader features observed in the corresponding spectra.

  14. Linear, no threshold response at low doses of ionizing radiation: ideology, prejudice and science

    Kesavan, P.C.

    2014-01-01

    The linear, no threshold (LNT) response model assumes that there is no threshold dose for the radiation-induced genetic effects (heritable mutations and cancer), and it forms the current basis for radiation protection standards for radiation workers and the general public. The LNT model is, however, based more on ideology than valid radiobiological data. Further, phenomena such as 'radiation hormesis', 'radioadaptive response', 'bystander effects' and 'genomic instability' are now demonstrated to be radioprotective and beneficial. More importantly, the 'differential gene expression' reveals that qualitatively different proteins are induced by low and high doses. This finding negates the LNT model which assumes that qualitatively similar proteins are formed at all doses. Thus, all available scientific data challenge the LNT hypothesis. (author)

  15. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (day 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variation of survival traits was small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
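
    A toy contrast between a linear model and a threshold (probit, liability-scale) model for a binary survival trait, in the spirit of the comparison above. It uses synthetic data and simple fixed effects only, so it omits the pedigree-based sire and animal random effects of the actual genetic evaluation; the statsmodels routines and all parameter values are assumptions made for illustration.

      # Linear (observed-scale) vs. threshold/probit (liability-scale) model
      # fitted to a synthetic binary mortality trait with low incidence.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 5000
      herd = rng.normal(scale=0.3, size=n)            # environmental effect
      sire = rng.normal(scale=0.1, size=n)            # stands in for genetics
      liability = -1.8 + herd + sire + rng.normal(size=n)
      died = (liability > 0).astype(float)            # incidence of a few percent

      X = sm.add_constant(np.column_stack([herd, sire]))
      linear = sm.OLS(died, X).fit()
      probit = sm.Probit(died, X).fit(disp=0)

      print("observed death rate:", round(died.mean(), 3))
      print("linear-model coefficients:", np.round(linear.params, 3))
      print("probit-model coefficients:", np.round(probit.params, 3))
      # The probit coefficients recover the simulated liability-scale effects
      # (about -1.8, 1, 1); the linear-probability coefficients are much
      # smaller because they act on the observed 0/1 scale at low incidence.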

  16. Test of the linear-no threshold theory of radiation carcinogenesis

    Cohen, B.L.

    1994-01-01

    We recently completed a compilation of radon measurements from available sources which gives the average radon level in homes for 1,730 counties, well over half of all U.S. counties and comprising about 90% of the total U.S. population. Epidemiologists normally study the relationship between mortality risks to individuals, m, vs. their personal exposure, r, whereas an ecological study like ours deals with the relationship between the average risk to groups of individuals (populations of counties) and their average exposure. It is well known to epidemiologists that, in general, the average dose does not determine the average risk, and to assume otherwise is called 'the ecological fallacy'. However, it is easy to show that, in testing a linear-no threshold theory, 'the ecological fallacy' does not apply; in that theory, the average dose does determine the average risk. This is widely recognized from the fact that 'person-rem' determines the number of deaths. Dividing person-rem by population gives average dose, and dividing number of deaths by population gives mortality rate. Because of the 'ecological fallacy', epidemiology textbooks often state that an ecological study cannot determine a causal relationship between risk and exposure. That may be true, but it is irrelevant here because the purpose of our study is not to determine a causal relationship; it is rather to test the linear-no threshold dependence of m on r. (author)
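
    A worked numerical illustration, with made-up numbers, of the step the abstract relies on: if individual risk is a + b × dose, then a county's average risk depends only on its average dose, whatever the distribution of individual doses within the county.

      # Two hypothetical counties with equal mean dose but different dose spreads.
      import numpy as np

      a, b = 0.04, 0.02            # baseline lifetime risk and risk per unit dose
      rng = np.random.default_rng(3)
      county_uniform = np.full(100000, 2.0)                    # everyone gets 2.0
      county_skewed = rng.exponential(scale=2.0, size=100000)  # mean is also ~2.0

      for name, doses in (("uniform", county_uniform), ("skewed", county_skewed)):
          risk = a + b * doses
          print(f"{name:8s} mean dose = {doses.mean():.3f}   "
                f"mean risk = {risk.mean():.5f}   "
                f"a + b * mean dose = {a + b * doses.mean():.5f}")
      # Under linearity the last two columns agree exactly, which is why an
      # ecological (county-level) comparison can test the linear no-threshold
      # prediction even though it cannot establish individual-level causation.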

  17. Test of the linear-no threshold theory of radiation carcinogenesis for inhaled radon decay products

    Cohen, B.L.

    1995-01-01

    Data on lung cancer mortality rates vs. average radon concentration in homes for 1,601 U.S. counties are used to test the linear-no threshold theory. The widely recognized problems with ecological studies, as applied to this work, are addressed extensively. With or without corrections for variations in smoking prevalence, there is a strong tendency for lung cancer rates to decrease with increasing radon exposure, in sharp contrast to the increase expected from the theory. The discrepancy in slope is about 20 standard deviations. It is shown that uncertainties in lung cancer rates, radon exposures, and smoking prevalence are not important and that confounding by 54 socioeconomic factors, by geography, and by altitude and climate can explain only a small fraction of the discrepancy. Effects of known radon-smoking prevalence correlations - rural people have higher radon levels and smoke less than urban people, and smokers are exposed to less radon than non-smokers - are calculated and found to be trivial. In spite of extensive efforts, no potential explanation for the discrepancy other than failure of the linear-no threshold theory for carcinogenesis from inhaled radon decay products could be found. (author)

  18. A test of the linear-no threshold theory of radiation carcinogenesis

    Cohen, B.L.

    1990-01-01

    It has been pointed out that, while an ecological study cannot determine whether radon causes lung cancer, it can test the validity of a linear-no threshold relationship between them. The linear-no threshold theory predicts a substantial positive correlation between the average radon exposure in various counties and their lung cancer mortality rates. Data on living areas of houses in 411 counties from all parts of the United States exhibit, rather, a substantial negative correlation with the slopes of the lines of regression differing from zero by 10 and 7 standard deviations for males and females, respectively, and from the positive slope predicted by the theory by at least 16 and 12 standard deviations. When the data are segmented into 23 groups of states or into 7 regions of the country, the predominantly negative slopes and correlations persist, applying to 18 of the 23 state groups and 6 of the 7 regions. Five state-sponsored studies are analyzed, and four of these give a strong negative slope (the other gives a weak positive slope, in agreement with our data for that state). A strong negative slope is also obtained in our data on basements in 253 counties. A random selection-no charge study of 39 high and low lung cancer counties (+4 low population states) gives a much stronger negative correlation. When nine potential confounding factors are included in a multiple linear regression analysis, the discrepancy with theory is reduced only to 12 and 8.5 standard deviations for males and females, respectively. When the data are segmented into four groups by population, the multiple regression vs radon level gives a strong negative slope for each of the four groups. Other considerations are introduced to reduce the discrepancy, but it remains very substantial

  19. Linear non-threshold (LNT) radiation hazards model and its evaluation

    Min Rui

    2011-01-01

    In order to introduce the linear non-threshold (LNT) model used in studies of the dose effects of radiation hazards and to evaluate its application, a comprehensive analysis of the literature was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can account well for cell survival curves across the high, medium and low absorbed dose ranges. There are still many uncertainties in assessment models of the effective dose from internal radiation based on the LNT assumptions and individual mean organ equivalents, and it is necessary to establish gender-specific voxel human models that take gender differences into account. In summary, the advantages and disadvantages of the various models coexist. Until a new theory and model are established, the LNT model remains the most scientifically defensible choice. (author)

  20. The risk of low doses of ionising radiation and the linear no threshold relationship debate

    Tubiana, M.; Masse, R.; Vathaire, F. de; Averbeck, D.; Aurengo, A.

    2007-01-01

    The ICRP and the B.E.I.R. VII reports recommend a linear no-threshold (L.N.T.) relationship for estimating the excess cancer risk induced by ionising radiation (IR), but the 2005 report of the French Academies of Medicine and Science concludes that it overestimates the risk at low and very low doses. The bases of the L.N.T. are challenged by recent biological and animal experimental studies which show that the defence against IR involves the cell microenvironment and the immune system. The defence mechanisms against low doses are different and comparatively more effective than those against high doses. Against low doses, cell death is predominant; against high doses, DNA repair is activated in order to preserve tissue functions. These mechanisms provide multicellular organisms with an effective and low-cost defence system. The differences between low- and high-dose defence mechanisms are obvious for alpha emitters, which show threshold effects at several gray. These differences undermine epidemiological studies which, for reasons of statistical power, amalgamate high- and low-dose exposure data, since doing so implies that cancer induction by IR and the defence mechanisms are similar in both cases. Low-dose IR risk estimates should rely on specific epidemiological studies restricted to low-dose exposures and taking potential confounding factors precisely into account. A preliminary synthesis of cohort studies for which low-dose data (< 100 mSv) were available shows no significant excess risk, either for solid cancers or for leukaemias. (authors)

  1. A biological basis for the linear non-threshold dose-response relationship for low-level carcinogen exposure

    Albert, R.E.

    1981-01-01

    This chapter examines low-level dose-response relationships in terms of the two-stage mouse tumorigenesis model. Analyzes the feasibility of the linear non-threshold dose-response model which was first adopted for use in the assessment of cancer risks from ionizing radiation and more recently from chemical carcinogens. Finds that both the interaction of B(a)P with epidermal DNA of the mouse skin and the dose-response relationship for the initiation stage of mouse skin tumorigenesis showed a linear non-threshold dose-response relationship. Concludes that low level exposure to environmental carcinogens has a linear non-threshold dose-response relationship with the carcinogen acting as an initiator and the promoting action being supplied by the factors that are responsible for the background cancer rate in the target tissue

  2. Polarization properties of below-threshold harmonics from aligned molecules H2+ in linearly polarized laser fields.

    Dong, Fulong; Tian, Yiqun; Yu, Shujuan; Wang, Shang; Yang, Shiping; Chen, Yanjun

    2015-07-13

    We investigate the polarization properties of below-threshold harmonics from aligned molecules in linearly polarized laser fields numerically and analytically. We focus on lower-order harmonics (LOHs). Our simulations show that the ellipticity of below-threshold LOHs depends strongly on the orientation angle and differs significantly for different harmonic orders. Our analysis reveals that this LOH ellipticity is closely associated with resonance effects and the axis symmetry of the molecule. These results shed light on the complex generation mechanism of below-threshold harmonics from aligned molecules.

  3. Groundwater decline and tree change in floodplain landscapes: Identifying non-linear threshold responses in canopy condition

    J. Kath

    2014-12-01

    Groundwater decline is widespread, yet its implications for natural systems are poorly understood. Previous research has revealed links between groundwater depth and tree condition; however, critical thresholds which might indicate ecological ‘tipping points’ associated with rapid and potentially irreversible change have been difficult to quantify. This study collated data for two dominant floodplain species, Eucalyptus camaldulensis (river red gum) and E. populnea (poplar box), from 118 sites in eastern Australia where significant groundwater decline has occurred. Boosted regression trees, quantile regression and Threshold Indicator Taxa Analysis were used to investigate the relationship between tree condition and groundwater depth. Distinct non-linear responses were found, with groundwater depth thresholds identified in the range from 12.1 m to 22.6 m for E. camaldulensis and 12.6 m to 26.6 m for E. populnea, beyond which canopy condition declined abruptly. Non-linear threshold responses in canopy condition in these species may be linked to rooting depth, with chronic groundwater decline decoupling trees from deep soil moisture resources. The quantification of groundwater depth thresholds is likely to be critical for management aimed at conserving groundwater-dependent biodiversity. Identifying thresholds will be important in regions where water extraction and drying climates may contribute to further groundwater decline. Keywords: Canopy condition, Dieback, Drought, Tipping point, Ecological threshold, Groundwater dependent ecosystems

  4. Mirror structures above and below the linear instability threshold: Cluster observations, fluid model and hybrid simulations

    V. Génot

    2009-02-01

    Using 5 years of Cluster data, we present a detailed statistical analysis of magnetic fluctuations associated with mirror structures in the magnetosheath. We especially focus on the shape of these fluctuations which, in addition to quasi-sinusoidal forms, also display deep holes and high peaks. The occurrence frequency and the most probable location of the various types of structures are discussed, together with their relation to local plasma parameters. While these properties have previously been correlated to the β of the plasma, we emphasize here the influence of the distance to the linear mirror instability threshold. This enables us to interpret the observations of mirror structures in a stable plasma in terms of bistability and subcritical bifurcation. The data analysis is supplemented by the prediction of a quasi-static anisotropic MHD model and hybrid numerical simulations in an expanding box aimed at mimicking the magnetosheath plasma. This leads us to suggest a scenario for the formation and evolution of mirror structures.

  5. Test of the linear-no threshold theory of radiation carcinogenesis

    Cohen, B.L.

    1998-01-01

    It is shown that testing the linear-no threshold theory (L-NT) of radiation carcinogenesis is extremely important and that lung cancer resulting from exposure to radon in homes is the best tool for doing this. A study of lung cancer rates vs radon exposure in U.S. Counties, reported in 1975, is reviewed. It shows, with extremely powerful statistics, that lung cancer rates decrease with increasing radon exposure, in sharp contrast to the prediction of L-NT, with a discrepancy of over 20 standard deviations. Very extensive efforts were made to explain an appreciable part of this discrepancy consistently with L-NT, with no success; it was concluded that L-NT fails, grossly exaggerating the cancer risk of low level radiation. Two updating studies reported in 1996 are also reviewed. New updating studies utilizing more recent lung cancer statistics and considering 450 new potential confounding factors are reported. All updates reinforce the previous conclusion, and the discrepancy with L-NT is increased. (author)

  6. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
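
    A minimal sketch of the statistics named above, applied to synthetic paired assay measurements: Bland-Altman bias and limits of agreement, Deming regression with errors in both methods, and ordinary least squares for comparison. The formulas are the standard textbook ones and the data are invented; nothing here is code from the paper.

      # Bland-Altman and Deming regression on synthetic method-comparison data.
      import numpy as np

      rng = np.random.default_rng(4)
      truth = rng.uniform(5.0, 100.0, size=60)               # e.g. variant allele %
      method_a = truth + rng.normal(scale=2.0, size=60)
      method_b = 1.5 + 1.05 * truth + rng.normal(scale=2.0, size=60)

      # Bland-Altman: bias and 95% limits of agreement
      diff = method_b - method_a
      bias, sd = diff.mean(), diff.std(ddof=1)
      print(f"Bland-Altman bias = {bias:.2f}, limits of agreement = "
            f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}]")

      def deming(x, y, delta=1.0):
          """Deming regression assuming error-variance ratio delta (here 1)."""
          sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                   + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
          return y.mean() - slope * x.mean(), slope          # intercept, slope

      ols_slope, ols_intercept = np.polyfit(method_a, method_b, 1)
      print("Deming intercept, slope:", np.round(deming(method_a, method_b), 3))
      print("OLS    intercept, slope:", round(ols_intercept, 3), round(ols_slope, 3))
      # A slope away from 1 flags proportional error and a nonzero intercept
      # flags constant error -- information that R^2 alone does not give.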

  7. Validity of the linear no-threshold theory of radiation carcinogenesis at low doses

    Cohen, B.L.

    1999-01-01

    A great deal is known about the cancer risk of high radiation doses from studies of Japanese A-bomb survivors, patients exposed for medical therapy, occupational exposures, etc. But the vast majority of important applications deal with much lower doses, usually accumulated at much lower dose rates, referred to as 'low-level radiation' (LLR). Conventionally, the cancer risk from LLR has been estimated by the use of linear no-threshold theory (LNT). For example, it is assumed that the cancer risk from 0.01 Sv (100 mrem) of dose is 0.01 times the risk from 1 Sv (100 rem). In recent years, the former risk estimates have often been reduced by a 'dose and dose rate reduction factor', which is taken to be a factor of 2. But otherwise, the LNT is frequently used for doses as low as one hundred-thousandth of those for which there is direct evidence of cancer induction by radiation. It is the origin of the commonly used expression 'no level of radiation is safe' and the consequent public fear of LLR. The importance of this use of the LNT cannot be exaggerated; it is used in many applications in the nuclear industry. The LNT paradigm has also been carried over to chemical carcinogens, leading to severe restrictions on use of cleaning fluids, organic chemicals, pesticides, etc. If the LNT were abandoned for radiation, it would probably also be abandoned for chemical carcinogens. In view of these facts, it is important to consider the validity of the LNT. That is the purpose of this paper. (author)
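
    A tiny worked example of the proportional scaling described above, with a placeholder high-dose risk coefficient and the factor-of-2 dose-rate reduction mentioned in the abstract; the numbers are for illustration only, not recommended values.

      # Linear no-threshold extrapolation: risk scales in direct proportion to dose.
      risk_at_1_sv = 0.05          # hypothetical lifetime cancer risk at 1 Sv (acute)
      ddref = 2.0                  # dose and dose-rate reduction factor

      def lnt_risk(dose_sv, apply_ddref=True):
          risk = risk_at_1_sv * dose_sv
          return risk / ddref if apply_ddref else risk

      for dose in (1.0, 0.1, 0.01, 1e-5):
          print(f"dose = {dose:>7.5f} Sv  ->  LNT risk = {lnt_risk(dose):.2e}")
      # Before the dose-rate factor, the 0.01 Sv (100 mrem) risk is exactly
      # 0.01 times the 1 Sv risk -- the proportionality the abstract refers to.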

  8. Genomic analysis of cow mortality and milk production using a threshold-linear model.

    Tsuruta, S; Lourenco, D A L; Misztal, I; Lawlor, T J

    2017-09-01

    The objective of this study was to investigate the feasibility of genomic evaluation for cow mortality and milk production using a single-step methodology. Genomic relationships between cow mortality and milk production were also analyzed. Data included 883,887 (866,700) first-parity, 733,904 (711,211) second-parity, and 516,256 (492,026) third-parity records on cow mortality (305-d milk yields) of Holsteins from Northeast states in the United States. The pedigree consisted of up to 1,690,481 animals including 34,481 bulls genotyped with 36,951 SNP markers. Analyses were conducted with a bivariate threshold-linear model for each parity separately. Genomic information was incorporated as a genomic relationship matrix in the single-step BLUP. Traditional and genomic estimated breeding values (GEBV) were obtained with Gibbs sampling using fixed variances, whereas reliabilities were calculated from variances of GEBV samples. Genomic EBV were then converted into single nucleotide polymorphism (SNP) marker effects. Those SNP effects were categorized according to values corresponding to 1 to 4 standard deviations. Moving averages and variances of SNP effects were calculated for windows of 30 adjacent SNP, and Manhattan plots were created for SNP variances with the same window size. Using Gibbs sampling, the reliability for genotyped bulls for cow mortality was 28 to 30% in EBV and 70 to 72% in GEBV. The reliability for genotyped bulls for 305-d milk yields was 53 to 65% in EBV and 81 to 85% in GEBV. Correlations of SNP effects between mortality and 305-d milk yields within categories were the highest with the largest SNP effects and reached >0.7 at 4 standard deviations. All SNP regions explained less than 0.6% of the genetic variance for both traits, except regions close to the DGAT1 gene, which explained up to 2.5% for cow mortality and 4% for 305-d milk yields. Reliability for GEBV with a moderate number of genotyped animals can be calculated by Gibbs samples. Genomic
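
    A sketch of the windowed SNP-effect summary described above: moving averages and variances of marker effects over windows of 30 adjacent SNP, with each window's share of the summed variance as a rough stand-in for the percentage of genetic variance explained. Random numbers replace the real SNP effects, and the spiked region is invented.

      # Moving-window summaries of SNP effects (synthetic effects, window of 30).
      import numpy as np

      rng = np.random.default_rng(5)
      n_snp, window = 36951, 30
      snp_effects = rng.normal(scale=0.01, size=n_snp)
      snp_effects[21000:21030] += 0.05            # hypothetical large-effect region

      def moving(stat, values, w):
          return np.array([stat(values[i:i + w]) for i in range(len(values) - w + 1)])

      win_mean = moving(np.mean, snp_effects, window)
      win_var = moving(np.var, snp_effects, window)
      share = 100.0 * win_var / win_var.sum()     # rough % contribution per window

      top = int(np.argmax(share))
      print(f"largest-variance window starts at SNP {top} and contributes "
            f"{share[top]:.2f}% of the summed window variance")
      # Plotting win_var against window position gives the Manhattan-style view
      # mentioned in the abstract, with a spike around the inserted region.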

  9. Evapotranspiration patterns in complex upland forests reveal contrasting topographic thresholds of non-linearity

    Metzen, D.; Sheridan, G. J.; Benyon, R. G.; Bolstad, P. V.; Nyman, P.; Lane, P. N. J.

    2017-12-01

    Large areas of forest are often treated as being homogeneous just because they fall in a single climate category. However, we observe strong vegetation patterns in relation to topography in SE Australian forests and thus hypothesise that ET will vary spatially as well. Spatial heterogeneity evolves over different temporal scales in response to climatic forcing, with increasing time lag from soil moisture (sub-yearly), to vegetation (10s–100s of years), to soil properties and topography (>100s of years). Most importantly, these processes and time scales are not independent, creating feedbacks that result in "co-evolved stable states" which yield the current spatial terrain, vegetation and ET patterns. We used up-scaled sap flux and understory ET measurements from water-balance plots, as well as LiDAR-derived terrain and vegetation information, to infer links between spatio-temporal energy and water fluxes, topography and vegetation patterns at small catchment scale. Topography caused variations in aridity index between polar- and equatorial-facing slopes (1.3 vs 1.8), which in turn manifested in significant differences in sapwood area index (6.9 vs 5.8), overstory LAI (3.0 vs 2.3), understory LAI (0.5 vs 0.4), sub-canopy radiation load (4.6 vs 6.8 MJ m⁻² d⁻¹), overstory transpiration (501 vs 347 mm a⁻¹) and understory ET (79 vs 155 mm a⁻¹). Large spatial variation in overstory transpiration (195 to 891 mm a⁻¹) was observed over very short distances (100s m); a range representative of diverse forests such as arid open woodlands and wet mountain ash forests. Contrasting, non-linear overstory and understory ET patterns were unveiled between aspects, and topographic thresholds were lower for overstory than understory ET. While ET partitioning remained stable on polar-facing slopes regardless of slope position, overstory contribution gradually decreased with increasing slope inclination on equatorial aspects. Further, we show that ET patterns and controls underlie strong

  10. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  11. Thresholds, switches and hysteresis in hydrology from the pedon to the catchment scale: a non-linear systems theory

    2007-01-01

    Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others, is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
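
    A minimal Preisach-type sketch of the idea summarised above, assuming a population of relay operators ("hysterons") with switch-up threshold alpha and switch-down threshold beta (alpha >= beta); the output is the average relay state and depends on the input history rather than on the current input alone. Thresholds and the input sequence are illustrative, not taken from the paper.

      # Rate-independent branching: the same input value yields different outputs
      # depending on the sequence of past maxima and minima of the input.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 2000
      beta = rng.uniform(0.0, 1.0, size=n)             # switch-down thresholds
      alpha = beta + rng.uniform(0.0, 1.0 - beta)      # switch-up thresholds >= beta
      state = np.zeros(n)                              # all relays start "off"

      def step(u):
          """Apply input u and return the Preisach output (mean relay state)."""
          state[u >= alpha] = 1.0                      # switch up past alpha
          state[u <= beta] = 0.0                       # switch down past beta
          return state.mean()

      for u in (0.2, 0.5, 0.9, 0.6, 0.4, 0.6, 0.9):    # up, partly down, up again
          print(f"input {u:.1f} -> output {step(u):.3f}")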

  12. Linear-No-Threshold Default Assumptions for Noncancer and Nongenotoxic Cancer Risks: A Mathematical and Biological Critique.

    Bogen, Kenneth T

    2016-03-01

    To improve U.S. Environmental Protection Agency (EPA) dose-response (DR) assessments for noncarcinogens and for nonlinear mode of action (MOA) carcinogens, the 2009 NRC Science and Decisions Panel recommended that the adjustment-factor approach traditionally applied to these endpoints should be replaced by a new default assumption that both endpoints have linear-no-threshold (LNT) population-wide DR relationships. The panel claimed this new approach is warranted because population DR is LNT when any new dose adds to a background dose that explains background levels of risk, and/or when there is substantial interindividual heterogeneity in susceptibility in the exposed human population. Mathematically, however, the first claim is either false or effectively meaningless and the second claim is false. Any dose- and population-response relationship that is statistically consistent with an LNT relationship may instead be an additive mixture of just two quasi-threshold DR relationships, which jointly exhibit low-dose S-shaped, quasi-threshold nonlinearity just below the lower end of the observed "linear" dose range. In this case, LNT extrapolation would necessarily overestimate increased risk by increasingly large relative magnitudes at diminishing values of above-background dose. The fact that chemically-induced apoptotic cell death occurs by unambiguously nonlinear, quasi-threshold DR mechanisms is apparent from recent data concerning this quintessential toxicity endpoint. The 2009 NRC Science and Decisions Panel claims and recommendations that default LNT assumptions be applied to DR assessment for noncarcinogens and nonlinear MOA carcinogens are therefore not justified either mathematically or biologically. © 2015 The Author. Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
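
    A numerical illustration of the mathematical point above, using made-up curve parameters: an additive mixture of two quasi-threshold (sigmoidal) dose-response components, each bending just below the observed dose range, can look very close to linear over that range, while a linear extrapolation of the observed trend overestimates the mixture response by large factors at lower doses.

      # Mixture of two Hill-type quasi-threshold components vs. linear extrapolation.
      import numpy as np

      def hill(dose, top, d50, n):
          """A smooth quasi-threshold (sigmoidal) dose-response component."""
          return top * dose ** n / (d50 ** n + dose ** n)

      def mixture(dose):
          return hill(dose, 0.05, 1.5, 4) + hill(dose, 0.30, 12.0, 2.5)

      observed = np.linspace(2.0, 10.0, 40)                 # "observed" dose range
      response = mixture(observed)

      slope, intercept = np.polyfit(observed, response, 1)  # linear fit to the range
      r2 = np.corrcoef(observed, response)[0, 1] ** 2
      print(f"linear fit over the observed range: R^2 = {r2:.3f}")

      for dose in (1.0, 0.5, 0.1):
          ratio = (intercept + slope * dose) / mixture(dose)
          print(f"dose {dose:4.1f}: linear extrapolation / mixture = {ratio:.0f}")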

  13. Multi-stratified multiple regression tests of the linear/no-threshold theory of radon-induced lung cancer

    Cohen, B.L.

    1992-01-01

    A plot of lung-cancer rates versus radon exposures in 965 US counties, or in all US states, has a strong negative slope, b, in sharp contrast to the strong positive slope predicted by linear/no-threshold theory. The discrepancy between these slopes exceeds 20 standard deviations (SD). Including smoking frequency in the analysis substantially improves fits to a linear relationship but has little effect on the discrepancy in b, because correlations between smoking frequency and radon levels are quite weak. Including 17 socioeconomic variables (SEV) in multiple regression analysis reduces the discrepancy to 15 SD. Data were divided into segments by stratifying on each SEV in turn, and on geography, and on both simultaneously, giving over 300 data sets to be analyzed individually, but negative slopes predominated. The slope is negative whether one considers only the most urban counties or only the most rural; only the richest or only the poorest; only the richest in the South Atlantic region or only the poorest in that region, and so on; and for all the strata in between. Since this is an ecological study, the well-known problems with ecological studies were investigated and found not to be applicable here. The 'ecological fallacy' was shown not to apply in testing a linear/no-threshold theory, and the vulnerability to confounding is greatly reduced when confounding factors are only weakly correlated with radon levels, as is generally the case here. All confounding factors known to correlate with radon and with lung cancer were investigated quantitatively and found to have little effect on the discrepancy

  14. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non linear analysis). Part II. The oscillations thresholds (linearized analysis)

    Boure, J.

    1967-01-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay-times and a density effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between enthalpy and density of the fluid. In the first part, non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can be pointed out. In the second part, the start of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived, after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield (1) a relation between the describing parameters at the onset of oscillations, and (2) the frequency of the oscillations. By comparing the results predicted by the model to the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author)

  15. Molecular biology, epidemiology, and the demise of the linear no-threshold hypothesis

    Pollycove, M.

    1998-01-01

    The LNT hypothesis is the basic principle of all radiation protection policy. This theory assumes that all radiation doses, even those close to zero, are harmful in linear proportion to dose and that all doses produce a proportionate number of harmful mutations, i.e., mis- or unrepaired DNA alterations. The LNT theory is used to generate collective dose calculations of the number of deaths produced by minute fractions of background radiation. Current molecular biology reveals an enormous amount of relentless metabolic oxidative free radical damage with mis/unrepaired alterations of DNA. The corresponding mis/unrepaired DNA alterations produced by background radiation are negligible. These DNA alterations are effectively disposed of by the DNA damage-control biosystem of antioxidant prevention, enzymatic repair, and mutation removal. High-dose radiation injures this biosystem with associated risk increments of mortality and cancer mortality. Low-dose radiation stimulates DNA damage-control with associated epidemiologic observations of risk decrements of mortality and cancer mortality, i.e., hormesis. How can this 40-year-old LNT paradigm continue to be the operative principle of radiation protection policy despite the contradictory scientific observations of both molecular biology and epidemiology and the lack of any supportive human data? The increase of public fear through repeated statements of deaths caused by 'deadly' radiation has engendered an enormous increase in expenditures now required to 'protect' the public from all applications of nuclear technology: medical, research, energy, disposal, and cleanup remediation. Government funds are allocated to appointed committees, the research they support, and to multiple environmental and regulatory agencies. The LNT theory and multibillion dollar radiation activities have now become a symbiotic self-sustaining powerful political and economic force. (author)

  16. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons.

    Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong

    2018-07-01

    This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.

  17. Demystifying nuclear power: the linear non-threshold model and its use for evaluating radiation effects on living organisms

    Ramos, Alexandre F.; Vasconcelos, Miguel F.; Vergueiro, Sophia M. C.; Lima, Suzylaine S., E-mail: alex.ramos@usp.br [Universidade de São Paulo (USP), SP (Brazil). Núcleo Interdisciplinar de Modelagem de Sistemas Complexos

    2017-07-01

    Recently, a new variable has been introduced into nuclear power expansion policy: public opinion. That variable challenges the nuclear community to develop new programs aimed at educating sectors of society that are interested in energy generation but not necessarily familiar with concepts of the nuclear field. Here we approach this challenge by discussing how a misconception about the use of theories in science has misled the interpretation of the consequences of the Chernobyl accident. That discussion has been presented to students from fields related to the Environmental Sciences and Humanities and has helped to elucidate that an extrapolation such as the Linear Non-Threshold model is a hypothesis to be tested experimentally rather than a theoretical tool with predictive power. (author)

  18. Demystifying nuclear power: the linear non-threshold model and its use for evaluating radiation effects on living organisms

    Ramos, Alexandre F.; Vasconcelos, Miguel F.; Vergueiro, Sophia M. C.; Lima, Suzylaine S.

    2017-01-01

    Recently, a new variable has been introduced into nuclear power expansion policy: public opinion. That variable challenges the nuclear community to develop new programs aimed at educating sectors of society that are interested in energy generation but not necessarily familiar with concepts of the nuclear field. Here we approach this challenge by discussing how a misconception about the use of theories in science has misled the interpretation of the consequences of the Chernobyl accident. That discussion has been presented to students from fields related to the Environmental Sciences and Humanities and has helped to elucidate that an extrapolation such as the Linear Non-Threshold model is a hypothesis to be tested experimentally rather than a theoretical tool with predictive power. (author)

  19. Top quark threshold scan and study of detectors for highly granular hadron calorimeters at future linear colliders

    Tesar, Michal

    2014-01-01

    Two major projects for future linear electron-positron colliders, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC), are currently under development. These projects can be seen as complementary machines to the Large Hadron Collider (LHC) which permit further progress in high-energy physics research. They overlap considerably and share the same technological approaches. To meet the ambitious goals of precise measurements, new detector concepts like very finely segmented calorimeters are required. We study the precision of the top quark mass measurement achievable at CLIC and the ILC. The employed method was a t anti-t pair production threshold scan. In this technique, simulated measurement points of the t anti-t production cross section around the threshold are fitted with theoretical curves calculated at next-to-next-to-leading order. Detector effects, the influence of the beam energy spectrum and initial state radiation of the colliding particles are taken into account. Assuming a total integrated luminosity of 100 fb⁻¹, our results show that the top quark mass in a theoretically well-defined 1S mass scheme can be extracted with a combined statistical and systematic uncertainty of less than 50 MeV. The other part of this work regards experimental studies of highly granular hadron calorimeter (HCAL) elements. To meet the required high jet energy resolution at the future linear colliders, a large and finely segmented detector is needed. One option is to assemble a sandwich calorimeter out of many low-cost scintillators read out by silicon photomultipliers (SiPM). We characterize the areal homogeneity of SiPM response with the help of a highly collimated beam of pulsed visible light. The spatial resolution of the experiment reaches the order of 1 μm and allows one to study the active area structures within single SiPM microcells. Several SiPM models are characterized in terms of relative photon detection efficiency and crosstalk probability
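
    A toy version of the threshold-scan fitting technique described above: simulated cross-section measurements around a production threshold are fitted with a parametrised curve to extract the threshold position, a stand-in for the mass determination. A smooth error-function shape replaces the real next-to-next-to-leading-order cross-section curves, and all numbers are illustrative.

      # Least-squares extraction of a threshold position from a simulated scan.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def xsec(e_cm, e_thr, norm, width):
          """Toy cross section rising smoothly at the threshold energy e_thr (GeV)."""
          return norm * 0.5 * (1.0 + erf((e_cm - e_thr) / width))

      true_thr, true_norm, true_width = 344.0, 0.8, 2.5      # GeV, pb, GeV
      scan = np.arange(338.0, 352.0, 1.0)                    # scan energies (GeV)
      stat_err = 0.03                                        # pb, per scan point

      rng = np.random.default_rng(7)
      measured = xsec(scan, true_thr, true_norm, true_width) \
                 + rng.normal(scale=stat_err, size=scan.size)

      popt, pcov = curve_fit(xsec, scan, measured, p0=[343.0, 1.0, 2.0],
                             sigma=np.full_like(measured, stat_err),
                             absolute_sigma=True)
      print(f"fitted threshold = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} GeV "
            f"(true value {true_thr})")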

  20. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    Calabrese, Edward J.

    2015-01-01

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  1. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu

    2015-10-15

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  2. Gradient-driven flux-tube simulations of ion temperature gradient turbulence close to the non-linear threshold

    Peeters, A. G.; Rath, F.; Buchholz, R.; Grosshauser, S. R.; Strintzi, D.; Weikl, A. [Physics Department, University of Bayreuth, Universitätsstrasse 30, Bayreuth (Germany); Camenen, Y. [Aix Marseille Univ, CNRS, PIIM, UMR 7345, Marseille (France); Candy, J. [General Atomics, PO Box 85608, San Diego, California 92186-5608 (United States); Casson, F. J. [CCFE, Culham Science Centre, Abingdon OX14 3DB, Oxon (United Kingdom); Hornsby, W. A. [Max Planck Institut für Plasmaphysik, Boltzmannstrasse 2 85748 Garching (Germany)

    2016-08-15

    It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long wave length zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.

  3. HERITABILITY AND BREEDING VALUE OF SHEEP FERTILITY ESTIMATED BY MEANS OF THE GIBBS SAMPLING METHOD USING THE LINEAR AND THRESHOLD MODELS

    Dariusz Piwczynski

    2013-03-01

    The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001 and kept in 15 flocks from the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam and flock × year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Estimates of the heritability of fertility, depending on the model, ranged from 0.067 to 0.104, whereas the estimates of repeatability were 0.076 and 0.139, respectively. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method) using linear and threshold models. The animal breeding-value rankings obtained for the same trait with the linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends of fertility (0.01-0.08% per year) were found.
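
    As a minimal illustration of the ranking comparison reported above (rs = 0.972), the sketch below computes a Spearman rank correlation between breeding values predicted by two hypothetical models; the numbers are invented and only show the mechanics of the comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical breeding values for 50 animals from a linear and a threshold model.
ebv_linear = rng.normal(size=50)
ebv_threshold = 0.9 * ebv_linear + 0.1 * rng.normal(size=50)

rho, p_value = stats.spearmanr(ebv_linear, ebv_threshold)
print(f"Spearman rank correlation between the two rankings: {rho:.3f}")
```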

  4. Threshold-linear analysis of measures of fertility in artificial insemination data and days to calving in beef cattle.

    Donoghue, K A; Rekaya, R; Bertrand, J K; Misztal, I

    2004-04-01

    Mating and calving records for 47,533 first-calf heifers in Australian Angus herds were used to examine the relationship between days to calving (DC) and two measures of fertility in AI data: 1) calving to first insemination (CFI) and 2) calving success (CS). Calving to first insemination and calving success were defined as binary traits. A threshold-linear Bayesian model was employed for both analyses: 1) DC and CFI and 2) DC and CS. Posterior means (SD) of the additive covariance and the corresponding genetic correlation between DC and CFI were -0.62 d (0.19 d) and -0.66 (0.12), respectively. The corresponding point estimates between DC and CS were -0.70 d (0.14 d) and -0.73 (0.06), respectively. These genetic correlations indicate a strong, negative relationship between DC and both measures of fertility in AI data. Selecting for animals with genetically shorter DC intervals will lead to correlated increases in both CS and CFI. Posterior means (SD) for the additive and residual variances and the heritability of DC in the DC-CFI analysis were 23.5 d² (4.1 d²), 363.2 d² (4.8 d²), and 0.06 (0.01), respectively. The corresponding parameter estimates for the DC-CS analysis were very similar. Posterior means (SD) for the additive, herd-year and service-sire variances and the heritability of CFI were 0.04 (0.01), 0.06 (0.06), 0.14 (0.16), and 0.03 (0.01), respectively. Posterior means (SD) for the additive, herd-year, and service-sire variances and the heritability of CS were 0.04 (0.01), 0.07 (0.07), 0.14 (0.16), and 0.03 (0.01), respectively. The similarity of the parameter estimates for CFI and CS suggests that either trait could be used as a measure of fertility in AI data. However, the definition of CFI allows the identification of animals that not only record a calving event, but calve to their first insemination, and the value of this trait would be even greater in a more complete dataset than that used in this study. The magnitude of the correlations between DC and CS-CFI suggest that

  5. Genetic analysis for visual scores of bovines with the linear and threshold Bayesian models

    Carina Ubirajara de Faria

    2008-07-01

    The objective of this work was to compare estimates of genetic parameters obtained in single-trait and two-trait Bayesian analyses, under linear and threshold animal models, considering categorical morphological traits of cattle of the Nelore breed. Data on musculature, physical structure and conformation were obtained between 2000 and 2005 from 3,864 animals on 13 farms participating in the Nelore Brazil Program. Single-trait and two-trait Bayesian analyses were performed under threshold and linear models. In general, the threshold and linear models were efficient in estimating genetic parameters for visual scores in single-trait Bayesian analyses. In the two-trait analyses, it was observed that, when continuous and categorical data were used, the threshold model yielded genetic correlation estimates of greater magnitude than those of the linear model, and that, when categorical data were used, the heritability estimates were similar. The advantage of the linear model was the shorter processing time of the analyses. In the genetic evaluation of animals for visual scores, the use of the threshold or the linear model did not affect the ranking of animals by predicted breeding values, indicating that both models can be used in breeding programs.

  6. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq finds the complexity of the sequences by counting the number of unique k-mers in each sequence as its corresponding score, and also takes into account other factors such as ambiguous nucleotides or a high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters out those with a z-score less than the user-defined threshold. The Zseq algorithm is able to provide a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than those from other tested methods. Estimating the threshold of the cutoff point is introduced using labeling rules, with optimistic results.
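
    The sketch below illustrates the kind of complexity filtering the abstract describes, not the Zseq implementation itself: each read is scored by its number of unique k-mers, the scores are converted to z-scores, and reads below a user-defined cutoff are discarded. The reads, k-mer size and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def unique_kmer_count(read: str, k: int = 5) -> int:
    """Number of distinct k-mers in a read, used as a simple complexity score."""
    return len({read[i:i + k] for i in range(len(read) - k + 1)})

reads = [
    "ACGTACGTTGACCATGGTCAAGCTTACG",   # reasonably complex
    "AAAAAAAAAAAAAAAAAAAAAAAAAAAA",   # low complexity
    "ACGTACGTACGTACGTACGTACGTACGT",   # repetitive
    "TTGCAGGCTAACCGTATCGGATCCAGTA",   # reasonably complex
]

scores = [unique_kmer_count(r) for r in reads]
mu, sd = mean(scores), stdev(scores)
z_cutoff = -0.5                        # user-defined z-score threshold (assumption)

kept = [r for r, s in zip(reads, scores) if (s - mu) / sd >= z_cutoff]
print(f"kept {len(kept)} of {len(reads)} reads")
```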

  7. Performance Evaluation of Linear (ARMA and Threshold Nonlinear (TAR Time Series Models in Daily River Flow Modeling (Case Study: Upstream Basin Rivers of Zarrineh Roud Dam

    Farshad Fathian

    2017-01-01

    Introduction: Time series models are generally categorized as data-driven or mathematically based methods. These models are known as one of the most important tools in the modeling and forecasting of hydrological processes and are used for the design and scientific management of water resources projects. On the other hand, a better understanding of the river flow process is vital for appropriate streamflow modeling and forecasting. One of the main concerns of hydrological time series modeling is whether the hydrologic variable is governed by linear or nonlinear models through time. Although linear time series models have been widely applied in hydrology research, there has been some recent increasing interest in the application of nonlinear time series approaches. The threshold autoregressive (TAR) method is frequently applied in modeling the mean (first-order moment) of financial and economic time series. This type of model has not yet received considerable attention from the hydrological community. The main purposes of this paper are to analyze and discuss stochastic modeling of daily river flow time series of the study area using linear (ARMA: autoregressive moving average) and non-linear (two- and three-regime TAR) models. Material and Methods: The study area consists of four sub-basins, namely Saghez Chai, Jighato Chai, Khorkhoreh Chai and Sarogh Chai from west to east, respectively, which discharge water into the Zarrineh Roud dam reservoir. River flow time series of 6 hydro-gauge stations located on upstream basin rivers of the Zarrineh Roud dam (located in the southern part of the Urmia Lake basin) were considered for modeling purposes. All data series used here start on January 1, 1997 and end on December 31, 2011. In this study, the daily river flow data from January 01 1997 to December 31 2009 (13 years) were chosen for calibration and data for January 01 2010 to December 31 2011
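
    As a minimal illustration of the threshold autoregressive idea discussed above (and not the models fitted in the study), the sketch below simulates a two-regime TAR(1) process in which the autoregressive coefficient depends on whether the previous value exceeds a threshold, and then recovers the regime-wise coefficients by least squares. All coefficients are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tar1(n, threshold=0.0, phi_low=0.8, phi_high=0.3, sigma=1.0):
    """Two-regime TAR(1): x_t = phi * x_{t-1} + e_t, with phi chosen by regime."""
    x = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if x[t - 1] <= threshold else phi_high
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

series = simulate_tar1(500)
below = series[:-1] <= 0.0   # regime of the previous value

# Crude regime-wise least-squares estimates of the AR coefficients.
phi_low_hat = np.sum(series[1:][below] * series[:-1][below]) / np.sum(series[:-1][below] ** 2)
phi_high_hat = np.sum(series[1:][~below] * series[:-1][~below]) / np.sum(series[:-1][~below] ** 2)
print(f"estimated phi: lower regime {phi_low_hat:.2f}, upper regime {phi_high_hat:.2f}")
```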

  8. How well can we reconstruct the t anti t system near its threshold at future e⁺e⁻ linear colliders?

    Ikematsu, K; Hioki, Z; Sumino, Y; Takahashi, T

    2003-01-01

    We developed a new method for the full kinematical reconstruction of the t anti t system near its threshold at future linear e⁺e⁻ colliders. At the core of the method lies a likelihood fit designed to improve the measurement accuracies of the kinematical variables that specify the final states resulting from t anti t decays. The improvement is demonstrated by applying this method to a Monte Carlo t anti t sample generated with various experimental effects including beamstrahlung, finite acceptance and resolution of the detector system, etc. In most cases the fit takes a broad non-Gaussian distribution of a given kinematical variable to a nearly Gaussian shape, thereby justifying phenomenological analyses based on simple Gaussian smearing of the parton-level momenta. The standard deviations of the resultant distributions of various kinematical variables are given in order to facilitate such phenomenological analyses. A possible application of the kinematical fitting method and its expected im...

  9. Response to, "On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith.".

    Beyea, Jan

    2016-07-01

    It is not true that successive groups of researchers from academia and research institutions-scientists who served on panels of the US National Academy of Sciences (NAS)-were duped into supporting a linear no-threshold model (LNT) by the opinions expressed in the genetic panel section of the 1956 "BEAR I" report. Successor reports had their own views of the LNT model, relying on mouse and human data, not fruit fly data. Nor was the 1956 report biased and corrupted, as has been charged in an article by Edward J. Calabrese in this journal. With or without BEAR I, the LNT model would likely have been accepted in the US for radiation protection purposes in the 1950's. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Surveys of radon levels in homes in the United States: A test of the linear-no-threshold dose-response relationship for radiation carcinogenesis

    Cohen, B.L.

    1987-01-01

    The University of Pittsburgh Radon Project for large-scale measurements of radon concentrations in homes is described. Its principal research goal is to test the linear no-threshold dose-response relationship for radiation carcinogenesis by determining average radon levels in the 25 U.S. counties (within certain population ranges) with the highest and lowest lung cancer rates. The theory predicts that the former should have about 3 times higher average radon levels than the latter, under the assumption that any correlation between exposure to radon and exposure to other causes of lung cancer is weak. The validity of this assumption is tested with data on average radon level vs replies to items on questionnaires; there is little correlation between radon levels in houses and the smoking habits, educational attainment, or economic status of the occupants, or with urban vs rural environs, which is an indicator of exposure to air pollution.

  11. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    Beyea, Jan

    2017-01-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  12. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    Beyea, Jan, E-mail: jbeyea@cipi.com

    2017-04-15

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  13. Next-generation phylogenomics

    Chan Cheong Xin

    2013-01-01

    Thanks to advances in next-generation technologies, genome sequences are now being generated at breadth (e.g. across environments) and depth (thousands of closely related strains, individuals or samples) unimaginable only a few years ago. Phylogenomics – the study of evolutionary relationships based on comparative analysis of genome-scale data – has so far been developed as industrial-scale molecular phylogenetics, proceeding in the two classical steps: multiple alignment of homologous sequences, followed by inference of a tree (or multiple trees). However, the algorithms typically employed for these steps scale poorly with the number of sequences, such that for an increasing number of problems, high-quality phylogenomic analysis is (or soon will be) computationally infeasible. Moreover, next-generation data are often incomplete and error-prone, and analysis may be further complicated by genome rearrangement, gene fusion and deletion, lateral genetic transfer, and transcript variation. Here we argue that next-generation data require next-generation phylogenomics, including so-called alignment-free approaches. Reviewers: Reviewed by Mr Alexander Panchin (nominated by Dr Mikhail Gelfand), Dr Eugene Koonin and Prof Peter Gogarten. For the full reviews, please go to the Reviewers’ comments section.

  14. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions
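
    A minimal sketch of the general approach described above (not the LRCDE package itself): for one gene, bulk expression is regressed on cell-type proportions separately in cases and controls, and the case-control difference of each cell-type-specific coefficient is judged against its standard error. The proportions, expression values and group sizes are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_per_group, n_celltypes = 40, 3

def fit_proportions_model(props, expr):
    """Least-squares fit of bulk expression on cell-type proportions.
    Returns coefficient estimates and their standard errors."""
    beta, res, *_ = np.linalg.lstsq(props, expr, rcond=None)
    dof = props.shape[0] - props.shape[1]
    sigma2 = float(res[0]) / dof
    cov = sigma2 * np.linalg.inv(props.T @ props)
    return beta, np.sqrt(np.diag(cov))

# Simulated cell-type proportions (rows sum to 1) and bulk expression for one gene.
true_controls = np.array([5.0, 2.0, 1.0])   # cell-type-specific expression, controls
true_cases = np.array([5.0, 4.0, 1.0])      # cell type 2 is differentially expressed

p_ctrl = rng.dirichlet(np.ones(n_celltypes), size=n_per_group)
p_case = rng.dirichlet(np.ones(n_celltypes), size=n_per_group)
y_ctrl = p_ctrl @ true_controls + rng.normal(scale=0.3, size=n_per_group)
y_case = p_case @ true_cases + rng.normal(scale=0.3, size=n_per_group)

b_ctrl, se_ctrl = fit_proportions_model(p_ctrl, y_ctrl)
b_case, se_case = fit_proportions_model(p_case, y_case)

# Per-cell-type t-statistic for the case-versus-control difference.
t_stats = (b_case - b_ctrl) / np.sqrt(se_case ** 2 + se_ctrl ** 2)
print("cell-type-specific t-statistics:", np.round(t_stats, 2))
```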

  15. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    Little, M.P.

    2010-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported

  16. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations).

    Beyea, Jan

    2017-04-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Problems in the radon versus lung cancer test of the linear no-threshold theory and a procedure for resolving them

    Cohen, B.L.

    1996-01-01

    It has been shown that lung cancer rates in U.S. counties, with or without correction for smoking, decrease with increasing radon exposure, in sharp contrast to the increase predicted by the linear no-threshold (LNT) theory. The discrepancy is by 20 standard deviations, and very extensive efforts to explain it were not successful. Unless a plausible explanation for this discrepancy (or conflicting evidence) can be found, continued use of the LNT theory is a violation of "the scientific method." Nevertheless, LNT continues to be accepted and used by all official and governmental organizations, such as the International Commission on Radiological Protection, the National Council on Radiation Protection and Measurements, the National Academy of Sciences Board on Radiation Effects Research, the U.S. Nuclear Regulatory Commission, the Environmental Protection Agency, etc., and there has been no move by any of these bodies to discontinue or limit its use. Assuming that they rely on the scientific method, this clearly implies that they have a plausible explanation for the discrepancy. The author has made great efforts to discover these 'plausible explanations' by inquiries through various channels, and the purpose of this paper is to describe and discuss them.

  18. Non-linear, connectivity and threshold-dominated runoff-generation controls DOC and heavy metal export in a small peat catchment

    Birkel, Christian; Broder, Tanja; Biester, Harald

    2017-04-01

    Peat soils act as important carbon sinks, but they also release large amounts of dissolved organic carbon (DOC) to the aquatic system. The DOC export is strongly tied to the export of soluble heavy metals. The accumulation of potentially toxic substances due to anthropogenic activities, and their natural export from peat soils to the aquatic system is an important health and environmental issue. However, limited knowledge exists as to how much of these substances are mobilized, how they are mobilized in terms of flow pathways and under which hydrometeorological conditions. In this study, we report from a combined experimental and modelling effort to provide greater process understanding from a small, lead (Pb) and arsenic (As) contaminated upland peat catchment in northwestern Germany. We developed a minimally parameterized, but process-based, coupled hydrology-biogeochemistry model applied to simulate detailed hydrometric and biogeochemical data. The model was based on an initial data mining analysis, in combination with regression relationships of discharge, DOC and element export. We assessed the internal model DOC-processing based on stream-DOC hysteresis patterns and 3-hourly time step groundwater level and soil DOC data (not used for calibration as an independent model test) for two consecutive summer periods in 2013 and 2014. We found that Pb and As mobilization can be efficiently predicted from DOC transport alone, but Pb showed a significant non-linear relationship with DOC, while As was linearly related to DOC. The relatively parsimonious model (nine calibrated parameters in total) showed the importance of non-linear and rapid near-surface runoff-generation mechanisms that caused around 60% of simulated DOC load. The total load was high even though these pathways were only activated during storm events on average 30% of the monitoring time - as also shown by the experimental data. Overall, the drier period 2013 resulted in increased nonlinearity, but
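
    The sketch below illustrates the kind of concentration-relationship comparison mentioned above with invented data: a linear and a power-law model are both fitted to metal-versus-DOC data, and their goodness of fit is compared. It is not the coupled hydrology-biogeochemistry model of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
doc = np.linspace(2.0, 40.0, 60)                                   # DOC (mg/L), assumed
as_conc = 0.05 * doc + rng.normal(scale=0.05, size=doc.size)       # roughly linear in DOC
pb_conc = 0.02 * doc ** 1.6 + rng.normal(scale=0.2, size=doc.size) # non-linear in DOC

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

for name, y in (("As", as_conc), ("Pb", pb_conc)):
    slope, intercept = np.polyfit(doc, y, 1)
    r2_linear = r_squared(y, slope * doc + intercept)
    # Power-law fit via log-log regression (positive concentrations only).
    mask = y > 0
    exponent, log_coeff = np.polyfit(np.log(doc[mask]), np.log(y[mask]), 1)
    r2_power = r_squared(y[mask], np.exp(log_coeff) * doc[mask] ** exponent)
    print(f"{name}: R^2 linear = {r2_linear:.3f}, R^2 power-law = {r2_power:.3f}")
```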

  19. Next-Generation Pathology.

    Caie, Peter D; Harrison, David J

    2016-01-01

    The field of pathology is rapidly transforming from a semiquantitative and empirical science toward a big data discipline. Large data sets from across multiple omics fields may now be extracted from a patient's tissue sample. Tissue is, however, complex, heterogeneous, and prone to artifact. A reductionist view of tissue and disease progression, which does not take this complexity into account, may lead to single biomarkers failing in clinical trials. The integration of standardized multi-omics big data and the retention of valuable information on spatial heterogeneity are imperative to model complex disease mechanisms. Mathematical modeling through systems pathology approaches is the ideal medium to distill the significant information from these large, multi-parametric, and hierarchical data sets. Systems pathology may also predict the dynamical response of disease progression or response to therapy regimens from a static tissue sample. Next-generation pathology will incorporate big data with systems medicine in order to personalize clinical practice for both prognostic and predictive patient care.

  20. The linear non threshold conception 'Dose-effect' as a base for standardization of human exposure to ionizing radiation. Arguments pro and con

    Vassilev, G.

    2000-01-01

    Examples and arguments are presented for reconsidering the application of the threshold conception in low-dose risk assessment. Some of the reasons mentioned are: the inapplicability of the quantity 'collective dose' at low doses; a serious reassessment of the risk coefficients for radiation mutagenesis; and reports of increasing data on so-called hormesis - stimulation and potential effects from exposure of test animals and humans to low doses of ionizing radiation.

  1. Critical dose threshold for TL dose response non-linearity: Dependence on the method of analysis: It’s not only the data

    Datz, H.; Horowitz, Y.S.; Oster, L.; Margaliot, M.

    2011-01-01

    It is demonstrated that the method of data analysis, i.e., the method of the phenomenological/theoretical interpretation of dose response data, can greatly influence the estimation of the onset of deviation from dose response linearity of the high temperature thermoluminescence in LiF:Mg,Ti (TLD-100).

  2. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned with 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  3. Growth, spectral, thermal, laser damage threshold, microhardness, dielectric, linear and nonlinear optical properties of an organic single crystal: L-phenylalanine DL-mandelic acid

    Jayaprakash, P. [PG & Research Department of Physics, Arignar Anna Govt. Arts College, Cheyyar 604 407, Tamil Nadu (India); Peer Mohamed, M. [PG & Research Department of Physics, Arignar Anna Govt. Arts College, Cheyyar 604 407, Tamil Nadu (India); Department of Physics, C. Abdul Hakeem College, Melvisharam 632 509, Tamil Nadu (India); Krishnan, P. [Department of Physics, St. Joseph’s College of Engineering, Chennai 600 119, Tamil Nadu (India); Nageshwari, M.; Mani, G. [PG & Research Department of Physics, Arignar Anna Govt. Arts College, Cheyyar 604 407, Tamil Nadu (India); Lydia Caroline, M., E-mail: lydiacaroline2006@yahoo.co.in [PG & Research Department of Physics, Arignar Anna Govt. Arts College, Cheyyar 604 407, Tamil Nadu (India)

    2016-12-15

    Single crystals of L-phenylalanine DL-mandelic acid [C₉H₁₁NO₂·C₈H₈O₃] have been grown by the slow evaporation technique at room temperature from aqueous solution. The single-crystal XRD study confirms a monoclinic system for the grown crystal. The functional groups present in the grown crystal have been identified by FTIR and FT-Raman analyses. The optical absorption studies show that the crystal is transparent in the visible region with a lower cut-off wavelength of 257 nm, and the optical band gap energy Eg is determined to be 4.62 eV. Kurtz powder second harmonic generation was confirmed using a Nd:YAG laser with a fundamental wavelength of 1064 nm. Further, the thermal studies confirmed no weight loss up to 150°C for the as-grown crystal. The photoluminescence spectrum exhibited three peaks (414 nm, 519 nm, 568 nm) due to the donation of protons from the carboxylic acid to the amino group. The laser damage threshold value was found to be 4.98 GW/cm². The Vickers microhardness test was carried out on the grown crystals, and thereby the Vickers hardness number (Hv), work hardening coefficient (n), yield strength (σy) and stiffness constant C₁₁ were evaluated. The dielectric behavior of the crystal has been determined in the frequency range 50 Hz–5 MHz at various temperatures.

  4. Emittance control in linear colliders

    Ruth, R.D.

    1991-01-01

    Before completing a realistic design of a next-generation linear collider, the authors must first learn the lessons taught by the first generation, the SLC. Given that, they must make designs fault tolerant by including correction and compensation in the basic design. They must also try to eliminate these faults by improved alignment and stability of components. When these two efforts cross, they have a realistic design. The techniques of generation and control of emittance reviewed here provide a foundation for a design which can obtain the necessary luminosity in a next-generation linear collider

  5. Next-Generation Sequencing Platforms

    Mardis, Elaine R.

    2013-06-01

    Automated DNA sequencing instruments embody an elegant interplay among chemistry, engineering, software, and molecular biology and have built upon Sanger's founding discovery of dideoxynucleotide sequencing to perform once-unfathomable tasks. Combined with innovative physical mapping approaches that helped to establish long-range relationships between cloned stretches of genomic DNA, fluorescent DNA sequencers produced reference genome sequences for model organisms and for the reference human genome. New types of sequencing instruments that permit amazing acceleration of data-collection rates for DNA sequencing have been developed. The ability to generate genome-scale data sets is now transforming the nature of biological inquiry. Here, I provide an historical perspective of the field, focusing on the fundamental developments that predated the advent of next-generation sequencing instruments and providing information about how these instruments work, their application to biological research, and the newest types of sequencers that can extract data from single DNA molecules.

  6. Next-Generation Tools For Next-Generation Surveys

    Murray, S. G.

    2017-04-01

    The next generation of large-scale galaxy surveys, across the electromagnetic spectrum, loom on the horizon as explosively game-changing datasets, in terms of our understanding of cosmology and structure formation. We are on the brink of a torrent of data that is set to both confirm and constrain current theories to an unprecedented level, and potentially overturn many of our conceptions. One of the great challenges of this forthcoming deluge is to extract maximal scientific content from the vast array of raw data. This challenge requires not only well-understood and robust physical models, but a commensurate network of software implementations with which to efficiently apply them. The halo model, a semi-analytic treatment of cosmological spatial statistics down to nonlinear scales, provides an excellent mathematical framework for exploring the nature of dark matter. This thesis presents a next-generation toolkit based on the halo model formalism, intended to fulfil the requirements of next-generation surveys. Our toolkit comprises three tools: (i) hmf, a comprehensive and flexible calculator for halo mass functions (HMFs) within extended Press-Schechter theory, (ii) the MRP distribution for extremely efficient analytic characterisation of HMFs, and (iii) halomod, an extension of hmf which provides support for the full range of halo model components. In addition to the development and technical presentation of these tools, we apply each to the task of physical modelling. With hmf, we determine the precision of our knowledge of the HMF, due to uncertainty in our knowledge of the cosmological parameters, over the past decade of cosmic microwave background (CMB) experiments. We place rule-of-thumb uncertainties on the predicted HMF for the Planck cosmology, and find that current limits on the precision are driven by modeling uncertainties rather than those from cosmological parameters. With the MRP, we create and test a method for robustly fitting the HMF to observed
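
    The hmf calculator described above is distributed as a Python package; the minimal sketch below shows how a halo mass function might be computed with it. The constructor arguments and attribute names are assumptions based on the public package documentation and may differ between versions.

```python
# pip install hmf  (assumed; see the package documentation for current usage)
from hmf import MassFunction

# Halo mass function at z = 0 over an assumed mass range (log10 M in Msun/h).
mf = MassFunction(z=0.0, Mmin=10.0, Mmax=15.0)

# Print a few (mass, dn/dlnM) pairs from the computed mass function.
for mass, dndlnm in zip(mf.m[::100], mf.dndlnm[::100]):
    print(f"M = {mass:10.3e}   dn/dlnM = {dndlnm:10.3e}")
```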

  7. NOAA NEXt-Generation RADar (NEXRAD) Products

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of Level III weather radar products collected from Next-Generation Radar (NEXRAD) stations located in the contiguous United States, Alaska,...

  8. Galaxy LIMS for next-generation sequencing

    Scholtalbers, J.; Rossler, J.; Sorn, P.; Graaf, J. de; Boisguerin, V.; Castle, J.; Sahin, U.

    2013-01-01

    SUMMARY: We have developed a laboratory information management system (LIMS) for a next-generation sequencing (NGS) laboratory within the existing Galaxy platform. The system provides lab technicians standard and customizable sample information forms, barcoded submission forms, tracking of input

  9. Microbial production of next-generation stevia sweeteners

    Olsson, Kim; Carlsen, Simon; Semmler, Angelika

    2016-01-01

    BACKGROUND: The glucosyltransferase UGT76G1 from Stevia rebaudiana is a chameleon enzyme in the targeted biosynthesis of the next-generation premium stevia sweeteners, rebaudioside D (Reb D) and rebaudioside M (Reb M). These steviol glucosides carry five and six glucose units, respectively......, and have low sweetness thresholds, high maximum sweet intensities and exhibit a greatly reduced lingering bitter taste compared to stevioside and rebaudioside A, the most abundant steviol glucosides in the leaves of Stevia rebaudiana. RESULTS: In the metabolic glycosylation grid leading to production....... This screen made it possible to identify variants, such as UGT76G1Thr146Gly and UGT76G1His155Leu, which diminished accumulation of unwanted side-products and gave increased specific accumulation of the desired Reb D or Reb M sweeteners. This improvement in a key enzyme of the Stevia sweetener biosynthesis...

  10. Threshold current for fireball generation

    Dijkhuis, Geert C.

    1982-05-01

    Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.

  11. Cluster cosmology with next-generation surveys.

    Ascaso, B.

    2017-03-01

    The advent of next-generation surveys will provide a large number of cluster detections that will serve as the basis for constraining cosmological parameters using cluster counts. The two main observational ingredients needed are the cluster selection function and the calibration of the mass-observable relation. In this talk, we present the methodology designed to obtain robust predictions of both ingredients based on realistic cosmological simulations mimicking the following next-generation surveys: J-PAS, LSST and Euclid. We display recent results on the selection functions for these surveys together with others coming from other next-generation surveys such as eROSITA, ACTpol and SPTpol. We notice that the optical and IR surveys will reach the lowest masses between 0.3next-generation surveys and introduce very preliminary results.

  12. CARA Risk Assessment Thresholds

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  13. Genetic correlation estimates between visual scores and carcass traits measured by ultrasound in Nelore cattle using linear-threshold Bayesian models

    Carina Ubirajara de Faria

    2009-11-01

    The objective of this study was to estimate genetic correlations between visual scores and carcass traits measured by ultrasound in Nelore cattle, using Bayesian statistics via Gibbs sampling under a linear-threshold animal model. The categorical morphological traits of musculature, physical structure, conformation and sacrum, evaluated at 15 and 22 months of age, were studied. For the carcass traits, longissimus muscle area, backfat thickness, rump fat thickness and hip height were evaluated. Visual scores should be used as selection criteria to increase genetic progress for longissimus muscle area and, consequently, to improve carcass yield. The genetic correlation estimates obtained for musculature with backfat thickness and rump fat thickness indicated that selection for musculature can lead to animals with better carcass finishing. Selection for physical structure and conformation at 15 and 22 months of age can promote a correlated response of increased hip height.

  14. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non-linear analysis). Part II. The oscillation thresholds (linearized analysis)

    Boure, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Grenoble, 38 (France)

    1967-07-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay times, and a density-effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between the enthalpy and the density of the fluid. In the first part, non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of the oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can already be pointed out. In the second part, the onset of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield: 1) a relation between the describing parameters at the onset of oscillations, and 2) the frequency of the oscillations. By comparing the results predicted by the model with the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author)

  15. Threshold quantum cryptography

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding

  16. Detection thresholds of macaque otolith afferents.

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
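
    A minimal sketch of a signal-detection-style threshold estimate in the spirit of the study (not its exact procedure): trial-by-trial firing rates are simulated for a range of stimulus amplitudes, a discriminability index d' against spontaneous activity is computed, and the threshold is taken as the smallest amplitude at which d' reaches 1. The gain and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
spontaneous_rate = 50.0    # spikes/s (assumption)
gain = 0.5                 # spikes/s per cm/s^2 (assumption)
noise_sd = 5.0             # trial-to-trial firing-rate SD (assumption)
amplitudes = np.linspace(0.5, 20.0, 40)   # stimulus accelerations (cm/s^2)
n_trials = 200

def d_prime(amplitude):
    stim = rng.normal(spontaneous_rate + gain * amplitude, noise_sd, n_trials)
    spont = rng.normal(spontaneous_rate, noise_sd, n_trials)
    pooled_sd = np.sqrt(0.5 * (stim.var(ddof=1) + spont.var(ddof=1)))
    return (stim.mean() - spont.mean()) / pooled_sd

d_primes = np.array([d_prime(a) for a in amplitudes])
threshold = amplitudes[np.argmax(d_primes >= 1.0)]   # first amplitude with d' >= 1
print(f"detection threshold of this model neuron: {threshold:.1f} cm/s^2")
```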

  17. Theory of threshold phenomena

    Hategan, Cornel

    2002-01-01

    Theory of Threshold Phenomena in Quantum Scattering is developed in terms of Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. Magnitude of threshold effect is related to spectroscopic factor of zero-energy neutron state. The Theory of Threshold Phenomena, based on Reduced Scattering Matrix, does establish relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, s-wave threshold anomaly and compound nucleus resonant scattering, p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi resonant scattering is enhanced provided the neutron threshold state has large spectroscopic amplitude. The Theory contains, as limit cases, Cusp Theories and also results of different nuclear reactions models as Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)

  18. Reaction thresholds in doubly special relativity

    Heyman, Daniel; Major, Seth; Hinteleitner, Franz

    2004-01-01

    Two theories of special relativity with an additional invariant scale, 'doubly special relativity', are tested with calculations of particle process kinematics. Using the Judes-Visser modified conservation laws, thresholds are studied in both theories. In contrast with some linear approximations, which allow for particle processes forbidden in special relativity, both the Amelino-Camelia and Magueijo-Smolin frameworks allow no additional processes. To first order, the Amelino-Camelia framework thresholds are lowered and the Magueijo-Smolin framework thresholds may be raised or lowered

  19. Threshold Signature Schemes Application

    Anastasiya Victorovna Beresneva

    2015-10-01

    This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given; pursuing them could reduce the level of counterfeit electronic documents signed by a group of users.
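
    As an illustration of the Lagrange-interpolation construction mentioned above, the sketch below implements the (t, n)-threshold secret-sharing step that underlies many threshold signature schemes: any t of n shares reconstruct the signing secret. The field prime and parameters are arbitrary toy choices, not taken from the surveyed schemes.

      import random

      P = 2**127 - 1  # toy field modulus (a Mersenne prime)

      def split(secret, t, n):
          # Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
          # with constant term `secret` at x = 1..n.
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          # Lagrange interpolation at x = 0 recovers the constant term.
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = split(secret=123456789, t=3, n=5)
      assert reconstruct(shares[:3]) == 123456789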

  20. Particles near threshold

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  1. Microbial production of next-generation stevia sweeteners.

    Olsson, Kim; Carlsen, Simon; Semmler, Angelika; Simón, Ernesto; Mikkelsen, Michael Dalgaard; Møller, Birger Lindberg

    2016-12-07

    The glucosyltransferase UGT76G1 from Stevia rebaudiana is a chameleon enzyme in the targeted biosynthesis of the next-generation premium stevia sweeteners, rebaudioside D (Reb D) and rebaudioside M (Reb M). These steviol glucosides carry five and six glucose units, respectively, and have low sweetness thresholds, high maximum sweet intensities and exhibit a greatly reduced lingering bitter taste compared to stevioside and rebaudioside A, the most abundant steviol glucosides in the leaves of Stevia rebaudiana. In the metabolic glycosylation grid leading to production of Reb D and Reb M, UGT76G1 was found to catalyze eight different reactions all involving 1,3-glucosylation of steviol C13- and C19-bound glucoses. Four of these reactions lead to Reb D and Reb M while the other four result in formation of side-products unwanted for production. In this work, side-product formation was reduced by targeted optimization of UGT76G1 towards 1,3-glucosylation of steviol glucosides that are already 1,2-diglucosylated. The optimization of UGT76G1 was based on homology modelling, which enabled identification of key target amino acids present in the substrate-binding pocket. These residues were then subjected to site-saturation mutagenesis and a mutant library containing a total of 1748 UGT76G1 variants was screened for increased accumulation of Reb D or M, as well as for decreased accumulation of side-products. This screen was performed in a Saccharomyces cerevisiae strain expressing all enzymes in the rebaudioside biosynthesis pathway except for UGT76G1. Screening of the mutant library identified mutations with positive impact on the accumulation of Reb D and Reb M. The effect of the introduced mutations on other reactions in the metabolic grid was characterized. This screen made it possible to identify variants, such as UGT76G1 Thr146Gly and UGT76G1 His155Leu, which diminished accumulation of unwanted side-products and gave increased specific accumulation of the desired

  2. CERN balances linear collider studies

    ILC Newsline

    2011-01-01

    The forces behind the two most mature proposals for a next-generation collider, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) study, have been steadily coming together, with scientists from both communities sharing ideas and information across the technology divide. In support of cooperation between the two, CERN in Switzerland, where most CLIC research takes place, recently converted the project-specific position of CLIC Study Leader to the concept-based Linear Collider Study Leader.   The scientist who now holds this position, Steinar Stapnes, is charged with making the linear collider a viable option for CERN’s future, one that could include either CLIC or the ILC. The transition towards greater ILC involvement must be gradual, he said, and the redefinition of his post is a good start. Though not very much involved with superconducting radiofrequency (SRF) technology, where ILC researchers have made significant advances, CERN participates in many aspect...

  3. Tailoring next-generation biofuels and their combustion in next-generation engines

    Gladden, John Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wu, Weihua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Taatjes, Craig A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scheer, Adam Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turner, Kevin M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Yu, Eizadora T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); O' Bryan, Greg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Powell, Amy Jo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gao, Connie W. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2013-11-01

    Increasing energy costs, the dependence on foreign oil supplies, and environmental concerns have emphasized the need to produce sustainable renewable fuels and chemicals. The strategy for producing next-generation biofuels must include efficient processes for biomass conversion to liquid fuels and the fuels must be compatible with current and future engines. Unfortunately, biofuel development generally takes place without any consideration of combustion characteristics, and combustion scientists typically measure biofuels properties without any feedback to the production design. We seek to optimize the fuel/engine system by bringing combustion performance, specifically for advanced next-generation engines, into the development of novel biosynthetic fuel pathways. Here we report an innovative coupling of combustion chemistry, from fundamentals to engine measurements, to the optimization of fuel production using metabolic engineering. We have established the necessary connections among the fundamental chemistry, engine science, and synthetic biology for fuel production, building a powerful framework for co-development of engines and biofuels.

  4. Melanin microcavitation threshold in the near infrared

    Schmidt, Morgan S.; Kennedy, Paul K.; Vincelette, Rebecca L.; Schuster, Kurt J.; Noojin, Gary D.; Wharmby, Andrew W.; Thomas, Robert J.; Rockwell, Benjamin A.

    2014-02-01

    Thresholds for microcavitation of isolated bovine and porcine melanosomes were determined using single nanosecond (ns) laser pulses in the NIR (1000 - 1319 nm) wavelength regime. Average fluence thresholds for microcavitation increased non-linearly with increasing wavelength. Average fluence thresholds were also measured for 10-ns pulses at 532 nm, and found to be comparable to visible ns pulse values published in previous reports. Fluence thresholds were used to calculate melanosome absorption coefficients, which decreased with increasing wavelength. This trend was found to be comparable to the decrease in retinal pigmented epithelial (RPE) layer absorption coefficients reported over the same wavelength region. Estimated corneal total intraocular energy (TIE) values were determined and compared to the current and proposed maximum permissible exposure (MPE) safe exposure levels. Results from this study support the proposed changes to the MPE levels.

  5. Double Photoionization Near Threshold

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  6. Thresholds in radiobiology

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  7. Regional Seismic Threshold Monitoring

    Kvaerna, Tormod

    2006-01-01

    ... model to be used for predicting the travel times of regional phases. We have applied these attenuation relations to develop and assess a regional threshold monitoring scheme for selected subregions of the European Arctic...

  8. Linear algebra

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  9. Threshold guidance update

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Last year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream and can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors.

  10. Near threshold fatigue testing

    Freeman, D. C.; Strum, M. J.

    1993-01-01

    Measurement of the near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10^-5 mm/cycle (4 x 10^-7 inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, techniques for high-resolution measurements of crack length performed in-situ through a direct current, potential drop (DCPD) apparatus, and a method which eliminates crack closure effects through the use of loading cycles with constant maximum stress intensity are described.

  11. Next-generation wireless technologies 4G and beyond

    Chilamkurti, Naveen; Chaouchi, Hakima

    2013-01-01

    This comprehensive text/reference examines the various challenges to secure, efficient and cost-effective next-generation wireless networking. Topics and features: presents the latest advances, standards and technical challenges in a broad range of emerging wireless technologies; discusses cooperative and mesh networks, delay tolerant networks, and other next-generation networks such as LTE; examines real-world applications of vehicular communications, broadband wireless technologies, RFID technology, and energy-efficient wireless communications; introduces developments towards the 'Internet o

  12. Risk thresholds for alcohol consumption

    Wood, Angela M; Kaptoge, Stephen; Butterworth, Adam S

    2018-01-01

    BACKGROUND: Low-risk limits recommended for alcohol consumption vary substantially across different national guidelines. To define thresholds associated with lowest risk for all-cause mortality and cardiovascular disease, we studied individual-participant data from 599 912 current drinkers without previous cardiovascular disease. METHODS: We did a combined analysis of individual-participant data from three large-scale data sources in 19 high-income countries (the Emerging Risk Factors Collaboration, EPIC-CVD, and the UK Biobank). We characterised dose-response associations and calculated hazard ... ·4 million person-years of follow-up. For all-cause mortality, we recorded a positive and curvilinear association with the level of alcohol consumption, with the minimum mortality risk around or below 100 g per week. Alcohol consumption was roughly linearly associated with a higher risk of stroke (HR per 100...

  13. Linear Covariance Analysis for a Lunar Lander

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  14. Introducing Linear Functions: An Alternative Statistical Approach

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  15. Threshold factorization redux

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  16. Elaborating on Threshold Concepts

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-01-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…

  17. Next-generation healthcare: a strategic appraisal.

    Montague, Terrence

    2009-01-01

    Successful next-generation healthcare must deliver timely access and quality for an aging population, while simultaneously promoting disease prevention and managing costs. The key factors for sustained success are a culture with aligned goals and values; coordinated team care that especially engages with physicians and patients; practical information that is collected and communicated reliably; and education in the theory and methods of collaboration, measurement and leadership. Currently, optimal population health is challenged by a high prevalence of chronic disease, with large gaps between best and usual care, a scarcity of health human resources - particularly with the skills, attitudes and training for coordinated team care - and the absence of flexible, reliable clinical measurement systems. However, to make things better, institutional models and supporting technologies are available. In the short term, a first step is to enhance the awareness of the practical opportunities to improve, including the expansion of proven community-based disease management programs that communicate knowledge, competencies and clinical measurements among professional and patient partners, leading to reduced care gaps and improved clinical and economic outcomes. Longer-term success requires two additional steps. One is formal inter-professional training to provide, on an ongoing basis, the polyvalent human resource skills and foster the culture of working with others to improve the care of whole populations. The other is the adoption of reliable information systems, including electronic health records, to allow useful and timely measurement and effective communication of clinical information in real-world settings. A better health future can commence immediately, within existing resources, and be sustained with feasible innovations in provider and patient education and information systems. The future is now.

  18. Psychophysical thresholds of face visibility during infancy

    Gelskov, Sofie; Kouider, Sid

    2010-01-01

    The ability to detect and focus on faces is a fundamental prerequisite for developing social skills. But how well can infants detect faces? Here, we address this question by studying the minimum duration at which faces must appear to trigger a behavioral response in infants. We used a preferential looking method in conjunction with masking and brief presentations (300 ms and below) to establish the temporal thresholds of visibility at different stages of development. We found that 5- and 10-month-old infants have remarkably similar visibility thresholds, about three times higher than those of adults. By contrast, 15-month-olds not only revealed adult-like thresholds, but also improved their performance through memory-based strategies. Our results imply that the development of face visibility follows a non-linear course and is determined by a radical improvement occurring between 10 and 15 months.

  19. Thresholds of ion turbulence in tokamaks

    Garbet, X.; Laurent, L.; Mourgues, F.; Roubin, J.P.; Samain, A.; Zou, X.L.

    1991-01-01

    The linear thresholds of ionic turbulence are numerically calculated for the Tokamaks JET and TORE SUPRA. It is proved that the stability domain at η_i > 0 is determined by trapped ion modes and is characterized by η_i ≥ 1 and a threshold L_Ti/R of order (0.2-0.3)/(1 + T_i/T_e). The latter value is significantly smaller than what has been previously predicted. Experimental temperature profiles in heated discharges are usually marginal with respect to this criterion. It is also shown that the eigenmodes are low-frequency, low-wavenumber ballooned modes, which may produce a very large transport once the threshold ion temperature gradient is reached.

  20. Linear gate

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0.40 μsec to 4 μsec was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  1. Linear Accelerators

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristics of linac beam dynamics.

  2. Hadron production near threshold

    Abstract. Final state interaction effects in pp → pΛK+ and pd → 3He η reactions are explored near threshold to study the sensitivity of the cross-sections to the pΛ potential and the ηN scattering matrix. The final state scattering wave functions between Λ and p and η and 3He are described rigorously. The Λ production is ...

  3. Casualties and threshold effects

    Mays, C.W.; National Cancer Inst., Bethesda

    1988-01-01

    Radiation effects such as cancer are denoted as casualties. Other radiation effects occur in almost everyone once the radiation dose is sufficiently high; one then speaks of radiation effects with a threshold dose. In this article the author expresses doubts about this classification of radiation effects, arguing that some effects of radiation exposure do not fit it. (H.W.). 19 refs.; 2 figs.; 1 tab

  4. Resonance phenomena near thresholds

    Persson, E.; Mueller, M.; Rotter, I.; Technische Univ. Dresden

    1995-12-01

    The trapping effect is investigated close to the elastic threshold. The nucleus is described as an open quantum mechanical many-body system embedded in the continuum of decay channels. An ensemble of compound nucleus states with both discrete and resonance states is investigated in an energy-dependent formalism. It is shown that the discrete states can trap the resonance ones and also that the discrete states can directly influence the scattering cross section. (orig.)

  5. Next-generation mid-infrared sources

    Jung, D.; Bank, S.; Lee, M. L.; Wasserman, D.

    2017-12-01

    to provide a survey of the current state of the art for mid-IR sources, but instead looks primarily to provide a picture of potential next-generation optical and optoelectronic materials systems for mid-IR light generation.

  6. Next-Generation Multifunctional Electrochromic Devices.

    Cai, Guofa; Wang, Jiangxin; Lee, Pooi See

    2016-08-16

    during the daytime. Energy can also be stored in the smart windows during the daytime simultaneously and be discharged for use in the evening. These results reveal that the electrochromic devices have potential applications in a wide range of areas. We hope that this Account will promote further efforts toward fundamental research on electrochromic materials and the development of new multifunctional electrochromic devices to meet the growing demands for next-generation electronic systems.

  7. Linearization Method and Linear Complexity

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of the linear complexity; a linear complexity is therefore generally given as an estimate. A linearization method, on the other hand, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
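
    For comparison with the linearization method discussed above, a compact Berlekamp-Massey implementation over GF(2) is sketched below; it computes the linear complexity from an output sequence in O(N^2) operations, which is the baseline cost the abstract refers to. The test sequence is arbitrary.

      def berlekamp_massey(bits):
          # Linear complexity of a binary sequence via Berlekamp-Massey,
          # O(N^2) in the sequence length N.
          n = len(bits)
          c, b = [0] * n, [0] * n
          c[0] = b[0] = 1
          L, m = 0, -1
          for i in range(n):
              d = bits[i]
              for j in range(1, L + 1):
                  d ^= c[j] & bits[i - j]          # discrepancy
              if d:
                  t = c[:]
                  for j in range(n - i + m):
                      c[i - m + j] ^= b[j]
                  if 2 * L <= i:
                      L, m, b = i + 1 - L, i, t
          return L

      print(berlekamp_massey([0, 0, 1, 1, 0, 1, 1, 1, 0]))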

  8. Parallel beam dynamics simulation of linear accelerators

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  9. Data Compression with Linear Algebra

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
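
    A toy sketch of the DCT-thresholding idea listed above (assuming SciPy is available): transform an array, keep only the largest-magnitude coefficients, and invert. The 5% keep-fraction and the random test image are arbitrary choices for illustration.

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_compress(image, keep_fraction=0.05):
          # Transform, zero all but the largest-magnitude coefficients, invert.
          coeffs = dctn(image, norm="ortho")
          cutoff = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
          sparse = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)
          return idctn(sparse, norm="ortho"), sparse

      image = np.random.default_rng(1).random((64, 64))
      approx, sparse = dct_compress(image)
      print("coefficients kept:", np.count_nonzero(sparse), "of", image.size,
            "| max error:", float(np.max(np.abs(approx - image))))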

  10. Linear algebra

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  11. Regional rainfall thresholds for landslide occurrence using a centenary database

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
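
    The ROC-based calibration described above can be sketched roughly as follows: for each candidate cumulated-rainfall threshold, compute hit and false-alarm rates over days with and without recorded landslide events and keep the threshold with the best skill. The scoring rule (TPR - FPR), the synthetic data and all numbers are illustrative assumptions, not the paper's procedure or values.

      import numpy as np

      def roc_point(critical_rain, landslide_day, threshold):
          # Hit rate and false-alarm rate for one candidate threshold, where
          # `critical_rain` is the critical cumulated rainfall (mm) per day and
          # `landslide_day` flags days with a recorded landslide event.
          exceed = critical_rain >= threshold
          return np.mean(exceed[landslide_day]), np.mean(exceed[~landslide_day])

      def calibrate(critical_rain, landslide_day, candidates):
          # Keep the threshold maximizing the true skill statistic (TPR - FPR).
          scores = [roc_point(critical_rain, landslide_day, t) for t in candidates]
          best = max(range(len(candidates)), key=lambda i: scores[i][0] - scores[i][1])
          return candidates[best], scores[best]

      rng = np.random.default_rng(2)
      rain = rng.gamma(2.0, 20.0, 3650)                      # synthetic daily values
      events = rain + rng.normal(0.0, 15.0, rain.size) > 90  # synthetic landslide days
      best_t, (tpr, fpr) = calibrate(rain, events, np.arange(20.0, 160.0, 5.0))
      print(best_t, round(tpr, 2), round(fpr, 2))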

  12. Next-generation probabilistic seismicity forecasting

    Hiemer, S.

    2014-07-01

    novel automated method to investigate the significance of spatial b-value variations. The method incorporates an objective data-driven partitioning scheme, which is based on penalized likelihood estimates. These well-defined criteria avoid the difficult choice of commonly applied spatial mapping parameters, such as grid spacing or size of mapping radii. We construct a seismicity forecast that includes spatial b-value variations and demonstrate our model’s skill and reliability when applied to data from California. All proposed probabilistic seismicity forecasts were subjected to evaluation methods using state of the art algorithms provided by the 'Collaboratory for the Study of Earthquake Predictability' infrastructure. First, we evaluated the statistical agreement between the forecasted and observed rates of target events in terms of number, space and magnitude. Secondly, we assessed the performance of one forecast relative to another. We find that the forecasts presented in this thesis are reliable and show significant skills with respect to established classical forecasts. These next-generation probabilistic seismicity forecasts can thus provide hazard information that are potentially useful in reducing earthquake losses and enhancing community preparedness and resilience. (author)

  14. Linear algebra

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  15. Building next-generation converged networks theory and practice

    Pathan, Al-Sakib Khan

    2013-01-01

    Supplying a comprehensive introduction to next-generation networks, Building Next-Generation Converged Networks: Theory and Practice strikes a balance between how and why things work and how to make them work. It compiles recent advancements along with basic issues from the wide range of fields related to next generation networks. Containing the contributions of 56 industry experts and researchers from 16 different countries, the book presents relevant theoretical frameworks and the latest research. It investigates new technologies such as IPv6 over Low Power Wireless Personal Area Network (6L

  16. Linear programming

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  17. Linear algebra

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  18. Linear algebra

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  19. Linear Models

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  20. Intermediate structure and threshold phenomena

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in excitation function of nuclear threshold effects. The intermediate state supporting both neutron strength function and nuclear threshold effect is a micro-giant neutron threshold state. (author)

  1. LINEAR ACCELERATOR

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  2. Coloring geographical threshold graphs

    Bradonjic, Milan [Los Alamos National Laboratory; Percus, Allon [Los Alamos National Laboratory; Muller, Tobias [EINDHOVEN UNIV. OF TECH

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ ~ ln n / ln ln n (1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
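
    The setting can be reproduced quickly with networkx, which ships both a geographical threshold graph generator and a greedy coloring routine; the sketch below builds a small GTG, colors it greedily and reports a clique-based lower bound. The values of n and theta are arbitrary, and the greedy strategy is a generic stand-in for the algorithm analyzed in the paper.

      import networkx as nx

      n, theta = 500, 80.0                      # arbitrary example values
      G = nx.geographical_threshold_graph(n, theta, seed=42)
      coloring = nx.coloring.greedy_color(G, strategy="largest_first")
      num_colors = max(coloring.values()) + 1
      clique_lower_bound = max(len(c) for c in nx.find_cliques(G))
      print(f"greedy colors: {num_colors}, largest clique found: {clique_lower_bound}")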

  3. Linear Collider Physics Resource Book for Snowmass 2001, 2 Higgs and Supersymmetry Studies

    Abe, T.; Asner, David Mark; Baer, H.; Bagger, Jonathan A.; Balazs, Csaba; Baltay, C.; Barker, T.; Barklow, T.; Barron, J.; Baur, Ulrich J.; Beach, R.; Bellwied, R.; Bigi, Ikaros I.Y.; Blochinger, C.; Boege, S.; Bolton, T.; Bower, G.; Brau, James E.; Breidenbach, Martin; Brodsky, Stanley J.; Burke, David L.; Burrows, Philip N.; Butler, Joel N.; Chakraborty, Dhiman; Cheng, Hsin-Chia; Chertok, Maxwell Benjamin; Choi, Seong-Youl; Cinabro, David; Corcella, Gennaro; Cordero, R.K.; Danielson, N.; Davoudiasl, Hooman; Dawson, S.; Denner, Ansgar; Derwent, P.; Diaz, Marco Aurelio; Dima, M.; Dittmaier, Stefan; Dixit, M.; Dixon, Lance J.; Dobrescu, Bogdan A.; Doncheski, M.A.; Duckwitz, M.; Dunn, J.; Early, J.; Erler, Jens; Feng, Jonathan L.; Ferretti, C.; Fisk, H.Eugene; Fraas, H.; Freitas, A.; Frey, R.; Gerdes, David W.; Gibbons, L.; Godbole, R.; Godfrey, S.; Goodman, E.; Gopalakrishna, Shrihari; Graf, N.; Grannis, Paul D.; Gronberg, Jeffrey Baton; Gunion, John F.; Haber, Howard E.; Han, Tao; Hawkings, Richard; Hearty, Christopher; Heinemeyer, Sven; Hertzbach, Stanley S.; Heusch, Clemens A.; Hewett, JoAnne L.; Hikasa, K.; Hiller, G.; Hoang, Andre H.; Hollebeek, Robert; Iwasaki, M.; Jacobsen, Robert Gibbs; Jaros, John Alan; Juste, A.; Kadyk, John A.; Kalinowski, J.; Kalyniak, P.; Kamon, Teruki; Karlen, Dean; Keller, L; Koltick, D.; Kribs, Graham D.; Kronfeld, Andreas Samuel; Leike, A.; Logan, Heather E.; Lykken, Joseph D.; Macesanu, Cosmin; Magill, Stephen R.; Marciano, William Joseph; Markiewicz, Thomas W.; Martin, S.; Maruyama, T.; Matchev, Konstantin Tzvetanov; Monig, Klaus; Montgomery, Hugh E.; Moortgat-Pick, Gudrid A.; Moreau, G.; Mrenna, Stephen; Murakami, Brandon; Murayama, Hitoshi; Nauenberg, Uriel; Neal, H.; Newman, B.; Nojiri, Mihoko M.; Orr, Lynne H.; Paige, F.; Para, A.; Pathak, S.; Peskin, Michael E.; Plehn, Tilman; Porter, F.; Potter, C.; Prescott, C.; Rainwater, David Landry; Raubenheimer, Tor O.; Repond, J.; Riles, Keith; Rizzo, Thomas Gerard; Ronan, Michael T.; Rosenberg, L.; Rosner, Jonathan L.; Roth, M.; Rowson, Peter C.; Schumm, Bruce Andrew; Seppala, L.; Seryi, Andrei; Siegrist, J.; Sinev, N.; Skulina, K.; Sterner, K.L.; Stewart, I.; Su, S.; Tata, Xerxes Ramyar; Telnov, Valery I.; Teubner, Thomas; Tkaczyk, S.; Turcot, Andre S.; van Bibber, Karl A.; Van Kooten, Rick J.; Vega, R.; Wackeroth, Doreen; Wagner, D.; Waite, Anthony P.; Walkowiak, Wolfgang; Weiglein, Georg; Wells, James Daniel; Wester, William Carl, III; Williams, B.; Wilson, G.; Wilson, R.; Winn, D.; Woods, M.; Wudka, J.; Yakovlev, Oleg I.; Yamamoto, H.; Yang, Hai Jun

    2001-01-01

    This Resource Book reviews the physics opportunities of a next-generation e+e- linear collider and discusses options for the experimental program. Part 2 reviews the possible experiments on Higgs bosons and supersymmetric particles that can be done at a linear collider.

  4. HLA typing: Conventional techniques v. next-generation sequencing ...

    Background. The large number of population-specific polymorphisms present in the HLA complex in the South African (SA) population reduces the probability of finding an adequate HLA-matched donor for individuals in need of an unrelated haematopoietic stem cell transplantation (HSCT). Next-generation sequencing ...

  5. Targeted enrichment strategies for next-generation plant biology

    Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua. Udall

    2012-01-01

    The dramatic advances offered by modem DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...

  6. Design Principles of Next-Generation Digital Gaming for Education.

    Squire, Kurt; Jenkins, Henry; Holland, Walter; Miller, Heather; O'Driscoll, Alice; Tan, Katie Philip; Todd, Katie.

    2003-01-01

    Discusses the rapid growth of digital games, describes research at MIT that is exploring the potential of digital games for supporting learning, and offers hypotheses about the design of next-generation educational video and computer games. Highlights include simulations and games; and design principles, including context and using information to…

  7. ERP II: Next-generation Extended Enterprise Resource Planning

    Møller, Charles

    2004-01-01

    ERP II (ERP/2) systems is a new concept introduced by Gartner Group in 2000 in order to label the latest extensions of the ERP-systems. The purpose of this paper is to explore the next-generation of ERP systems, the Extended Enterprise Resource Planning (EERP or as we prefer to use: e...... impact on extended enterprise architecture.....

  9. Next-generation sequencing approaches to understanding the oral microbiome

    Zaura, E.

    2012-01-01

    Until recently, the focus in dental research has been on studying a small fraction of the oral microbiome—so-called opportunistic pathogens. With the advent of next-generation sequencing (NGS) technologies, researchers now have the tools that allow for profiling of the microbiomes and metagenomes at

  10. Effect of threshold quantization in opportunistic splitting algorithm

    Nam, Haewoon

    2011-12-01

    This paper discusses algorithms to find the optimal threshold and also investigates the impact of threshold quantization on the scheduling outage performance of the opportunistic splitting scheduling algorithm. Since this algorithm aims at finding the user with the highest channel quality within the minimal number of mini-slots by adjusting the threshold every mini-slot, optimizing the threshold is of paramount importance. Hence, in this paper we first discuss how to compute the optimal threshold along with two tight approximations for the optimal threshold. Closed-form expressions are provided for those approximations for simple calculations. Then, we consider linear quantization of the threshold to take the limited number of bits for signaling messages in practical systems into consideration. Due to the limited granularity for the quantized threshold value, an irreducible scheduling outage floor is observed. The numerical results show that the two approximations offer lower scheduling outage probability floors compared to the conventional algorithm when the threshold is quantized. © 2006 IEEE.
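
    A rough simulation sketch of the effect discussed above: a bisection-style stand-in for the optimal threshold update, run with and without linear quantization of the threshold, exhibits the scheduling-outage floor caused by limited threshold granularity. All parameters (user count, mini-slots, quantization range, fading model) are illustrative assumptions rather than the paper's settings.

      import numpy as np

      rng = np.random.default_rng(3)

      def outage_probability(n_users=16, n_minislots=4, quant_bits=None,
                             gain_max=8.0, n_rounds=20000):
          # Each mini-slot, users whose channel gain exceeds the threshold contend;
          # the threshold is raised after a collision and lowered after an idle slot
          # (a bisection stand-in for the optimal update analysed in the paper).
          # With quant_bits set, the threshold is linearly quantized on [0, gain_max],
          # which produces an irreducible scheduling-outage floor.
          outages = 0
          for _ in range(n_rounds):
              gains = rng.exponential(1.0, n_users)       # Rayleigh-fading power gains
              lo, hi = 0.0, gain_max
              success = False
              for _ in range(n_minislots):
                  thr = 0.5 * (lo + hi)
                  if quant_bits is not None:
                      step = gain_max / (2 ** quant_bits - 1)
                      thr = round(thr / step) * step
                  above = np.count_nonzero(gains > thr)
                  if above == 1:
                      success = True
                      break
                  lo, hi = (thr, hi) if above > 1 else (lo, thr)
              outages += not success
          return outages / n_rounds

      print(outage_probability(quant_bits=None), outage_probability(quant_bits=3))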

  11. Crossing the Petawatt threshold

    Perry, M.

    1996-01-01

    A revolutionary new laser called the Petawatt, developed by Lawrence Livermore researchers after an intensive three-year development effort, has produced more than 1,000 trillion ("peta") watts of power, a world record. By crossing the petawatt threshold, the extraordinarily powerful laser heralds a new age in laser research. Lasers that provide a petawatt of power or more in a picosecond may make it possible to achieve fusion using significantly less energy than currently envisioned, through a novel Livermore concept called "fast ignition." The petawatt laser will also enable researchers to study the fundamental properties of matter, thereby aiding the Department of Energy's Stockpile Stewardship efforts and opening entirely new physical regimes to study. The technology developed for the Petawatt has also provided several spinoff technologies, including a new approach to laser material processing.

  12. The Non-linear Impact of Advertising Investment on Market Performance for Pharmaceutical Enterprises--An Empirical Study Based on Threshold Regression Model

    赵琳; 傅联英; 陈波

    2014-01-01

    Advertising is not an uncommon tool in the non-price competition strategy pool, and it affects market performance non-linearly (Ishigaki, 2000). This paper employs a non-dynamic panel threshold regression model to investigate the non-linear relationship between advertising and market performance in the pharmaceutical industry. The empirical results find strong evidence of an inverted U-shaped relationship between them and identify a significant threshold effect. Specifically, advertising investment significantly promotes the profit of pharmaceutical enterprises when the intensity of advertising falls between 0 and 0.0491, while it significantly depresses profit once the intensity of advertising exceeds 0.0491. The marginal effect of advertising on profit decreases, and the optimal advertising intensity for the pharmaceutical industry is thus 0.0491. Further, pharmaceutical enterprises accounting for 4.7% of the full sample over-advertised during the observation period; among large pharmaceutical enterprises the proportion reaches 27.3%, while small and medium pharmaceutical enterprises over-advertise at a lower rate. The conclusions are relevant to pharmaceutical enterprises in China, and some recommendations are offered.
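
    The single-threshold estimation step can be illustrated with a Hansen-style grid search (a generic stand-in for the non-dynamic panel threshold model used in the paper): for each candidate threshold of advertising intensity, fit separate linear profit equations on the two regimes and keep the threshold minimizing the residual sum of squares. The data and the kink location below are synthetic.

      import numpy as np

      def fit_threshold(adv_intensity, profit, trim=0.15):
          # Grid-search the threshold gamma: fit separate linear profit equations
          # on the two regimes (adv <= gamma, adv > gamma) and keep the gamma that
          # minimizes the total residual sum of squares.
          candidates = np.quantile(adv_intensity, np.linspace(trim, 1 - trim, 50))
          best_ssr, best_gamma = np.inf, None
          for gamma in candidates:
              ssr = 0.0
              for mask in (adv_intensity <= gamma, adv_intensity > gamma):
                  X = np.column_stack([np.ones(mask.sum()), adv_intensity[mask]])
                  beta, *_ = np.linalg.lstsq(X, profit[mask], rcond=None)
                  ssr += float(np.sum((profit[mask] - X @ beta) ** 2))
              if ssr < best_ssr:
                  best_ssr, best_gamma = ssr, gamma
          return best_gamma

      # synthetic inverted-U profit curve with a kink near 0.05
      rng = np.random.default_rng(4)
      adv = rng.uniform(0.0, 0.12, 400)
      profit = np.where(adv <= 0.05, 2 + 30 * adv, 3.5 - 10 * (adv - 0.05))
      profit = profit + rng.normal(0.0, 0.2, adv.size)
      print(round(fit_threshold(adv, profit), 3))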

  13. Linear regression

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  14. Linear Colliders

    Alcaraz, J.

    2001-01-01

    After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs

  15. Linear algebra

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject Students at all levels will find much interactive instruction in this text while teachers will find stimulating examples and methods of approach to the subject

  16. Social psychological approach to the problem of threshold

    Nakayachi, Kazuya

    1999-01-01

    This paper discusses the threshold of carcinogen risk from the viewpoint of social psychology. First, the results of a survey suggesting that renunciation of the Linear No-Threshold (LNT) hypothesis would have no influence on the public acceptance (PA) of nuclear power plants are reported. Second, the relationship between the adoption of the LNT hypothesis and the standardization of management for various risks are discussed. (author)

  17. 76 FR 49776 - The Development and Evaluation of Next-Generation Smallpox Vaccines; Public Workshop

    2011-08-11

    ...] The Development and Evaluation of Next-Generation Smallpox Vaccines; Public Workshop AGENCY: Food and... Evaluation of Next-Generation Smallpox Vaccines.'' The purpose of the public workshop is to identify and discuss the key issues related to the development and evaluation of next-generation smallpox vaccines. The...

  18. Association testing for next-generation sequencing data using score statistics

    Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    ... of genotype calls into account have been proposed; most require numerical optimization which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...
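
    A toy version of the dosage-based score test idea (a simplified Gaussian stand-in for the generalized linear model framework described above): genotype posterior probabilities are collapsed to expected dosages, and a one-degree-of-freedom score statistic is computed under an intercept-only null. The function name and the synthetic data are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from scipy.stats import chi2

      def dosage_score_test(phenotype, genotype_posteriors):
          # genotype_posteriors: shape (n_samples, 3) with P(G=0), P(G=1), P(G=2).
          # Collapse to expected dosages and compute a 1-df score statistic under a
          # Gaussian, intercept-only null model.
          y = np.asarray(phenotype, dtype=float)
          dosage = genotype_posteriors @ np.array([0.0, 1.0, 2.0])  # E[G | reads]
          y_c, d_c = y - y.mean(), dosage - dosage.mean()
          sigma2 = np.mean(y_c ** 2)                                # null variance
          stat = np.sum(y_c * d_c) ** 2 / (sigma2 * np.sum(d_c ** 2))
          return stat, chi2.sf(stat, df=1)                          # statistic, p-value

      rng = np.random.default_rng(5)
      post = rng.dirichlet([1.0, 1.0, 1.0], size=200)               # fake posteriors
      y = (post @ np.array([0.0, 1.0, 2.0])) * 0.3 + rng.normal(0.0, 1.0, 200)
      print(dosage_score_test(y, post))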

  19. Field-Distortion Air-Insulated Switches for Next-Generation Pulsed-Power Accelerators

    Wisher, Matthew Louis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johns, Owen M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Breden, Eric Wayne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Calhoun, Jacob Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gruner, Frederick Rusticus [Kinetech LLC, Cedar Crest, NM (United States); Hohlfelder, Robert James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mulville, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Muron, David J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stoltzfus, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stygar, William A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    We have developed two advanced designs of a field-distortion air-insulated spark-gap switch that reduce the size of a linear-transformer-driver (LTD) brick. Both designs operate at 200 kV and a peak current of ~50 kA. At these parameters, both achieve a jitter of less than 2 ns and a prefire rate of ~0.1% over 5000 shots. We have reduced the number of switch parts and assembly steps, which has resulted in a more uniform, design-driven assembly process. We will characterize the performance of tungsten-copper and graphite electrodes, and two different electrode geometries. The new switch designs will substantially improve the electrical and operational performance of next-generation pulsed-power accelerators.

  20. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from finite element analysis of resonant frequencies and external Q_e values of a waveguide loaded cavity in the next-generation accelerator design. They present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.
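
    One of the approaches named above, the method of successive linear problems, can be sketched in a few lines for a toy quadratic eigenvalue problem: at each step a linear generalized eigenproblem is solved and the eigenvalue estimate is corrected by the smallest shift. The matrices and starting point are illustrative, and the companion-form linearization is used only to provide a reference eigenvalue to start near; none of this reproduces the cavity problem in the paper.

      import numpy as np
      from scipy.linalg import eigvals

      def mslp(T, dT, lam0, tol=1e-10, max_iter=50):
          # Method of successive linear problems: solve T(lam) x = theta * dT(lam) x,
          # take the theta of smallest magnitude, and update lam <- lam - theta.
          lam = lam0
          for _ in range(max_iter):
              thetas = eigvals(T(lam), dT(lam))
              theta = thetas[np.argmin(np.abs(thetas))]
              lam = lam - theta
              if abs(theta) < tol:
                  break
          return lam

      # toy quadratic eigenvalue problem T(lam) = K + lam*C + lam^2*I (illustrative)
      rng = np.random.default_rng(6)
      n = 6
      K, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
      T = lambda lam: K + lam * C + lam ** 2 * np.eye(n)
      dT = lambda lam: C + 2 * lam * np.eye(n)

      # companion-form linearization gives reference eigenvalues to start near
      companion = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
      ref = np.linalg.eigvals(companion)[0]

      lam = mslp(T, dT, lam0=ref + 0.05)
      print(lam, np.linalg.svd(T(lam), compute_uv=False)[-1])  # last value ~ 0 if converged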

  1. Crossing the threshold

    Bush, John; Tambasco, Lucas

    2017-11-01

    First, we summarize the circumstances in which chaotic pilot-wave dynamics gives rise to quantum-like statistical behavior. For ``closed'' systems, in which the droplet is confined to a finite domain either by boundaries or applied forces, quantum-like features arise when the persistence time of the waves exceeds the time required for the droplet to cross its domain. Second, motivated by the similarities between this hydrodynamic system and stochastic electrodynamics, we examine the behavior of a bouncing droplet above the Faraday threshold, where a stochastic element is introduced into the drop dynamics by virtue of its interaction with a background Faraday wave field. With a view to extending the dynamical range of pilot-wave systems to capture more quantum-like features, we consider a generalized theoretical framework for stochastic pilot-wave dynamics in which the relative magnitudes of the drop-generated pilot-wave field and a stochastic background field may be varied continuously. We gratefully acknowledge the financial support of the NSF through their CMMI and DMS divisions.

  2. Albania - Thresholds I and II

    Millennium Challenge Corporation — From 2006 to 2011, the government of Albania (GOA) received two Millennium Challenge Corporation (MCC) Threshold Programs totaling $29.6 million. Albania received...

  3. Linear programming

    Karloff, Howard

    1991-01-01

    To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...

  4. Next-generation sequencing offers new insights into DNA degradation

    Overballe-Petersen, Søren; Orlando, Ludovic Antoine Alexandre; Willerslev, Eske

    2012-01-01

    The processes underlying DNA degradation are central to various disciplines, including cancer research, forensics and archaeology. The sequencing of ancient DNA molecules on next-generation sequencing platforms provides direct measurements of cytosine deamination, depurination and fragmentation rates that previously were obtained only from extrapolations of results from in vitro kinetic experiments performed over short timescales. For example, recent next-generation sequencing of ancient DNA reveals purine bases as one of the main targets of postmortem hydrolytic damage, through base elimination and strand breakage. It also shows substantially increased rates of DNA base-loss at guanosine. In this review, we argue that the latter results from an electron resonance structure unique to guanosine rather than adenosine having an extra resonance structure over guanosine as previously suggested.

  5. Recent progress in nanostructured next-generation field emission devices

    Mittal, Gaurav; Lahiri, Indranil

    2014-01-01

    Field emission has been known to mankind for more than a century, and extensive research in this field for the last 40–50 years has led to development of exciting applications such as electron sources, miniature x-ray devices, display materials, etc. In the last decade, large-area field emitters were projected as an important material to revolutionize healthcare and medical devices, and space research. With the advent of nanotechnology and advancements related to carbon nanotubes, field emitters are demonstrating highly enhanced performance and novel applications. Next-generation emitters need ultra-high emission current density, high brightness, excellent stability and reproducible performance. Novel design considerations and application of new materials can lead to achievement of these capabilities. This article presents an overview of recent developments in this field and their effects on improved performance of field emitters. These advancements are demonstrated to hold great potential for application in next-generation field emission devices. (topical review)

  7. Next-Generation Sequencing for Binary Protein-Protein Interactions

    Bernhard Suter

    2015-12-01

    The yeast two-hybrid (Y2H) system exploits host cell genetics in order to display binary protein-protein interactions (PPIs) via defined and selectable phenotypes. Numerous improvements have been made to this method, adapting the screening principle for diverse applications, including drug discovery and the scale-up for proteome-wide interaction screens in human and other organisms. Here we discuss a systematic workflow and analysis scheme for screening data generated by Y2H and related assays that includes high-throughput selection procedures, readout of comprehensive results via next-generation sequencing (NGS), and the interpretation of interaction data via quantitative statistics. The novel assays and tools will serve the broader scientific community to harness the power of NGS technology to address PPI networks in health and disease. We discuss examples of how this next-generation platform can be applied to address specific questions in diverse fields of biology and medicine.

  8. Next-Generation Sequencing: From Understanding Biology to Personalized Medicine

    Benjamin Meder

    2013-03-01

    Within just a few years, the new methods for high-throughput next-generation sequencing have generated completely novel insights into the heritability and pathophysiology of human disease. In this review, we wish to highlight the benefits of the current state-of-the-art sequencing technologies for genetic and epigenetic research. We illustrate how these technologies help to constantly improve our understanding of genetic mechanisms in biological systems and summarize the progress made so far. This can be exemplified by the case of heritable heart muscle diseases, so-called cardiomyopathies. Here, next-generation sequencing is able to identify novel disease genes, and first clinical applications demonstrate the successful translation of this technology into personalized patient care.

  9. Precision medicine for cancer with next-generation functional diagnostics.

    Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T

    2015-12-01

    Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.

  10. Linear gate with prescaled window

    Koch, J; Bissem, H H; Krause, H; Scobel, W [Hamburg Univ. (Germany, F.R.). 1. Inst. fuer Experimentalphysik]

    1978-07-15

    An electronic circuit is described that combines the features of a linear gate, a single-channel analyzer and a prescaler. It allows selection of a pulse-height region between two adjustable thresholds and scales the intensity of the spectrum within this window down by a factor of 2^N (0 ≤ N ≤ 9), whereas the complementary part of the spectrum is transmitted without being affected.
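
    A minimal behavioral sketch (in Python) of the gate/single-channel-analyzer/prescaler combination described above; the window limits, prescale factor and pulse list are illustrative, and the sketch models only the counting logic, not the electronics.

      def prescaled_window_gate(pulse_heights, lower, upper, N):
          # Pulses inside the [lower, upper] window are prescaled by 2**N (only every
          # 2**N-th in-window pulse is transmitted); pulses outside pass unchanged.
          assert 0 <= N <= 9
          scale = 2 ** N
          in_window_count = 0
          transmitted = []
          for h in pulse_heights:
              if lower <= h <= upper:
                  in_window_count += 1
                  if in_window_count % scale == 0:
                      transmitted.append(h)
              else:
                  transmitted.append(h)          # complementary part is unaffected
          return transmitted

      # Example: scale the 2-4 V window down by 2**3 = 8
      pulses = [0.5, 2.1, 3.3, 2.8, 6.0, 2.2, 3.9, 2.5, 3.1, 2.0, 7.5]
      print(prescaled_window_gate(pulses, lower=2.0, upper=4.0, N=3))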

  11. Next-Generation Sequencing of Antibody Display Repertoires

    Romain Rouet

    2018-02-01

    In vitro selection technology has transformed the development of therapeutic monoclonal antibodies. Using methods such as phage, ribosome, and yeast display, high-affinity binders can be selected from diverse repertoires. Here, we review strategies for the next-generation sequencing (NGS) of phage- and other antibody-display libraries, as well as NGS platforms and analysis tools. Moreover, we discuss recent examples relating to the use of NGS to assess library diversity, clonal enrichment, and affinity maturation.

  12. Next-Generation Sequencing in the Mycology Lab.

    Zoll, Jan; Snelders, Eveline; Verweij, Paul E; Melchers, Willem J G

    New state-of-the-art techniques in sequencing offer valuable tools both in the detection of mycobiota and in understanding the molecular mechanisms of resistance against antifungal compounds and virulence. The introduction of new sequencing platforms with enhanced capacity and reduced costs for sequence analysis provides a potentially powerful tool in mycological diagnosis and research. In this review, we summarize the applications of next-generation sequencing techniques in mycology.

  13. Statistical Approaches for Next-Generation Sequencing Data

    Qiao, Dandi

    2012-01-01

    During the last two decades, genotyping technology has advanced rapidly, which enabled the tremendous success of genome-wide association studies (GWAS) in the search for disease susceptibility loci (DSLs). However, only a small fraction of the overall predicted heritability can be explained by the DSLs discovered. One possible explanation for this "missing heritability" phenomenon is that many causal variants are rare. The recent development of high-throughput next-generation sequencing (NGS) ...

  14. Towards next-generation biodiversity assessment using DNA metabarcoding

    Taberlet, Pierre; Coissac, Eric; Pompanon, Francois

    2012-01-01

    Virtually all empirical ecological studies require species identification during data collection. DNA metabarcoding refers to the automated identification of multiple species from a single bulk sample containing entire organisms or from a single environmental sample containing degraded DNA (soil, water, faeces, etc.). It can be implemented for both modern and ancient environmental samples. The availability of next-generation sequencing platforms and ecologists' need for high-throughput taxon identification have facilitated the emergence of DNA metabarcoding. The potential power of DNA...

  15. Next-generation digital information storage in DNA.

    Church, George M; Gao, Yuan; Kosuri, Sriram

    2012-09-28

    Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
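
    A minimal sketch of a one-bit-per-base encoding in the spirit of the strategy described (0 mapped to A or C, 1 mapped to G or T, alternating the choice to limit homopolymer runs); the actual addressing, library preparation and error handling of the reported scheme are not reproduced here.

      def bits_from_bytes(data):
          for byte in data:
              for i in range(7, -1, -1):
                  yield (byte >> i) & 1

      def encode_to_dna(data):
          # One bit per base: 0 -> A or C, 1 -> G or T; alternating the choice by
          # position helps avoid long homopolymer runs.
          zero_bases, one_bases = "AC", "GT"
          return "".join(one_bases[i % 2] if bit else zero_bases[i % 2]
                         for i, bit in enumerate(bits_from_bytes(data)))

      def decode_from_dna(seq):
          bits = [0 if base in "AC" else 1 for base in seq]
          return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                       for k in range(0, len(bits), 8))

      message = b"linear threshold"
      assert decode_from_dna(encode_to_dna(message)) == message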

  16. Calibration of the neutron scintillation counter threshold

    Noga, V.I.; Ranyuk, Yu.N.; Telegin, Yu.N.

    1978-01-01

    A method for calibrating the threshold of a neutron counter in the form of a 10x10x40 cm plastic scintillator is described. The method is based on the evaluation of the Compton boundary of the γ-spectrum from the discrimination curve of counter loading. The results of calibration using 60Co and 24Na γ-sources are given. In order to evaluate the Compton edge rapidly, linear extrapolation of the linear part of the discrimination curve towards its intersection with the X axis is recommended. Special measurements have shown that the calibration results do not practically depend on the distance between the cathode of the photomultiplier and the place where the collimated γ-radiation of the calibration source reaches the scintillator.
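
    The recommended extrapolation can be made concrete with a short numerical sketch: fit a straight line to the (roughly) linear part of the discrimination curve and take its intercept with the threshold axis as the Compton-edge position. The data and the chosen fit window below are illustrative.

      import numpy as np

      # Illustrative discrimination (integral counting) curve: rate vs. threshold setting
      threshold = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
      rate      = np.array([980, 955, 900, 800, 660, 500, 340, 185, 60, 5], dtype=float)

      # Fit a straight line to the (roughly) linear part of the curve, here 40-80
      lin = (threshold >= 40) & (threshold <= 80)
      slope, intercept = np.polyfit(threshold[lin], rate[lin], deg=1)

      # Extrapolate to zero counting rate; the X-axis crossing is taken as the Compton edge
      compton_edge = -intercept / slope
      print(f"estimated Compton edge at threshold ~ {compton_edge:.1f}")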

  17. Threshold Concepts and Information Literacy

    Townsend, Lori; Brunetti, Korey; Hofer, Amy R.

    2011-01-01

    What do we teach when we teach information literacy in higher education? This paper describes a pedagogical approach to information literacy that helps instructors focus content around transformative learning thresholds. The threshold concept framework holds promise for librarians because it grounds the instructor in the big ideas and underlying…

  18. Next-Generation Sequencing of Tubal Intraepithelial Carcinomas.

    McDaniel, Andrew S; Stall, Jennifer N; Hovelson, Daniel H; Cani, Andi K; Liu, Chia-Jen; Tomlins, Scott A; Cho, Kathleen R

    2015-11-01

    High-grade serous carcinoma (HGSC) is the most prevalent and lethal form of ovarian cancer. HGSCs frequently arise in the distal fallopian tubes rather than the ovary, developing from small precursor lesions called serous tubal intraepithelial carcinomas (TICs, or more specifically, STICs). While STICs have been reported to harbor TP53 mutations, detailed molecular characterizations of these lesions are lacking. We performed targeted next-generation sequencing (NGS) on formalin-fixed, paraffin-embedded tissue from 4 women, 2 with HGSC and 2 with uterine endometrioid carcinoma (UEC) who were diagnosed as having synchronous STICs. We detected concordant mutations in both HGSCs with synchronous STICs, including TP53 mutations as well as assumed germline BRCA1/2 alterations, confirming a clonal association between these lesions. Next-generation sequencing confirmed the presence of a STIC clonally unrelated to 1 case of UEC, and NGS of the other tubal lesion diagnosed as a STIC unexpectedly supported the lesion as a micrometastasis from the associated UEC. We demonstrate that targeted NGS can identify genetic alterations in minute lesions, such as TICs, and confirm TP53 mutations as early driving events for HGSC. Next-generation sequencing also demonstrated unexpected associations between presumed STICs and synchronous carcinomas, providing evidence that some TICs are actually metastases rather than HGSC precursors.

  19. Effects of polarization and absorption on laser induced optical breakdown threshold for skin rejuvenation

    Varghese, Babu; Bonito, Valentina; Turco, Simona; Verhagen, Rieko

    2016-03-01

    Laser induced optical breakdown (LIOB) is a non-linear absorption process leading to plasma formation at locations where the threshold irradiance for breakdown is surpassed. In this paper we experimentally demonstrate the influence of polarization and absorption on laser induced breakdown threshold in transparent, absorbing and scattering phantoms made from water suspensions of polystyrene microspheres. We demonstrate that radially polarized light yields a lower irradiance threshold for creating optical breakdown compared to linearly polarized light. We also demonstrate that the thermal initiation pathway used for generating seed electrons results in a lower irradiance threshold compared to multiphoton initiation pathway used for optical breakdown.

  20. Reduction of Linear Programming to Linear Approximation

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
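
    The forward direction mentioned above (a Chebyshev linear approximation problem posed as a linear program) is standard and easy to make concrete; a minimal sketch using scipy.optimize.linprog with illustrative data follows. The converse reduction proved in the paper is not reproduced here.

      import numpy as np
      from scipy.optimize import linprog

      # Chebyshev (min-max) approximation: minimize max_i |a_i . x - b_i| as an LP:
      #   min t   subject to   a_i . x - b_i <= t   and   -(a_i . x - b_i) <= t
      rng = np.random.default_rng(1)
      m, n = 20, 3
      A = rng.standard_normal((m, n))
      b = rng.standard_normal(m)

      c = np.r_[np.zeros(n), 1.0]                  # objective: minimize t
      A_ub = np.r_[np.c_[A, -np.ones(m)],          #  A x - t <= b
                   np.c_[-A, -np.ones(m)]]         # -A x - t <= -b
      b_ub = np.r_[b, -b]
      bounds = [(None, None)] * n + [(0, None)]    # x free, t >= 0

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      x, t = res.x[:n], res.x[n]
      print("optimal x:", x, "max residual:", t)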

  1. On the derivation of the ionisation threshold law

    Peterkop, R.

    1983-01-01

    The different procedures for derivation of the electron-atom ionisation threshold law have been analysed and the reasons for discrepancies in the results are pointed out. It is shown that if the wavefunction has a linear node at equal electron distances (r 1 =r 2 ), then the threshold law for the total cross section has the form σ approx. Esup(3m), where σ approx. Esup(m) is the Wannier law. The distribution of energy between escaping electrons is non-uniform and has a parabolic node at equal energies (epsilon 1 = epsilon 2 ). The linear node at opposite directions of electrons (theta = π) does not change the Wannier law but leads to a parabolic node in angular distribution at theta = π. The existence of both nodes leads to the threshold law σ approx. Esup(3m) and to parabolic nodes in energy and angular distributions. (author)

  2. A practical threshold concept for simple and reasonable radiation protection

    Kaneko, Masahito

    2002-01-01

    A half century ago it was assumed for the purpose of protection that radiation risks are linearly proportional to dose at all levels. The Linear No-Threshold (LNT) hypothesis has greatly contributed to the minimization of doses received by workers and members of the public, while it has brought about 'radiophobia' and unnecessary over-regulation. Now that the existence of bio-defensive mechanisms such as DNA repair, apoptosis and adaptive response is well recognized, the linearity assumption can be said to be 'unscientific'. Evidence increasingly implies that there are thresholds in the risk of radiation. A concept of 'practical' thresholds is proposed, and the classification of radiation effects into 'stochastic' and 'deterministic' should be abandoned. 'Practical' thresholds are dose levels below which induction of detectable radiogenic cancers or hereditary effects is not expected. There seems to be no evidence of deleterious health effects from radiation exposures at the current dose limits (50 mSv/y for workers and 5 mSv/y for members of the public), which have been adopted worldwide in the latter half of the 20th century. Those limits are assumed to have been set below certain 'practical' thresholds. As workers and members of the public do not gain benefits from being exposed, excepting intentional irradiation for medical purposes, their radiation exposures should be kept below 'practical' thresholds. There is no use for the 'justification' and 'optimization' (ALARA) principles, because there are no 'radiation detriments' as long as exposures are maintained below 'practical' thresholds. Accordingly the ethical issue of 'justification', allowing benefit to society to offset radiation detriments to individuals, can be resolved. And also the ethical issue of 'optimization', exchanging health or safety for economical gain, can be resolved. The ALARA principle should be applied to the probability (risk) of exceeding relevant dose limits instead of to normal exposures.

  3. Music effect on pain threshold evaluated with current perception threshold

    2001-01-01

    AIM: Music relieves anxiety and psychotic tension. This effect of music is applied to surgical operations in the hospital and dental office. It is still unclear whether this effect of music is limited to the psychological aspect and not the physical aspect, or whether it is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on the pain threshold by current perception threshold (CPT) and the profile of mood states (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, all subjects were evaluated for pain threshold with CPT by Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow-tempo classical music, nursery music, hard rock music, classical piano music and relaxation music, with 30-second intervals. (2) After the Stroop color word test as the stressor, pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2000 Hz level related to compression, warm and pain sensation. The type of music, preference for the music and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on the music raises the pain threshold and that stress and mood influence the effect of music on the pain threshold.

  4. 40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems

    Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. A simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while a high-complexity Volterra algorithm in combination with FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed for reducing computational complexity. Simulation results show that the simplified Volterra algorithm reduces computational complexity by up to ∼75% at a relatively low cost of only 0.4 dB in power budget. At a forward error correction (FEC) threshold of 10⁻³, we achieve 31.2 dB and 30.8 dB power budget over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.
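
    As a rough illustration of the feed-forward plus decision-feedback structure discussed above, the following sketch adapts an FFE and a short DFE with LMS on a synthetic PAM4 link; the channel, tap counts and step size are illustrative, and this is not the simplified Volterra equalizer of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      levels = np.array([-3.0, -1.0, 1.0, 3.0])             # PAM4 alphabet
      symbols = rng.choice(levels, size=20000)

      # Illustrative bandwidth-limited channel (post-cursor ISI) plus noise
      channel = np.array([1.0, 0.45, 0.2])
      received = np.convolve(symbols, channel)[:len(symbols)]
      received += 0.05 * rng.standard_normal(len(received))

      def pam4_slice(v):
          return levels[np.argmin(np.abs(levels - v))]

      def ffe_dfe_lms(rx, tx, n_ffe=7, n_dfe=2, mu=1e-3, train=5000):
          # Feed-forward + decision-feedback equalizer adapted with LMS; the first
          # `train` symbols use the known data as decisions (training mode).
          w_ffe = np.zeros(n_ffe); w_ffe[0] = 1.0            # start near pass-through
          w_dfe = np.zeros(n_dfe)
          past = np.zeros(n_dfe)                             # previous decisions
          decisions = np.zeros(len(rx))
          for k in range(n_ffe, len(rx)):
              x = rx[k - n_ffe + 1:k + 1][::-1]              # FFE input, newest first
              y = w_ffe @ x - w_dfe @ past                   # equalized sample
              d = tx[k] if k < train else pam4_slice(y)
              err = y - d
              w_ffe -= mu * err * x                          # LMS updates
              w_dfe += mu * err * past
              past = np.r_[d, past[:-1]]
              decisions[k] = d
          return decisions

      out = ffe_dfe_lms(received, symbols)
      ser = np.mean(out[10000:] != symbols[10000:])
      print("post-training symbol error rate:", ser)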

  5. A memory-efficient data structure representing exact-match overlap graphs with application for next-generation DNA assembly.

    Dinh, Hieu; Rajasekaran, Sanguthevar

    2011-07-15

    Exact-match overlap graphs have been broadly used in the context of DNA assembly and the shortest superstring problem, where the number of strings n ranges from thousands to billions. The length ℓ of the strings is from 25 to 1000, depending on the DNA sequencing technologies. However, many DNA assemblers using overlap graphs suffer from the need for too much time and space in constructing the graphs. It is nearly impossible for these DNA assemblers to handle the huge amount of data produced by next-generation sequencing technologies, where the number n of strings could be several billions. If the overlap graph is explicitly stored, it would require Ω(n^2) memory, which could be prohibitive in practice when n is greater than a hundred million. In this article, we propose a novel data structure using which the overlap graph can be compactly stored. This data structure requires only linear time to construct and linear memory to store. For a given set of input strings (also called reads), we can informally define an exact-match overlap graph as follows. Each read is represented as a node in the graph and there is an edge between two nodes if the corresponding reads overlap sufficiently. A formal description follows. The maximal exact-match overlap of two strings x and y, denoted by ov_max(x, y), is the longest string which is a suffix of x and a prefix of y. The exact-match overlap graph of n given strings of length ℓ is an edge-weighted graph in which each vertex is associated with a string and there is an edge (x, y) of weight ω = ℓ - |ov_max(x, y)| if and only if ω ≤ λ, where |ov_max(x, y)| is the length of ov_max(x, y) and λ is a given threshold. In this article, we show that the exact-match overlap graphs can be represented by a compact data structure that can be stored using at most (2λ-1)(2⌈log n⌉ + ⌈log λ⌉)n bits with a guarantee that the basic operation of accessing an edge takes O(log λ) time. We also propose two algorithms for
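
    A direct (and deliberately unoptimized) sketch of the overlap and edge-weight definitions given above; the compact data structure proposed in the article is not reproduced here.

      def max_overlap_len(x, y):
          # Length of ov_max(x, y): the longest string that is both a suffix of x
          # and a prefix of y (brute force, fine for a small example).
          best = 0
          for k in range(1, min(len(x), len(y)) + 1):
              if x[-k:] == y[:k]:
                  best = k
          return best

      def overlap_graph_edges(reads, lam):
          # Edge (i, j) with weight w = len(reads[i]) - |ov_max(reads[i], reads[j])|
          # is kept only when w <= lam, matching the threshold rule above.
          edges = []
          for i, x in enumerate(reads):
              for j, y in enumerate(reads):
                  if i != j:
                      w = len(x) - max_overlap_len(x, y)
                      if w <= lam:
                          edges.append((i, j, w))
          return edges

      reads = ["ACGTAC", "GTACGG", "ACGGTT"]
      print(overlap_graph_edges(reads, lam=3))     # [(0, 1, 2), (1, 2, 2)]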

  6. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Elahi, Sana; Kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
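
    A generic iterative soft-thresholding (ISTA) sketch on a toy sparse-recovery problem, illustrating the thresholding-iteration structure discussed above; the p-thresholding operator and the MRI sampling model of the paper are not reproduced here.

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam, n_iter=500):
          # Iterative soft thresholding for  min_x 0.5*||A x - y||^2 + lam*||x||_1
          L = np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x

      rng = np.random.default_rng(3)
      n, m, k = 200, 80, 8                       # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      x_hat = ista(A, y, lam=0.02)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))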

  7. IceCube Gen2. The next-generation neutrino observatory for the South Pole

    Santen, Jakob van [DESY, Zeuthen (Germany)]; Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube Neutrino Observatory is a cubic-kilometer Cherenkov telescope buried in the ice sheet at the South Pole that detects neutrinos of all flavors with energies from tens of GeV to several PeV. The instrument provided the first measurement of the flux of high-energy astrophysical neutrinos, opening a new window to the TeV universe. At the other end of its sensitivity range, IceCube has provided precision measurements of neutrino oscillation parameters that are competitive with dedicated accelerator-based experiments. Here we present design studies for IceCube Gen2, the next-generation neutrino observatory for the South Pole. Instrumenting a volume of more than 5 km³ with over 100 new strings, IceCube Gen2 will have substantially greater sensitivity to high-energy neutrinos than current-generation instruments. PINGU, a dense infill array, will lower the energy threshold of the inner detector region to 4 GeV, allowing a determination of the neutrino mass hierarchy. On the surface, a large air shower detector will veto high-energy atmospheric muons and neutrinos from the southern hemisphere, enhancing the reach of astrophysical neutrino searches. With its versatile instrumentation, the IceCube Gen2 facility will allow us to explore the neutrino sky with unprecedented sensitivity, providing new constraints on the sources of the highest-energy cosmic rays, and yield precision data on the mixing and mass ordering of neutrinos.

  8. Parton distributions with threshold resummation

    Bonvini, Marco; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-01-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculation. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  9. linear-quadratic-linear model

    Tanwiwat Jaikuna

    2017-02-01

    Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095 and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320 and 0.849 for brachytherapy (BT) in HR-CTV, bladder and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
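
    A minimal sketch of the standard LQ-based BED/EQD2 conversion that underlies such a tool; the LQL extension used by Isobio adds a linear high-dose term, which is not reproduced here, and the fractionation and α/β values below are illustrative.

      def bed_lq(n_fractions, dose_per_fraction, alpha_beta):
          # Biologically effective dose under the LQ model: BED = n*d*(1 + d/(alpha/beta))
          return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

      def eqd2(n_fractions, dose_per_fraction, alpha_beta):
          # Equivalent dose in 2-Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))
          return bed_lq(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

      # Example: 7 Gy x 4 brachytherapy fractions, evaluated with illustrative
      # alpha/beta values of 10 Gy (tumour) and 3 Gy (late-responding normal tissue)
      print(eqd2(4, 7.0, 10.0))   # ~39.7 Gy EQD2
      print(eqd2(4, 7.0, 3.0))    # ~56.0 Gy EQD2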

  10. Conceptions of nuclear threshold status

    Quester, G.H.

    1991-01-01

    This paper reviews some alternative definitions of nuclear threshold status. Each of them is important, and major analytical confusions would result if one sense of the term were mistaken for another. The motives for nations entering into such threshold status are a blend of civilian and military gains, and of national interests versus parochial or bureaucratic interests. A portion of the rationale for threshold status emerges inevitably from the pursuit of economic goals, and another portion is made more attractive by the domestic political process. Yet the impact on international security cannot be dismissed, especially where conflicts among the states remain real. Among the military or national security motives are basic deterrence, psychological warfare, war-fighting and, more generally, national prestige. In the end, as the threshold phenomenon is assayed for lessons concerning the role of nuclear weapons more generally in international relations and security, one might conclude that threshold status and outright proliferation converge to a degree in the motives for all of the states involved and in the advantages attained. As this paper has illustrated, nuclear threshold status is more subtle and more ambiguous than outright proliferation, and it takes considerable time to sort out the complexities. Yet the world has now had a substantial amount of time to deal with this ambiguous status, and this may tempt more states to exploit it.

  11. Droplet Digital™ PCR Next-Generation Sequencing Library QC Assay.

    Heredia, Nicholas J

    2018-01-01

    Digital PCR is a valuable tool to quantify next-generation sequencing (NGS) libraries precisely and accurately. Accurately quantifying NGS libraries enables accurate loading of the libraries onto the sequencer and thus improves sequencing performance by reducing under- and overloading errors. Accurate quantification also benefits users by enabling uniform loading of indexed/barcoded libraries, which in turn greatly improves sequencing uniformity of the indexed/barcoded samples. The advantages gained by employing the Droplet Digital PCR (ddPCR™) library QC assay include precise and accurate quantification in addition to size quality assessment, enabling users to QC their sequencing libraries with confidence.

  12. Next-generation sequencing in schizophrenia and other neuropsychiatric disorders.

    Schreiber, Matthew; Dorschner, Michael; Tsuang, Debby

    2013-10-01

    Schizophrenia is a debilitating lifelong illness that lacks a cure and poses a worldwide public health burden. The disease is characterized by a heterogeneous clinical and genetic presentation that complicates research efforts to identify causative genetic variations. This review examines the potential of current findings in schizophrenia and in other related neuropsychiatric disorders for application in next-generation technologies, particularly whole-exome sequencing (WES) and whole-genome sequencing (WGS). These approaches may lead to the discovery of underlying genetic factors for schizophrenia and may thereby identify novel therapeutic targets for this devastating disorder. © 2013 Wiley Periodicals, Inc.

  13. A Survey on Next-generation Power Grid Data Architecture

    You, Shutang [University of Tennessee, Knoxville (UTK)]; Zhu, Dr. Lin [University of Tennessee (UT)]; Liu, Yong [ORNL]; Liu, Yilu [ORNL]; Shankar, Mallikarjun (Arjun) [ORNL]; Robertson, Russell [Grid Protection Alliance]; King Jr, Thomas J [ORNL]

    2015-01-01

    The operation and control of power grids will increasingly rely on data. A high-speed, reliable, flexible and secure data architecture is the prerequisite of the next-generation power grid. This paper summarizes the challenges in collecting and utilizing power grid data, and then provides a reference data architecture for future power grids. Based on the data architecture deployment, related research on data architecture is reviewed and summarized in several categories including data measurement/actuation, data transmission, the data service layer, data utilization, as well as two cross-cutting issues, interoperability and cyber security. Research gaps and future work are also presented.

  14. Linear and non-linear video and TV applications using IPv6 and IPv6 multicast

    Minoli, Daniel

    2012-01-01

    Provides options for implementing IPv6 and IPv6 multicast in service provider networks New technologies, viewing paradigms, and content distribution approaches are taking the TV/video services industry by storm. Linear and Nonlinear Video and TV Applications: Using IPv6 and IPv6 Multicast identifies five emerging trends in next-generation delivery of entertainment-quality video. These trends are observable and can be capitalized upon by progressive service providers, telcos, cable operators, and ISPs. This comprehensive guide explores these evolving directions in the TV/v

  15. Next-generation storm tracking for minimizing service interruption

    Sznaider, R. [Meteorlogix, Minneapolis, MN (United States)]

    2002-08-01

    Several technological changes have taken place in the field of weather radar since its discovery during World War II. A wide variety of industries have benefited over the years from conventional weather radar displays, providing assistance in forecasting and estimating the potential severity of storms. The characteristics of individual storm cells can now be derived from the next-generation of weather radar systems (NEXRAD). The determination of which storm cells possess distinct features such as large hail or developing tornadoes was made possible through the fusing of various pieces of information with radar pictures. To exactly determine when and where a storm will hit, this data can be combined and overlaid into a display that includes the geographical physical landmarks of a specific region. Combining Geographic Information Systems (GIS) and storm tracking provides a more complete, timely and accurate forecast, which clearly benefits the electric utilities industries. The generation and production of energy are dependent on how hot or cold it will be today and tomorrow. The author described each major feature of this next-generation weather radar system. 9 figs.

  16. Next-generation nozzle check valve significantly reduces operating costs

    Roorda, O. [SMX International, Toronto, ON (Canada)]

    2009-01-15

    Check valves perform an important function in preventing reverse flow and protecting plant and mechanical equipment. However, the variety of different types of valves and extreme differences in performance even within one type can change maintenance requirements and life cycle costs, amounting to millions of dollars over the typical 15-year design life of piping components. A next-generation non-slam nozzle check valve which prevents return flow has greatly reduced operating costs by protecting the mechanical equipment in a piping system. This article described the check valve varieties such as the swing check valve, a dual-plate check valve, and nozzle check valves. Advancements in optimized design of a non-slam nozzle check valve were also discussed, with particular reference to computer flow modelling such as computational fluid dynamics; computer stress modelling such as finite element analysis; and flow testing (using rapid prototype development and flow loop testing), both to improve dynamic performance and reduce hydraulic losses. The benefits of maximized dynamic performance and minimized pressure loss from the new designed valve were also outlined. It was concluded that this latest non-slam nozzle check valve design has potential applications in natural gas, liquefied natural gas, and oil pipelines, including subsea applications, as well as refineries, and petrochemical plants among others, and is suitable for horizontal and vertical installation. The result of this next-generation nozzle check valve design is not only superior performance, and effective protection of mechanical equipment but also minimized life cycle costs. 1 fig.

  17. Safety reviews of next-generation light-water reactors

    Kudrick, J.A.; Wilson, J.N.

    1997-01-01

    The Nuclear Regulatory Commission (NRC) is reviewing three applications for design certification under its new licensing process. The U.S. Advanced Boiling Water Reactor (ABWR) and System 80+ designs have received final design approvals. The AP600 design review is continuing. The goals of design certification are to achieve early resolution of safety issues and to provide a more stable and predictable licensing process. NRC also reviewed the Utility Requirements Document (URD) of the Electric Power Research Institute (EPRI) and determined that its guidance does not conflict with NRC requirements. This review led to the identification and resolution of many generic safety issues. The NRC determined that next-generation reactor designs should achieve a higher level of safety for selected technical and severe accident issues. Accordingly, NRC developed new review standards for these designs based on (1) operating experience, including the accident at Three Mile Island, Unit 2; (2) the results of probabilistic risk assessments of current and next-generation reactor designs; (3) early efforts on severe accident rulemaking; and (4) research conducted to address previously identified generic safety issues. The additional standards were used during the individual design reviews and the resolutions are documented in the design certification rules. 12 refs

  18. Standardization and quality management in next-generation sequencing.

    Endrullat, Christoph; Glökler, Jörn; Franke, Philipp; Frohme, Marcus

    2016-09-01

    DNA sequencing continues to evolve quickly even after more than 30 years. Many new platforms have suddenly appeared and formerly established systems have vanished in almost the same manner. Since the establishment of next-generation sequencing devices, this progress has gained momentum due to the continually growing demand for higher throughput, lower costs and better quality of data. As a consequence of this rapid development, standardized procedures and data formats as well as comprehensive quality management considerations are still scarce. Here, we list and summarize current standardization efforts and quality management initiatives from companies, organizations and societies in the form of published studies and ongoing projects. These comprise, on the one hand, quality documentation issues like technical notes, accreditation checklists and guidelines for validation of sequencing workflows. On the other hand, general standard proposals and quality metrics are being developed and applied to the sequencing workflow steps, with the main focus on upstream processes. Finally, certain standard developments for downstream pipeline data handling, processing and storage are discussed in brief. These standardization approaches represent a first basis for continuing work in order to prospectively implement next-generation sequencing in important areas such as clinical diagnostics, where reliable results and fast processing are crucial.

  19. Next-Generation Beneficial Microbes: The Case of Akkermansia muciniphila

    Patrice D. Cani

    2017-09-01

    Metabolic disorders associated with obesity and cardiometabolic disorders are a worldwide epidemic. Among the different environmental factors, the gut microbiota is now considered as a key player interfering with energy metabolism and host susceptibility to several non-communicable diseases. Among the next-generation beneficial microbes that have been identified, Akkermansia muciniphila is a promising candidate. Indeed, A. muciniphila is inversely associated with obesity, diabetes, cardiometabolic diseases and low-grade inflammation. Besides the numerous correlations observed, a large body of evidence has demonstrated the causal beneficial impact of this bacterium in a variety of preclinical models. Translating these exciting observations to humans would be the next logical step, and it now appears that several obstacles that would prevent the use of A. muciniphila administration in humans have been overcome. Moreover, several lines of evidence indicate that pasteurization of A. muciniphila not only increases its stability but, more importantly, increases its efficacy. This strongly positions A. muciniphila at the forefront of next-generation candidates for developing novel food or pharma supplements with beneficial effects. Finally, a specific protein present on the outer membrane of A. muciniphila, termed Amuc_1100, could be a strong candidate for future drug development. In conclusion, just as plants and the related knowledge, known as pharmacognosy, have been the source for designing drugs over the last century, we propose that microbes and 'microbiomegnosy', or knowledge of our gut microbiome, can become a novel source of future therapies.

  20. Doubler system quench detection threshold

    Kuepke, K.; Kuchnir, M.; Martin, P.

    1983-01-01

    The experimental study leading to the determination of the sensitivity needed for protecting the Fermilab Doubler from damage during quenches is presented. The quench voltage thresholds involved were obtained from measurements made on Doubler cable of resistance versus temperature and voltage versus time during quenches under several currents, and from data collected during operation of the Doubler Quench Protection System as implemented in the B-12 string of 20 magnets. At 4 kA, a quench voltage threshold in excess of 5.0 V will limit the peak Doubler cable temperature to 452 K for quenches originating in the magnet coils, whereas a threshold of 0.5 V is required for quenches originating outside of coils.

  1. Thermotactile perception thresholds measurement conditions.

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. Subjects were eight healthy male Japanese students. The effects of posture were examined with a straight hand and forearm placed on a support, the same posture without a support, and the fingers and hand flexed at the wrist with the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The applicator temperature was changed at rates of 0.5, 1.0, 1.5, 2.0 and 2.5 °C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated-measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture of a straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1 °C/s temperature-change rate and a 0.5 N push force. Measurement posture, push force and rate of temperature change can affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a temperature-change rate of 1.0 °C/s with the straight hand and forearm laid on a support are recommended for warm and cold threshold measurements.

  2. DOE approach to threshold quantities

    Wickham, L.E.; Kluk, A.F. (Department of Energy, Washington, DC)

    1985-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Ideally, the threshold must be set high enough to significantly reduce the amount of waste requiring special handling. It must also be low enough so that waste at the threshold quantity poses a very small health risk and multiple exposures to such waste would still constitute a small health risk. It should also be practical to segregate waste above or below the threshold quantity using available instrumentation. Guidance is being prepared to aid DOE sites in establishing threshold quantity values based on pathways analysis using site-specific parameters (waste stream characteristics, maximum exposed individual, population considerations, and site specific parameters such as rainfall, etc.). A guidance dose of between 0.001 to 1.0 mSv/y (0.1 to 100 mrem/y) was recommended with 0.3 mSv/y (30 mrem/y) selected as the guidance dose upon which to base calculations. Several tasks were identified, beginning with the selection of a suitable pathway model for relating dose to the concentration of radioactivity in the waste. Threshold concentrations corresponding to the guidance dose were determined for waste disposal sites at a selected humid and arid site. Finally, cost-benefit considerations at the example sites were addressed. The results of the various tasks are summarized and the relationship of this effort with related developments at other agencies discussed

  3. Competitive inhibition can linearize dose-response and generate a linear rectifier.

    Savir, Yonatan; Tu, Benjamin P; Springer, Michael

    2015-09-23

    Many biological responses require a dynamic range that is larger than standard bi-molecular interactions allow, yet also the ability to remain off at low input. Here we mathematically show that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier, that is, a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated.
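
    The input-output relation described above can be written down directly; the following short sketch shows only the rectifier shape, not the enzyme-network model of the paper.

      import numpy as np

      def linear_rectifier(substrate, threshold, gain):
          # Unresponsive below the threshold, linearly sensitive to substrate above it.
          return gain * np.maximum(substrate - threshold, 0.0)

      s = np.linspace(0.0, 10.0, 11)
      print(linear_rectifier(s, threshold=4.0, gain=1.5))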

  4. A threshold for dissipative fission

    Thoennessen, M.; Bertsch, G.F.

    1993-01-01

    The empirical domain of validity of statistical theory is examined as applied to fission data on pre-fission neutron, charged-particle, and γ-ray multiplicities. Systematics are found of the threshold excitation energy for the appearance of nonstatistical fission. From the data on systems with not too high fissility, the relevant phenomenological parameter is the ratio of the threshold temperature T_thresh to the (temperature-dependent) fission barrier height E_Bar(T). The statistical model reproduces the data below a limiting value of T_thresh/E_Bar(T), which is found to be independent of the mass and fissility of the systems.

  5. Thresholds in chemical respiratory sensitisation.

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition it provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease, chemical respiratory allergy develops in two phases. In the first (induction) phase, exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract, resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention is focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have a threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document briefly reviews relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  6. Optimization Problems on Threshold Graphs

    Elena Nechita

    2010-06-01

    During the last three decades, different types of decompositions have been processed in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristics of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weakly decomposition and determine the density, the stability number, the Wiener index and the Wiener polynomial for threshold graphs.
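
    Threshold graphs also admit a simple elimination characterization (repeatedly remove a vertex that is currently isolated or dominating); the recognition sketch below is based on that characterization and is not the weakly-decomposition machinery of the paper.

      def is_threshold_graph(adj):
          # adj: dict mapping each vertex to the set of its neighbours.
          # A graph is a threshold graph iff vertices can be removed one at a time,
          # each removed vertex being isolated or dominating among the remaining ones.
          remaining = set(adj)
          while remaining:
              pick = None
              for v in remaining:
                  deg = len(adj[v] & remaining)
                  if deg == 0 or deg == len(remaining) - 1:
                      pick = v
                      break
              if pick is None:
                  return False
              remaining.remove(pick)
          return True

      # P4 (a path on four vertices) is the classic non-threshold example;
      # a star plus an isolated vertex is a threshold graph.
      p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
      star_plus_isolated = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}, 4: set()}
      print(is_threshold_graph(p4))                  # False
      print(is_threshold_graph(star_plus_isolated))  # True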

  7. Nuclear threshold effects and neutron strength function

    Hategan, Cornel; Comisel, Horia

    2003-01-01

    One proves that a Nuclear Threshold Effect is dependent, via Neutron Strength Function, on Spectroscopy of Ancestral Neutron Threshold State. The magnitude of the Nuclear Threshold Effect is proportional to the Neutron Strength Function. Evidence for relation of Nuclear Threshold Effects to Neutron Strength Functions is obtained from Isotopic Threshold Effect and Deuteron Stripping Threshold Anomaly. The empirical and computational analysis of the Isotopic Threshold Effect and of the Deuteron Stripping Threshold Anomaly demonstrate their close relationship to Neutron Strength Functions. It was established that the Nuclear Threshold Effects depend, in addition to genuine Nuclear Reaction Mechanisms, on Spectroscopy of (Ancestral) Neutron Threshold State. The magnitude of the effect is proportional to the Neutron Strength Function, in their dependence on mass number. This result constitutes also a proof that the origins of these threshold effects are Neutron Single Particle States at zero energy. (author)

  8. Alternative method for determining anaerobic threshold in rowers

    Giovani Dos Santos Cunha

    2008-01-01

    http://dx.doi.org/10.5007/1980-0037.2008v10n4p367 In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test what value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2max and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At their maximal state the rowers reached VO2 (58.2±4.4 ml·kg-1·min-1), lactate (8.2±2.1 mmol·L-1), power (384±54.3 W) and RER (1.26±0.1). At the anaerobic threshold they reached VO2 (46.9±7.5 ml·kg-1·min-1), lactate (4.6±1.3 mmol·L-1), power (300±37.8 W) and RER (0.99±0.1). Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.
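
    A minimal numerical sketch of the Dmax procedure named above: fit a smooth curve to lactate versus workload and take the point of maximum distance from the straight line joining the first and last data points; the test data below are illustrative.

      import numpy as np

      # Illustrative incremental-test data: power (W) and blood lactate (mmol/L)
      power   = np.array([150, 200, 250, 300, 350, 400], dtype=float)
      lactate = np.array([1.2, 1.5, 2.1, 3.4, 5.8, 8.9])

      # Smooth the lactate curve with a 3rd-order polynomial fit
      coeffs = np.polyfit(power, lactate, deg=3)
      p_fine = np.linspace(power[0], power[-1], 1001)
      l_fine = np.polyval(coeffs, p_fine)

      # Straight line joining the first and last points of the test
      p0, l0, p1, l1 = power[0], lactate[0], power[-1], lactate[-1]

      # Perpendicular distance from each fitted point to that line; Dmax is the maximum
      dist = np.abs((l1 - l0) * p_fine - (p1 - p0) * l_fine + p1 * l0 - l1 * p0) \
             / np.hypot(l1 - l0, p1 - p0)
      dmax_power = p_fine[np.argmax(dist)]
      print(f"Dmax anaerobic threshold ~ {dmax_power:.0f} W")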

  9. Emittance control in linear colliders

    Ruth, R.D.

    1991-05-01

    In this paper, we discuss the generation and control of the emittance in a next-generation linear collider. The beams are extracted from a damping ring and compressed in length by the first bunch compressor. They are then accelerated in a preaccelerator linac up to an energy appropriate for injection into a high-gradient linac. In many designs this pre-acceleration is followed by another bunch compression to reach a short bunch. After acceleration in the linac, the bunches are finally focused transversely to a small spot. The proposed vertical beam sizes at the interaction point are of the order of a few nanometers, while the horizontal sizes are about a factor of 100 larger. This cross-sectional area is about a factor of 10⁴ smaller than at the SLC. However, the main question is: what are the tolerances to achieve such a small size, and how do they compare to present techniques for alignment and stability? These tolerances are very design dependent. Alignment tolerances in the linac can vary from 1 μm to 100 μm depending upon the basic approach. In this paper we discuss techniques of emittance generation and control which move alignment tolerances to the 100 μm range.

  10. Final Report, Next-Generation Mega-Voltage Cargo-Imaging System for Cargo Conainer Inspection, March 2007

    Dr. James Clayton, Ph.D., Varian Medical Systems-Security & Inspection Products; Dr. Emma Regentova, Ph.D, University of Nevada Las Vegas; Dr. Evangelos Yfantis, Ph.D., University of Nevada, Las Vegas

    2007-03-27

    The UNLV Research Foundation, as the primary award recipient, teamed with Varian Medical Systems-Security & Inspection Products and the University of Nevada Las Vegas (UNLV) for the purpose of conducting research and engineering related to a "next-generation" mega-voltage imaging (MVCI) system for inspection of cargo in large containers. The procurement and build-out of hardware for the MVCI project has been completed. The K-9 linear accelerator and an optimized X-ray detection system capable of efficiently detecting X-rays emitted from the accelerator after they have passed through the device are under test. The Office of Science financial assistance award has made possible the development of a system utilizing a technology which will have a profound positive impact on the security of U.S. seaports. The proposed project will ultimately result in critical research and development advances for the "next-generation" Linatron X-ray accelerator technology, thereby providing a safe, reliable and efficient fixed and mobile cargo inspection system, which will very significantly increase the fraction of cargo containers undergoing reliable inspection as they enter U.S. ports. Both NNSA/NA-22 and the Department of Homeland Security's Domestic Nuclear Detection Office are collaborating with UNLV and its team to make this technology available as soon as possible.

  11. Final Report, Next-Generation Mega-Voltage Cargo-Imaging System for Cargo Container Inspection, March 2007

    Dr. James Clayton, Ph.D., Varian Medical Systems-Security and Inspection Products; Dr. Emma Regentova, Ph.D, University of Nevada Las Vegas; Dr. Evangelos Yfantis, Ph.D., University of Nevada, Las Vegas

    2007-01-01

    The UNLV Research Foundation, as the primary award recipient, teamed with Varian Medical Systems-Security and Inspection Products and the University of Nevada Las Vegas (UNLV) for the purpose of conducting research and engineering related to a ''next-generation'' mega-voltage imaging (MVCI) system for inspection of cargo in large containers. The procurement and build-out of hardware for the MVCI project has been completed. The K-9 linear accelerator and an optimized X-ray detection system capable of efficiently detecting X-rays emitted from the accelerator after they have passed through the device are under test. The Office of Science financial assistance award has made possible the development of a system utilizing a technology which will have a profound positive impact on the security of U.S. seaports. The proposed project will ultimately result in critical research and development advances for the ''next-generation'' Linatron X-ray accelerator technology, thereby providing a safe, reliable and efficient fixed and mobile cargo inspection system, which will very significantly increase the fraction of cargo containers undergoing reliable inspection as they enter U.S. ports. Both NNSA/NA-22 and the Department of Homeland Security's Domestic Nuclear Detection Office are collaborating with UNLV and its team to make this technology available as soon as possible.

  12. THRESHOLD LOGIC IN ARTIFICIAL INTELLIGENCE

    COMPUTER LOGIC, ARTIFICIAL INTELLIGENCE, BIONICS, GEOMETRY, INPUT OUTPUT DEVICES, LINEAR PROGRAMMING, MATHEMATICAL LOGIC, MATHEMATICAL PREDICTION, NETWORKS, PATTERN RECOGNITION, PROBABILITY, SWITCHING CIRCUITS, SYNTHESIS

  13. Percolation Threshold Parameters of Fluids

    Škvor, J.; Nezbeda, Ivo

    2009-01-01

    Vol. 79, No. 4 (2009), 041141-041147 ISSN 1539-3755 Institutional research plan: CEZ:AV0Z40720504 Keywords: percolation threshold * universality * infinite cluster Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.400, year: 2009

  14. Threshold analyses and Lorentz violation

    Lehnert, Ralf

    2003-01-01

    In the context of threshold investigations of Lorentz violation, we discuss the fundamental principle of coordinate independence, the role of an effective dynamical framework, and the conditions of positivity and causality. Our analysis excludes a variety of previously considered Lorentz-breaking parameters and opens an avenue for viable dispersion-relation investigations of Lorentz violation

  15. Threshold enhancement of diphoton resonances

    Aoife Bharucha

    2016-10-01

    Full Text Available We revisit a mechanism to enhance the decay width of (pseudo-)scalar resonances to photon pairs when the process is mediated by loops of charged fermions produced near threshold. Motivated by the recent LHC data, indicating the presence of an excess in the diphoton spectrum at approximately 750 GeV, we illustrate this threshold enhancement mechanism in the case of a 750 GeV pseudoscalar boson A with a two-photon decay mediated by a charged and uncolored fermion having a mass at the MA/2 threshold and a small decay width, <1 MeV. The implications of such a threshold enhancement are discussed in two explicit scenarios: (i) the Minimal Supersymmetric Standard Model, in which the A state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through loops of charginos with masses close to MA/2, and (ii) a two Higgs doublet model, in which A is again produced by gluon fusion but decays into photons through loops of vector-like charged heavy leptons. In both these scenarios, while the mass of the charged fermion has to be adjusted to be extremely close to half of the A resonance mass, the small total widths are naturally obtained if only suppressed three-body decay channels occur. Finally, the implications of some of these scenarios for dark matter are discussed.

  16. Linear Algebra and Smarandache Linear Algebra

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  17. Association analysis using next-generation sequence data from publicly available control groups: the robust variance score statistic.

    Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K; Strug, Lisa J

    2014-08-01

    Sufficiently powered case-control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the 'gold standard' analysis with the true underlying genotypes for both common and rare variants. An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. lisa.strug@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
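
    The core of the RVS idea described above, replacing hard genotype calls with their expected values given the observed reads, can be illustrated with a small sketch. This is not the authors' implementation (which is distributed as an R script at the URLs given); it is a hypothetical Python fragment showing how an expected genotype dosage could be computed from per-genotype likelihoods and a Hardy-Weinberg prior, both of which are assumptions of this illustration.

```python
import numpy as np

def expected_genotype(likelihoods, maf):
    """Expected minor-allele dosage E[G | reads] for one sample at one site.

    likelihoods : length-3 array P(reads | G = 0, 1, 2 copies of the minor allele),
                  e.g. taken from a caller's genotype-likelihood field
    maf         : assumed minor allele frequency used for the Hardy-Weinberg prior
    """
    prior = np.array([(1 - maf) ** 2,        # G = 0
                      2 * maf * (1 - maf),   # G = 1
                      maf ** 2])             # G = 2
    posterior = likelihoods * prior
    posterior /= posterior.sum()
    return float(np.dot([0, 1, 2], posterior))

# A low-coverage sample: the data only weakly favour the heterozygote
print(expected_genotype(np.array([0.10, 0.60, 0.30]), maf=0.2))
```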

  18. Building a Robust Tumor Profiling Program: Synergy between Next-Generation Sequencing and Targeted Single-Gene Testing.

    Matthew C Hiemenz

    Full Text Available Next-generation sequencing (NGS) is a powerful platform for identifying cancer mutations. Routine clinical adoption of NGS requires optimized quality control metrics to ensure accurate results. To assess the robustness of our clinical NGS pipeline, we analyzed the results of 304 solid tumor and hematologic malignancy specimens tested simultaneously by NGS and one or more targeted single-gene tests (EGFR, KRAS, BRAF, NPM1, FLT3, and JAK2). For samples that passed our validated tumor percentage and DNA quality and quantity thresholds, there was perfect concordance between NGS and targeted single-gene tests with the exception of two FLT3 internal tandem duplications that fell below the stringent pre-established reporting threshold but were readily detected by manual inspection. In addition, NGS identified clinically significant mutations not covered by single-gene tests. These findings confirm NGS as a reliable platform for routine clinical use when appropriate quality control metrics, such as tumor percentage and DNA quality cutoffs, are in place. Based on our findings, we suggest a simple workflow that should facilitate adoption of clinical oncologic NGS services at other institutions.

  19. Synchronization of low- and high-threshold motor units.

    Defreitas, Jason M; Beck, Travis W; Ye, Xin; Stock, Matt S

    2014-04-01

    We examined the degree of synchronization for both low- and high-threshold motor unit (MU) pairs at high force levels. MU spike trains were recorded from the quadriceps during high-force isometric leg extensions. Short-term synchronization (between -6 and 6 ms) was calculated for every unique MU pair for each contraction. At high force levels, earlier recruited (low-threshold) motor unit pairs demonstrated relatively low levels of short-term synchronization (approximately 7.3% extra firings beyond what would have been expected by chance). However, the magnitude of synchronization increased significantly and linearly with mean recruitment threshold (reaching 22.1% extra firings for motor unit pairs recruited above 70% MVC). Three potential mechanisms that could explain the observed differences in synchronization across motor unit types are proposed and discussed. Copyright © 2013 Wiley Periodicals, Inc.
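
    The short-term synchronization measure used above (extra near-coincident firings within ±6 ms, relative to chance) can be sketched as follows. This is a hypothetical, simplified Python illustration of a generic coincidence-counting approach, not the authors' exact analysis pipeline; the chance level here is estimated from the mean firing rate under an independence assumption, and the normalization convention is an assumption of this sketch.

```python
import numpy as np

def extra_firings_pct(train_a, train_b, duration, window=0.006):
    """Percentage of coincident firings beyond chance for two motor-unit spike trains.

    train_a, train_b : spike times in seconds
    duration         : recording length in seconds
    window           : half-width of the synchronization window (±6 ms by default)
    """
    # Count spikes in train_a that have at least one train_b spike within ±window
    train_b = np.sort(train_b)
    idx = np.searchsorted(train_b, train_a)
    left = np.abs(train_a - train_b[np.clip(idx - 1, 0, len(train_b) - 1)])
    right = np.abs(train_b[np.clip(idx, 0, len(train_b) - 1)] - train_a)
    coincident = np.sum(np.minimum(left, right) <= window)

    # Chance expectation if the two trains fired independently (Poisson approximation)
    rate_b = len(train_b) / duration
    expected = len(train_a) * (1 - np.exp(-rate_b * 2 * window))

    # Extra coincidences expressed relative to the chance level (other conventions exist)
    return 100.0 * (coincident - expected) / expected

rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 30, 300))   # ~10 Hz unit over a 30 s contraction
b = np.sort(rng.uniform(0, 30, 240))   # ~8 Hz unit
print(extra_firings_pct(a, b, duration=30.0))
```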

  20. Threshold law for positron-atom impact ionisation

    Temkin, A.

    1982-01-01

    The threshold law for ionisation of atoms by positron impact is adduced in analogy with the author's approach to the electron-atom ionisation. It is concluded the Coulomb-dipole region of potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process which enters positron ionisation is positronium formation in the continuum, but that will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law as recently derived by Klar (J. Phys. B.; 14:4165 (1981)) on the basis of a Wannier-type (Phys. Rev.; 90:817 (1953)) analysis. (author)

  1. Equipartitioning in linear accelerators

    Jameson, R.A.

    1982-01-01

    Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined. At the same time, Hofmann has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. Evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown

  2. Equipartitioning in linear accelerators

    Jameson, R.A.

    1981-01-01

    Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined below. At the same time, Hofmann, using powerful analytical and computational methods, has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. This is an important generalization. Work that he will present at this conference shows that the results are essentially the same in r-z coordinates for transport systems, and evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems also. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown

  3. Linear Collider Physics Resource Book for Snowmass 2001, 3 Studies of Exotic and Standard Model Physics

    Abe, T.; Asner, D.; Baer, H.; Bagger, J.; Balazs, C.; Baltay, C.; Barker, T.; Barklow, T.; Barron, J.; Baur, U.; Beach, R.; Bellwied, R.; Bigi, I.; Blochinger, C.; Boege, S.; Bolton, T.; Bower, G.; Brau, J.; Breidenbach, M.; Brodsky, S.J.; Burke, D.; Burrows, P.; Butler, J.N.; Chakraborty, D.; Cheng, H.C.; Chertok, M.; Choi, S.Y.; Cinabro, D.; Corcella, G.; Cordero, R.K.; Danielson, N.; Davoudiasl, H.; Dawson, S.; Denner, A.; Derwent, P.; Diaz, M.A.; Dima, M.; Dittmaier, S.; Dixit, M.; Dixon, L.; Dobrescu, B.; Doncheski, M.A.; Duckwitz, M.; Dunn, J.; Early, J.; Erler, J.; Feng, J.L.; Ferretti, C.; Fisk, H.E.; Fraas, H.; Freitas, A.; Frey, R.; Gerdes, D.; Gibbons, L.; Godbole, R.; Godfrey, S.; Goodman, E.; Gopalakrishna, S.; Graf, N.; Grannis, P.D.; Gronberg, J.; Gunion, J.; Haber, H.E.; Han, T.; Hawkings, R.; Hearty, C.; Heinemeyer, S.; Hertzbach, S.S.; Heusch, C.; Hewett, J.; Hikasa, K.; Hiller, G.; Hoang, A.; Hollebeek, R.; Iwasaki, M.; Jacobsen, R.; Jaros, J.; Juste, A.; Kadyk, J.; Kalinowski, J.; Kalyniak, P.; Kamon, T.; Karlen, D.; Keller, L.; Koltick, D.; Kribs, G.; Kronfeld, A.; Leike, A.; Logan, H.E.; Lykken, J.; Macesanu, C.; Magill, S.; Marciano, W.; Markiewicz, T.W.; Martin, S.; Maruyama, T.; Matchev, K.; Moenig, K.; Montgomery, H.E.; Moortgat-Pick, G.; Moreau, G.; Mrenna, S.; Murakami, B.; Murayama, H.; Nauenberg, U.; Neal, H.; Newman, B.; Nojiri, M.; Orr, L.H.; Paige, F.; Para, A.; Pathak, S.; Peskin, M.E.; Plehn, T.; Porter, F.; Potter, C.; Prescott, C.; Rainwater, D.; Raubenheimer, T.; Repond, J.; Riles, K.; Rizzo, T.; Ronan, M.; Rosenberg, L.; Rosner, J.; Roth, M.; Rowson, P.; Schumm, B.; Seppala, L.; Seryi, A.; Siegrist, J.; Sinev, N.; Skulina, K.; Sterner, K.L.; Stewart, I.; Su, S.; Tata, X.; Telnov, V.; Teubner, T.; Tkaczyk, S.; Turcot, A.S.; van Bibber, K.; van Kooten, R.; Vega, R.; Wackeroth, D.; Wagner, D.; Waite, A.; Walkowiak, W.; Weiglein, G.; Wells, J.D.; W. Wester, III; Williams, B.; Wilson, G.; Wilson, R.; Winn, D.; Woods, M.; Wudka, J.; Yakovlev, O.; Yamamoto, H.; Yang, H.J.

    2001-01-01

    This Resource Book reviews the physics opportunities of a next-generation e+e- linear collider and discusses options for the experimental program. Part 3 reviews the possible experiments that can be done at a linear collider on strongly coupled electroweak symmetry breaking, exotic particles, and extra dimensions, and on the top quark, QCD, and two-photon physics. It also discusses the improved precision electroweak measurements that this collider will make available.

  4. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  5. Application of Next-generation Sequencing in Clinical Molecular Diagnostics

    Morteza Seifi

    2017-05-01

    Full Text Available Next-generation sequencing (NGS) is the catch-all term used to describe several different modern sequencing technologies that let us sequence nucleic acids much more rapidly and cheaply than the formerly used Sanger sequencing, and as such they have revolutionized the study of molecular biology and genomics with excellent resolution and accuracy. Over the past years, many companies and academic institutions have continued technological advances to expand NGS applications from research to the clinic. In this review, the performance and technical features of current NGS platforms are described. Furthermore, advances in applying NGS technologies to clinical molecular diagnostics are emphasized. General advantages and disadvantages of each sequencing system are summarized and compared to guide the selection of NGS platforms for specific research aims.

  6. Development of next-generation light water reactor

    Ishibashi, Fumihiko; Yasuoka, Makoto

    2010-01-01

    The Next-Generation Light Water Reactor Development Program, a national project in Japan, was inaugurated in April 2008. The primary objective of this program is to meet the need for the replacement of existing nuclear power plants in Japan after 2030. With the aim of setting a global standard design, the reactor to be developed offers greatly improved safety, reliability, and economic efficiency through several innovative technologies, including a reactor core system with uranium enrichment of 5 to 10%, a seismic isolation system, long-life materials, advanced water chemistry, innovative construction techniques, optimized passive and active safety systems, innovative digital technologies, and so on. In the first three years, a plant design concept with these innovative features is to be established and the effectiveness of the program will be reevaluated. The major part of the program will be completed in 2015. Toshiba is actively engaged in both design studies and technology development as a founding member of this program. (author)

  7. NREL-Prime Next-Generation Drivetrain Dynamometer Test Report

    Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Erdman, Bill [Cinch, Inc., Moraga, CA (United States); Blodgett, Douglas [DNV KEMA Renewables, Burlington, VT (United States); Halse, Christopher [Romax Technology, Boulder, CO (United States)

    2016-08-01

    Advances in wind turbine drivetrain technologies are necessary to improve reliability and reduce the cost of energy for land-based and offshore wind turbines. The NREL-Prime Next-Generation Drivetrain team developed a geared, medium-speed drivetrain that is lighter, more reliable and more efficient than existing designs. One of the objectives of Phase II of the project was to complete the detailed design, fabrication, and dynamometer testing of a 750 kilowatt (kW) drivetrain that includes the key gearbox innovations designed by Romax Technology and power converter innovations designed by DNV Kema Renewables. The purpose of this document is to summarize these tests completed in NREL's National Wind Technology Center 2.5 megawatt (MW) dynamometer.

  8. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  9. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Jinke Chang

    2018-01-01

    Full Text Available Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  10. Production of the next-generation library virtual tour

    Duncan, James M.; Roth, Linda K.

    2001-01-01

    While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree “virtual reality” views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project. PMID:11837254

  11. Neurogenetics: advancing the "next-generation" of brain research.

    Zoghbi, Huda Y; Warren, Stephen T

    2010-10-21

    There can be little doubt that genetics has transformed our understanding of mechanisms mediating brain disorders. The last two decades have brought tremendous progress in terms of accurate molecular diagnoses and knowledge of the genes and pathways that are involved in a large number of neurological and psychiatric disorders. Likewise, new methods and analytical approaches, including genome array studies and "next-generation" sequencing technologies, are bringing us deeper insights into the subtle complexities of the genetic architecture that determines our risks for these disorders. As we now seek to translate these discoveries back to clinical applications, a major challenge for the field will be in bridging the gap between genes and biology. In this Overview of Neuron's special review issue on neurogenetics, we reflect on progress made over the last two decades and highlight the challenges as well as the exciting opportunities for the future. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Quantifying population genetic differentiation from next-generation sequencing data

    Fumagalli, Matteo; Garrett Vieira, Filipe Jorge; Korneliussen, Thorfinn Sand

    2013-01-01

    Over the last few years, new high-throughput DNA sequencing technologies have dramatically increased speed and reduced sequencing costs. However, the use of these sequencing technologies is often challenged by errors and biases associated with the bioinformatical methods used for analyzing the data. ... method for quantifying population genetic differentiation from next-generation sequencing data. In addition, we present a strategy to investigate population structure via Principal Components Analysis. Through extensive simulations, we compare the new method herein proposed to approaches based on genotype calling and demonstrate a marked improvement in estimation accuracy for a wide range of conditions. We apply the method to a large-scale genomic data set of domesticated and wild silkworms sequenced at low coverage. We find that we can infer the fine-scale genetic structure of the sampled ...
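
    As a rough illustration of the Principal Components Analysis strategy mentioned above, the sketch below builds an individual-by-individual covariance matrix from genotype dosages and extracts the leading principal components. It is a deliberately simplified, hypothetical Python example with synthetic data; the published method works directly with genotype likelihoods from low-coverage sequencing rather than with point-estimate genotypes, so treat this purely as a conceptual stand-in.

```python
import numpy as np

def pca_from_dosages(dosages, n_components=2):
    """Principal components of an (individuals x sites) matrix of genotype dosages.

    dosages      : expected minor-allele counts in [0, 2], one row per individual
    n_components : number of leading axes of variation to return
    """
    freqs = dosages.mean(axis=0) / 2.0                      # per-site allele frequencies
    norm = np.sqrt(2.0 * freqs * (1.0 - freqs)) + 1e-8      # avoid division by zero
    standardized = (dosages - 2.0 * freqs) / norm           # center and scale each site
    cov = standardized @ standardized.T / dosages.shape[1]  # individual-by-individual covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(3)
# Two made-up populations with slightly different allele frequencies at 1,000 sites
p1 = rng.uniform(0.05, 0.5, 1000)
p2 = np.clip(p1 + rng.normal(0, 0.05, 1000), 0.01, 0.99)
pop1 = rng.binomial(2, p1, size=(20, 1000)).astype(float)
pop2 = rng.binomial(2, p2, size=(20, 1000)).astype(float)
pcs, eigvals = pca_from_dosages(np.vstack([pop1, pop2]))
print(pcs[:3, 0], eigvals)
```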

  13. Engineering microbes for tolerance to next-generation biofuels

    Dunlop Mary J

    2011-09-01

    Full Text Available Abstract A major challenge when using microorganisms to produce bulk chemicals such as biofuels is that the production targets are often toxic to cells. Many biofuels are known to reduce cell viability through damage to the cell membrane and interference with essential physiological processes. Therefore, cells must trade off biofuel production and survival, reducing potential yields. Recently, there have been several efforts towards engineering strains for biofuel tolerance. Promising methods include engineering biofuel export systems, heat shock proteins, membrane modifications, more general stress responses, and approaches that integrate multiple tolerance strategies. In addition, in situ recovery methods and media supplements can help to ease the burden of end-product toxicity and may be used in combination with genetic approaches. Recent advances in systems and synthetic biology provide a framework for tolerance engineering. This review highlights recent targeted approaches towards improving microbial tolerance to next-generation biofuels with a particular emphasis on strategies that will improve production.

  14. Next-Generation Sequencing and Genome Editing in Plant Virology

    Ahmed Hadidi

    2016-08-01

    Full Text Available Next-generation sequencing (NGS) has been applied to plant virology since 2009. NGS provides highly efficient, rapid, low-cost DNA or RNA high-throughput sequencing of the genomes of plant viruses and viroids and of the specific small RNAs generated during the infection process. These small RNAs, which frequently cover the whole genome of the infectious agent, are 21-24 nt long and are known as vsRNAs for viruses and vd-sRNAs for viroids. NGS has been used in a number of studies in plant virology including, but not limited to, discovery of novel viruses and viroids as well as detection and identification of those pathogens already known, analysis of genome diversity and evolution, and study of pathogen epidemiology. The genome editing method based on clustered regularly interspaced short palindromic repeats (the CRISPR-Cas9 system) has recently been used successfully to engineer resistance to DNA geminiviruses (family Geminiviridae) by targeting different viral genome sequences in infected Nicotiana benthamiana or Arabidopsis plants. The DNA viruses targeted include tomato yellow leaf curl virus and merremia mosaic virus (begomovirus); beet curly top virus and beet severe curly top virus (curtovirus); and bean yellow dwarf virus (mastrevirus). The technique has also been used against the RNA viruses zucchini yellow mosaic virus, papaya ringspot virus and turnip mosaic virus (potyvirus) and cucumber vein yellowing virus (ipomovirus, family Potyviridae) by targeting the translation initiation genes eIF4E in cucumber or Arabidopsis plants. From these recent advances of major importance, it is expected that NGS and CRISPR-Cas technologies will play a significant role in the very near future in advancing the field of plant virology and connecting it with other related fields of biology. Keywords: Next-generation sequencing, NGS, plant virology, plant viruses, viroids, resistance to plant viruses by CRISPR-Cas9

  15. Efficient error correction for next-generation sequencing of viral amplicons.

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
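
    Of the two algorithms named above, the empirical frequency threshold (ET) idea is the easier to illustrate: collapse reads into identical haplotypes and discard those whose observed frequency falls below a threshold calibrated to the expected error rate. The fragment below is a hypothetical, much-simplified Python sketch of that filtering step, not the published KEC/ET implementation; the threshold value and the renormalization rule are assumptions for illustration.

```python
from collections import Counter

def frequency_filter(reads, min_fraction=0.01):
    """Collapse amplicon reads into haplotypes and drop rare (likely erroneous) ones.

    reads        : iterable of read sequences covering the same amplicon
    min_fraction : empirical frequency threshold below which a haplotype is
                   treated as a sequencing artifact (assumed value)
    """
    counts = Counter(reads)
    total = sum(counts.values())
    kept = {hap: n for hap, n in counts.items() if n / total >= min_fraction}
    # Frequencies of the retained haplotypes, renormalized
    kept_total = sum(kept.values())
    return {hap: n / kept_total for hap, n in kept.items()}

reads = ["ACGTACGT"] * 950 + ["ACGTACGA"] * 45 + ["ACCTACGT"] * 5
print(frequency_filter(reads))
```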

  16. The issue of threshold states

    Luck, L.

    1994-01-01

    States which have neither joined the Non-proliferation Treaty nor undertaken any other internationally binding commitment not to develop or otherwise acquire nuclear weapons are considered threshold states. Their nuclear status is rendered opaque as a conscious policy. Nuclear threshold status remains a key disarmament issue. For the few states, such as India, Pakistan and Israel, that have put themselves in this position, the security returns have been transitory and largely illusory. The cost to them, and to the international community committed to the norm of non-proliferation, has been huge. The decisions which could lead to recovery from this situation are essentially in their own hands. Whatever assistance the rest of the international community is able to extend will need to be accompanied by a vital political signal.

  17. Multiscalar production amplitudes beyond threshold

    Argyres, E N; Kleiss, R H

    1993-01-01

    We present exact tree-order amplitudes for $H^* \\to n~H$, for final states containing one or two particles with non-zero three-momentum, for various interaction potentials. We show that there are potentials leading to tree amplitudes that satisfy unitarity, not only at threshold but also in the above kinematical configurations and probably beyond. As a by-product, we also calculate $2\\to n$ tree amplitudes at threshold and show that for the unbroken $\\phi^4$ theory they vanish for $n>4~$, for the Standard Model Higgs they vanish for $n\\ge 3~$ and for a model potential, respecting tree-order unitarity, for $n$ even and $n>4~$. Finally, we calculate the imaginary part of the one-loop $1\\to n$ amplitude in both symmetric and spontaneously broken $\\phi^4$ theory.

  18. Nucleic acid reactivity: challenges for next-generation semiempirical quantum models.

    Huang, Ming; Giese, Timothy J; York, Darrin M

    2015-07-05

    Semiempirical quantum models are routinely used to study mechanisms of RNA catalysis and phosphoryl transfer reactions using combined quantum mechanical (QM)/molecular mechanical methods. Herein, we provide a broad assessment of the performance of existing semiempirical quantum models to describe nucleic acid structure and reactivity to quantify their limitations and guide the development of next-generation quantum models with improved accuracy. Neglect of diatomic differential overlap and self-consistent density-functional tight-binding semiempirical models are evaluated against high-level QM benchmark calculations for seven biologically important datasets. The datasets include: proton affinities, polarizabilities, nucleobase dimer interactions, dimethyl phosphate anion, nucleoside sugar and glycosidic torsion conformations, and RNA phosphoryl transfer model reactions. As an additional baseline, comparisons are made with several commonly used density-functional models, including M062X and B3LYP (in some cases with dispersion corrections). The results show that, among the semiempirical models examined, the AM1/d-PhoT model is the most robust at predicting proton affinities. AM1/d-PhoT and DFTB3-3ob/OPhyd reproduce the MP2 potential energy surfaces of 6 associative RNA phosphoryl transfer model reactions reasonably well. Further, a recently developed linear-scaling "modified divide-and-conquer" model exhibits the most accurate results for binding energies of both hydrogen bonded and stacked nucleobase dimers. The semiempirical models considered here are shown to underestimate the isotropic polarizabilities of neutral molecules by approximately 30%. The semiempirical models also fail to adequately describe torsion profiles for the dimethyl phosphate anion, the nucleoside sugar ring puckers, and the rotations about the nucleoside glycosidic bond. The modeling of pentavalent phosphorus, particularly with thio substitutions often used experimentally as mechanistic

  19. The H-mode power threshold in JET

    Start, D F.H.; Bhatnagar, V P; Campbell, D J; Cordey, J G; Esch, H P.L. de; Gormezano, C; Hawkes, N; Horton, L; Jones, T T.C.; Lomas, P J; Lowry, C; Righi, E; Rimini, F G; Saibene, G; Sartori, R; Sips, G; Stork, D; Thomas, P; Thomsen, K; Tubbing, B J.D.; Von Hellermann, M; Ward, D J [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking

    1994-07-01

    New H-mode threshold data over a range of toroidal field and density values have been obtained from the present campaign. The scaling with n_e B_t is almost identical with that of the 91/92 period for the same discharge conditions. The scaling with toroidal field alone gives somewhat higher thresholds than the older data. The 1991/2 database shows a scaling of P_th (power threshold) with n_e B_t which is approximately linear and agrees well with that observed on other tokamaks. For NBI and carbon target tiles the threshold power is a factor of two higher with the ion ∇B drift away from the target compared with the value found with the drift towards the target. The combination of ICRH and beryllium tiles appears to be beneficial for reducing P_th. The power threshold is largely insensitive to plasma current, X-point height and distance between the last closed flux surface and the limiter, at least for values greater than 2 cm. (authors). 3 refs., 6 figs.

  20. Evaluating the "Threshold Theory": Can Head Impact Indicators Help?

    Mihalik, Jason P; Lynall, Robert C; Wasserman, Erin B; Guskiewicz, Kevin M; Marshall, Stephen W

    2017-02-01

    This study aimed to determine the clinical utility of biomechanical head impact indicators by measuring the sensitivity, specificity, positive predictive value (PV+), and negative predictive value (PV-) of multiple thresholds. Head impact biomechanics (n = 283,348) from 185 football players in one Division I program were collected. A multidisciplinary clinical team independently made concussion diagnoses (n = 24). We dichotomized each impact using diagnosis (yes = 24, no = 283,324) and across a range of plausible impact indicator thresholds (10g increments beginning with a resultant linear head acceleration of 50g and ending with 120g). Some thresholds had adequate sensitivity, specificity, and PV-. All thresholds had low PV+, with the best recorded PV+ less than 0.4% when accounting for all head impacts sustained by our sample. Even when conservatively adjusting the frequency of diagnosed concussions by a factor of 5 to account for unreported/undiagnosed injuries, the PV+ of head impact indicators at any threshold was no greater than 1.94%. Although specificity and PV- appear high, the low PV+ would generate many unnecessary evaluations if these indicators were the sole diagnostic criteria. The clinical diagnostic value of head impact indicators is considerably questioned by these data. Notwithstanding, valid sensor technologies continue to offer objective data that have been used to improve player safety and reduce injury risk.
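
    The threshold sweep described above (dichotomizing every impact at a series of linear-acceleration cutoffs and computing sensitivity, specificity, PV+ and PV- against clinical diagnoses) is straightforward to reproduce in code. The sketch below is a hypothetical Python illustration with simulated data, not the study's dataset or analysis code; the acceleration distribution and diagnosis rule are invented solely to exercise the calculation.

```python
import numpy as np

def indicator_performance(accels, diagnosed, thresholds):
    """Sensitivity, specificity, PV+ and PV- of a head-impact indicator at each cutoff.

    accels     : peak linear acceleration (g) of every recorded impact
    diagnosed  : boolean array, True if the impact was linked to a diagnosed concussion
    thresholds : iterable of candidate cutoffs in g
    """
    results = {}
    for t in thresholds:
        flagged = accels >= t
        tp = np.sum(flagged & diagnosed)
        fp = np.sum(flagged & ~diagnosed)
        fn = np.sum(~flagged & diagnosed)
        tn = np.sum(~flagged & ~diagnosed)
        results[t] = {
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
            "PV+": tp / (tp + fp) if tp + fp else float("nan"),
            "PV-": tn / (tn + fn) if tn + fn else float("nan"),
        }
    return results

rng = np.random.default_rng(1)
accels = rng.gamma(shape=2.0, scale=12.0, size=50_000)         # made-up impact distribution
diagnosed = np.zeros(accels.size, dtype=bool)
diagnosed[np.argsort(accels)[-200:]] = rng.random(200) < 0.05  # rare diagnoses among big hits
for cutoff, stats in indicator_performance(accels, diagnosed, range(50, 130, 10)).items():
    print(cutoff, stats)
```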

  1. Threshold Studies of the Microwave Instability in Electron Storage Rings

    Bane, Karl

    2010-01-01

    We use a Vlasov-Fokker-Planck program and a linearized Vlasov solver to study the microwave instability threshold of two impedance models: (1) a Q = 1 resonator and (2) shielded coherent synchrotron radiation (CSR), and find the results of the two programs agree well. For shielded CSR we show that only two dimensionless parameters, the shielding parameter Π and the strength parameter S_csr, are needed to describe the system. We further show that there is a strong instability associated with CSR, and that the threshold, to good approximation, is given by (S_csr)_th = 0.5 + 0.12Π. In particular, this means that shielding has little effect in stabilizing the beam for Π ≲ 3/2. We, in addition, find another instability in the vicinity of Π = 0.7 with a lower threshold, (S_csr)_th ∼ 0.2. We find that the threshold of this instability depends strongly on damping time, (S_csr)_th ∼ τ_p^(-1/2), and that the tune spread at threshold is small - both hallmarks of a weak instability.
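
    The threshold fit quoted above can be applied directly once the two dimensionless parameters are known. The tiny Python helper below simply encodes that fit; how Π and S_csr are computed from machine parameters is outside the scope of this abstract and is not assumed here.

```python
def csr_unstable(s_csr, pi_shield):
    """Check a shielded-CSR operating point against the approximate threshold
    (S_csr)_th = 0.5 + 0.12 * Pi quoted in the abstract above."""
    threshold = 0.5 + 0.12 * pi_shield
    return s_csr > threshold

# Example: a weakly shielded beam slightly above the fitted threshold
print(csr_unstable(s_csr=0.6, pi_shield=0.4))
```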

  2. Realistic Realizations Of Threshold Circuits

    Razavi, Hassan M.

    1987-08-01

    Threshold logic, in which each input is weighted, has many theoretical advantages over the standard gate realization, such as reducing the number of gates, interconnections, and power dissipation. However, because of the difficult synthesis procedure and complicated circuit implementation, its use in the design of digital systems is almost nonexistent. In this study, three methods of NMOS realization are discussed, and their advantages and shortcomings are explored. Also, the possibility of using the methods to realize multi-valued logic is examined.

  3. Root finding with threshold circuits

    Jeřábek, Emil

    2012-01-01

    Vol. 462, Nov 30 (2012), pp. 59-69 ISSN 0304-3975 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional support: RVO:67985840 Keywords: root finding * threshold circuit * power series Subject RIV: BA - General Mathematics Impact factor: 0.489, year: 2012 http://www.sciencedirect.com/science/article/pii/S0304397512008006#

  4. Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.

    Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew

    2018-01-01

    Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  5. Primordial black holes in linear and non-linear regimes

    Allahyari, Alireza; Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2017-06-01

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric space times in the linear regime. Thereby, we introduce two gauges. This allows to introduce a well defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition will impose an important constraint on the initial velocity perturbations, δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should satisfy δ_th > 0.7.

  6. Design proposal for door thresholds

    Smolka Radim

    2017-01-01

    Full Text Available Panels for openings in structures have always been an essential and integral part of buildings, yet their importance to a building's functionality was not recognised. The general view on this issue has since changed, with attention shifting from large planar segments and critical details to the sub-elements of these structures. This concerns not only the forms of the connecting joints but also the supporting systems that keep the panels in the right position and ensure they function properly. One of the most strained segments is the threshold structure, especially the entrance door threshold. It is the part where substantial construction defects occur in terms of waterproofing, as well as in its static, thermal and technical functions. In conventional buildings, this problem is solved by pulling the floor structure under the entrance door structure and subsequently covering it with waterproofing material. This system cannot work effectively over the long term, so local defects occur. A proposal is put forward to solve this problem by installing a sub-threshold door coupler made of composite materials. The coupler is designed so that its variability complies with the required parameters for most door structures on the European market.

  7. Color difference thresholds in dentistry.

    Paravina, Rade D; Ghinea, Razvan; Herrera, Luis J; Bona, Alvaro D; Igiel, Christopher; Linninger, Mercedes; Sakai, Maiko; Takahashi, Hidekazu; Tashkandi, Esam; Perez, Maria del Mar

    2015-01-01

    The aim of this prospective multicenter study was to determine the 50:50% perceptibility threshold (PT) and the 50:50% acceptability threshold (AT) of dental ceramic under simulated clinical settings. The spectral radiance of 63 monochromatic ceramic specimens was determined using a non-contact spectroradiometer. A total of 60 specimen pairs, divided into 3 sets of 20 specimen pairs (medium to light shades, medium to dark shades, and dark shades), were selected for the psychophysical experiment. The coordinating center and seven research sites obtained Institutional Review Board (IRB) approvals prior to the beginning of the experiment. Each research site had 25 observers, divided into five groups of five observers: dentists (D), dental students (S), dental auxiliaries (A), dental technicians (T), and lay persons (L). There were 35 observers per group (five observers per group at each site × 7 sites), for a total of 175 observers. Visual color comparisons were performed using a viewing booth. Takagi-Sugeno-Kang (TSK) fuzzy approximation was used for fitting the data points. The 50:50% PT and 50:50% AT were determined in CIELAB and CIEDE2000. The t-test was used to evaluate the statistical significance of threshold differences. The CIELAB 50:50% PT was ΔEab = 1.2, whereas the 50:50% AT was ΔEab = 2.7. Corresponding CIEDE2000 (ΔE00) values were 0.8 and 1.8, respectively. The 50:50% PT by observer group revealed differences among groups D, A, T, and L as compared with the 50:50% PT for all observers. The 50:50% AT for all observers was statistically different from the 50:50% AT in groups T and L. The 50:50% perceptibility and acceptability thresholds were significantly different, and the same is true for the differences between the two color difference formulas, ΔE00/ΔEab. Observer groups and sites showed a high level of statistical difference in all thresholds. Visual color difference thresholds can serve as a quality control tool to guide the selection of esthetic dental materials, evaluate clinical performance, and
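
    For context, the CIELAB color difference ΔEab reported above is the Euclidean distance between two colors in L*a*b* coordinates, which can then be compared against the study's 50:50% thresholds. The snippet below is a small illustrative Python helper; the threshold values are the ones quoted in the abstract, while the sample coordinates and the classification wording are made up for this example.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIELAB color difference: Euclidean distance in (L*, a*, b*) space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def classify(delta_e, pt=1.2, at=2.7):
    """Compare a difference against the 50:50% perceptibility (PT) and
    acceptability (AT) thresholds reported in the abstract (CIELAB units)."""
    if delta_e < pt:
        return "likely imperceptible"
    if delta_e < at:
        return "perceptible but likely acceptable"
    return "likely unacceptable"

shade_a = (72.0, 1.5, 18.0)   # hypothetical ceramic specimen
shade_b = (70.8, 1.9, 19.6)
d = delta_e_ab(shade_a, shade_b)
print(round(d, 2), classify(d))
```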

  8. Bedding material affects mechanical thresholds, heat thresholds and texture preference

    Moehring, Francie; O’Hara, Crystal L.; Stucky, Cheryl L.

    2015-01-01

    It has long been known that the bedding type animals are housed on can affect breeding behavior and cage environment. Yet little is known about its effects on evoked behavior responses or non-reflexive behaviors. C57BL/6 mice were housed for two weeks on one of five bedding types: Aspen Sani Chips® (standard bedding for our institute), ALPHA-Dri®, Cellu-Dri™, Pure-o’Cel™ or TEK-Fresh. Mice housed on Aspen exhibited the lowest (most sensitive) mechanical thresholds while those on TEK-Fresh exhibited 3-fold higher thresholds. While bedding type had no effect on responses to punctate or dynamic light touch stimuli, TEK-Fresh housed animals exhibited greater responsiveness in a noxious needle assay, than those housed on the other bedding types. Heat sensitivity was also affected by bedding as animals housed on Aspen exhibited the shortest (most sensitive) latencies to withdrawal whereas those housed on TEK-Fresh had the longest (least sensitive) latencies to response. Slight differences between bedding types were also seen in a moderate cold temperature preference assay. A modified tactile conditioned place preference chamber assay revealed that animals preferred TEK-Fresh to Aspen bedding. Bedding type had no effect in a non-reflexive wheel running assay. In both acute (two day) and chronic (5 week) inflammation induced by injection of Complete Freund’s Adjuvant in the hindpaw, mechanical thresholds were reduced in all groups regardless of bedding type, but TEK-Fresh and Pure-o’Cel™ groups exhibited a greater dynamic range between controls and inflamed cohorts than Aspen housed mice. PMID:26456764

  9. Performance analysis of next-generation lunar laser retroreflectors

    Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone

    2017-09-01

    Starting from 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCRs design contributed only a negligible fraction of the ranging error budget. Today the improvement over the years in the laser ground stations makes the lunar libration contribution relevant. So the libration now dominates the error budget limiting the precision of the experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at the INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the error contribution of the reflectors to the measurement of the lunar geodetic precession and other GR tests compared to Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and it is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the Moon surface in 2018. To validate/optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and simulated space environment. The focus of this paper is to describe the SCF_Lab specialized characterization of the performance of our next-generation LLR payload. While this payload will improve the contribution of the error budget of the space segment (MoonLIGHT-2

  10. Modeling of Volatility with Non-linear Time Series Model

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.
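
    To make the threshold autoregressive idea concrete, the sketch below simulates a simple two-regime TAR(1) process in Python: the autoregressive coefficient switches according to whether the previous observation lies above or below a threshold. The coefficients, threshold, and innovation variance are illustrative assumptions, and the error term here is plain Gaussian noise rather than the paper's AARCH specification.

```python
import numpy as np

def simulate_tar1(n, phi_low=0.3, phi_high=0.8, threshold=0.0, sigma=1.0, seed=0):
    """Simulate a two-regime TAR(1) process:
        y_t = phi_low  * y_{t-1} + e_t   if y_{t-1} <= threshold
        y_t = phi_high * y_{t-1} + e_t   otherwise
    with e_t ~ N(0, sigma^2). All parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

series = simulate_tar1(500)
print(series[:5])
```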

  11. Microbiome Selection Could Spur Next-Generation Plant Breeding Strategies.

    Gopal, Murali; Gupta, Alka

    2016-01-01

    " No plant is an island too …" Plants, though sessile, have developed a unique strategy to counter biotic and abiotic stresses by symbiotically co-evolving with microorganisms and tapping into their genome for this purpose. Soil is the bank of microbial diversity from which a plant selectively sources its microbiome to suit its needs. Besides soil, seeds, which carry the genetic blueprint of plants during trans-generational propagation, are home to diverse microbiota that acts as the principal source of microbial inoculum in crop cultivation. Overall, a plant is ensconced both on the outside and inside with a diverse assemblage of microbiota. Together, the plant genome and the genes of the microbiota that the plant harbors in different plant tissues, i.e., the 'plant microbiome,' form the holobiome which is now considered as unit of selection: 'the holobiont.' The 'plant microbiome' not only helps plants to remain fit but also offers critical genetic variability, hitherto, not employed in the breeding strategy by plant breeders, who traditionally have exploited the genetic variability of the host for developing high yielding or disease tolerant or drought resistant varieties. This fresh knowledge of the microbiome, particularly of the rhizosphere, offering genetic variability to plants, opens up new horizons for breeding that could usher in cultivation of next-generation crops depending less on inorganic inputs, resistant to insect pest and diseases and resilient to climatic perturbations. We surmise, from ever increasing evidences, that plants and their microbial symbionts need to be co-propagated as life-long partners in future strategies for plant breeding. In this perspective, we propose bottom-up approach to co-propagate the co-evolved, the plant along with the target microbiome, through - (i) reciprocal soil transplantation method, or (ii) artificial ecosystem selection method of synthetic microbiome inocula, or (iii) by exploration of microRNA transfer

  12. Optimizing Systems of Threshold Detection Sensors

    Banschbach, David C

    2008-01-01

    .... Below the threshold all signals are ignored. We develop a mathematical model for setting individual sensor thresholds to obtain optimal probability of detecting a significant event, given a limit on the total number of false positives allowed...

  13. 11 CFR 9036.1 - Threshold submission.

    2010-01-01

    ... credit or debit card, including one made over the Internet, the candidate shall provide sufficient... section shall not count toward the threshold amount. (c) Threshold certification by Commission. (1) After...

  14. Linearly constrained minimax optimization

    Madsen, Kaj; Schjær-Jacobsen, Hans

    1978-01-01

    We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
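
    One way to picture the successive-linear-approximation approach mentioned above is that, at each iterate, the nonlinear functions are replaced by first-order Taylor expansions and the resulting linearized minimax problem is solved as a linear program within a step restriction. The fragment below is a hypothetical Python sketch of a single such subproblem using scipy.optimize.linprog; the function names, the trust-region rule, and the sample numbers are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_lp_step(f_vals, grads, x_k, A_ub, b_ub, trust_radius=0.5):
    """Solve one linearized minimax subproblem:
        minimize   t
        subject to f_i(x_k) + grad_i . d <= t    (linearized max terms)
                   A_ub (x_k + d) <= b_ub        (original linear inequalities)
                   |d_j| <= trust_radius         (step restriction)
    Returns the step d.
    """
    m, n = grads.shape
    # Decision variables are (d_1..d_n, t); the objective is t
    c = np.zeros(n + 1)
    c[-1] = 1.0

    # grad_i . d - t <= -f_i(x_k)
    A1 = np.hstack([grads, -np.ones((m, 1))])
    b1 = -np.asarray(f_vals)

    # A_ub d <= b_ub - A_ub x_k   (t is unconstrained in these rows)
    A2 = np.hstack([A_ub, np.zeros((A_ub.shape[0], 1))])
    b2 = b_ub - A_ub @ x_k

    bounds = [(-trust_radius, trust_radius)] * n + [(None, None)]
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
                  bounds=bounds, method="highs")
    return res.x[:n]

# Two residuals at x_k = (1, 1) and one linear constraint x1 + x2 <= 3
x_k = np.array([1.0, 1.0])
f_vals = np.array([2.0, 1.5])
grads = np.array([[1.0, -0.5], [-0.3, 2.0]])
step = minimax_lp_step(f_vals, grads, x_k, A_ub=np.array([[1.0, 1.0]]), b_ub=np.array([3.0]))
print(step)
```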

  15. Nuclear thermodynamics below particle threshold

    Schiller, A.; Agvaanluvsan, U.; Algin, E.; Bagheri, A.; Chankova, R.; Guttormsen, M.; Hjorth-Jensen, M.; Rekstad, J.; Siem, S.; Sunde, A. C.; Voinov, A.

    2005-01-01

    From a starting point of experimentally measured nuclear level densities, we discuss thermodynamical properties of nuclei below the particle emission threshold. Since nuclei are essentially mesoscopic systems, a straightforward generalization of macroscopic ensemble theory often yields unphysical results. A careful critique of traditional thermodynamical concepts reveals problems commonly encountered in mesoscopic systems. One of these is the fact that microcanonical and canonical ensemble theory yield different results; another concerns the introduction of temperature for small, closed systems. Finally, the concept of phase transitions is investigated for mesoscopic systems.

  16. Network Restoration for Next-Generation Communication and Computing Networks

    B. S. Awoyemi

    2018-01-01

    Full Text Available Network failures are undesirable but inevitable occurrences for most modern communication and computing networks. A good network design must be robust enough to handle sudden failures, maintain traffic flow, and restore failed parts of the network within a permissible time frame, at the lowest cost achievable and with as little extra complexity in the network as possible. Emerging next-generation (xG communication and computing networks such as fifth-generation networks, software-defined networks, and internet-of-things networks have promises of fast speeds, impressive data rates, and remarkable reliability. To achieve these promises, these complex and dynamic xG networks must be built with low failure possibilities, high network restoration capacity, and quick failure recovery capabilities. Hence, improved network restoration models have to be developed and incorporated in their design. In this paper, a comprehensive study on network restoration mechanisms that are being developed for addressing network failures in current and emerging xG networks is carried out. Open-ended problems are identified, while invaluable ideas for better adaptation of network restoration to evolving xG communication and computing paradigms are discussed.

  17. Next-generation phenomics for the Tree of Life.

    Burleigh, J Gordon; Alphonse, Kenzley; Alverson, Andrew J; Bik, Holly M; Blank, Carrine; Cirranello, Andrea L; Cui, Hong; Daly, Marymegan; Dietterich, Thomas G; Gasparich, Gail; Irvine, Jed; Julius, Matthew; Kaufman, Seth; Law, Edith; Liu, Jing; Moore, Lisa; O'Leary, Maureen A; Passarotti, Maria; Ranade, Sonali; Simmons, Nancy B; Stevenson, Dennis W; Thacker, Robert W; Theriot, Edward C; Todorovic, Sinisa; Velazco, Paúl M; Walls, Ramona L; Wolfe, Joanna M; Yu, Mengjie

    2013-06-26

    The phenotype represents a critical interface between the genome and the environment in which organisms live and evolve. Phenotypic characters are also a rich source of biodiversity data for tree building, and they enable scientists to reconstruct the evolutionary history of organisms, including most fossil taxa, for which genetic data are unavailable. Therefore, phenotypic data are necessary for building a comprehensive Tree of Life. In contrast to molecular sequencing, which has become faster and cheaper through recent technological advances, phenotypic data collection often remains prohibitively slow and expensive. The next-generation phenomics project is a collaborative, multidisciplinary effort to leverage advances in image analysis, crowdsourcing, and natural language processing to develop and implement novel approaches for discovering and scoring the phenome, the collection of phenotypic characters for a species. This research represents a new approach to data collection that has the potential to transform phylogenetics research and to enable rapid advances in constructing the Tree of Life. Our goal is to assemble large phenomic datasets built using new methods and to provide the public and scientific community with tools for phenomic data assembly that will enable rapid and automated study of phenotypes across the Tree of Life.

  18. Machine learning and next-generation asteroid surveys

    Nugent, Carrie R.; Dailey, John; Cutri, Roc M.; Masci, Frank J.; Mainzer, Amy K.

    2017-10-01

    Next-generation surveys such as NEOCam (Mainzer et al., 2016) will sift through tens of millions of point source detections daily to detect and discover asteroids. This requires new, more efficient techniques to distinguish between solar system objects, background stars and galaxies, and artifacts such as cosmic rays, scattered light and diffraction spikes. Supervised machine learning is a set of algorithms that allows computers to classify data on a training set, and then apply that classification to make predictions on new datasets. It has been employed by a broad range of fields, including computer vision, medical diagnoses, economics, and natural language processing. It has also been applied to astronomical datasets, including transient identification in the Palomar Transient Factory pipeline (Masci et al., 2016), and in the Pan-STARRS1 difference imaging (D. E. Wright et al., 2015). As part of the NEOCam extended phase A work, we apply machine learning techniques to the problem of asteroid detection. Asteroid detection is an ideal application of supervised learning, as there is a wealth of metrics associated with each extracted source, and suitable training sets are easily created. Using the vetted NEOWISE dataset (E. L. Wright et al., 2010, Mainzer et al., 2011) as a proof-of-concept of this technique, we applied the Python package sklearn. We report on reliability, feature set selection, and the suitability of various algorithms.
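
    The abstract names scikit-learn but gives no code, so the following is a minimal sketch of the kind of supervised workflow described, with synthetic detection metrics and labels standing in for the vetted NEOWISE training data; the feature names and the classifier choice are assumptions.

```python
# Minimal supervised-learning sketch with scikit-learn: train a classifier on
# labeled per-detection metrics, then check reliability on a held-out set.
# Features and labels are synthetic placeholders, not NEOWISE data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Assumed per-detection metrics: signal-to-noise, PSF fit quality, apparent motion.
X = np.column_stack([
    rng.normal(8, 3, n),      # snr
    rng.normal(1.0, 0.3, n),  # psf_chi2
    rng.normal(0.5, 0.4, n),  # motion (arcsec/hr)
])
y = (X[:, 0] > 7) & (X[:, 2] > 0.3)   # toy rule standing in for "real asteroid" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```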

  19. Next-Generation Dengue Vaccines: Novel Strategies Currently Under Development

    Anna P. Durbin

    2011-09-01

    Full Text Available Dengue has become the most important arboviral infection worldwide with more than 30 million cases of dengue fever estimated to occur each year. The need for a dengue vaccine is great and several live attenuated dengue candidate vaccines are proceeding through clinical evaluation. The need to induce a balanced immune response against all four DENV serotypes with a single vaccine has been a challenge for dengue vaccine developers. A live attenuated DENV chimeric vaccine produced by Sanofi Pasteur has recently entered Phase III evaluation in numerous dengue-endemic regions of the world. Viral interference between serotypes contained in live vaccines has required up to three doses of the vaccine be given over a 12-month period of time. For this reason, novel DENV candidate vaccines are being developed with the goal of achieving a protective immune response with an immunization schedule that can be given over the course of a few months. These next-generation candidates include DNA vaccines, recombinant adenovirus vectored vaccines, alphavirus replicons, and sub-unit protein vaccines. Several of these novel candidates will be discussed.

  20. Next-generation dengue vaccines: novel strategies currently under development.

    Durbin, Anna P; Whitehead, Stephen S

    2011-10-01

    Dengue has become the most important arboviral infection worldwide with more than 30 million cases of dengue fever estimated to occur each year. The need for a dengue vaccine is great and several live attenuated dengue candidate vaccines are proceeding through clinical evaluation. The need to induce a balanced immune response against all four DENV serotypes with a single vaccine has been a challenge for dengue vaccine developers. A live attenuated DENV chimeric vaccine produced by Sanofi Pasteur has recently entered Phase III evaluation in numerous dengue-endemic regions of the world. Viral interference between serotypes contained in live vaccines has required up to three doses of the vaccine be given over a 12-month period of time. For this reason, novel DENV candidate vaccines are being developed with the goal of achieving a protective immune response with an immunization schedule that can be given over the course of a few months. These next-generation candidates include DNA vaccines, recombinant adenovirus vectored vaccines, alphavirus replicons, and sub-unit protein vaccines. Several of these novel candidates will be discussed.

  1. The Next-Generation Very Large Array: Technical Overview

    McKinnon, Mark; Selina, Rob

    2018-01-01

    As part of its mandate as a national observatory, the NRAO is looking toward the long-range future of radio astronomy and fostering the long-term growth of the US astronomical community. NRAO has sponsored a series of science and technical community meetings to consider the science mission and design of a next-generation Very Large Array (ngVLA), building on the legacies of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large Array (VLA). The basic ngVLA design emerging from these discussions is an interferometric array with approximately ten times the sensitivity and ten times higher spatial resolution than the VLA and ALMA radio telescopes, optimized for operation in the wavelength range 0.3 cm to 3 cm. The ngVLA would open a new window on the Universe through ultra-sensitive imaging of thermal line and continuum emission down to milli-arcsecond resolution, as well as unprecedented broadband continuum polarimetric imaging of non-thermal processes. The specifications and concepts for major ngVLA system elements are rapidly converging. We will provide an overview of the current system design of the ngVLA. The concepts for major system elements such as the antenna, receiving electronics, and central signal processing will be presented. We will also describe the major development activities that are presently underway to advance the design.

  2. Analysis Tools for Next-Generation Hadron Spectroscopy Experiments

    Battaglieri, M.; Briscoe, B. J.; Celentano, A.; Chung, S.-U.; D'Angelo, A.; De Vita, R.; Döring, M.; Dudek, J.; Eidelman, S.; Fegan, S.; Ferretti, J.; Filippi, A.; Fox, G.; Galata, G.; García-Tecocoatzi, H.; Glazier, D. I.; Grube, B.; Hanhart, C.; Hoferichter, M.; Hughes, S. M.; Ireland, D. G.; Ketzer, B.; Klein, F. J.; Kubis, B.; Liu, B.; Masjuan, P.; Mathieu, V.; McKinnon, B.; Mitchel, R.; Nerling, F.; Paul, S.; Peláez, J. R.; Rademacker, J.; Rizzo, A.; Salgado, C.; Santopinto, E.; Sarantsev, A. V.; Sato, T.; Schlüter, T.; da Silva, M. L. L.; Stankovic, I.; Strakovsky, I.; Szczepaniak, A.; Vassallo, A.; Walford, N. K.; Watts, D. P.; Zana, L.

    The series of workshops on New Partial-Wave Analysis Tools for Next-Generation Hadron Spectroscopy Experiments was initiated with the ATHOS 2012 meeting, which took place in Camogli, Italy, June 20-22, 2012. It was followed by ATHOS 2013 in Kloster Seeon near Munich, Germany, May 21-24, 2013. The third, ATHOS3, meeting is planned for April 13-17, 2015 at The George Washington University Virginia Science and Technology Campus, USA. The workshops focus on the development of amplitude analysis tools for meson and baryon spectroscopy, and complement other programs in hadron spectroscopy organized in the recent past including the INT-JLab Workshop on Hadron Spectroscopy in Seattle in 2009, the International Workshop on Amplitude Analysis in Hadron Spectroscopy at the ECT*-Trento in 2011, the School on Amplitude Analysis in Modern Physics in Bad Honnef in 2011, the Jefferson Lab Advanced Study Institute Summer School in 2012, and the School on Concepts of Modern Amplitude Analysis Techniques in Flecken-Zechlin near Berlin in September 2013. The aim of this document is to summarize the discussions that took place at the ATHOS 2012 and ATHOS 2013 meetings. We do not attempt a comprehensive review of the field of amplitude analysis, but offer a collection of thoughts that we hope may lay the ground for such a document.

  3. Analysis Tools for Next-Generation Hadron Spectroscopy Experiments

    Battaglieri, Marco; Briscoe, William; Celentano, Andrea; Chung, Suh-Urk; D'Angelo, Annalisa; De Vita, Rafaella; Döring, Michael; Dudek, Jozef; Eidelman, S.; Fegan, Stuart; Ferretti, J.; Filippi, A.; Fox, G.; Galata, G.; Garcia-Tecocoatzi, H.; Glazier, Derek; Grube, B.; Hanhart, C.; Hoferichter, M.; Hughes, S. M.; Ireland, David G.; Ketzer, B.; Klein, Franz J.; Kubis, B.; Liu, B.; Masjuan, P.; Mathieu, Vincent; McKinnon, Brian; Mitchel, R.; Nerling, F.; Paul, S.; Peláez, J. R.; Rademacker, J.; Rizzo, Alessandro; Salgado, Carlos; Santopinto, E.; Sarantsev, Andrey V.; Sato, Toru; Schlüter, T.; Da Silva, M. L.L.; Stankovic, I.; Strakovsky, Igor; Szczepaniak, Adam; Vassallo, A.; Walford, Natalie K.; Watts, Daniel P.

    2015-01-01

    The series of workshops on New Partial-Wave Analysis Tools for Next-Generation Hadron Spectroscopy Experiments was initiated with the ATHOS 2012 meeting, which took place in Camogli, Italy, June 20-22, 2012. It was followed by ATHOS 2013 in Kloster Seeon near Munich, Germany, May 21-24, 2013. The third, ATHOS3, meeting is planned for April 13-17, 2015 at The George Washington University Virginia Science and Technology Campus, USA. The workshops focus on the development of amplitude analysis tools for meson and baryon spectroscopy, and complement other programs in hadron spectroscopy organized in the recent past including the INT-JLab Workshop on Hadron Spectroscopy in Seattle in 2009, the International Workshop on Amplitude Analysis in Hadron Spectroscopy at the ECT*-Trento in 2011, the School on Amplitude Analysis in Modern Physics in Bad Honnef in 2011, the Jefferson Lab Advanced Study Institute Summer School in 2012, and the School on Concepts of Modern Amplitude Analysis Techniques in Flecken-Zechlin near Berlin in September 2013. The aim of this document is to summarize the discussions that took place at the ATHOS 2012 and ATHOS 2013 meetings. We do not attempt a comprehensive review of the field of amplitude analysis, but offer a collection of thoughts that we hope may lay the ground for such a document

  4. Next-generation sequence analysis of cancer xenograft models.

    Fernando J Rossello

    Full Text Available Next-generation sequencing (NGS) studies in cancer are limited by the amount, quality and purity of tissue samples. In this situation, primary xenografts have proven to be useful preclinical models. However, the presence of mouse-derived stromal cells represents a technical challenge to their use in NGS studies. We examined this problem in an established primary xenograft model of small cell lung cancer (SCLC), a malignancy often diagnosed from small biopsy or needle aspirate samples. Using an in silico strategy that assigns reads according to species of origin, we prospectively compared NGS data from primary xenograft models with matched cell lines and with published datasets. We show here that low-coverage whole-genome analysis demonstrated remarkable concordance between published genome data and internal controls, despite the presence of mouse genomic DNA. Exome capture sequencing revealed that this enrichment procedure was highly species-specific, with less than 4% of reads aligning to the mouse genome. Human-specific expression profiling with RNA-Seq replicated array-based gene expression experiments, whereas mouse-specific transcript profiles correlated with published datasets from human cancer stroma. We conclude that primary xenografts represent a useful platform for complex NGS analysis in cancer research for tumours with limited sample resources, or those with prominent stromal cell populations.

  5. Toward green next-generation passive optical networks

    Srivastava, Anand

    2015-01-01

    Energy efficiency has become an increasingly important aspect of designing access networks, due to both increased concerns for global warming and increased network costs related to energy consumption. Among access, metro, and core networks, the access segment constitutes a substantial part of the per-subscriber network energy consumption and is regarded as the bottleneck for increased network energy efficiency. One of the main opportunities for reducing network energy consumption lies in efficiency improvements of the customer premises equipment (CPE). Access networks in general are designed for low utilization while supporting high peak access rates. The combination of a large contribution to overall network power consumption and low utilization implies large potential for CPE power saving modes where functionality is powered off during periods of idleness. The next-generation passive optical network (NG-PON), which is considered one of the most promising optical access networks, has notably matured in the past few years and is envisioned to massively evolve in the near future. This trend will increase the power requirements of NG-PON and make it less attractive. This paper will first provide a comprehensive survey of the previously reported studies on tackling this problem. A novel solution framework is then introduced, which aims to explore the maximum design dimensions and achieve the best possible power saving while maintaining the QoS requirements for each type of service.

  6. Rucio, the next-generation Data Management system in ATLAS

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation of Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ new automation framework to reduce operational overheads. This paper shows the key concepts of Rucio, details the Rucio design, and the technology it employs, the tests that were conducted to validate it and finally describes the migration steps that were conducted to move from DQ2 to Rucio.

  7. Rucio, the next-generation Data Management system in ATLAS

    Serfon, C; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2016-01-01

    Rucio is the next-generation of Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ new automation framework to reduce operational overheads. This paper shows the key concepts of Rucio, details the Rucio design, and the technology it employs, the tests that were conducted to validate it and finally describes the migration steps that were conducted to move from DQ2 to Rucio.

  8. Rucio, the next-generation Data Management system in ATLAS

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2014-01-01

    Rucio is the next-generation of Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ new automation framework to reduce operational overheads. In this talk, we will present the history of the DDM project and the experience of data management operation in ATLAS computing. Thus, We will show the key concepts of Rucio, including its data organization. The Rucio design, and the technology it e...

  9. SAMSIN: the next-generation servo-manipulator

    Adams, R.H.; Jennrich, C.E.; Korpi, K.W.

    1985-01-01

    The Central Research Laboratories (CRL) Division of Sargent Industries is now developing SAMSIN, a next-generation servo-manipulator. SAMSIN is an acronym for Servo-Actuated Manipulator Systems with Intelligent Networks. This paper discusses the objectives of this development and describes the key features of the servo-manipulator system. There are three main objectives in the SAMSIN development: adaptability, reliability, and maintainability. SAMSIN utilizes standard Sargent/CRL sealed master and slave manipulator arms as well as newly developed compact versions. The mechanical arms have more than 20 yr of successful performance in industrial applications such as hot cells, high vacuums, fuel pools, and explosives handling. The servo-actuator package is in a protective enclosure, which may be sealed in various ways from the remote environment. The force limiting characteristics of the servo-actuators extend motion tendon life. Protective bootings increase the reliability of the arms in an environment that is high in airborne contamination. These bootings also simplify the decontamination of the system. The modularity in construction permits quick removal and replacement of slave arms, wrist joints, tong fingers, and actuator packages for maintenance. SAMSIN utilizes readily available off-the-shelf actuator and control system components. Each manipulator motion uses the same actuator and control system components

  10. Targeted next-generation sequencing in monogenic dyslipidemias.

    Hegele, Robert A; Ban, Matthew R; Cao, Henian; McIntyre, Adam D; Robinson, John F; Wang, Jian

    2015-04-01

    To evaluate the potential clinical translation of high-throughput next-generation sequencing (NGS) methods in diagnosis and management of dyslipidemia. Recent NGS experiments indicate that most causative genes for monogenic dyslipidemias are already known. Thus, monogenic dyslipidemias can now be diagnosed using targeted NGS. Targeting of dyslipidemia genes can be achieved by either: designing custom reagents for a dyslipidemia-specific NGS panel; or performing genome-wide NGS and focusing on genes of interest. Advantages of the former approach are lower cost and limited potential to detect incidental pathogenic variants unrelated to dyslipidemia. However, the latter approach is more flexible because masking criteria can be altered as knowledge advances, with no need for re-design of reagents or follow-up sequencing runs. Also, the cost of genome-wide analysis is decreasing and ethical concerns can likely be mitigated. DNA-based diagnosis is already part of the clinical diagnostic algorithms for familial hypercholesterolemia. Furthermore, DNA-based diagnosis is supplanting traditional biochemical methods to diagnose chylomicronemia caused by deficiency of lipoprotein lipase or its co-factors. The increasing availability and decreasing cost of clinical NGS for dyslipidemia means that its potential benefits can now be evaluated on a larger scale.

  11. Heterogeneous next-generation wireless network interference model-and its applications

    Mahmood, Nurul Huda; Yilmaz, Ferkan; Alouini, Mohamed-Slim; Ø ien, Geir Egil

    2014-01-01

    Next-generation wireless systems facilitating better utilisation of the scarce radio spectrum have emerged as a response to inefficient and rigid spectrum assignment policies. These are comprised of intelligent radio nodes that opportunistically

  12. NGSUtils: a software suite for analyzing and manipulating next-generation sequencing datasets

    Breese, Marcus R.; Liu, Yunlong

    2013-01-01

    Summary: NGSUtils is a suite of software tools for manipulating data common to next-generation sequencing experiments, such as FASTQ, BED and BAM format files. These tools provide a stable and modular platform for data management and analysis.

  13. Polymer Derived Yttrium Silicate Ablative TPS Materials for Next-Generation Exploration Missions, Phase I

    National Aeronautics and Space Administration — Through the proposed NASA SBIR program, NanoSonic will optimize its HybridSil® derived yttrium silicates to serve as next-generation reinforcement for carbon and...

  14. Next-Generation Ultra-Compact Stowage/Lightweight Solar Array System, Phase I

    National Aeronautics and Space Administration — Deployable Space Systems, Inc. (DSS) has developed a next-generation high performance solar array system that has game-changing performance metrics in terms of...

  15. Clinical utility of a 377 gene custom next-generation sequencing ...

    JEN BEVILACQUA

    2017-07-26

    Jul 26, 2017 ... Clinical utility of a 377 gene custom next-generation sequencing epilepsy panel ... number of genes, making it a very attractive option for a condition as .... clinical value of various test offerings to guide decision making.

  16. Next-Generation Thermal Infrared Multi-Body Radiometer Experiment (TIMBRE)

    Kenyon, M.; Mariani, G.; Johnson, B.; Brageot, E.; Hayne, P.

    2016-10-01

    We have developed an instrument concept called TIMBRE which belongs to the important class of instruments called thermal imaging radiometers (TIRs). TIMBRE is the next-generation TIR with unparalleled performance compared to the state-of-the-art.

  17. Compositional threshold for Nuclear Waste Glass Durability

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-01-01

    Within the composition space of glasses, a distinct threshold appears to exist that separates 'good' glasses, i.e., those which are sufficiently durable, from 'bad' glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region

  18. Threshold Concepts in Finance: Student Perspectives

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-01-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by…

  19. Epidemic threshold in directed networks

    Li, Cong; Wang, Huijuan; Van Mieghem, Piet

    2013-12-01

    Epidemics have so far been mostly studied in undirected networks. However, many real-world networks, such as the online social network Twitter and the world wide web, on which information, emotion, or malware spreads, are directed networks, composed of both unidirectional links and bidirectional links. We define the directionality ξ as the percentage of unidirectional links. The epidemic threshold τc for the susceptible-infected-susceptible (SIS) epidemic is lower bounded by 1/λ1 in directed networks, where λ1, also called the spectral radius, is the largest eigenvalue of the adjacency matrix. In this work, we propose two algorithms to generate directed networks with a given directionality ξ. The effect of ξ on the spectral radius λ1, principal eigenvector x1, spectral gap (λ1-λ2), and algebraic connectivity μN-1 is studied. Important findings are that the spectral radius λ1 decreases with the directionality ξ, whereas the spectral gap and the algebraic connectivity increase with the directionality ξ. The extent of the decrease of the spectral radius depends on both the degree distribution and the degree-degree correlation ρD. Hence, in directed networks, the epidemic threshold is larger and a random walk converges to its steady state faster than that in undirected networks with the same degree distribution.
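
    A small numerical sketch of the quantities discussed above: for a randomly generated directed adjacency matrix (a simple stand-in for the two generation algorithms proposed in the paper), compute the directionality ξ, the spectral radius λ1, and the SIS epidemic-threshold lower bound 1/λ1.

```python
# Sketch: directionality xi and the SIS threshold lower bound 1/lambda_1 for a
# directed network. The random-graph construction is a simple stand-in.
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 0.05
A = (rng.random((N, N)) < p).astype(float)   # directed adjacency matrix
np.fill_diagonal(A, 0)

links = np.argwhere(A > 0)
unidirectional = sum(1 for i, j in links if A[j, i] == 0)
xi = unidirectional / len(links)             # fraction of unidirectional links

lambda_1 = np.abs(np.linalg.eigvals(A)).max()   # spectral radius of A
print(f"directionality xi = {xi:.2f}, spectral radius = {lambda_1:.2f}, "
      f"threshold lower bound 1/lambda_1 = {1 / lambda_1:.3f}")
```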

  20. Computational gestalts and perception thresholds.

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt" from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.

  1. Threshold enhancement of diphoton resonances

    Bharucha, Aoife; Goudelis, Andreas

    2016-10-10

    The data collected by the LHC collaborations at an energy of 13 TeV indicates the presence of an excess in the diphoton spectrum that would correspond to a resonance of a 750 GeV mass. The apparently large production cross section is nevertheless very difficult to explain in minimal models. We consider the possibility that the resonance is a pseudoscalar boson $A$ with a two-photon decay mediated by a charged and uncolored fermion having a mass at the $\tfrac{1}{2}M_A$ threshold and a very small decay width, $\ll 1$ MeV; one can then generate a large enhancement of the $A\gamma\gamma$ amplitude which explains the excess without invoking a large multiplicity of particles propagating in the loop, large electric charges and/or very strong Yukawa couplings. The implications of such a threshold enhancement are discussed in two explicit scenarios: i) the Minimal Supersymmetric Standard Model in which the $A$ state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through...

  2. Next-Generation Sequencing in Neuropathologic Diagnosis of Infections of the Nervous System (Open Access)

    2016-06-13

    Objective: To determine the feasibility of next-generation sequencing (NGS) microbiome approaches in the diagnosis of infections of the nervous system.

  3. Foundations of linear and generalized linear models

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  4. Thresholding of auditory cortical representation by background noise

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
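
    The level dependence described above amounts to a rectified-linear (threshold-linear) shift of the tone threshold with noise level; the toy function below writes that relationship out with assumed parameter values (base threshold, critical noise level, slope), which are illustrative only.

```python
# Toy model of the reported effect: below an assumed critical noise level the
# tone-response threshold is unchanged; above it the threshold rises linearly
# with noise intensity. Parameter values are assumptions, not measured data.
def tone_threshold(noise_db, base_db=20.0, critical_db=35.0, slope=1.0):
    """Intensity threshold (dB SPL) for tone-evoked responses in background noise."""
    return base_db + slope * max(0.0, noise_db - critical_db)

for noise in (20, 35, 50, 65):
    print(f"noise {noise} dB -> threshold {tone_threshold(noise):.0f} dB SPL")
```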

  5. Thresholding of auditory cortical representation by background noise.

    Liang, Feixue; Bai, Lin; Tao, Huizhong W; Zhang, Li I; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.

  6. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth has been much higher than economic growth in Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent until an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan
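
    As a minimal illustration of a two-regime error-correction adjustment (not the full Hansen-Seo estimator, which also estimates the threshold and tests for threshold cointegration), the sketch below fits a long-run relation by OLS on simulated series and then estimates separate adjustment speeds on either side of an assumed threshold on the lagged equilibrium error.

```python
# Minimal two-regime (threshold) error-correction sketch on simulated data.
# The series, the fixed threshold, and the single-lag specification are all
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
T = 300
energy = np.cumsum(rng.normal(0.01, 0.05, T))   # assumed log energy consumption
gdp = 0.8 * energy + rng.normal(0.0, 0.05, T)   # assumed log GDP, cointegrated with energy

# Long-run relation gdp_t = a + b * energy_t (OLS); ect is the equilibrium error.
b, a = np.polyfit(energy, gdp, 1)
ect = gdp - (a + b * energy)

d_gdp, ect_lag = np.diff(gdp), ect[:-1]
gamma = np.quantile(ect_lag, 0.5)               # assumed threshold location

for name, mask in [("regime 1 (ect <= gamma)", ect_lag <= gamma),
                   ("regime 2 (ect >  gamma)", ect_lag > gamma)]:
    X = np.column_stack([np.ones(mask.sum()), ect_lag[mask]])
    coef, *_ = np.linalg.lstsq(X, d_gdp[mask], rcond=None)
    print(f"{name}: error-correction coefficient = {coef[1]:+.3f}")
```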

  7. Regional rainfall thresholds for landslide occurrence using a centenary database

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia

    2017-04-01

    Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower limit and upper limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess the performance of the thresholds using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides that caused fatalities, injuries, missing people, and evacuated or homeless people in Portugal from 1865 to 2010. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the DISASTER database was recently updated to also include the landslides that did not cause any human damage, which were also considered for this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of the rainfall data, with records that started in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity-duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity-duration) with the highest return period was
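
    The return-period step described above can be sketched as follows: fit a Gumbel distribution to annual maxima of cumulative antecedent rainfall and convert a given rainfall amount into a return period via the survival function. The data and the 30-day duration below are placeholders, not the centenary Lisboa-Geofísico record.

```python
# Sketch of the Gumbel return-period step with synthetic annual maxima of
# 30-day antecedent rainfall (placeholders for the centenary record).
from scipy.stats import gumbel_r

annual_max_30day = gumbel_r.rvs(loc=250, scale=60, size=100, random_state=0)  # mm, synthetic

loc, scale = gumbel_r.fit(annual_max_30day)   # fit location and scale to the maxima

def return_period_years(rain_mm):
    """Return period (years) of a given 30-day antecedent rainfall amount."""
    return 1.0 / gumbel_r.sf(rain_mm, loc=loc, scale=scale)

for q in (300, 400, 500):
    print(f"{q} mm / 30 days -> return period ~ {return_period_years(q):.0f} years")
```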

  8. NASA Fluid Lensing & MiDAR: Next-Generation Remote Sensing Technologies for Aquatic Remote Sensing

    Chirayath, Ved

    2018-01-01

    We present two recent instrument technology developments at NASA, Fluid Lensing and MiDAR, and their application to remote sensing of Earth's aquatic systems. Fluid Lensing is the first remote sensing technology capable of imaging through ocean waves in 3D at sub-cm resolutions. MiDAR is a next-generation active hyperspectral remote sensing and optical communications instrument capable of active fluid lensing. Fluid Lensing has been used to provide 3D multispectral imagery of shallow marine systems from unmanned aerial vehicles (UAVs, or drones), including coral reefs in American Samoa and stromatolite reefs in Hamelin Pool, Western Australia. MiDAR is being deployed on aircraft and underwater remotely operated vehicles (ROVs) to enable a new method for remote sensing of living and nonliving structures in extreme environments. MiDAR images targets with high-intensity narrowband structured optical radiation to measure an object's non-linear spectral reflectance, image through fluid interfaces such as ocean waves with active fluid lensing, and simultaneously transmit high-bandwidth data. As an active instrument, MiDAR is capable of remotely sensing reflectance at the centimeter (cm) spatial scale with a signal-to-noise ratio (SNR) multiple orders of magnitude higher than passive airborne and spaceborne remote sensing systems with significantly reduced integration time. This allows for rapid video-frame-rate hyperspectral sensing into the far ultraviolet and VNIR wavelengths. Previously, MiDAR was developed into a TRL 2 laboratory instrument capable of imaging in thirty-two narrowband channels across the VNIR spectrum (400-950 nm). Recently, MiDAR UV was raised to TRL 4 and expanded to include five ultraviolet bands from 280-400 nm, permitting UV remote sensing capabilities in UV A, B, and C bands and enabling mineral identification and stimulated fluorescence measurements of organic proteins and compounds, such as green fluorescent proteins in terrestrial and

  9. The construction of next-generation matrices for compartmental epidemic models.

    Diekmann, O; Heesterbeek, J A P; Roberts, M G

    2010-06-06

    The basic reproduction number R0 is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of R0 where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed, easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of R0. We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number R0. Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that R0 defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that R0 > 1 if and only if r > 0, and R0 = 1 if and only if r = 0.
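
    A minimal sketch of the recipe for a simple SEIR-type example (the transmission matrix F, transition matrix V and parameter values below are illustrative, not taken from the paper): R0 is the dominant eigenvalue of the next-generation matrix K = F V⁻¹, which for this model reduces to β/γ.

```python
# Next-generation-matrix sketch for an SEIR-type model. F holds rates of new
# infections; V holds transitions among/out of the infected compartments
# (E, I). R0 is the dominant eigenvalue of K = F V^-1. Parameters are assumed.
import numpy as np

beta, sigma, gamma = 0.6, 0.2, 0.25   # transmission, incubation and recovery rates (assumed)

F = np.array([[0.0, beta],            # new infections enter E through contact with I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],           # E -> I at rate sigma; I recovers at rate gamma
              [-sigma, gamma]])

K = F @ np.linalg.inv(V)              # next-generation matrix (large-domain version)
R0 = np.abs(np.linalg.eigvals(K)).max()
print(f"R0 = {R0:.3f}   (analytically beta/gamma = {beta / gamma:.3f})")
```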

  10. Removing Malmquist bias from linear regressions

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented.

  11. ABrowse--a customizable next-generation genome browser framework.

    Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge

    2012-01-05

    With the rapid growth of genome sequencing projects, genome browser is becoming indispensable, not only as a visualization system but also as an interactive platform to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework ABrowse which provides interactive browsing experience, open data access and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers could specify SQL statements according to database schema. And customized pages for detailed information display of annotation entries could be easily plugged in. For developers, new drawing strategies could be integrated into ABrowse for new types of annotation data. In addition, standard web service is provided for data retrieval remotely, providing underlying machine-oriented programming interface for open data access. ABrowse framework is valuable for end users, data providers and developers by providing rich user functions and flexible customization approaches. The source code is published under GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.

  12. ABrowse - a customizable next-generation genome browser framework

    2012-01-01

    Background With the rapid growth of genome sequencing projects, genome browser is becoming indispensable, not only as a visualization system but also as an interactive platform to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Results Based on next-generation web technologies, we have developed a general-purpose genome browser framework ABrowse which provides interactive browsing experience, open data access and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers could specify SQL statements according to database schema. And customized pages for detailed information display of annotation entries could be easily plugged in. For developers, new drawing strategies could be integrated into ABrowse for new types of annotation data. In addition, standard web service is provided for data retrieval remotely, providing underlying machine-oriented programming interface for open data access. Conclusions ABrowse framework is valuable for end users, data providers and developers by providing rich user functions and flexible customization approaches. The source code is published under GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for Arabidopsis thaliana genome

  13. SNAP: Small Next-generation Atmospheric Probe Concept

    Sayanagi, K. M.; Dillman, R. A.; Atkinson, D. H.; Li, J.; Saikia, S.; Simon, A. A.; Spilker, T. R.; Wong, M. H.; Hope, D.

    2017-12-01

    We present a concept for a small, atmospheric probe that could be flexibly added to future missions that orbit or fly-by a giant planet as a secondary payload, which we call the Small Next-generation Atmospheric Probe (SNAP). SNAP's main scientific objectives are to determine the vertical distribution of clouds and cloud-forming chemical species, thermal stratification, and wind speed as a function of depth. As a case study, we present the advantages, cost and risk of adding SNAP to the future Uranus Orbiter and Probe flagship mission; in combination with the mission's main probe, SNAP would perform atmospheric in-situ measurements at a second location, and thus enable and enhance the scientific objectives recommended by the 2013 Planetary Science Decadal Survey and the 2014 NASA Science Plan to determine atmospheric spatial variabilities. We envision that the science objectives can be achieved with a 30-kg entry probe 0.5m in diameter (less than half the size of the Galileo probe) that reaches 5-bar pressure-altitude and returns data to Earth via the carrier spacecraft. As the baseline instruments, the probe will carry an Atmospheric Structure Instrument (ASI) that measures the temperature, pressure and acceleration, a carbon nanotube-based NanoChem atmospheric composition sensor, and an Ultra-Stable Oscillator (USO) to conduct a Doppler Wind Experiment (DWE). We also catalog promising technologies currently under development that will strengthen small atmospheric entry probe missions in the future. While SNAP is applicable to multiple planets, we examine the feasibility, benefits and impacts of adding SNAP to the Uranus Orbiter and Probe flagship mission. Our project is supported by NASA PSDS3 grant NNX17AK31G.

  14. Visual programming for next-generation sequencing data analytics.

    Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia

    2016-01-01

    High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.

  15. Authentication of Herbal Supplements Using Next-Generation Sequencing.

    Natalia V Ivanova

    Full Text Available DNA-based testing has been gaining acceptance as a tool for authentication of a wide range of food products; however, its applicability for testing of herbal supplements remains contentious.We utilized Sanger and Next-Generation Sequencing (NGS for taxonomic authentication of fifteen herbal supplements representing three different producers from five medicinal plants: Echinacea purpurea, Valeriana officinalis, Ginkgo biloba, Hypericum perforatum and Trigonella foenum-graecum. Experimental design included three modifications of DNA extraction, two lysate dilutions, Internal Amplification Control, and multiple negative controls to exclude background contamination. Ginkgo supplements were also analyzed using HPLC-MS for the presence of active medicinal components.All supplements yielded DNA from multiple species, rendering Sanger sequencing results for rbcL and ITS2 regions either uninterpretable or non-reproducible between the experimental replicates. Overall, DNA from the manufacturer-listed medicinal plants was successfully detected in seven out of eight dry herb form supplements; however, low or poor DNA recovery due to degradation was observed in most plant extracts (none detected by Sanger; three out of seven-by NGS. NGS also revealed a diverse community of fungi, known to be associated with live plant material and/or the fermentation process used in the production of plant extracts. HPLC-MS testing demonstrated that Ginkgo supplements with degraded DNA contained ten key medicinal components.Quality control of herbal supplements should utilize a synergetic approach targeting both DNA and bioactive components, especially for standardized extracts with degraded DNA. The NGS workflow developed in this study enables reliable detection of plant and fungal DNA and can be utilized by manufacturers for quality assurance of raw plant materials, contamination control during the production process, and the final product. Interpretation of results should

  16. ABrowse - a customizable next-generation genome browser framework

    Kong Lei

    2012-01-01

    Full Text Available Abstract Background With the rapid growth of genome sequencing projects, genome browser is becoming indispensable, not only as a visualization system but also as an interactive platform to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Results Based on next-generation web technologies, we have developed a general-purpose genome browser framework ABrowse which provides interactive browsing experience, open data access and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers could specify SQL statements according to database schema. And customized pages for detailed information display of annotation entries could be easily plugged in. For developers, new drawing strategies could be integrated into ABrowse for new types of annotation data. In addition, standard web service is provided for data retrieval remotely, providing underlying machine-oriented programming interface for open data access. Conclusions ABrowse framework is valuable for end users, data providers and developers by providing rich user functions and flexible customization approaches. The source code is published under GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for

  17. High-Throughput Next-Generation Sequencing of Polioviruses

    Montmayeur, Anna M.; Schmidt, Alexander; Zhao, Kun; Magaña, Laura; Iber, Jane; Castro, Christina J.; Chen, Qi; Henderson, Elizabeth; Ramos, Edward; Shaw, Jing; Tatusov, Roman L.; Dybdahl-Sissoko, Naomi; Endegue-Zanga, Marie Claire; Adeniji, Johnson A.; Oberste, M. Steven; Burns, Cara C.

    2016-01-01

    ABSTRACT The poliovirus (PV) is currently targeted for worldwide eradication and containment. Sanger-based sequencing of the viral protein 1 (VP1) capsid region is currently the standard method for PV surveillance. However, the whole-genome sequence is sometimes needed for higher resolution global surveillance. In this study, we optimized whole-genome sequencing protocols for poliovirus isolates and FTA cards using next-generation sequencing (NGS), aiming for high sequence coverage, efficiency, and throughput. We found that DNase treatment of poliovirus RNA followed by random reverse transcription (RT), amplification, and the use of the Nextera XT DNA library preparation kit produced significantly better results than other preparations. The average viral reads per total reads, a measurement of efficiency, was as high as 84.2% ± 15.6%. PV genomes covering >99 to 100% of the reference length were obtained and validated with Sanger sequencing. A total of 52 PV genomes were generated, multiplexing as many as 64 samples in a single Illumina MiSeq run. This high-throughput, sequence-independent NGS approach facilitated the detection of a diverse range of PVs, especially for those in vaccine-derived polioviruses (VDPV), circulating VDPV, or immunodeficiency-related VDPV. In contrast to results from previous studies on other viruses, our results showed that filtration and nuclease treatment did not discernibly increase the sequencing efficiency of PV isolates. However, DNase treatment after nucleic acid extraction to remove host DNA significantly improved the sequencing results. This NGS method has been successfully implemented to generate PV genomes for molecular epidemiology of the most recent PV isolates. Additionally, the ability to obtain full PV genomes from FTA cards will aid in facilitating global poliovirus surveillance. PMID:27927929

  18. Authentication of Herbal Supplements Using Next-Generation Sequencing.

    Ivanova, Natalia V; Kuzmina, Maria L; Braukmann, Thomas W A; Borisenko, Alex V; Zakharov, Evgeny V

    2016-01-01

    DNA-based testing has been gaining acceptance as a tool for authentication of a wide range of food products; however, its applicability for testing of herbal supplements remains contentious. We utilized Sanger and Next-Generation Sequencing (NGS) for taxonomic authentication of fifteen herbal supplements representing three different producers from five medicinal plants: Echinacea purpurea, Valeriana officinalis, Ginkgo biloba, Hypericum perforatum and Trigonella foenum-graecum. Experimental design included three modifications of DNA extraction, two lysate dilutions, Internal Amplification Control, and multiple negative controls to exclude background contamination. Ginkgo supplements were also analyzed using HPLC-MS for the presence of active medicinal components. All supplements yielded DNA from multiple species, rendering Sanger sequencing results for rbcL and ITS2 regions either uninterpretable or non-reproducible between the experimental replicates. Overall, DNA from the manufacturer-listed medicinal plants was successfully detected in seven out of eight dry herb form supplements; however, low or poor DNA recovery due to degradation was observed in most plant extracts (none detected by Sanger; three out of seven-by NGS). NGS also revealed a diverse community of fungi, known to be associated with live plant material and/or the fermentation process used in the production of plant extracts. HPLC-MS testing demonstrated that Ginkgo supplements with degraded DNA contained ten key medicinal components. Quality control of herbal supplements should utilize a synergetic approach targeting both DNA and bioactive components, especially for standardized extracts with degraded DNA. The NGS workflow developed in this study enables reliable detection of plant and fungal DNA and can be utilized by manufacturers for quality assurance of raw plant materials, contamination control during the production process, and the final product. Interpretation of results should involve an

  19. Microbiome selection could spur next-generation plant breeding strategies

    Murali Gopal

    2016-12-01

    Full Text Available Plants, though sessile, have developed a unique strategy to counter biotic and abiotic stresses by symbiotically co-evolving with microorganisms and tapping into their genomes for this purpose. Soil is the bank of microbial diversity from which a plant selectively sources its microbiome to suit its needs. Besides soil, seeds, which carry the genetic blueprint of plants during trans-generational propagation, are home to diverse microbiota that act as the principal source of microbial inoculum in crop cultivation. Overall, a plant is ensconced both outside and inside with a diverse assemblage of microbiota. Together, the plant genome and the genes of the microbiota that the plant harbours in different tissues, i.e. the ‘plant microbiome’, form the holobiome, which is now considered a unit of selection: ‘the holobiont’. The ‘plant microbiome’ not only helps plants remain fit but also offers critical genetic variability hitherto not employed by plant breeders, who traditionally have exploited the genetic variability of the host to develop high-yielding, disease-tolerant or drought-resistant varieties. This fresh knowledge that the microbiome, particularly of the rhizosphere, offers genetic variability to plants opens up new horizons for breeding that could usher in the cultivation of next-generation crops depending less on inorganic inputs, resistant to insect pests and diseases, and resilient to climatic perturbations. We surmise, from ever-increasing evidence, that plants and their microbial symbionts need to be co-propagated as life-long partners in future plant breeding strategies.

  20. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
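
    The three-segment linear regression described above can be illustrated as a brute-force search over two breakpoints, fitting a separate straight line to each segment; the breakpoint between the low-level linear segment and the compressed segment then serves as the compression-threshold estimate. This is a simplified sketch with invented data, not the authors' exact fitting procedure.

      # Simplified sketch: fit a three-segment piecewise-linear model to a DPOAE
      # input/output function by exhaustive search over the two breakpoints.
      import numpy as np

      def fit_three_segments(levels, dp_levels):
          """Return (breakpoint1, breakpoint2, slopes) minimising total squared error."""
          x = np.asarray(levels, float)
          y = np.asarray(dp_levels, float)
          candidates = x[1:-1]
          best = None
          for i, b1 in enumerate(candidates):
              for b2 in candidates[i + 1:]:
                  sse, slopes = 0.0, []
                  for lo, hi in [(x.min(), b1), (b1, b2), (b2, x.max())]:
                      m = (x >= lo) & (x <= hi)
                      if m.sum() < 2:
                          break
                      coeff, res, *_ = np.polyfit(x[m], y[m], 1, full=True)
                      slopes.append(coeff[0])
                      sse += res[0] if len(res) else 0.0
                  else:
                      if best is None or sse < best[0]:
                          best = (sse, b1, b2, slopes)
          return best[1], best[2], best[3]

      L2 = np.arange(45, 75, 5)                              # primary levels, dB SPL
      dpoae = np.array([2.0, 6.0, 10.0, 12.0, 13.0, 13.5])   # illustrative DPOAE levels
      print(fit_three_segments(L2, dpoae))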

  1. Description and pilot results from a novel method for evaluating return of incidental findings from next-generation sequencing technologies.

    Goddard, Katrina A B; Whitlock, Evelyn P; Berg, Jonathan S; Williams, Marc S; Webber, Elizabeth M; Webster, Jennifer A; Lin, Jennifer S; Schrader, Kasmintan A; Campos-Outcalt, Doug; Offit, Kenneth; Feigelson, Heather Spencer; Hollombe, Celine

    2013-09-01

    The aim of this study was to develop, operationalize, and pilot test a transparent, reproducible, and evidence-informed method to determine when to report incidental findings from next-generation sequencing technologies. Using evidence-based principles, we proposed a three-stage process. Stage I "rules out" incidental findings below a minimal threshold of evidence and is evaluated using inter-rater agreement and comparison with an expert-based approach. Stage II documents criteria for clinical actionability using a standardized approach to allow experts to consistently consider and recommend whether results should be routinely reported (stage III). We used expert opinion to determine the face validity of stages II and III using three case studies. We evaluated the time and effort for stages I and II. For stage I, we assessed 99 conditions and found high inter-rater agreement (89%), and strong agreement with a separate expert-based method. Case studies for familial adenomatous polyposis, hereditary hemochromatosis, and α1-antitrypsin deficiency were all recommended for routine reporting as incidental findings. The method requires definition of clinically actionable incidental findings and provides documentation and pilot testing of a feasible method that is scalable to the whole genome.

  2. Phi photoproduction near threshold with Okubo-Zweig-Iizuka evading phi NN interactions

    Williams, R A

    1998-01-01

    Existing intermediate- and high-energy phi-photoproduction data are consistent with purely diffractive production (i.e., Pomeron exchange). However, near threshold (1.574 GeV [...] K⁺K⁻ decay angular distribution. We stress the importance of measurements with linearly polarized photons near the phi threshold to separate natural and unnatural parity exchange mechanisms. Approved and planned phi photoproduction and electroproduction experiments at Jefferson Lab will help establish the relative dynamical contributions near threshold and clarify outstanding theoretical issues related to apparent Okubo-Zweig-Iizuka violations.

  3. Interlocking-induced stiffness in stochastically microcracked materials beyond the transport percolation threshold

    Picu, R. C.; Pal, A.; Lupulescu, M. V.

    2016-04-01

    We study the mechanical behavior of two-dimensional, stochastically microcracked continua in the range of crack densities close to, and above, the transport percolation threshold. We show that these materials retain stiffness up to crack densities much larger than the transport percolation threshold due to topological interlocking of sample subdomains. Even with a linear constitutive law for the continuum, the mechanical behavior becomes nonlinear in the range of crack densities bounded by the transport and stiffness percolation thresholds. The effect is due to the fractal nature of the fragmentation process and is not linked to the roughness of individual cracks.

  4. Genotoxic thresholds, DNA repair, and susceptibility in human populations

    Jenkins, Gareth J.S.; Zair, Zoulikha; Johnson, George E.; Doak, Shareen H.

    2010-01-01

    It has long been assumed that DNA damage is induced in a linear manner with respect to the dose of a direct-acting genotoxin. Thus, it is implied that direct-acting genotoxic agents induce DNA damage at even the lowest of concentrations and that no 'safe' dose range exists. The linear (non-threshold) paradigm has led to the development of the one-hit model. This 'one hit' scenario can be interpreted such that a single DNA-damaging event in a cell has the capability to induce a single point mutation in that cell, which could (if positioned in a key growth-controlling gene) lead to increased proliferation and ultimately to the formation of a tumour. There are many groups (including our own) who, for a decade or more, have argued that low-dose exposures to direct-acting genotoxins may be tolerated by cells through homeostatic mechanisms such as DNA repair. This argument stems from the existence of evolutionary adaptive mechanisms that allow organisms to adapt to low levels of exogenous sources of genotoxins. We have been particularly interested in the genotoxic effects of known mutagens at low-dose exposures in human cells and have identified, for the first time, in vitro genotoxic thresholds for several mutagenic alkylating agents (Doak et al., 2007). Our working hypothesis is that DNA repair is primarily responsible for these thresholded effects at low doses, removing low levels of DNA damage but becoming saturated at higher doses. We are currently assessing the roles of base excision repair (BER) and methylguanine-DNA methyltransferase (MGMT) in the identified thresholds (Doak et al., 2008). This research area is currently important as it assesses whether 'safe' exposure levels to mutagenic chemicals can exist and allows risk assessment using appropriate safety factors to define such exposure levels. Given human variation, the mechanistic basis for genotoxic thresholds (e.g. DNA repair) has to be well defined in order that susceptible individuals are

  5. Next-generation Algorithms for Assessing Infrastructure Vulnerability and Optimizing System Resilience

    Burchett, Deon L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Richard Li-Yang [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richard, Jean-Philippe [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

    This report summarizes the work performed under the project Next-Generation Algorithms for Assessing Infrastructure Vulnerability and Optimizing System Resilience. The goal of the project was to improve mathematical programming-based optimization technology for infrastructure protection. In general, the owner of a network wishes to design a network that can perform well when certain transportation channels are inhibited (e.g. destroyed) by an adversary. These are typically bi-level problems where the owner designs a system, an adversary optimally attacks it, and then the owner can recover by optimally using the remaining network. This project funded three years of Deon Burchett's graduate research. Deon's graduate advisor, Professor Jean-Philippe Richard, and his Sandia advisors, Richard Chen and Cynthia Phillips, supported Deon on other funds or volunteer time. This report is, therefore, essentially a replication of the Ph.D. dissertation it funded [12] in a format required for project documentation. The thesis includes some general polyhedral research, that is, the study of the structure of the feasible region of mathematical programs such as integer programs. For example, an integer program optimizes a linear objective function subject to linear constraints and (nonlinear) integrality constraints on the variables. The feasible region without the integrality constraints is a convex polyhedron. Careful study of additional valid constraints can significantly improve computational performance. Here is the abstract from the dissertation: We perform a polyhedral study of a multi-commodity generalization of variable upper bound flow models. In particular, we establish some relations between facets of single- and multi-commodity models. We then introduce a new family of inequalities, which generalizes traditional flow cover inequalities to the multi-commodity context. We present encouraging numerical results. We also consider the directed

  6. Threshold behavior in electron-atom scattering

    Sadeghpour, H.R.; Greene, C.H.

    1996-01-01

    Ever since the classic work of Wannier in 1953, the process of treating two threshold electrons in the continuum of a positively charged ion has been an active field of study. The authors have developed a treatment motivated by the physics below the double ionization threshold. By modeling the double ionization as a series of Landau-Zener transitions, they obtain an analytical formulation of the absolute threshold probability which has a leading power law behavior, akin to Wannier's law. Some of the noteworthy aspects of this derivation are that the derivation can be conveniently continued below threshold giving rise to a 'cusp' at threshold, and that on both sides of the threshold, absolute values of the cross sections are obtained

  7. A numerical study of threshold states

    Ata, M.S.; Grama, C.; Grama, N.; Hategan, C.

    1979-01-01

    There is some experimental evidence of charged-particle threshold states. On the statistical background of levels, some simple structures were observed in the excitation spectrum. They occur near the Coulomb threshold and have a large reduced width for the decay in the threshold channel. These states were identified as charged-cluster threshold states. Such threshold states were observed in ¹⁵,¹⁶,¹⁷,¹⁸O, ¹⁸,¹⁹F, ¹⁹,²⁰Ne, ²⁴Mg, and ³²S. The types of clusters involved were d, t, ³He, α and even ¹²C. They were observed in heavy-ion transfer reactions as strongly excited levels in the residual nucleus. The charged-particle threshold states occur as simple structures at high excitation energy. They could be interesting both from the nuclear structure and the nuclear reaction mechanism points of view. They could be excited as simple structures both in the compound and the residual nucleus. (author)

  8. Alternative method for determining anaerobic threshold in rowers

    Giovani dos Santos Cunha

    2008-12-01

    Full Text Available In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test what value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2máx and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At their maximal state these rowers reached VO2 (58.2±4.4 ml.kg-1.min-1), lactate (8.2±2.1 mmol.L-1), power (384±54.3 W) and RER (1.26±0.1). At the anaerobic threshold they reached VO2 (46.9±7.5 ml.kg-1.min-1), lactate (4.6±1.3 mmol.L-1), power (300±37.8 W) and RER (0.99±0.1). Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.
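
    The Dmax (maximal distance) method cited above locates the point on a fitted lactate curve that lies farthest from the straight line joining the first and last measured points. The short numpy sketch below illustrates that calculation; the power and lactate values are invented and only roughly echo the group means reported here.

      # Sketch of the Dmax method: fit a cubic to the lactate-power curve and take
      # the point with maximal perpendicular distance from the chord joining the
      # first and last data points. Illustrative values only.
      import numpy as np

      def dmax_threshold(power, lactate):
          p = np.polyfit(power, lactate, 3)
          x = np.linspace(power[0], power[-1], 500)
          y = np.polyval(p, x)
          x0, y0, x1, y1 = power[0], lactate[0], power[-1], lactate[-1]
          # Perpendicular distance from every curve point to the chord.
          dist = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
          dist /= np.hypot(y1 - y0, x1 - x0)
          i = np.argmax(dist)
          return x[i], y[i]        # power (W) and lactate (mmol/L) at the threshold

      power = np.array([100, 150, 200, 250, 300, 350, 384], float)
      lactate = np.array([1.1, 1.3, 1.8, 2.6, 4.1, 6.4, 8.2], float)
      print(dmax_threshold(power, lactate))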

  9. Automating linear accelerator quality assurance.

    Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M

    2015-10-01

    The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The standard deviation in MLC
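
    The threshold-checking step described above can be sketched as a comparison of expected and actual component positions from a parsed log file against per-component tolerances. The record fields and tolerance values below are illustrative placeholders, not the consortium's actual file format or action levels.

      # Sketch: flag log-file deviations that exceed QA tolerances.
      # Field names and tolerances are illustrative placeholders only.
      TOLERANCES_MM = {"mlc_leaf": 1.0, "jaw": 1.0, "gantry_sag": 0.5}

      def flag_deviations(records, tolerances=TOLERANCES_MM):
          """Yield (component, deviation) for every sample exceeding its tolerance."""
          for rec in records:
              deviation = abs(rec["expected"] - rec["actual"])
              limit = tolerances.get(rec["component"])
              if limit is not None and deviation > limit:
                  yield rec["component"], deviation

      log_records = [
          {"component": "mlc_leaf", "expected": 25.0, "actual": 25.3},
          {"component": "jaw", "expected": 100.0, "actual": 101.4},
          {"component": "gantry_sag", "expected": 0.0, "actual": 0.34},
      ]
      for component, dev in flag_deviations(log_records):
          print(f"{component}: deviation {dev:.2f} mm exceeds tolerance")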

  10. Metallic Nanocomposites as Next-Generation Thermal Interface Materials: Preprint

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Charles C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Nagabandi, Nirup [Texas A& M University; Oh, Jun K. [Texas A& M University; Akbulut, Mustafa [Texas A& M University; Yegin, Cengiz [Texas A& M University

    2017-09-14

    nanocomposite is 11 ppm/K, which lies between the CTEs of aluminum (22 ppm/K) and silicon (3 ppm/K), which are common heat sink and heat source materials, respectively. The nanocomposite can also be deposited directly onto the heat sink, which will simplify the packaging process by removing one element to assemble. These unique properties and ease of assembly make the nanocomposite a promising next-generation TIM.

  11. A Next-Generation Automated Holdup Measurement System (HMS-5)

    Gariazzo, Claudio Andres; Smith, Steven E.; Solodov, Alexander A

    2007-01-01

    hardware such as lanthanum halide detectors and digital processing multichannel analyzers will be incorporated into the new HMS-5 system to accommodate the evolving realm of SNM detection and quantification. HMS-5 is the natural progression from the previous incarnations of automated special nuclear material holdup measurement systems for process facilities. ORNL is leading this next-generation system with assistance from its foreign partners and past experiences of its Safeguards Laboratory staff

  12. Metallic Nanocomposites as Next-Generation Thermal Interface Materials

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Charles C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Nagabandi, Nirup [Texas A& M University; Oh, Jun Kyun [Texas A& M University; Akbulut, Mustafa [Texas A& M University; Yegin, Cengiz [Texas A& M University

    2017-07-27

    nanocomposite is 11 ppm/K, which lies between the CTEs of aluminum (22 ppm/K) and silicon (3 ppm/K), which are common heat sink and heat source materials, respectively. The nanocomposite can also be deposited directly onto the heat sink, which will simplify the packaging process by removing one element to assemble. These unique properties and ease of assembly make the nanocomposite a promising next-generation TIM.

  13. Iran: the next nuclear threshold state?

    Maurer, Christopher L.

    2014-01-01

    Approved for public release; distribution is unlimited A nuclear threshold state is one that could quickly operationalize its peaceful nuclear program into one capable of producing a nuclear weapon. This thesis compares two known threshold states, Japan and Brazil, with Iran to determine if the Islamic Republic could also be labeled a threshold state. Furthermore, it highlights the implications such a status could have on U.S. nonproliferation policy. Although Iran's nuclear program is mir...

  14. Dynamical thresholds for complete fusion

    Davies, K.T.R.; Sierk, A.J.; Nix, J.R.

    1983-01-01

    It is our purpose here to study the effect of nuclear dissipation and shape parametrization on dynamical thresholds for compound-nucleus formation in symmetric heavy-ion reactions. This is done by solving numerically classical equations of motion for head-on collisions to determine whether the dynamical trajectory in a multidimensional deformation space passes inside the fission saddle point and forms a compound nucleus, or whether it passes outside the fission saddle point and reseparates in a fast-fission or deep-inelastic reaction. Specifying the nuclear shape in terms of smoothly joined portions of three quadratic surfaces of revolution, we take into account three symmetric deformation coordinates. However, in some cases we reduce the number of coordinates to two by requiring the ends of the fusing system to be spherical in shape. The nuclear potential energy of deformation is determined in terms of a Coulomb energy and a double volume energy of a Yukawa-plus-exponential folding function. The collective kinetic energy is calculated for incompressible, nearly irrotational flow by means of the Werner-Wheeler approximation. Four possibilities are studied for the transfer of collective kinetic energy into internal single-particle excitation energy: zero dissipation, ordinary two body viscosity, one-body wall-formula dissipation, and one-body wall-and-window dissipation

  15. Heterogeneous next-generation wireless network interference model-and its applications

    Mahmood, Nurul Huda

    2014-04-01

    Next-generation wireless systems facilitating better utilisation of the scarce radio spectrum have emerged as a response to inefficient and rigid spectrum assignment policies. These comprise intelligent radio nodes that opportunistically operate in the radio spectrum of existing primary systems, yet unwanted interference at the primary receivers is unavoidable. In order to design efficient next-generation systems and to minimise the adverse effect of their interference, it is necessary to realise how the resulting interference impacts the performance of the primary systems. In this work, a generalised framework for the interference analysis of such a next-generation system is presented where the next-generation transmitters may transmit randomly with different transmit powers. The analysis is built around a model developed for the statistical representation of the interference at the primary receivers, which is then used to evaluate various performance measures of the primary system. Applications of the derived interference model in designing the next-generation network system parameters are also demonstrated. Such an approach provides a unified and generalised framework, the use of which allows a wide range of performance metrics to be evaluated. Findings of the analytical performance analyses are confirmed through extensive computer-based Monte-Carlo simulations. © 2012 John Wiley & Sons, Ltd.

  16. A Window Into Clinical Next-Generation Sequencing-Based Oncology Testing Practices.

    Nagarajan, Rakesh; Bartley, Angela N; Bridge, Julia A; Jennings, Lawrence J; Kamel-Reid, Suzanne; Kim, Annette; Lazar, Alexander J; Lindeman, Neal I; Moncur, Joel; Rai, Alex J; Routbort, Mark J; Vasalos, Patricia; Merker, Jason D

    2017-12-01

    Context. - Detection of acquired variants in cancer is a paradigm of precision medicine, yet little has been reported about clinical laboratory practices across a broad range of laboratories. Objective. - To use College of American Pathologists proficiency testing survey results to report on the results from surveys on next-generation sequencing-based oncology testing practices. Design. - College of American Pathologists proficiency testing survey results from more than 250 laboratories currently performing molecular oncology testing were used to determine laboratory trends in next-generation sequencing-based oncology testing. Results. - These presented data provide key information about the number of laboratories that currently offer or are planning to offer next-generation sequencing-based oncology testing. Furthermore, we present data from 60 laboratories performing next-generation sequencing-based oncology testing regarding specimen requirements and assay characteristics. The findings indicate that most laboratories are performing tumor-only targeted sequencing to detect single-nucleotide variants and small insertions and deletions, using desktop sequencers and predesigned commercial kits. Despite these trends, a diversity of approaches to testing exists. Conclusions. - This information should be useful to further inform a variety of topics, including national discussions involving clinical laboratory quality systems, regulation and oversight of next-generation sequencing-based oncology testing, and precision oncology efforts in a data-driven manner.

  17. A linear programming manual

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
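
    As a small worked example of the kind of problem the manual covers, the sketch below solves a toy linear program and reads off the dual values used in reduced-cost and sensitivity analysis. The objective and constraints are invented for illustration; SciPy's linprog is used here simply as a convenient solver.

      # Toy linear program: maximise 3x + 2y subject to resource constraints.
      # linprog minimises, so the objective is negated. Illustrative example only.
      from scipy.optimize import linprog

      c = [-3.0, -2.0]                  # maximise 3x + 2y  ->  minimise -3x - 2y
      A_ub = [[1.0, 1.0],               # x + y  <= 4
              [2.0, 1.0]]               # 2x + y <= 6
      b_ub = [4.0, 6.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
                    method="highs")
      print("optimal x, y:", res.x)           # expected (2, 2)
      print("optimal objective:", -res.fun)   # expected 10
      # Dual values (shadow prices) of the two constraints, exposed by the HiGHS
      # backend in recent SciPy versions:
      print("duals:", res.ineqlin.marginals)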

  18. Linear shaped charge

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  19. Classifying Linear Canonical Relations

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  20. Linear-Algebra Programs

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
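
    As a brief illustration of the Level-1 operations such a library provides, the sketch below reproduces the classic AXPY and dot-product kernels in plain Python/NumPy. It mirrors what the FORTRAN routines compute rather than calling the BLAS library itself.

      # Plain-Python illustration of two BLAS Level-1 kernels:
      # AXPY (y := a*x + y) and DOT (x . y). Does not call BLAS itself.
      import numpy as np

      def axpy(a, x, y):
          """Return a*x + y, the *AXPY operation."""
          return a * np.asarray(x, float) + np.asarray(y, float)

      def dot(x, y):
          """Return the inner product, the *DOT operation."""
          return float(np.dot(x, y))

      x = np.array([1.0, 2.0, 3.0])
      y = np.array([4.0, 5.0, 6.0])
      print(axpy(2.0, x, y))   # [ 6.  9. 12.]
      print(dot(x, y))         # 32.0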

  1. Threshold responses of Amazonian stream fishes to timing and extent of deforestation.

    Brejão, Gabriel L; Hoeinghaus, David J; Pérez-Mayorga, María Angélica; Ferraz, Silvio F B; Casatti, Lilian

    2017-12-06

    Deforestation is a primary driver of biodiversity change through habitat loss and fragmentation. Stream biodiversity may not respond to deforestation in a simple linear relationship. Rather, threshold responses to extent and timing of deforestation may occur. Identification of critical deforestation thresholds is needed for effective conservation and management. We tested for threshold responses of fish species and functional groups to degree of watershed and riparian zone deforestation and time since impact in 75 streams in the western Brazilian Amazon. We used remote sensing to assess deforestation from 1984 to 2011. Fish assemblages were sampled with seines and dip nets in a standardized manner. Fish species (n = 84) were classified into 20 functional groups based on ecomorphological traits associated with habitat use, feeding, and locomotion. Threshold responses were quantified using threshold indicator taxa analysis. Negative threshold responses to deforestation were common and consistently occurred at very low levels of deforestation and soon after impact, whereas positive threshold responses occurred at >70% deforestation and >10 years after impact. Findings were similar at the community level for both taxonomic and functional analyses. Because most negative threshold responses occurred at low levels of deforestation and soon after impact, even minimal change is expected to negatively affect biodiversity. Delayed positive threshold responses to extreme deforestation by a few species do not offset the loss of sensitive taxa and likely contribute to biotic homogenization. © 2017 Society for Conservation Biology.

  2. Threshold Evaluation of Emergency Risk Communication for Health Risks Related to Hazardous Ambient Temperature.

    Liu, Yang; Hoppe, Brenda O; Convertino, Matteo

    2018-04-10

    Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
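
    The percentile-based activation criteria examined above are computed directly from the local exposure history. The sketch below derives 1st-3rd percentile (cold) and 97th-99th percentile (heat) thresholds from a daily maximum heat-index series; the synthetic series stands in for the local record and is not data from the study.

      # Sketch: derive percentile-based cold and heat activation thresholds from a
      # daily maximum heat-index series. Synthetic data for illustration only.
      import numpy as np

      rng = np.random.default_rng(0)
      daily_max_heat_index = rng.normal(loc=60.0, scale=25.0, size=17 * 365)  # deg F

      cold_thresholds = np.percentile(daily_max_heat_index, [1, 2, 3])
      heat_thresholds = np.percentile(daily_max_heat_index, [97, 98, 99])

      print("cold activation thresholds (1st-3rd pct):", np.round(cold_thresholds, 1))
      print("heat activation thresholds (97th-99th pct):", np.round(heat_thresholds, 1))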

  3. Detecting fatigue thresholds from electromyographic signals: A systematic review on approaches and methodologies.

    Ertl, Peter; Kruse, Annika; Tilp, Markus

    2016-10-01

    The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts within the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality. 21 were of good quality and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases of EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one minute protocols with increments between 10 and 25 W appear most appropriate to detect muscular threshold, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is a great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMGTs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A study on the temperature dependence of the threshold switching characteristics of Ge2Sb2Te5

    Lee, Suyoun; Jeong, Doo Seok; Jeong, Jeung-hyun; Zhe, Wu; Park, Young-Wook; Ahn, Hyung-Woo; Cheong, Byung-ki

    2010-01-01

    We investigated the temperature dependence of the threshold switching characteristics of a memory-type chalcogenide material, Ge2Sb2Te5. We found that the threshold voltage (Vth) decreased linearly with temperature, implying the existence of a critical conductivity of Ge2Sb2Te5 for its threshold switching. In addition, we investigated the effect of bias voltage and temperature on the delay time (tdel) of the threshold switching of Ge2Sb2Te5 and described the measured relationship by an analytic expression which we derived based on a physical model where thermally activated hopping is a dominant transport mechanism in the material.

  5. Log canonical thresholds of smooth Fano threefolds

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.
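
    For reference, the two invariants discussed above can be stated compactly as follows; these are the standard definitions, restated here rather than quoted from the paper.

      % Complex singularity exponent (log canonical threshold) of a holomorphic f at a point p:
      c_p(f) \;=\; \sup\bigl\{\, c > 0 \;:\; |f|^{-2c} \text{ is locally integrable near } p \,\bigr\}

      % Global log canonical threshold of a Fano variety X:
      \operatorname{lct}(X) \;=\; \inf\bigl\{\, \operatorname{lct}(X, D) \;:\; D \ \text{an effective } \mathbb{Q}\text{-divisor with } D \equiv -K_X \,\bigr\}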

  6. Thresholding magnetic resonance images of human brain

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter in MR images of different pulse sequences of the human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. Unsupervised fuzzy c-means clustering is employed to determine the threshold for obtaining the head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively, the proposed methods work well with real clinical data.
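
    The unsupervised clustering step behind the threshold selection can be illustrated with a minimal one-dimensional fuzzy c-means implementation on voxel intensities. This is a generic FCM sketch with synthetic intensities, not the authors' full pipeline or parameter choices.

      # Minimal 1-D fuzzy c-means on voxel intensities, illustrating the
      # unsupervised clustering step behind threshold selection. Synthetic data.
      import numpy as np

      def fuzzy_cmeans_1d(intensities, n_clusters=3, m=2.0, n_iter=100, seed=0):
          x = np.asarray(intensities, float).ravel()
          rng = np.random.default_rng(seed)
          u = rng.random((n_clusters, x.size))
          u /= u.sum(axis=0)                            # fuzzy memberships
          for _ in range(n_iter):
              centers = (u**m @ x) / (u**m).sum(axis=1)
              d = np.abs(x[None, :] - centers[:, None]) + 1e-12
              u = 1.0 / d ** (2.0 / (m - 1.0))
              u /= u.sum(axis=0)
          return np.sort(centers)

      rng = np.random.default_rng(1)
      voxels = np.concatenate([rng.normal(20, 5, 2000),     # background
                               rng.normal(90, 10, 2000),    # one tissue class
                               rng.normal(140, 10, 2000)])  # another tissue class
      centers = fuzzy_cmeans_1d(voxels)
      print("cluster centers:", np.round(centers, 1))
      # Candidate low/high thresholds, e.g. midway between adjacent centers:
      print("candidate thresholds:", np.round((centers[:-1] + centers[1:]) / 2, 1))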

  7. Time-efficient multidimensional threshold tracking method

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  8. 40 CFR 68.115 - Threshold determination.

    2010-07-01

    ... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Regulated Substances for Accidental Release Prevention... process exceeds the threshold. (b) For the purposes of determining whether more than a threshold quantity... portion of the process is less than 10 millimeters of mercury (mm Hg), the amount of the substance in the...

  9. Applying Threshold Concepts to Finance Education

    Hoadley, Susan; Wood, Leigh N.; Tickle, Leonie; Kyng, Tim

    2016-01-01

    Purpose: The purpose of this paper is to investigate and identify threshold concepts that are the essential conceptual content of finance programmes. Design/Methodology/Approach: Conducted in three stages with finance academics and students, the study uses threshold concepts as both a theoretical framework and a research methodology. Findings: The…

  10. Chronic Meningitis Investigated via Metagenomic Next-Generation Sequencing

    O’Donovan, Brian D.; Gelfand, Jeffrey M.; Sample, Hannah A.; Chow, Felicia C.; Betjemann, John P.; Shah, Maulik P.; Richie, Megan B.; Gorman, Mark P.; Hajj-Ali, Rula A.; Calabrese, Leonard H.; Zorn, Kelsey C.; Chow, Eric D.; Greenlee, John E.; Blum, Jonathan H.; Green, Gary; Khan, Lillian M.; Banerji, Debarko; Langelier, Charles; Bryson-Cahn, Chloe; Harrington, Whitney; Lingappa, Jairam R.; Shanbhag, Niraj M.; Green, Ari J.; Brew, Bruce J.; Soldatos, Ariane; Strnad, Luke; Doernberg, Sarah B.; Jay, Cheryl A.; Douglas, Vanja; Josephson, S. Andrew; DeRisi, Joseph L.

    2018-01-01

    Importance Identifying infectious causes of subacute or chronic meningitis can be challenging. Enhanced, unbiased diagnostic approaches are needed. Objective To present a case series of patients with diagnostically challenging subacute or chronic meningitis using metagenomic next-generation sequencing (mNGS) of cerebrospinal fluid (CSF) supported by a statistical framework generated from mNGS of control samples from the environment and from patients who were noninfectious. Design, Setting, and Participants In this case series, mNGS data obtained from the CSF of 94 patients with noninfectious neuroinflammatory disorders and from 24 water and reagent control samples were used to develop and implement a weighted scoring metric based on z scores at the species and genus levels for both nucleotide and protein alignments to prioritize and rank the mNGS results. Total RNA was extracted for mNGS from the CSF of 7 participants with subacute or chronic meningitis who were recruited between September 2013 and March 2017 as part of a multicenter study of mNGS pathogen discovery among patients with suspected neuroinflammatory conditions. The neurologic infections identified by mNGS in these 7 participants represented a diverse array of pathogens. The patients were referred from the University of California, San Francisco Medical Center (n = 2), Zuckerberg San Francisco General Hospital and Trauma Center (n = 2), Cleveland Clinic (n = 1), University of Washington (n = 1), and Kaiser Permanente (n = 1). A weighted z score was used to filter out environmental contaminants and facilitate efficient data triage and analysis. Main Outcomes and Measures Pathogens identified by mNGS and the ability of a statistical model to prioritize, rank, and simplify mNGS results. Results The 7 participants ranged in age from 10 to 55 years, and 3 (43%) were female. A parasitic worm (Taenia solium, in 2 participants), a virus (HIV-1), and 4 fungi (Cryptococcus neoformans
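
    The control-based filtering described above, scoring each candidate organism against the background seen in water, reagent, and noninfectious-patient samples, can be sketched as a per-taxon z-score computation. The scoring below is a generic illustration with invented counts, not the study's exact weighted metric.

      # Sketch of z-score background filtering for metagenomic hits: compare each
      # taxon's read count in a patient sample with the distribution of counts in
      # control samples. Invented counts; not the study's weighted metric.
      import numpy as np

      def background_zscores(sample_counts, control_counts):
          """Return a z score per taxon relative to the control distribution."""
          scores = {}
          for taxon, count in sample_counts.items():
              background = np.asarray(control_counts.get(taxon, [0.0]), float)
              mu, sigma = background.mean(), background.std()
              scores[taxon] = (count - mu) / (sigma if sigma > 0 else 1.0)
          return scores

      sample = {"Taenia solium": 420, "Homo sapiens": 1_000_000, "Pseudomonas": 35}
      controls = {"Homo sapiens": [950_000, 1_020_000, 980_000],
                  "Pseudomonas": [30, 42, 28, 39]}   # common reagent contaminant

      for taxon, z in sorted(background_zscores(sample, controls).items(),
                             key=lambda kv: -kv[1]):
          print(f"{taxon}: z = {z:.1f}")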

  11. Summary of DOE threshold limits efforts

    Wickham, L.E.; Smith, C.F.; Cohen, J.J.

    1987-01-01

    The Department of Energy (DOE) has been developing the concept of threshold quantities for use in determining which waste materials may be disposed of as nonradioactive waste in DOE sanitary landfills. Waste above a threshold level could be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. After extensive review of a draft threshold guidance document in 1985, a second draft threshold background document was produced in March 1986. The second draft included a preliminary cost-benefit analysis and quality assurance considerations. The review of the second draft has been completed. Final changes to be incorporated include an in-depth cost-benefit analysis of two example sites and recommendations of how to further pursue (i.e. employ) the concept of threshold quantities within the DOE. 3 references

  12. Compliance uncertainty of diameter characteristic in the next-generation geometrical product specifications and verification

    Lu, W L; Jiang, X; Liu, X J; Xu, Z G

    2008-01-01

    Compliance uncertainty is one of the most important elements in the next-generation geometrical product specifications and verification (GPS). It consists of specification uncertainty, method uncertainty and implementation uncertainty, which are three of the four fundamental uncertainties in the next-generation GPS. This paper analyzes the key factors that influence compliance uncertainty and then proposes a procedure to manage the compliance uncertainty. A general model on evaluation of compliance uncertainty has been devised and a specific formula for diameter characteristic has been derived based on this general model. The case study was conducted and it revealed that the completeness of currently dominant diameter characteristic specification needs to be improved

  13. A Threshold Continuum for Aeolian Sand Transport

    Swann, C.; Ewing, R. C.; Sherman, D. J.

    2015-12-01

    The threshold of motion for aeolian sand transport marks the initial entrainment of sand particles by the force of the wind. This is typically defined and modeled as a singular wind speed for a given grain size and is based on field and laboratory experimental data. However, the definition of threshold varies significantly between these empirical models, largely because the definition is based on visual observations of initial grain movement. For example, in his seminal experiments, Bagnold defined threshold of motion when he observed that 100% of the bed was in motion. Others have used 50% and lesser values. Differences in threshold models, in turn, result in large errors in predicting the fluxes associated with sand and dust transport. Here we use a wind tunnel and novel sediment trap to capture the fractions of sand in creep, reptation and saltation at Earth and Mars pressures and show that the threshold of motion for aeolian sand transport is best defined as a continuum in which grains progress through stages defined by the proportion of grains in creep and saltation. We propose the use of scale-dependent thresholds modeled by distinct probability distribution functions that differentiate the threshold based on micro to macro scale applications. For example, a geologic timescale application corresponds to a threshold when 100% of the bed is in motion, whereas a sub-second application corresponds to a threshold when a single particle is set in motion. We provide quantitative measurements (number and mode of particle movement) corresponding to visual observations, percent of bed in motion and degrees of transport intermittency for Earth and Mars. Understanding transport as a continuum provides a basis for re-evaluating sand transport thresholds on Earth, Mars and Titan.

  14. Non linear system become linear system

    Petre Bucur

    2007-01-01

    Full Text Available The present paper addresses the theory and practice of systems, with a focus on non-linear systems and their applications. We aimed to integrate these systems in order to elaborate their response, as well as to highlight some outstanding features.

  15. Linear motor coil assembly and linear motor

    2009-01-01

    An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially

  16. Introducing linear functions: an alternative statistical approach

    Nolan, Caroline; Herbert, Sandra

    2015-12-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.

  17. Linear collider: a preview

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.

  18. Basic linear algebra

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  19. Linear collider: a preview

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center

  20. Matrices and linear transformations

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  1. Efficient Non Linear Loudspeakers

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  2. Analysis of ecological thresholds in a temperate forest undergoing dieback.

    Philip Martin

    Full Text Available Positive feedbacks in drivers of degradation can cause threshold responses in natural ecosystems. Though threshold responses have received much attention in studies of aquatic ecosystems, they have been neglected in terrestrial systems, such as forests, where the long time-scales required for monitoring have impeded research. In this study we explored the role of positive feedbacks in a temperate forest that has been monitored for 50 years and is undergoing dieback, largely as a result of death of the canopy dominant species (Fagus sylvatica, beech). Statistical analyses showed strong non-linear losses in basal area for some plots, while others showed relatively gradual change. Beech seedling density was positively related to canopy openness, but a similar relationship was not observed for saplings, suggesting a feedback whereby mortality in areas with high canopy openness was elevated. We combined this observation with empirical data on size- and growth-mediated mortality of trees to produce an individual-based model of forest dynamics. We used this model to simulate changes in the structure of the forest over 100 years under scenarios with different juvenile and mature mortality probabilities, as well as a positive feedback between seedling and mature tree mortality. This model produced declines in forest basal area when critical juvenile and mature mortality probabilities were exceeded. Feedbacks in juvenile mortality caused a greater reduction in basal area relative to scenarios with no feedback. Non-linear, concave declines of basal area occurred only when mature tree mortality was 3-5 times higher than rates observed in the field. Our results indicate that the longevity of trees may help to buffer forests against environmental change and that the maintenance of old, large trees may aid the resilience of forest stands. In addition, our work suggests that dieback of forests may be avoidable providing pressures on mature and juvenile trees do

  3. Hyper-arousal decreases human visual thresholds.

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants participated in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2° C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  4. Linear models with R

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data AnalysisPart of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition.New to the Second EditionReorganiz

  5. Linear integrated circuits

    Carr, Joseph

    1996-01-01

The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals. Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa

  6. Fault tolerant linear actuator

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  7. Superconducting linear accelerator cryostat

    Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.

    1984-01-01

    A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)

  8. Effects of fatigue on motor unit firing rate versus recruitment threshold relationships.

    Stock, Matt S; Beck, Travis W; Defreitas, Jason M

    2012-01-01

    The purpose of this study was to examine the influence of fatigue on the average firing rate versus recruitment threshold relationships for the vastus lateralis (VL) and vastus medialis. Nineteen subjects performed ten maximum voluntary contractions of the dominant leg extensors. Before and after this fatiguing protocol, the subjects performed a trapezoid isometric muscle action of the leg extensors, and bipolar surface electromyographic signals were detected from both muscles. These signals were then decomposed into individual motor unit action potential trains. For each subject and muscle, the relationship between average firing rate and recruitment threshold was examined using linear regression analyses. For the VL, the linear slope coefficients and y-intercepts for these relationships increased and decreased, respectively, after fatigue. For both muscles, many of the motor units decreased their firing rates. With fatigue, recruitment of higher threshold motor units resulted in an increase in slope for the VL. Copyright © 2011 Wiley Periodicals, Inc.

  9. Investigation of excimer laser ablation threshold of polymers using a microphone

    Krueger, Joerg; Niino, Hiroyuki; Yabe, Akira

    2002-09-30

KrF excimer laser ablation of polyethylene terephthalate (PET), polyimide (PI) and polycarbonate (PC) in air was studied by an in situ monitoring technique using a microphone. The microphone signal generated by a short acoustic pulse represented the etch rate of laser ablation depending on the laser fluence, i.e., the ablation 'strength'. From a linear relationship between the microphone output voltage and the laser fluence, the single-pulse ablation thresholds were found to be 30 mJ cm⁻² for PET, 37 mJ cm⁻² for PI and 51 mJ cm⁻² for PC (20-pulse threshold). The ablation thresholds of PET and PI were not influenced by the number of pulses per spot, while PC showed an incubation phenomenon. A microphone technique provides a simple method to determine the excimer laser ablation threshold of polymer films.
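
A minimal sketch of the threshold-extraction step described above: fit a line to the microphone signal versus fluence and take the fluence-axis intercept as the ablation threshold. The arrays are illustrative placeholders, not data from the study.

```python
import numpy as np

# Hypothetical (fluence, microphone voltage) pairs above threshold; not data from the study.
fluence_mj_cm2 = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
signal_mv      = np.array([ 5.0, 15.0, 25.0,  35.0,  45.0])

# Linear fit: signal = a * fluence + b
a, b = np.polyfit(fluence_mj_cm2, signal_mv, 1)

# Ablation threshold = fluence at which the fitted line crosses zero signal.
threshold = -b / a
print(f"estimated ablation threshold: {threshold:.1f} mJ/cm^2")
```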

  10. Justifying threshold voltage definition for undoped body transistors through 'crossover point' concept

    Baruah, Ratul Kumar; Mahapatra, Santanu

    2009-01-01

Two different definitions, one potential based and the other charge based, are used in the literature to define the threshold voltage of undoped-body symmetric double-gate transistors. This paper, by introducing a novel concept of the crossover point, proves that the charge-based definition is more accurate than the potential-based definition. It is shown that for a given channel length the potential-based definition predicts an anomalous change in threshold voltage with body thickness variation, while the charge-based definition results in a monotonic change. The threshold voltage is then extracted from drain current versus gate voltage characteristics using the linear extrapolation, transconductance and match-point methods. In all three cases it is found that the trend of threshold voltage variation supports the charge-based definition.

  11. NEUTRON SPECTRUM MEASUREMENTS USING MULTIPLE THRESHOLD DETECTORS

    Gerken, William W.; Duffey, Dick

    1963-11-15

From American Nuclear Society Meeting, New York, Nov. 1963. The use of threshold detectors, which simultaneously undergo reactions with thermal neutrons and two or more fast neutron threshold reactions, was applied to measurements of the neutron spectrum in a reactor. A number of different materials were irradiated to determine the most practical ones for use as multiple threshold detectors. These results, as well as counting techniques and corrections, are presented. Some materials used include aluminum, alloys of Al-Ni, aluminum-nickel oxides, and magnesium orthophosphates. (auth)

  12. Linearity enigmas in ecology

    Patten, B.C.

    1983-04-01

Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in non-linear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.

  13. A two-stage flow-based intrusion detection model for next-generation networks.

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
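
The two-stage pipeline described above can be sketched as follows. The sketch uses scikit-learn's OneClassSVM for the first (anomaly-separation) stage; since the paper's second stage is a self-organizing map, k-means is substituted here purely as a stand-in clustering step, and all features and data are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Placeholder flow records: [duration, packets, bytes, dst_port] -- illustrative features only.
rng = np.random.default_rng(0)
flows = rng.normal(size=(1000, 4))

X = StandardScaler().fit_transform(flows)

# Stage 1: one-class SVM trained on (assumed mostly benign) traffic flags outlying flows.
stage1 = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)
malicious = X[stage1.predict(X) == -1]          # -1 marks flows flagged as anomalous

# Stage 2: group the flagged flows into alert clusters (k-means as a stand-in for the SOM).
if len(malicious) > 0:
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(malicious)
    print("alert cluster sizes:", np.bincount(clusters))
```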

  14. Balancing Performance and Sustainability in Next-Generation PMR Technologies for OMC Structures

    2016-05-26

Gregory R. Yandek, Jason T. Lamb, John J. La Scala, Benjamin G. Harvey, Giuseppe R. Palmese, William S. Eck, Joshua M. Sadler, Santosh K. Yadav

  15. The construction of next-generation matrices for compartmental epidemic models

    Diekmann, O.|info:eu-repo/dai/nl/071896856; Heesterbeek, J.A.P.|info:eu-repo/dai/nl/073321427; Roberts, M.G.

    2010-01-01

    The basic reproduction number R0 is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of R0 where finitely many different categories of individuals are recognized. We clear up confusion

  16. Applications and Case Studies of the Next-Generation Sequencing Technologies in Food, Nutrition and Agriculture.

    Next-generation sequencing technologies are able to produce high-throughput short sequence reads in a cost-effective fashion. The emergence of these technologies has not only facilitated genome sequencing but also changed the landscape of life sciences. Here I survey their major applications ranging...

  17. Physical-Layer Design for Next-Generation Cellular Wireless Systems

    Foschini, Gerard J.; Huang, Howard C.; Mullender, Sape J.; Venkatesan, Sivarama; Viswanathan, Harish

The conventional cellular architecture will remain an integral part of next-generation wireless systems, providing high-speed packet data services directly to mobile users and also backhaul service for local area networks. In this paper, we present several proposals addressing the challenges

  18. Next-generation sequencing in NSCLC and melanoma patients : A cost and budget impact analysis

    Van Amerongen, Rosa A.; Retèl, Valesca P.; Coupé, Veerle M.H.; Nederlof, Petra M.; Vogel, Maartje J.; Van Harten, Wim H.

    2016-01-01

    Next-generation sequencing (NGS) has reached the molecular diagnostic laboratories. Although the NGS technology aims to improve the effectiveness of therapies by selecting the most promising therapy, concerns are that NGS testing is expensive and that the 'benefits' are not yet in relation to these

  19. In vitro bactericidal activity of aminoglycosides, including the next-generation drug plazomicin, against Brucella spp.

    Plazomicin is a next-generation aminoglycoside with a potentially improved safety profile compared to other aminoglycosides. This study assessed plazomicin MICs and MBCs in four Brucella spp. reference strains. Like other aminoglycosides and aminocyclitols, plazomicin MBC values equaled MIC values ...

  20. New long-range speed record with next-generation internet

    2003-01-01

    "Scientists at CERN and the California Institute of Technology have set a new Internet2 land speed record using the next-generation Internet protocol IPv6. The team sustained a single stream Transfer Control Protocol (TCP) rate of 983 megabits per second for more than one hour between CERN and Chicago, a distance of more than 7,000 kilometres" (1 page).

  1. Bringing Next-Generation Sequencing into the Classroom through a Comparison of Molecular Biology Techniques

    Bowling, Bethany; Zimmer, Erin; Pyatt, Robert E.

    2014-01-01

    Although the development of next-generation (NextGen) sequencing technologies has revolutionized genomic research and medicine, the incorporation of these topics into the classroom is challenging, given an implied high degree of technical complexity. We developed an easy-to-implement, interactive classroom activity investigating the similarities…

  2. Real-Time Optimization and Control of Next-Generation Distribution

Real-Time Optimization and Control of Next-Generation Distribution Infrastructure (NREL, Grid Modernization): developing a system-theoretic distribution network management framework that unifies real-time voltage and …

  3. Unbundling in Current Broadband and Next-Generation Ultra-Broadband Access Networks

    Gaudino, Roberto; Giuliano, Romeo; Mazzenga, Franco; Valcarenghi, Luca; Vatalaro, Francesco

    2014-05-01

This article overviews the methods that are currently under investigation for implementing multi-operator open-access/shared-access techniques in next-generation access ultra-broadband architectures, starting from the traditional "unbundling-of-the-local-loop" techniques implemented in legacy twisted-pair digital subscriber line access networks. A straightforward replication of these copper-based unbundling-of-the-local-loop techniques is usually not feasible on next-generation access networks, including fiber-to-the-home point-to-multipoint passive optical networks. To investigate this issue, the article first gives a concise description of traditional copper-based unbundling-of-the-local-loop solutions, then focuses on both next-generation access hybrid fiber-copper digital subscriber line fiber-to-the-cabinet scenarios and on fiber to the home by accounting for the mix of regulatory and technological reasons driving the next-generation access migration path, focusing mostly on the European situation.

  4. Estimation of allele frequency and association mapping using next-generation sequencing data

    Kim, Su Yeon; Lohmueller, Kirk E; Albrechtsen, Anders

    2011-01-01

    Estimation of allele frequency is of fundamental importance in population genetic analyses and in association mapping. In most studies using next-generation sequencing, a cost effective approach is to use medium or low-coverage data (e.g., frequency estimation...

  5. Next-generation sequencing for endocrine cancers: Recent advances and challenges.

    Suresh, Padmanaban S; Venkatesh, Thejaswini; Tsutsumi, Rie; Shetty, Abhishek

    2017-05-01

    Contemporary molecular biology research tools have enriched numerous areas of biomedical research that address challenging diseases, including endocrine cancers (pituitary, thyroid, parathyroid, adrenal, testicular, ovarian, and neuroendocrine cancers). These tools have placed several intriguing clues before the scientific community. Endocrine cancers pose a major challenge in health care and research despite considerable attempts by researchers to understand their etiology. Microarray analyses have provided gene signatures from many cells, tissues, and organs that can differentiate healthy states from diseased ones, and even show patterns that correlate with stages of a disease. Microarray data can also elucidate the responses of endocrine tumors to therapeutic treatments. The rapid progress in next-generation sequencing methods has overcome many of the initial challenges of these technologies, and their advantages over microarray techniques have enabled them to emerge as valuable aids for clinical research applications (prognosis, identification of drug targets, etc.). A comprehensive review describing the recent advances in next-generation sequencing methods and their application in the evaluation of endocrine and endocrine-related cancers is lacking. The main purpose of this review is to illustrate the concepts that collectively constitute our current view of the possibilities offered by next-generation sequencing technological platforms, challenges to relevant applications, and perspectives on the future of clinical genetic testing of patients with endocrine tumors. We focus on recent discoveries in the use of next-generation sequencing methods for clinical diagnosis of endocrine tumors in patients and conclude with a discussion on persisting challenges and future objectives.

  6. PheoSeq : A Targeted Next-Generation Sequencing Assay for Pheochromocytoma and Paraganglioma Diagnostics

    Currás-Freixes, Maria; Piñeiro-Yañez, Elena; Montero-Conde, Cristina; Apellániz-Ruiz, María; Calsina, Bruna; Mancikova, Veronika; Remacha, Laura; Richter, Susan; Ercolino, Tonino; Rogowski-Lehmann, Natalie; Deutschbein, Timo; Calatayud, María; Guadalix, Sonsoles; Álvarez-Escolá, Cristina; Lamas, Cristina; Aller, Javier; Sastre-Marcos, Julia; Lázaro, Conxi; Galofré, Juan C.; Patiño-García, Ana; Meoro-Avilés, Amparo; Balmaña-Gelpi, Judith; De Miguel-Novoa, Paz; Balbín, Milagros; Matías-Guiu, Xavier; Letón, Rocío; Inglada-Pérez, Lucía; Torres-Pérez, Rafael; Roldán-Romero, Juan M.; Rodríguez-Antona, Cristina; Fliedner, Stephanie M J; Opocher, Giuseppe; Pacak, Karel; Korpershoek, Esther; de Krijger, Ronald R.; Vroonen, Laurent; Mannelli, Massimo; Fassnacht, Martin; Beuschlein, Felix; Eisenhofer, Graeme; Cascón, Alberto; Al-Shahrour, Fátima; Robledo, Mercedes

    2017-01-01

    Genetic diagnosis is recommended for all pheochromocytoma and paraganglioma (PPGL) cases, as driver mutations are identified in approximately 80% of the cases. As the list of related genes expands, genetic diagnosis becomes more time-consuming, and targeted next-generation sequencing (NGS) has

  7. Cisco Networking Academy: Next-Generation Assessments and Their Implications for K-12 Education

    Liu, Meredith

    2014-01-01

    To illuminate the possibilities for next-generation assessments in K-12 schools, this case study profiles the Cisco Networking Academy, which creates comprehensive online training curriculum to teach networking skills. Since 1997, the Cisco Networking Academy has served more than five million high school and college students and now delivers…

  8. Precision Controlled Carbon Materials for Next-Generation Optoelectronic and Photonic Devices

    2018-01-08

engineer next-generation carbon-based optoelectronic and photonic devices with superior performance and capabilities. These devices include carbon … electronics; (4) nanostructured graphene plasmonics; and (5) polymer-nanotube conjugate chemistry. (1) Semiconducting carbon nanotube-based … applications (In Preparation, 2018). (5) Polymer-nanotube conjugate chemistry: conjugated polymers can be exploited as agents for selectively wrapping and

  9. Linear colliders - prospects 1985

    Rees, J.

    1985-06-01

    We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs

  10. The SLAC linear collider

    Richter, B.

    1985-01-01

A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and gives an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built

  11. Linear Programming (LP)

    Rogner, H.H.

    1989-01-01

    The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They consider a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig

  12. Racetrack linear accelerators

    Rowe, C.H.; Wilton, M.S. de.

    1979-01-01

    An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets at the end of each leg to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)

  13. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
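
The check-splitting construction can be illustrated on a protograph base matrix: one check node is split into two, and the two new checks are joined by a new degree-2 variable node, lowering the code rate. A minimal numpy sketch, under the assumption that the protograph is represented by its base matrix of edge multiplicities; the split of the edges between the two new checks is an arbitrary illustrative choice.

```python
import numpy as np

def split_check(base, check_idx):
    """Split check node `check_idx` of a protograph base matrix into two checks
    connected by a new degree-2 variable node (lowering the code rate)."""
    m, n = base.shape
    row = base[check_idx]
    # Divide the edges of the original check between the two new checks (arbitrary split).
    half = np.zeros_like(row)
    half[: n // 2] = row[: n // 2]
    other = row - half
    new = np.vstack([np.delete(base, check_idx, axis=0), half, other])
    # The new degree-2 variable node ties the two new checks together.
    col = np.zeros((new.shape[0], 1), dtype=base.dtype)
    col[-2:, 0] = 1
    return np.hstack([new, col])

# Example: a rate-1/2 base matrix whose variable-node degrees are all >= 3, as the construction assumes.
B = np.array([[2, 2, 1, 1],
              [1, 1, 2, 2]])
print(split_check(B, 0))   # 3 checks, 5 variable nodes -> rate drops from 1/2 to 2/5
```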

  14. Estimation of failure probabilities of linear dynamic systems by ...

    An iterative method for estimating the failure probability for certain time-variant reliability problems has been developed. In the paper, the focus is on the displacement response of a linear oscillator driven by white noise. Failure is then assumed to occur when the displacement response exceeds a critical threshold.
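
A Monte Carlo sketch of the problem set-up described here: a damped linear oscillator driven by white noise, with failure defined as the displacement exceeding a critical threshold within the observation window. Parameter values are illustrative, and plain Monte Carlo is used rather than the iterative estimator developed in the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper).
omega0, zeta = 2.0 * np.pi, 0.05      # natural frequency [rad/s], damping ratio
S0 = 0.1                              # white-noise intensity
dt, T, n_paths = 1e-3, 10.0, 2000
threshold = 0.8                       # critical displacement

rng = np.random.default_rng(1)
steps = int(T / dt)
x = np.zeros(n_paths)
v = np.zeros(n_paths)
failed = np.zeros(n_paths, dtype=bool)

# Euler-Maruyama integration of  x'' + 2*zeta*omega0*x' + omega0^2*x = w(t).
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    a = -2.0 * zeta * omega0 * v - omega0**2 * x
    x = x + v * dt
    v = v + a * dt + np.sqrt(2.0 * np.pi * S0) * dW
    failed |= np.abs(x) > threshold   # record first-passage of the critical threshold

print("estimated failure probability:", failed.mean())
```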

  15. Ultimate parameters of the photon collider at the international linear ...

… be achieved by adding more wigglers to the DRs; the incremental cost is easily … the above emittances, the limit on the effective horizontal β-function is about 5 mm [12] … coupling in γγ collisions just above the γγ → hh threshold [19] … [21] V I Telnov, talk at the ECFA Workshop on Linear Colliders, Montpellier, France, 12–

  16. Semidefinite linear complementarity problems

    Eckhardt, U.

    1978-04-01

    Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de

  17. Linear algebra done right

    Axler, Sheldon

    2015-01-01

    This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...

  18. Approach to DOE threshold guidance limits

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references

  19. Pion photoproduction on the nucleon at threshold

    Cheon, I.T.; Jeong, M.T.

    1989-08-01

    Electric dipole amplitudes of pion photoproduction on the nucleon at threshold have been calculated in the framework of the chiral bag model. Our results are in good agreement with the existing experimental data

  20. Effect of dissipation on dynamical fusion thresholds

    Sierk, A.J.

    1986-01-01

The existence of dynamical thresholds to fusion in heavy nuclei (A ≥ 200) due to the nature of the potential-energy surface is shown. These thresholds exist even in the absence of dissipative forces, due to the coupling between the various collective deformation degrees of freedom. Using a macroscopic model of nuclear shape dynamics, it is shown how three different suggested dissipation mechanisms increase by varying amounts the excitation energy over the one-dimensional barrier required to cause compound-nucleus formation. The recently introduced surface-plus-window dissipation may give a reasonable representation of experimental data on fusion thresholds, in addition to properly describing fission-fragment kinetic energies and isoscalar giant multipole widths. Scaling of threshold results to asymmetric systems is discussed. 48 refs., 10 figs

  1. 40 CFR 98.411 - Reporting threshold.

    2010-07-01

    ...) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.411 Reporting threshold. Any supplier of industrial greenhouse gases who meets the requirements of § 98.2(a)(4) must report GHG...

  2. Secure information management using linguistic threshold approach

    Ogiela, Marek R

    2013-01-01

    This book details linguistic threshold schemes for information sharing. It examines the opportunities of using these techniques to create new models of managing strategic information shared within a commercial organisation or a state institution.

  3. Robust Adaptive Thresholder For Document Scanning Applications

    Hsing, To R.

    1982-12-01

In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) a wide range of different color backgrounds, (2) density variations of printed text information, and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
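
A minimal sketch of a memory-type adaptive thresholder of the kind described: running black and white reference levels are updated as the scan proceeds, and the local threshold is kept between them. The update rule and constants are illustrative, not the algorithm from the paper.

```python
import numpy as np

def adaptive_threshold(scanline, alpha=0.05):
    """Binarize one scanline using running black/white reference levels."""
    black, white = float(scanline.min()), float(scanline.max())
    out = np.zeros_like(scanline, dtype=np.uint8)
    for i, p in enumerate(scanline):
        thresh = 0.5 * (black + white)          # local threshold between the two references
        out[i] = 255 if p > thresh else 0
        # Memory-type update: pull the matching reference level toward the current pixel.
        if p > thresh:
            white = (1 - alpha) * white + alpha * p
        else:
            black = (1 - alpha) * black + alpha * p
    return out

line = np.array([200, 198, 60, 55, 190, 180, 40, 210], dtype=float)
print(adaptive_threshold(line))
```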

  4. Recent progress in understanding climate thresholds

    Good, Peter; Bamber, Jonathan; Halladay, Kate; Harper, Anna B.; Jackson, Laura C.; Kay, Gillian; Kruijt, Bart; Lowe, Jason A.; Phillips, Oliver L.; Ridley, Jeff; Srokosz, Meric; Turley, Carol; Williamson, Phillip

    2018-01-01

    This article reviews recent scientific progress, relating to four major systems that could exhibit threshold behaviour: ice sheets, the Atlantic meridional overturning circulation (AMOC), tropical forests and ecosystem responses to ocean acidification. The focus is on advances since the

  5. Verifiable Secret Redistribution for Threshold Sharing Schemes

    Wong, Theodore M; Wang, Chenxi; Wing, Jeannette M

    2002-01-01

    .... Our protocol guards against dynamic adversaries. We observe that existing protocols either cannot be readily extended to allow redistribution between different threshold schemes, or have vulnerabilities that allow faulty old shareholders...

  6. Handbook on linear motor application

    1988-10-01

This book is a guide to the application of linear motors. It covers the classification and special features of linear motors; terminology for the linear induction motor; the operating principle of the motor; types of one-sided linear induction motors; the bilateral linear induction motor; the basics of the linear DC motor; the moving-coil linear DC motor; the permanent-magnet moving-type linear DC motor; the non-utility-electricity-type linear DC motor; the variable linear pulse motor; the permanent-magnet-type linear pulse motor; the linear vibration actuator; the moving-coil linear vibration actuator; the linear synchronous motor; the linear electromagnetic motor; the linear electromagnetic solenoid; technical organization; and magnetic levitation, linear motors and sensors.

  7. Noise thresholds for optical quantum computers.

    Dawson, Christopher M; Haselgrove, Henry L; Nielsen, Michael A

    2006-01-20

In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities < 3 × 10⁻³, and for depolarization probabilities < 10⁻⁴.

  8. Design of Threshold Controller Based Chaotic Circuits

    Mohamed, I. Raja; Murali, K.; Sinha, Sudeshna

    2010-01-01

We propose a very simple implementation of a second-order nonautonomous chaotic oscillator, using a threshold controller as the only source of nonlinearity. We demonstrate the efficacy and simplicity of our design through numerical and experimental results. Further, we show that this approach of using a threshold controller as a nonlinear element can be extended to obtain autonomous and multiscroll chaotic attractor circuits as well.

  9. A New Wavelet Threshold Function and Denoising Application

    Lu Jing-yi

    2016-01-01

Full Text Available In order to improve denoising performance, this paper introduces the basic principles of wavelet threshold denoising and the traditional threshold functions, and proposes an improved wavelet threshold function together with an improved fixed-threshold formula. First, the paper examines the problems of the traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the number of wavelet decomposition levels to design a new fixed-threshold formula. Finally, the paper uses the hard threshold, soft threshold, Garrote threshold, and improved threshold functions to denoise different signals, and computes the signal-to-noise ratio (SNR) and mean square error (MSE) obtained after denoising with each of them. Theoretical analysis and experimental results showed that the proposed approach addresses the constant bias of the soft threshold function and the discontinuity of the hard threshold function, mitigates the problem of applying the same threshold value at different decomposition scales, effectively filters the noise in the signals, and improves the SNR and reduces the MSE of the output signals.
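
For reference, the standard hard, soft and Garrote threshold rules mentioned above can be sketched as follows; the paper's improved threshold function itself is not reproduced here, and the universal-style threshold and toy signal are illustrative assumptions (a real pipeline would threshold wavelet coefficients rather than raw samples).

```python
import numpy as np

def hard(w, t):
    return np.where(np.abs(w) > t, w, 0.0)

def soft(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote(w, t):
    # Non-negative garrote: shrinks large coefficients less than soft thresholding does.
    return np.where(np.abs(w) > t, w - t**2 / np.where(w == 0, 1, w), 0.0)

def snr(clean, denoised):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - denoised)**2))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
t = 0.3 * np.sqrt(2 * np.log(clean.size))       # universal-threshold-style choice

for name, fn in [("hard", hard), ("soft", soft), ("garrote", garrote)]:
    d = fn(noisy, t)
    print(name, "SNR =", round(snr(clean, d), 2), "MSE =", round(np.mean((clean - d) ** 2), 4))
```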

  10. Effects of programming threshold and maplaw settings on acoustic thresholds and speech discrimination with the MED-EL COMBI 40+ cochlear implant.

    Boyd, Paul J

    2006-12-01

The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("Programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation" which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical

  11. Linear ubiquitination in immunity.

    Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning

    2015-07-01

Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  12. Automating linear accelerator quality assurance

    Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.

    2015-01-01

    Purpose: The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish as a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The

  13. Automating linear accelerator quality assurance

    Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M., E-mail: jmmoran@med.umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109-5010 (United States); Al-Hallaq, Hania; Farrey, Karl [Department of Radiation Oncology and Cellular Oncology, The University of Chicago, Chicago, Illinois 60637 (United States); Ritter, Timothy [Ann Arbor VA Medical Center, Ann Arbor, Michigan 48109 (United States); DeMarco, John [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California, 90048 (United States); Pawlicki, Todd; Kim, Gwe-Ya [UCSD Medical Center, La Jolla, California 92093 (United States); Popple, Richard [Department of Radiation Oncology, University of Alabama Birmingham, Birmingham, Alabama 35249 (United States); Sharma, Vijeshwar; Park, SungYong [Karmanos Cancer Institute, McLaren-Flint, Flint, Michigan 48532 (United States); Perez, Mario; Booth, Jeremy T. [Royal North Shore Hospital, Sydney, NSW 2065 (Australia)

    2015-10-15

    Purpose: The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish as a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The

  14. Linearizing W-algebras

    Krivonos, S.O.; Sorin, A.S.

    1994-06-01

We show that the Zamolodchikov and Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into some linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs

  15. Matrices and linear algebra

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  16. Linearity in Process Languages

    Nygaard, Mikkel; Winskel, Glynn

    2002-01-01

The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.

  17. Elements of linear space

    Amir-Moez, A R; Sneddon, I N

    1962-01-01

    Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a

  18. Applied linear regression

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  19. Universal squash model for optical communications using linear optics and threshold detectors

    Fung, Chi-Hang Fred; Chau, H. F.; Lo, Hoi-Kwong

    2011-01-01

    Transmission of photons through open-air or optical fibers is an important primitive in quantum-information processing. Theoretical descriptions of this process often consider single photons as information carriers and thus fail to accurately describe experimental implementations where any number of photons may enter a detector. It has been a great challenge to bridge this big gap between theory and experiments. One powerful method for achieving this goal is by conceptually squashing the received multiphoton states to single-photon states. However, until now, only a few protocols admit a squash model; furthermore, a recently proven no-go theorem appears to rule out the existence of a universal squash model. Here we show that a necessary condition presumed by all existing squash models is in fact too stringent. By relaxing this condition, we find that, rather surprisingly, a universal squash model actually exists for many protocols, including quantum key distribution, quantum state tomography, Bell's inequality testing, and entanglement verification.

  20. Common misinterpretations of the 'linear, no-threshold' relationship used in radiation protection

    Bond, V.P.; Sondhaus, C.A.

    1987-01-01

Absorbed dose D is shown to be a composite variable, the product of the fraction of cells hit (I_H) and the mean ''dose'' (hit size) z̄ to those cells. D is suitable for use with high level exposure (HLE) to radiation and its resulting acute organ effects because, since I_H = 1.0, it approximates closely enough the mean energy density in the cell as well as in the organ. However, with low level exposure (LLE) to radiation and its consequent probability of cancer induction from a single cell, the stochastic delivery of energy to cells results in a wide distribution of hit sizes z, and the expected mean value, z̄, is constant with exposure. Thus, with LLE, only I_H varies with D, so that the apparent proportionality between ''dose'' and the fraction of cells transformed is misleading. This proportionality therefore does not mean that any (cell) dose, no matter how small, can be lethal. Rather, it means that, in the exposure of a population of individual organisms consisting of the constituent relevant cells, there is a small probability of particle-cell interactions which transfer energy. The probability of a cell transforming and initiating a cancer can only be greater than zero if the hit size (''dose'') to the cell is large enough. Otherwise stated, if the ''dose'' is defined at the proper level of biological organization, namely, the cell and not the organ, only a large dose z to that cell is effective. (orig.)
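
The dose decomposition argued above can be written compactly, in the abstract's own notation (a sketch, with z̄ the mean hit size to the cells that are hit):

```latex
% Absorbed dose as a composite variable
D = I_H \, \bar{z},
\qquad
\text{LLE: } \bar{z} \approx \text{const} \;\Rightarrow\; D \propto I_H,
\qquad
\text{HLE: } I_H = 1 \;\Rightarrow\; D \approx \bar{z}.
```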

  1. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² … for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
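
A sketch of the kind of model comparison described, assuming a count-type endpoint such as the number of failed oestrus events: a Gaussian (linear) fit versus a Poisson GLM on the same design matrix, using statsmodels. The data and the single predictor are simulated placeholders, not the genetic models of the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
herd_year = rng.normal(size=n)                  # stand-in for a herd-year environment covariate
eta = 0.2 + 0.6 * herd_year
failed_oestrus = rng.poisson(np.exp(eta))       # simulated count endpoint (FE-like)

X = sm.add_constant(herd_year)
gaussian_fit = sm.GLM(failed_oestrus, X, family=sm.families.Gaussian()).fit()
poisson_fit  = sm.GLM(failed_oestrus, X, family=sm.families.Poisson()).fit()

# A lower AIC suggests the count (non-linear) model describes the endpoint better.
print("Gaussian AIC:", round(gaussian_fit.aic, 1))
print("Poisson  AIC:", round(poisson_fit.aic, 1))
```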

  2. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
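
Density matrix purification with a numerical threshold on matrix elements can be illustrated with the classic McWeeny iteration; this is a generic sketch of the technique named in the abstract, not the specific purification scheme used in the paper, and the dense numpy arrays stand in for the sparse matrix algebra that gives the linear scaling.

```python
import numpy as np

def mcweeny_purify(F, n_iter=50, drop_tol=1e-6):
    """Drive a trial density matrix toward idempotency with McWeeny iterations,
    dropping tiny elements after each step (the 'numerical threshold' that keeps
    the matrix sparse in a linear-scaling implementation)."""
    # Initial guess: map the spectrum of F linearly into [0, 1] (low-energy states near 1).
    eigs = np.linalg.eigvalsh(F)
    e_min, e_max = eigs[0], eigs[-1]
    P = (e_max * np.eye(F.shape[0]) - F) / (e_max - e_min)
    for _ in range(n_iter):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P              # McWeeny step: P <- 3P^2 - 2P^3
        P[np.abs(P) < drop_tol] = 0.0            # thresholding small elements enables sparsity
    return P

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
H = 0.5 * (A + A.T)                              # toy symmetric "Hamiltonian"
P = mcweeny_purify(H)
print("idempotency error:", np.linalg.norm(P @ P - P))
```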

  3. Linear study of the precessional fishbone instability

    Idouakass, M.; Faganello, M.; Berk, H. L.; Garbet, X.; Benkadda, S.

    2016-10-01

    The precessional fishbone instability is an m = n = 1 internal kink mode destabilized by a population of trapped energetic particles. The linear phase of this instability is studied here, analytically and numerically, with a simplified model. This model uses the reduced magneto-hydrodynamics equations for the bulk plasma and the Vlasov equation for a population of energetic particles with a radially decreasing density. A threshold condition for the instability is found, as well as a linear growth rate and frequency. It is shown that the mode frequency is given by the precession frequency of the deeply trapped energetic particles at the position of strongest radial gradient. The growth rate is shown to scale with the energetic particle density and particle energy while it is decreased by continuum damping.

  4. RF power generation for future linear colliders

    Fowkes, W.R.; Allen, M.A.; Callin, R.S.; Caryotakis, G.; Eppley, K.R.; Fant, K.S.; Farkas, Z.D.; Feinstein, J.; Ko, K.; Koontz, R.F.; Kroll, N.; Lavine, T.L.; Lee, T.G.; Miller, R.H.; Pearson, C.; Spalek, G.; Vlieks, A.E.; Wilson, P.B.

    1990-06-01

    The next linear collider will require 200 MW of rf power per meter of linac structure at relatively high frequency to produce an accelerating gradient of about 100 MV/m. The higher frequencies result in a higher breakdown threshold in the accelerating structure hence permit higher accelerating gradients per meter of linac. The lower frequencies have the advantage that high peak power rf sources can be realized. 11.42 GHz appears to be a good compromise and the effort at the Stanford Linear Accelerator Center (SLAC) is being concentrated on rf sources operating at this frequency. The filling time of the accelerating structure for each rf feed is expected to be about 80 ns. Under serious consideration at SLAC is a conventional klystron followed by a multistage rf pulse compression system, and the Crossed-Field Amplifier. These are discussed in this paper

  5. Linear system theory

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.

  6. Representation of dynamical stimuli in populations of threshold neurons.

    Tatjana Tchumatchenko

    2011-10-01

    Full Text Available Many sensory or cognitive events are associated with dynamic current modulations in cortical neurons. This raises an urgent demand for tractable model approaches addressing the merits and limits of potential encoding strategies. Yet, current theoretical approaches addressing the response to mean- and variance-encoded stimuli rarely provide complete response functions for both modes of encoding in the presence of correlated noise. Here, we investigate the neuronal population response to dynamical modifications of the mean or variance of the synaptic bombardment using an alternative threshold model framework. In the variance and mean channel, we provide explicit expressions for the linear and non-linear frequency response functions in the presence of correlated noise and use them to derive population rate response to step-like stimuli. For mean-encoded signals, we find that the complete response function depends only on the temporal width of the input correlation function, but not on other functional specifics. Furthermore, we show that both mean- and variance-encoded signals can relay high-frequency inputs, and in both schemes step-like changes can be detected instantaneously. Finally, we obtain the pairwise spike correlation function and the spike triggered average from the linear mean-evoked response function. These results provide a maximally tractable limiting case that complements and extends previous results obtained in the integrate and fire framework.

  7. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    Lori Townsend

    2016-06-01

    Full Text Available This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  8. QRS Detection Based on Improved Adaptive Threshold

    Xuanyu Lu

    2018-01-01

Full Text Available Cardiovascular disease is the leading cause of death around the world. In accomplishing quick and accurate diagnosis, automatic electrocardiogram (ECG) analysis algorithms play an important role, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In this mobile era, the threshold algorithm can be easily ported to portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detection. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate the superiority of the proposed method. The suspicious abnormal area is indicated at the end of the algorithm, and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
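
A minimal sketch of the generic three-step pipeline named above (preprocessing, peak finding, adaptive thresholding); the specific improved threshold rule and constants from the paper are not reproduced, so the update rule, constants, and synthetic signal here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_qrs(ecg, fs=360):
    """Tiny QRS detector: derivative-square-integrate preprocessing, peak candidates,
    then an adaptive threshold updated from each accepted peak."""
    # Preprocessing: differentiate, square, moving-window integration (~150 ms window).
    diff = np.diff(ecg, prepend=ecg[0])
    win = int(0.15 * fs)
    energy = np.convolve(diff ** 2, np.ones(win) / win, mode="same")

    peaks, _ = find_peaks(energy, distance=int(0.2 * fs))   # ~200 ms refractory period
    thresh = 0.5 * energy[peaks].max() if len(peaks) else 0.0
    qrs = []
    for p in peaks:
        if energy[p] > thresh:
            qrs.append(p)
            # Adaptive update: move the threshold toward a fraction of the latest peak energy.
            thresh = 0.875 * thresh + 0.125 * 0.5 * energy[p]
    return np.array(qrs)

# Synthetic test signal: one crude "R peak" per second on top of noise.
fs = 360
ecg = 0.05 * np.random.default_rng(0).normal(size=5 * fs)
ecg[np.arange(5) * fs + fs // 2] += 1.0
print("detected QRS sample indices:", detect_qrs(ecg, fs))
```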

  9. Cost-effectiveness thresholds: pros and cons.

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.

  10. At-Risk-of-Poverty Threshold

    Táňa Dvornáková

    2012-06-01

    Full Text Available European Statistics on Income and Living Conditions (EU-SILC) is a survey on households’ living conditions. The main aim of the survey is to obtain long-term comparable data on the social and economic situation of households. Data collected in the survey are used mainly in connection with the evaluation of income poverty and the determination of the at-risk-of-poverty rate. This article deals with the calculation of the at-risk-of-poverty threshold based on data from EU-SILC 2009. The main task is to compare two approaches to the computation of the at-risk-of-poverty threshold. The first approach is based on the calculation of the threshold for each country separately, while the second is based on the calculation of the threshold for all states together. The introduction summarizes common elements in the calculation of the at-risk-of-poverty threshold, such as disposable household income and equivalised household income. Further, the different approaches to both calculations are introduced, and the advantages and disadvantages of these approaches are stated. Finally, the at-risk-of-poverty rate calculation is described and a comparison of the at-risk-of-poverty rates based on these two different approaches is made.
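
    As a small illustration of the two approaches being compared, the sketch below applies the conventional EU-SILC rule (the threshold is 60% of the median equivalised disposable income) first per country and then to the pooled data; the data frame and its column names are illustrative assumptions, not EU-SILC 2009 data.

```python
# Sketch of the two threshold computations compared in the article, using the
# conventional EU-SILC rule (60% of the median equivalised disposable income).
import pandas as pd

def poverty_threshold(equivalised_income: pd.Series) -> float:
    return 0.6 * equivalised_income.median()

df = pd.DataFrame({
    "country": ["CZ", "CZ", "SK", "SK"],
    "eq_income": [9000.0, 12000.0, 7000.0, 10500.0],   # equivalised income per person
})

# Approach 1: a separate threshold for each country.
per_country = df.groupby("country")["eq_income"].apply(poverty_threshold)

# Approach 2: one common threshold for all countries pooled together.
pooled = poverty_threshold(df["eq_income"])
print(per_country, pooled)
```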

  11. Threshold concepts in finance: student perspectives

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  12. The influence of thresholds on the risk assessment of carcinogens in food.

    Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian

    2009-08-01

    The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low as Reasonably Achievable) principle, however ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment. However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer
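
    For reference, the Margin of Exposure (MOE) approach mentioned above compares a reference point on the animal dose-response curve, typically the BMDL10 (the lower confidence bound of the benchmark dose giving a 10% extra tumour incidence), with the estimated human exposure; EFSA has indicated that an MOE of 10 000 or more, when based on a BMDL10 from animal data, would be of low concern from a public-health point of view. The numerical example below is purely illustrative.

```latex
\mathrm{MOE} \;=\; \frac{\mathrm{BMDL}_{10}}{\text{estimated human exposure}},
\qquad \text{e.g.}\quad
\mathrm{MOE} = \frac{0.3\ \mathrm{mg\,kg^{-1}\,day^{-1}}}{2\times 10^{-5}\ \mathrm{mg\,kg^{-1}\,day^{-1}}} = 15\,000 .
```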

  13. Protograph based LDPC codes with minimum distance linearly growing with block size

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
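
    The sketch below illustrates the basic protograph ("copy-and-permute") construction in its simplest circulant-lifting form: each base-matrix entry is replaced by a sum of randomly shifted N×N circulant permutation matrices. It is a generic illustration with an arbitrary toy base matrix, not one of the precoded constructions proposed in the paper.

```python
# Illustrative protograph lifting: expand a small base matrix into a full
# parity-check matrix using random circulant permutation blocks.
import numpy as np

def lift_protograph(base, N, rng):
    """base[i, j] = number of parallel edges between check i and variable j."""
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=np.uint8)
    I = np.eye(N, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            block = np.zeros((N, N), dtype=np.uint8)
            for _ in range(int(base[i, j])):
                shift = rng.integers(N)                 # random circulant shift
                block = (block + np.roll(I, shift, axis=1)) % 2
            H[i*N:(i+1)*N, j*N:(j+1)*N] = block
    return H

rng = np.random.default_rng(0)
base = np.array([[1, 2, 1],
                 [1, 1, 2]])        # toy protograph; first column is a degree-2 variable node
H = lift_protograph(base, N=8, rng=rng)
print(H.shape)                      # (16, 24): nominal rate 1/3 before any puncturing
```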

  14. Polymer translocation under a pulling force: Scaling arguments and threshold forces

    Menais, Timothée

    2018-02-01

    DNA translocation through nanopores is one of the most promising strategies for next-generation sequencing technologies. Most experimental and numerical works have focused on polymer translocation biased by electrophoresis, where a pulling force acts on the polymer within the nanopore. An alternative strategy, however, is emerging, which uses optical or magnetic tweezers. In this case, the pulling force is exerted directly at one end of the polymer, which strongly modifies the translocation process. In this paper, we report numerical simulations of both linear and structured (mimicking DNA) polymer models, simple enough to allow for a statistical treatment of the pore structure effects on the translocation time probability distributions. Based on extremely extended computer simulation data, we (i) propose scaling arguments for an extension of the predicted translocation times τ ∼ N²F⁻¹ over the moderate-force range and (ii) analyze the effect of pore size and polymer structuration on translocation times τ.

  15. White Light Generation and Anisotropic Damage in Gold Films near Percolation Threshold

    Novikov, Sergey M.; Frydendahl, Christian; Beermann, Jonas

    2017-01-01

    in vanishingly small gaps between gold islands in thin films near the electrically determined percolation threshold. Optical explorations using two-photon luminescence (TPL) and near-field microscopies reveals supercubic TPL power dependencies with white-light spectra, establishing unequivocally...... that the strongest TPL signals are generated close to the percolation threshold films, and occurrence of extremely confined (similar to 30 nm) and strongly enhanced (similar to 100 times) fields at the illumination wavelength. For linearly polarized and sufficiently powerful light, we observe pronounced optical...

  16. Spectral singularities, threshold gain, and output intensity for a slab laser with mirrors

    Doğan, Keremcan; Mostafazadeh, Ali; Sarısaman, Mustafa

    2018-05-01

    We explore the consequences of the emergence of linear and nonlinear spectral singularities in TE modes of a homogeneous slab of active optical material that is placed between two mirrors. We use the results together with two basic postulates regarding the behavior of laser light emission to derive explicit expressions for the laser threshold condition and output intensity for these modes of the slab and discuss their physical implications. In particular, we reveal the details of the dependence of the threshold gain and output intensity on the position and properties of the mirrors and on the real part of the refractive index of the gain material.

  17. A Search for Laser Emission with Megawatt Thresholds from 5600 FGKM Stars

    Tellis, Nathaniel K.; Marcy, Geoffrey W., E-mail: Nate.tellis@gmail.com [Astronomy Department, University of California, Berkeley, CA 94720 (United States)

    2017-06-01

    We searched high-resolution spectra of 5600 nearby stars for emission lines that are both inconsistent with a natural origin and unresolved spatially, as would be expected from extraterrestrial optical lasers. The spectra were obtained with the Keck 10 m telescope, including light coming from within 0.5 arcsec of the star, corresponding typically to within a few to tens of astronomical units of the star, and covering nearly the entire visible wavelength range from 3640 to 7890 Å. We establish detection thresholds by injecting synthetic laser emission lines into our spectra and blindly analyzing them for detections. We compute flux density detection thresholds for all wavelengths and spectral types sampled. Our detection thresholds for the power of the lasers themselves range from 3 kW to 13 MW, independent of distance to the star but dependent on the competing “glare” of the spectral energy distribution of the star and on the wavelength of the laser light, launched from a benchmark, diffraction-limited 10 m class telescope. We found no such laser emission coming from the planetary region around any of the 5600 stars. Because they contain roughly 2000 lukewarm, Earth-size planets, we rule out models of the Milky Way in which over 0.1% of warm, Earth-size planets harbor technological civilizations that, intentionally or not, are beaming optical lasers toward us. A next-generation spectroscopic laser search will be done by the Breakthrough Listen initiative, targeting more stars, especially stellar types overlooked here including spectral types O, B, A, early F, late M, and brown dwarfs, and astrophysical exotica.

  18. A Search for Laser Emission with Megawatt Thresholds from 5600 FGKM Stars

    Tellis, Nathaniel K.; Marcy, Geoffrey W.

    2017-06-01

    We searched high-resolution spectra of 5600 nearby stars for emission lines that are both inconsistent with a natural origin and unresolved spatially, as would be expected from extraterrestrial optical lasers. The spectra were obtained with the Keck 10 m telescope, including light coming from within 0.5 arcsec of the star, corresponding typically to within a few to tens of astronomical units of the star, and covering nearly the entire visible wavelength range from 3640 to 7890 Å. We establish detection thresholds by injecting synthetic laser emission lines into our spectra and blindly analyzing them for detections. We compute flux density detection thresholds for all wavelengths and spectral types sampled. Our detection thresholds for the power of the lasers themselves range from 3 kW to 13 MW, independent of distance to the star but dependent on the competing “glare” of the spectral energy distribution of the star and on the wavelength of the laser light, launched from a benchmark, diffraction-limited 10 m class telescope. We found no such laser emission coming from the planetary region around any of the 5600 stars. Because they contain roughly 2000 lukewarm, Earth-size planets, we rule out models of the Milky Way in which over 0.1% of warm, Earth-size planets harbor technological civilizations that, intentionally or not, are beaming optical lasers toward us. A next-generation spectroscopic laser search will be done by the Breakthrough Listen initiative, targeting more stars, especially stellar types overlooked here including spectral types O, B, A, early F, late M, and brown dwarfs, and astrophysical exotica.

  19. Near-threshold fatigue crack growth behavior of AISI 316 stainless steel

    Tobler, R.L.

    1986-01-01

    The near-threshold fatigue behavior of an AISI 316 alloy was characterized using a newly developed, fully automatic fatigue test apparatus. Significant differences in the near-threshold behavior at temperatures of 295 and 4 K are observed. At 295 K, where the operationally defined threshold at 10^-10 m/cycle is insensitive to stress ratio and strongly affected by crack closure, the effective threshold stress intensity factor (ΔK_Th)_eff is about 4.65 MPa·m^1/2 at R = 0.3. At 4 K, the threshold is higher, crack closure is less pronounced, and there is a stress-ratio dependency: (ΔK_Th)_eff is 5.1 MPa·m^1/2 at R = 0.3 and 6.1 MPa·m^1/2 at R = 0.1. There is also a significant difference in the form of the da/dN-versus-ΔK curves on log-log coordinates: at 4 K the curve has the expected sigmoidal shape, but at 295 K the trend is linear over the region of da/dN from 10^-7 to 10^-10 m/cycle. Other results suggest that the near-threshold measurements of a 6.4-mm-thick specimen of this alloy are insensitive to cyclic test frequency below 40 Hz

  20. Progress towards the design of a next linear collider

    Ruth, R.D.

    1990-06-01

    The purpose of this paper is to review the ongoing research at SLAC toward the design of a next-generation linear collider (NLC). The energy of the collider is taken to be 0.5 TeV in the CM with a view towards upgrading to 1.0 TeV. The luminosity is in the range of 10^33 to 10^34 cm^-2 s^-1. The energy is achieved by acceleration with a gradient about a factor of five higher than the SLC, which yields a linear collider approximately twice as long as the SLC. The detailed trade-off between length and acceleration will be based on total cost; a very broad optimum occurs when the total linear cost equals the total cost of RF power. The luminosity of the linear collider is obtained basically in two ways. First, the cross-sectional area of the beam is decreased, primarily by decreasing the vertical size. This creates a flat beam and is useful for controlling beamstrahlung. Secondly, several bunches (∼10) are accelerated on each RF fill in order to extract energy from the RF structure more efficiently. This effectively increases the repetition rate by an order of magnitude. In the next several sections, we trace the beam through the collider to review the research program at SLAC. 41 refs., 1 fig

  1. Compartmentalization in environmental science and the perversion of multiple thresholds

    Burkart, W. [Institute of Radiation Hygiene of the Federal Office for Radiation Protection, Ingolstaedter Landstr. 1, D 85716 Oberschleissheim, Muenchen (Germany)

    2000-04-17

    Nature and living organisms are separated into compartments. The self-assembly of phospholipid micelles was as fundamental to the emergence of life and evolution as the formation of DNA precursors and their self-replication. Also, modern science owes much of its success to the study of single compartments, the dissection of complex structures and event chains into smaller study objects which can be manipulated with a set of more and more sophisticated equipment. However, in environmental science, these insights are obtained at a price: firstly, it is difficult to recognize, let alone to take into account what is lost during fragmentation and dissection; and secondly, artificial compartments such as scientific disciplines become self-sustaining, leading to new and unnecessary boundaries, subtly framing scientific culture and impeding progress in holistic understanding. The long-standing but fruitless quest to define dose-effect relationships and thresholds for single toxic agents in our environment is a central part of the problem. Debating single-agent toxicity in splendid isolation is deeply flawed in view of a modern world where people are exposed to low levels of a multitude of genotoxic and non-genotoxic agents. Its potential danger lies in the unwarranted postulation of separate thresholds for agents with similar action. A unifying concept involving toxicology and radiation biology is needed for a full mechanistic assessment of environmental health risks. The threat of synergism may be less than expected, but this may also hold for the safety margin commonly thought to be a consequence of linear no-threshold dose-effect relationship assumptions.

  2. Stimulated Brillouin scattering threshold in fiber amplifiers

    Liang Liping; Chang Liping

    2011-01-01

    Based on wave-coupling theory and an evolution model of the critical pump power (or Brillouin threshold) for stimulated Brillouin scattering (SBS) in double-clad fiber amplifiers, the influence of signal bandwidth, fiber-core diameter, and amplifier gain on the SBS threshold is simulated theoretically. Experimental measurements of SBS are presented for ytterbium-doped double-clad fiber amplifiers with single-frequency, hundred-nanosecond pulse amplification. For different input signal pulses, distortion of the forward amplified pulse is observed when the pulse energy reaches 660 nJ and the peak power reaches 3.3 W for amplification of 200 ns pulses at a repetition rate of 1 Hz, and a narrow backward SBS pulse appears; this pulse peak power corresponds to the SBS threshold. Good agreement is shown between the modeled and experimental data. (authors)
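
    For orientation, the widely quoted Smith rule of thumb P_th ≈ 21·K·A_eff/(g_B·L_eff) gives the order of magnitude of the SBS critical pump power in a passive fiber; it is not the amplifier evolution model used in the paper, and all parameter values below are illustrative assumptions.

```python
# Rough SBS threshold estimate from the Smith criterion (passive-fiber rule of
# thumb, not the amplifier gain model solved in the article).
import numpy as np

g_B   = 5e-11          # Brillouin gain coefficient, m/W (narrow-linewidth value)
A_eff = 3.0e-10        # effective mode area, m^2 (~20 um mode-field diameter)
alpha = 1.15e-3        # fiber loss, 1/m (~5 dB/km)
L     = 10.0           # fiber length, m
K     = 2.0            # polarization factor (1 if polarization is maintained)

L_eff = (1 - np.exp(-alpha * L)) / alpha           # effective interaction length
P_th  = 21 * K * A_eff / (g_B * L_eff)             # Smith criterion
print(f"L_eff = {L_eff:.2f} m, SBS threshold ~ {P_th:.1f} W")
```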

  3. Threshold Theory Tested in an Organizational Setting

    Christensen, Bo T.; Hartmann, Peter V. W.; Hedegaard Rasmussen, Thomas

    2017-01-01

    A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative potential, but above this cutoff point, there is no correlation. Support for the threshold theory of creativity was found, in that the correlation between IQ and innovativeness was positive and significant below a cutoff point of IQ 120. Above the cutoff, no significant relation was identified, and the two correlations differed significantly. The finding was stable across distinct parts of the sample, providing support for the theory, although the correlations in all subsamples were small. The findings lend support to the existence of threshold effects using perceptual measures of behavior in real...
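
    The segmented analysis described above can be illustrated as below: correlate IQ with an innovativeness rating separately below and above the IQ-120 cutoff and compare the two correlations with a Fisher r-to-z test. The data are synthetic stand-ins, not the leader sample used in the study.

```python
# Illustrative threshold-theory check on synthetic data: split at IQ 120,
# compute the two correlations, and test whether they differ.
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(1)
iq = rng.normal(115, 12, 4000)
# Synthetic rating whose dependence on IQ flattens above the cutoff.
innov = 0.02 * np.clip(iq, None, 120) + rng.normal(0, 1, iq.size)

below, above = iq < 120, iq >= 120
r1, _ = pearsonr(iq[below], innov[below])
r2, _ = pearsonr(iq[above], innov[above])

# Fisher r-to-z transform to test whether the two correlations differ.
z1, z2 = np.arctanh(r1), np.arctanh(r2)
se = np.sqrt(1 / (below.sum() - 3) + 1 / (above.sum() - 3))
p = 2 * norm.sf(abs(z1 - z2) / se)
print(f"r(below)={r1:.2f}, r(above)={r2:.2f}, p_diff={p:.3g}")
```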

  4. Effects of pulse duration on magnetostimulation thresholds

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey); National Magnetic Resonance Research Center (UMRAM), Bilkent University, Bilkent, Ankara 06800 (Turkey); Goodwill, Patrick W. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Conolly, Steven M. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of EECS, University of California, Berkeley, California 94720-1762 (United States)

    2015-06-15

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations
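
    The threshold-estimation step (fitting a sigmoid to the subject responses and reading off its midpoint) might look like the sketch below; the amplitudes and response fractions are illustrative, not the study's data.

```python
# Sketch of threshold estimation: fit a sigmoid to stimulation reports versus
# field amplitude and take the 50% point as the magnetostimulation threshold.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(b, b50, k):
    return 1.0 / (1.0 + np.exp(-(b - b50) / k))

amplitude = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # field amplitude, mT
reported  = np.array([0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0])   # fraction reporting stimulation

(b50, k), _ = curve_fit(sigmoid, amplitude, reported, p0=[5.0, 1.0])
print(f"estimated magnetostimulation threshold ~ {b50:.2f} mT")
```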

  5. THRESHOLD PARAMETER OF THE EXPECTED LOSSES

    Josip Arnerić

    2012-12-01

    Full Text Available The objective of extreme value analysis is to quantify the probabilistic behavior of unusually large losses using only the extreme values above some high threshold rather than all of the data, which gives a better fit to the tail distribution than traditional methods that assume normality. In our case we estimate market risk using daily returns of the CROBEX index at the Zagreb Stock Exchange. It is therefore necessary to model the excess distribution above some threshold; the Generalized Pareto Distribution (GPD) is used, as it is much more reliable than the normal distribution because it places the emphasis on the extreme values. The parameters of the GPD are estimated using the maximum likelihood method (MLE). The contribution of this paper is to specify a threshold that is large enough for the GPD approximation to be valid but low enough that a sufficient number of observations are available for a precise fit.
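
    A compact sketch of the peaks-over-threshold procedure described above is given below: choose a high threshold, fit a GPD to the excesses by maximum likelihood, and read off a tail quantile. The loss series is a synthetic placeholder for the CROBEX returns, and the 95th-percentile threshold choice is an assumption.

```python
# Peaks-over-threshold sketch: GPD fitted by MLE to losses above a threshold u.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
losses = -rng.standard_t(df=4, size=2500) * 0.01    # synthetic daily losses (positive = loss)

u = np.quantile(losses, 0.95)                       # high threshold, ~5% exceedances
excesses = losses[losses > u] - u

shape, loc, scale = genpareto.fit(excesses, floc=0) # MLE with location fixed at 0
zeta_u = excesses.size / losses.size                # exceedance probability
var99 = u + genpareto.ppf(1 - 0.01 / zeta_u, shape, loc=0, scale=scale)  # 99% VaR from GPD tail
print(f"u={u:.4f}, xi={shape:.3f}, beta={scale:.4f}, VaR99={var99:.4f}")
```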

  6. Effects of pulse duration on magnetostimulation thresholds

    Saritas, Emine U.; Goodwill, Patrick W.; Conolly, Steven M.

    2015-01-01

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations

  7. Determining lower threshold concentrations for synergistic effects

    Bjergager, Maj-Britt Andersen; Dalhoff, Kristoffer; Kretschmann, Andreas

    2017-01-01

    which proven synergists cease to act as synergists towards the aquatic crustacean Daphnia magna. To do this, we compared several approaches and test-setups to evaluate which approach gives the most conservative estimate for the lower threshold for synergy for three known azole synergists. We focus...... on synergistic interactions between the pyrethroid insecticide, alpha-cypermethrin, and one of the three azole fungicides prochloraz, propiconazole or epoxiconazole measured on Daphnia magna immobilization. Three different experimental setups were applied: A standard 48h acute toxicity test, an adapted 48h test...... of immobile organisms increased more than two-fold above what was predicted by independent action (vertical assessment). All three tests confirmed the hypothesis of the existence of a lower azole threshold concentration below which no synergistic interaction was observed. The lower threshold concentration...
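
    A minimal sketch of the independent-action reference prediction underlying the synergy assessment described above (the mixture effect expected if the two compounds act independently); observed effects more than two-fold above this prediction would be scored as synergistic. The single-compound effect levels are illustrative assumptions.

```python
# Independent-action (response-addition) prediction for a binary mixture.
def independent_action(effect_a: float, effect_b: float) -> float:
    """Predicted mixture effect (fraction immobilized) under independent action."""
    return 1.0 - (1.0 - effect_a) * (1.0 - effect_b)

e_pyrethroid = 0.10   # effect of alpha-cypermethrin alone (illustrative)
e_azole      = 0.05   # effect of the azole fungicide alone (illustrative)
predicted = independent_action(e_pyrethroid, e_azole)
print(f"predicted mixture effect: {predicted:.3f}")   # 0.145
```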

  8. Dynamical processes and epidemic threshold on nonlinear coupled multiplex networks

    Gao, Chao; Tang, Shaoting; Li, Weihua; Yang, Yaqian; Zheng, Zhiming

    2018-04-01

    Recently, the interplay between epidemic spreading and awareness diffusion has aroused the interest of many researchers, who have studied models mainly based on linear coupling relations between information and epidemic layers. However, in real-world networks the relation between two layers may be closely correlated with the property of individual nodes and exhibits nonlinear dynamical features. Here we propose a nonlinear coupled information-epidemic model (I-E model) and present a comprehensive analysis in a more generalized scenario where the upload rate differs from node to node, deletion rate varies between susceptible and infected states, and infection rate changes between unaware and aware states. In particular, we develop a theoretical framework of the intra- and inter-layer dynamical processes with a microscopic Markov chain approach (MMCA), and derive an analytic epidemic threshold. Our results suggest that the change of upload and deletion rate has little effect on the diffusion dynamics in the epidemic layer.
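
    For orientation only: in the plain single-layer MMCA treatment of SIS spreading, the epidemic threshold reduces to β_c = μ/Λ_max(A), with Λ_max the largest eigenvalue of the contact matrix. The coupled information-epidemic threshold derived in the paper modifies this matrix, so the sketch below shows only the baseline case on an arbitrary random graph.

```python
# Baseline single-layer MMCA epidemic threshold: beta_c = mu / Lambda_max(A).
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = (rng.random((n, n)) < 0.05).astype(float)       # Erdos-Renyi-like contact layer
A = np.triu(A, 1)
A = A + A.T                                         # symmetric, no self-loops

mu = 0.2                                            # recovery probability
lam_max = np.max(np.linalg.eigvalsh(A))             # largest eigenvalue (A is symmetric)
beta_c = mu / lam_max
print(f"Lambda_max = {lam_max:.2f}, epidemic threshold beta_c ~ {beta_c:.4f}")
```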

  9. Quantitative miRNA expression analysis: comparing microarrays with next-generation sequencing

    Willenbrock, Hanni; Salomon, Jesper; Søkilde, Rolf

    2009-01-01

    Recently, next-generation sequencing has been introduced as a promising, new platform for assessing the copy number of transcripts, while the existing microarray technology is considered less reliable for absolute, quantitative expression measurements. Nonetheless, so far, results from the two technologies have only been compared based on biological data, leading to the conclusion that, although they are somewhat correlated, expression values differ significantly. Here, we use synthetic RNA samples, resembling human microRNA samples, to find that microarray expression measures actually correlate better with sample RNA content than expression measures obtained from sequencing data. In addition, microarrays appear highly sensitive and perform equivalently to next-generation sequencing in terms of reproducibility and relative ratio quantification.

  10. Next-Generation Bio-Products Sowing the Seeds of Success for Sustainable Agriculture

    Henry Müller

    2013-10-01

    Full Text Available Plants have recently been recognized as meta-organisms due to a close symbiotic relationship with their microbiome. Comparable to humans and other eukaryotic hosts, plants also harbor a “second genome” that fulfills important host functions. These advances were driven by both “omics”-technologies guided by next-generation sequencing and microscopic insights. Additionally, these new results influence applied fields such as biocontrol and stress protection in agriculture, and new tools may impact (i) the detection of new bio-resources for biocontrol and plant growth promotion, (ii) the optimization of fermentation and formulation processes for biologicals, (iii) stabilization of the biocontrol effect under field conditions, and (iv) risk assessment studies for biotechnological applications. Examples are presented and discussed for the fields mentioned above, and next-generation bio-products were found to be a sustainable alternative for agriculture.

  11. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  12. The effect of power change on the PCI failure threshold

    Sipush, P J; Kaiser, R S [Westinghouse Nuclear Fuel Division, Pittsburg, PA (United States)

    1983-06-01

    Investigations of the PCI mechanism have led to the conclusion that the failure threshold is best defined by the power change (ΔP) during the ramp, rather than the final power achieved at the end of the ramp. The data base studied was comprehensive and included a wide variety of water reactor systems and fuel designs. It has also been found that operating parameters have a more significant effect on failure susceptibility than fuel rod design variables. The most significant operating variable affecting the failure threshold was found to be the base irradiation history, indicating that fission product release and migration prior to the ramp (during base irradiation) is an important consideration. It can be shown that fuel irradiated at relatively higher linear heat ratings tends to fail at lower ΔP. This effect has also been independently verified by statistical analyses which will also be discussed. Industry out-of-pile internal gas pressurization tests with irradiated tubing, in the absence of simulated fission product species and at low stress levels, also tend to indicate the importance of the prior irradiation history on PCI performance. Other parameters that affect the power ramping performance are the initial ramping power and the pellet power distribution, which is a function of fuel enrichment and burnup. (author)

  13. Thresholds of parametric instabilities near the lower hybrid frequency

    Berger, R.L.; Perkins, F.W.

    1975-06-01

    Resonant decay instabilities of a pump wave with frequency ω_0 near the lower-hybrid frequency ω_LH are analyzed with respect to the wavenumber k of the decay waves and the ratio ω_0/ω_LH to determine the decay process with the minimum threshold. It was found that the lowest thresholds are for decay into an electron plasma (lower hybrid) wave plus either a backward ion-cyclotron wave, an ion Bernstein wave, or a low frequency sound wave. For ω_0 less than √2 ω_LH, it was found that these decay processes can occur and have faster growth than ion quasimodes provided the drift velocity (cE_0/B_0) is much less than the sound speed. In many cases of interest, electromagnetic corrections to the lower-hybrid wave rule out decay into all but short wavelength (kρ_i greater than 1) waves. The experimental results are consistent with the linear theory of parametric instabilities in a homogeneous plasma. (U.S.)

  14. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. Further linear algebra

    Blyth, T S

    2002-01-01

    Most of the introductory courses on linear algebra develop the basic theory of finite-dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...

  16. The threshold photoelectron spectrum of mercury

    Rojas, H; Dawber, G; Gulley, N; King, G C; Bowring, N; Ward, R

    2013-01-01

    The threshold photoelectron spectrum of mercury has been recorded over the energy range 10–40 eV, which covers the region from the lowest state of the singly charged ion, 5d^10 6s (^2S_1/2), to the doubly charged ionic state, 5d^9(^2D_3/2)6s (^1D_2). Synchrotron radiation has been used in conjunction with the penetrating-field threshold-electron technique to obtain the spectrum with high resolution. The spectrum shows many more features than observed in previous photoemission measurements, with many of these assigned to satellite states converging to the double-ionization limit. (paper)

  17. Near threshold expansion of Feynman diagrams

    Mendels, E.

    2005-01-01

    The near threshold expansion of Feynman diagrams is derived from their configuration space representation, by performing all x integrations. The general scalar Feynman diagram is considered, with an arbitrary number of external momenta, an arbitrary number of internal lines and an arbitrary number of loops, in n dimensions and all masses may be different. The expansions are considered both below and above threshold. Rules, giving real and imaginary part, are derived. Unitarity of a sunset diagram with I internal lines is checked in a direct way by showing that its imaginary part is equal to the phase space integral of I particles

  18. Thresholds in Xeric Hydrology and Biogeochemistry

    Meixner, T.; Brooks, P. D.; Simpson, S. C.; Soto, C. D.; Yuan, F.; Turner, D.; Richter, H.

    2011-12-01

    Due to water limitation, thresholds in hydrologic and biogeochemical processes are common in arid and semi-arid systems. Some of these thresholds, such as those in rainfall-runoff relationships, have been well studied. However, to gain a full picture of the role that thresholds play in driving the hydrology and biogeochemistry of xeric systems, a view of the entire array of processes at work is needed. Here a walk through the landscape of xeric systems is conducted, illustrating the powerful role of hydrologic thresholds in xeric-system biogeochemistry. Understanding xeric hydro-biogeochemistry requires two key ideas: first, starting from a framework of reaction and transport; second, an understanding of the temporal and spatial components of the thresholds that have a large impact on hydrologic and biogeochemical fluxes. In the uplands themselves, episodic rewetting and drying of soils permits accelerated biogeochemical processing but also more gradual drainage of water through the subsurface than expected in simple conceptions of biogeochemical processes. Hydrologic thresholds (water content above hygroscopic) result in a stop-start nutrient spiral of material across the landscape, since runoff connecting uplands to xeric perennial riparian zones is episodic and often transports materials only a short distance (hundreds of meters). This episodic movement results in important and counter-intuitive nutrient inputs to riparian zones but also significant processing and uptake of nutrients. The floods that transport these biogeochemicals also provide significant input to riparian groundwater and may be key to sustaining these critical ecosystems. Importantly, the flood-driven recharge process is itself a threshold process, dependent on flood characteristics (floods greater than 100 cubic meters per second) and antecedent conditions (losing to near-neutral gradients). Floods also appear to influence where arid and semi

  19. Double photoionization of helium near threshold

    Levin, J.C.; Armen, G.B.; Sellin, I.A.

    1996-01-01

    There has been substantial recent experimental interest in the ratio of double-to-single photoionization of He near threshold, following several theoretical observations that earlier measurements appear to overestimate the ratio, perhaps by as much as 25%, in the first several hundred eV above threshold. The authors' recent measurements are 10%-15% below these earlier results, and more recent results of Doerner et al. and Samson et al. are yet another 10% lower. The authors will compare these measurements with new data, not yet analyzed, and with available theory.

  20. Color image Segmentation using automatic thresholding techniques

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy- and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
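
    The between-class-variance criterion is the familiar Otsu rule; a single-channel sketch is given below (the article applies such criteria per component in several color spaces). The 8-bit grayscale assumption and the random test image are illustrative.

```python
# Otsu-style threshold selection: pick the gray level that maximizes the
# between-class variance of the two resulting classes.
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t

img = (np.random.default_rng(0).normal(120, 30, (64, 64))
       .clip(0, 255).astype(np.uint8))               # illustrative 8-bit channel
print("optimal threshold:", otsu_threshold(img))
```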

  1. R&D, Marketing, and the Success of Next-Generation Products

    Elie Ofek; Miklos Sarvary

    2003-01-01

    This paper studies dynamic competition in markets characterized by the introduction of technologically advanced next-generation products. Firms invest in new product effort in an attempt to attain industry leadership, thus securing high profits and benefiting from advantages relevant for the success of future product generations. The analysis reveals that when the current leader possesses higher research and development (R&D) competence, it tends to invest more in R&D than its rivals and to retain its ...

  2. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing pla...

  3. Next-generation batteries and fuel cells for commercial, military, and space applications

    Jha, A R

    2012-01-01

    Distilling complex theoretical physical concepts into an understandable technical framework, Next-Generation Batteries and Fuel Cells for Commercial, Military, and Space Applications describes primary and secondary (rechargeable) batteries for various commercial, military, spacecraft, and satellite applications for covert communications, surveillance, and reconnaissance missions. It emphasizes the cost, reliability, longevity, and safety of the next generation of high-capacity batteries for applications where high energy density, minimum weight and size, and reliability in harsh conditions are

  4. Leading the Development of Concepts of Operations for Next-Generation Remotely Piloted Aircraft

    2016-01-01

    danced around the next-generation RPA CONOPS through technology demonstration for several years. Individual programs have developed key enabling...party system to take command of the aircraft and sensor payload. Aircraft equipped with Link 16 have the option of slaving their sensor payloads to...collection, shift transmission to theater nodes, and continue to slave the payloads to cues given by joint partners in-theater. Embracing Leadership in

  5. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    Papior, Nick Rübner; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-01-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA, based on density functional theory (DFT). Our flexible, next-generation DFT–NEGF code handles devices with one or multiple electrodes (N_e ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable m...

  6. Screening for SNPs with Allele-Specific Methylation based on Next-Generation Sequencing Data

    Hu, Bo; Ji, Yuan; Xu, Yaomin; Ting, Angela H

    2013-01-01

    Allele-specific methylation (ASM) has long been studied but mainly documented in the context of genomic imprinting and X chromosome inactivation. Taking advantage of the next-generation sequencing technology, we conduct a high-throughput sequencing experiment with four prostate cell lines to survey the whole genome and identify single nucleotide polymorphisms (SNPs) with ASM. A Bayesian approach is proposed to model the counts of short reads for each SNP conditional on its genotypes of multip...

  7. Next-Generation Sequencing for Typing and Detection of ESBL and MBL E. coli causing UTI

    Nabakishore Nayak; Mahesh Chanda Sahu

    2017-01-01

    Next-generation sequencing (NGS) has the potential to provide typing results and detect resistance genes in a single assay, thus guiding timely treatment decisions and allowing rapid tracking of transmission of resistant clones. We evaluated the performance of a new NGS assay during an outbreak of sequence type 131 (ST131) Escherichia coli infections in a teaching hospital. The assay will be performed on 100 extended-spectrum beta-lactamase (ESBL) E. coli isolates collected from UTI d...

  8. Next-generation Sequencing-based genomic profiling: Fostering innovation in cancer care?

    Gustavo S. Fernandes

    Full Text Available OBJECTIVES: With the development of next-generation sequencing (NGS) technologies, DNA sequencing has been increasingly utilized in clinical practice. Our goal was to investigate the impact of genomic evaluation on treatment decisions for heavily pretreated patients with metastatic cancer. METHODS: We analyzed metastatic cancer patients from a single institution whose cancers had progressed after all available standard-of-care therapies and whose tumors underwent next-generation sequencing analysis. We determined the percentage of patients who received any therapy directed by the test, and its efficacy. RESULTS: From July 2013 to December 2015, 185 consecutive patients were tested using a commercially available next-generation sequencing-based test, and 157 patients were eligible. Sixty-six patients (42.0%) were female, and 91 (58.0%) were male. The mean age at diagnosis was 52.2 years, and the mean number of pre-test lines of systemic treatment was 2.7. One hundred and seventy-seven patients (95.6%) had at least one identified gene alteration. Twenty-four patients (15.2%) underwent systemic treatment directed by the test result. Of these, one patient had a complete response, four (16.7%) had partial responses, two (8.3%) had stable disease, and 17 (70.8%) had disease progression as the best result. The median progression-free survival time with matched therapy was 1.6 months, and the median overall survival was 10 months. CONCLUSION: We identified a high prevalence of gene alterations using a next-generation sequencing test. Although some benefit was associated with the matched therapy, most of the patients had disease progression as the best response, indicating the limited biological potential and unclear clinical relevance of this practice.

  9. Nucleic acid reactivity : challenges for next-generation semiempirical quantum models

    Huang, Ming; Giese, Timothy J.; York, Darrin M.

    2015-01-01

    Semiempirical quantum models are routinely used to study mechanisms of RNA catalysis and phosphoryl transfer reactions using combined quantum mechanical/molecular mechanical methods. Herein, we provide a broad assessment of the performance of existing semiempirical quantum models to describe nucleic acid structure and reactivity in order to quantify their limitations and guide the development of next-generation quantum models with improved accuracy. Neglect of diatomic differential overlap (...

  10. Multibunch beam breakup in high energy linear colliders

    Thompson, K.A.; Ruth, R.D.

    1989-03-01

    The SLAC design for a next-generation linear collider with center-of-mass energy of 0.5 to 1.0 TeV requires that multiple bunches (approximately 10) be accelerated on each rf fill. At the beam intensity (approximately 10^10 particles per bunch) and rf frequency (11–17 GHz) required, the beam would be highly unstable transversely. Using computer simulation and analytic models, we have studied several possible methods of controlling the transverse instability: using damped cavities to damp the transverse dipole modes; adjusting the frequency of the dominant transverse mode relative to the rf frequency, so that bunches are placed near zero crossings of the wake; introducing a cell-to-cell spread in the transverse dipole mode frequencies; and introducing a bunch-to-bunch variation in the transverse focusing. The best cure(s) to use depend on the bunch spacing, intensity, and other features of the final design. 8 refs., 3 figs

  11. Linear mass reflectron

    Mamyrin, B.A.; Shmikk, D.V.

    1979-01-01

    A description and operating principle of a linear mass reflectron with V-form trajectory of ion motion - a new non-magnetic time-of-flight mass spectrometer with high resolution - are presented. The ion-optical system of the device consists of an ion source with ionization by electron impact, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations of the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle

  12. Applied linear algebra

    Olver, Peter J

    2018-01-01

    This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

  13. Theory of linear operations

    Banach, S

    1987-01-01

    This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.

  14. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these cri... ... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  15. Linear programming using Matlab

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  16. Linear Colliders TESLA

    Anon.

    1994-01-01

    The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz

  17. Challenges and opportunities in estimating viral genetic diversity from next-generation sequencing data

    Niko eBeerenwinkel

    2012-09-01

    Full Text Available Many viruses, including the clinically relevant RNA viruses HIV and HCV, exist in large populations and display high genetic heterogeneity within and between infected hosts. Assessing intra-patient viral genetic diversity is essential for understanding the evolutionary dynamics of viruses, for designing effective vaccines, and for the success of antiviral therapy. Next-generation sequencing technologies allow the rapid and cost-effective acquisition of thousands to millions of short DNA sequences from a single sample. However, this approach entails several challenges in experimental design and computational data analysis. Here, we review the entire process of inferring viral diversity from sample collection to computing measures of genetic diversity. We discuss sample preparation, including reverse transcription and amplification, and the effect of experimental conditions on diversity estimates due to in vitro base substitutions, insertions, deletions, and recombination. The use of different next-generation sequencing platforms and their sequencing error profiles are compared in the context of various applications of diversity estimation, ranging from the detection of single nucleotide variants to the reconstruction of whole-genome haplotypes. We describe the statistical and computational challenges arising from these technical artifacts, and we review existing approaches, including available software, for their solution. Finally, we discuss open problems, and highlight successful biomedical applications and potential future clinical use of next-generation sequencing to estimate viral diversity.

  18. Next-generation mammalian genetics toward organism-level systems biology.

    Susaki, Etsuo A; Ukai, Hideki; Ueda, Hiroki R

    2017-01-01

    Organism-level systems biology in mammals aims to identify, analyze, control, and design molecular and cellular networks executing various biological functions in mammals. In particular, system-level identification and analysis of molecular and cellular networks can be accelerated by next-generation mammalian genetics. Mammalian genetics without crossing, where all production and phenotyping studies of genome-edited animals are completed within a single generation, drastically reduces the time, space, and effort of conducting the systems research. Next-generation mammalian genetics is based on recent technological advancements in genome editing and developmental engineering. The process begins with introduction of double-strand breaks into genomic DNA by using site-specific endonucleases, which results in highly efficient genome editing in mammalian zygotes or embryonic stem cells. By using nuclease-mediated genome editing in zygotes, or ~100% embryonic stem cell-derived mouse technology, whole-body knock-out and knock-in mice can be produced within a single generation. These emerging technologies allow us to produce multiple knock-out or knock-in strains in a high-throughput manner. In this review, we discuss the basic concepts and related technologies as well as current challenges and future opportunities for next-generation mammalian genetics in organism-level systems biology.

  19. Next-generation fiber lasers enabled by high-performance components

    Kliner, D. A. V.; Victor, B.; Rivera, C.; Fanning, G.; Balsley, D.; Farrow, R. L.; Kennedy, K.; Hampton, S.; Hawke, R.; Soukup, E.; Reynolds, M.; Hodges, A.; Emery, J.; Brown, A.; Almonte, K.; Nelson, M.; Foley, B.; Dawson, D.; Hemenway, D. M.; Urbanek, W.; DeVito, M.; Bao, L.; Koponen, J.; Gross, K.

    2018-02-01

    Next-generation industrial fiber lasers enable challenging applications that cannot be addressed with legacy fiber lasers. Key features of next-generation fiber lasers include robust back-reflection protection, high power stability, wide power tunability, high-speed modulation and waveform generation, and facile field serviceability. These capabilities are enabled by high-performance components, particularly pump diodes and optical fibers, and by advanced fiber laser designs. We summarize the performance and reliability of nLIGHT diodes, fibers, and next-generation industrial fiber lasers at power levels of 500 W - 8 kW. We show back-reflection studies with up to 1 kW of back-reflected power, power-stability measurements in cw and modulated operation exhibiting sub-1% stability over a 5 - 100% power range, and high-speed modulation (100 kHz) and waveform generation with a bandwidth 20x higher than standard fiber lasers. We show results from representative applications, including cutting and welding of highly reflective metals (Cu and Al) for production of Li-ion battery modules and processing of carbon fiber reinforced polymers.

  20. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the least amount of false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviation in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
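
    A minimal sketch of the standard-deviation idea behind SIGMA (not the authors' implementation; cumulative rainfall windows, seasonal handling, and calibration are omitted, and the data are invented): flag rainfall values that exceed the mean of the historical series by k standard deviations.

```python
import numpy as np

def sigma_exceedance(rainfall_mm, k=2.0):
    """Flag rainfall values exceeding mean + k standard deviations.

    rainfall_mm : 1-D array of daily (or cumulative) rainfall values.
    k           : multiple of the standard deviation used as the threshold.
    Returns the threshold and a boolean mask of 'extraordinary' events.
    """
    series = np.asarray(rainfall_mm, dtype=float)
    threshold = series.mean() + k * series.std()
    return threshold, series > threshold

# Toy 10-day record
rain = [0.0, 3.2, 0.0, 15.4, 1.1, 0.0, 48.7, 2.5, 0.0, 5.0]
thr, flags = sigma_exceedance(rain, k=2.0)
print(f"threshold = {thr:.1f} mm, flagged days = {np.where(flags)[0].tolist()}")
```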

  1. Linearly Adjustable International Portfolios

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  2. Linearly Adjustable International Portfolios

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-01-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  3. Linear induction motor

    Barkman, W.E.; Adams, W.Q.; Berrier, B.R.

    1978-01-01

    A linear induction motor has been operated on a test bed with a feedback pulse resolution of 5 nm (0.2 μin). Slewing tests with this slide drive have shown positioning errors less than or equal to 33 nm (1.3 μin) at feedrates between 0 and 25.4 mm/min (0-1 ipm). A 0.86-m (34-in)-stroke linear motor is being investigated, using the SPACO machine as a test bed. Initial results were encouraging, and work is continuing to optimize the servosystem compensation

  4. Handbook of linear algebra

    Hogben, Leslie

    2013-01-01

    With a substantial amount of new material, the Handbook of Linear Algebra, Second Edition provides comprehensive coverage of linear algebra concepts, applications, and computational software packages in an easy-to-use format. It guides you from the very elementary aspects of the subject to the frontiers of current research. Along with revisions and updates throughout, the second edition of this bestseller includes 20 new chapters. New to the Second Edition: separate chapters on Schur complements, additional types of canonical forms, tensors, matrix polynomials, matrix equations, special types of

  5. Linear Algebra Thoroughly Explained

    Vujičić, Milan

    2008-01-01

    Linear Algebra Thoroughly Explained provides a comprehensive introduction to the subject suitable for adoption as a self-contained text for courses at undergraduate and postgraduate level. The clear and comprehensive presentation of the basic theory is illustrated throughout with an abundance of worked examples. The book is written for teachers and students of linear algebra at all levels and across mathematics and the applied sciences, particularly physics and engineering. It will also be an invaluable addition to research libraries as a comprehensive resource book for the subject.

  6. High performance Si nanowire field-effect-transistors based on a CMOS inverter with tunable threshold voltage.

    Van, Ngoc Huynh; Lee, Jae-Hyun; Sohn, Jung Inn; Cha, Seung Nam; Whang, Dongmok; Kim, Jong Min; Kang, Dae Joon

    2014-05-21

    We successfully fabricated nanowire-based complementary metal-oxide semiconductor (NWCMOS) inverter devices by utilizing n- and p-type Si nanowire field-effect-transistors (NWFETs) via a low-temperature fabrication processing technique. We demonstrate that NWCMOS inverter devices can be operated at less than 1 V, a significantly lower voltage than that of typical thin-film based complementary metal-oxide semiconductor (CMOS) inverter devices. This low-voltage operation was accomplished by controlling the threshold voltage of the n-type Si NWFETs through effective management of the nanowire (NW) doping concentration, while realizing high voltage gain (>10) and ultra-low static power dissipation (≤3 pW) for high-performance digital inverter devices. This result offers a viable means of fabricating high-performance, low-operation voltage, and high-density digital logic circuits using a low-temperature fabrication processing technique suitable for next-generation flexible electronics.

  7. Heritability estimates derived from threshold analyses for ...

    Unknown

    reproductive traits in a composite multibreed beef cattle herd using a threshold model. A GFCAT set of ..... pressure for longevity include low heritabilities, the increased generation interval necessary to obtain survival information, and automatic selection because long-lived cows contribute more offspring to subsequent ...

  8. Regression Discontinuity Designs Based on Population Thresholds

    Eggers, Andrew C.; Freier, Ronny; Grembi, Veronica

    In many countries, important features of municipal government (such as the electoral system, mayors' salaries, and the number of councillors) depend on whether the municipality is above or below arbitrary population thresholds. Several papers have used a regression discontinuity design (RDD...

  9. Thresholding methods for PET imaging: A review

    Dewalle-Vignion, A.S.; Betrouni, N.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed from an objective point of view with regard to their methodology, advantages, and limitations. Finally, a comparative study is presented. (authors)
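
    As a hedged illustration of the simplest family of methods reviewed here (fixed thresholding), the sketch below segments all voxels above a fixed fraction of the maximum uptake; adaptive methods typically adjust this fraction, e.g. to the source-to-background ratio. The array and the 40% figure are illustrative only.

```python
import numpy as np

def fixed_threshold_segmentation(pet_volume, fraction=0.40):
    """Segment a tumour as all voxels above a fixed fraction of the maximum
    uptake (a common fixed-threshold rule, e.g. 40% of the maximum value).

    pet_volume : 3-D array of PET uptake values.
    fraction   : threshold expressed as a fraction of the maximum voxel value.
    Returns a boolean mask of the segmented voxels.
    """
    volume = np.asarray(pet_volume, dtype=float)
    threshold = fraction * volume.max()
    return volume >= threshold

# Toy example: an 8x8x8 volume with a bright "lesion" in one corner
vol = np.random.rand(8, 8, 8)
vol[:3, :3, :3] += 4.0
mask = fixed_threshold_segmentation(vol, fraction=0.40)
print("segmented voxels:", int(mask.sum()))
```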

  10. Identification of Threshold Concepts for Biochemistry

    Loertscher, Jennifer; Green, David; Lewis, Jennifer E.; Lin, Sara; Minderhout, Vicky

    2014-01-01

    Threshold concepts (TCs) are concepts that, when mastered, represent a transformed understanding of a discipline without which the learner cannot progress. We have undertaken a process involving more than 75 faculty members and 50 undergraduate students to identify a working list of TCs for biochemistry. The process of identifying TCs for…

  11. The Resting Motor Threshold - Restless or Resting?

    Karabanov, Anke Ninija; Raffin, Estelle Emeline; Siebner, Hartwig Roman

    2015-01-01

    The RMT of the right first dorsal interosseus muscle was repeatedly determined using a threshold-hunting procedure while participants performed motor imagery and visual attention tasks with the right or left hand. Data were analyzed using repeated-measures ANOVA. Results: RMT differed depending on which...

  12. The gradual nature of threshold switching

    Wimmer, M; Salinga, M

    2014-01-01

    The recent commercialization of electronic memories based on phase change materials proved the usability of this peculiar family of materials for application purposes. More advanced data storage and computing concepts, however, demand a deeper understanding especially of the electrical properties of the amorphous phase and the switching behaviour. In this work, we investigate the temporal evolution of the current through the amorphous state of the prototypical phase change material, Ge2Sb2Te5, under constant voltage. A custom-made electrical tester allows the measurement of delay times over five orders of magnitude, as well as the transient states of electrical excitation prior to the actual threshold switching. We recognize a continuous current increase over time prior to the actual threshold-switching event to be a good measure for the electrical excitation. A clear correlation between a significant rise in pre-switching-current and the later occurrence of threshold switching can be observed. This way, we found experimental evidence for the existence of an absolute minimum for the threshold voltage (or electric field respectively) holding also for time scales far beyond the measurement range. (paper)

  13. Multiparty Computation from Threshold Homomorphic Encryption

    Cramer, Ronald; Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2001-01-01

    We introduce a new approach to multiparty computation (MPC) basing it on homomorphic threshold crypto-systems. We show that given keys for any sufficiently efficient system of this type, general MPC protocols for n parties can be devised which are secure against an active adversary that corrupts...

  14. Classification error of the thresholded independence rule

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored...
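
    A minimal sketch of one common formulation of a thresholded independence rule (a diagonal, naive-Bayes-style discriminant restricted to features whose standardized mean difference exceeds a threshold); the exact rule and threshold choice in the paper may differ, and the data below are synthetic.

```python
import numpy as np

def thresholded_independence_rule(X1, X2, x_new, threshold=2.0):
    """Classify x_new into group 1 or 2 with an independence (diagonal) rule
    using only features whose standardized mean difference exceeds `threshold`.

    X1, X2 : (n1, p) and (n2, p) training matrices for the two groups.
    x_new  : (p,) observation to classify.
    """
    n1, n2 = len(X1), len(X2)
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled per-feature variances (covariances are ignored: independence rule).
    s2 = (X1.var(axis=0, ddof=1) * (n1 - 1) + X2.var(axis=0, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    t_stat = (mu1 - mu2) / np.sqrt(s2 * (1 / n1 + 1 / n2))
    keep = np.abs(t_stat) > threshold                      # thresholding step
    score = np.sum((x_new[keep] - (mu1[keep] + mu2[keep]) / 2)
                   * (mu1[keep] - mu2[keep]) / s2[keep])
    return 1 if score > 0 else 2

# Toy 'large p, small n' example: 200 variables, 10 observations per group
rng = np.random.default_rng(1)
p, n = 200, 10
X1 = rng.normal(0.0, 1.0, (n, p)); X1[:, :5] += 1.5       # 5 informative features
X2 = rng.normal(0.0, 1.0, (n, p))
x_new = rng.normal(0.0, 1.0, p); x_new[:5] += 1.5          # drawn from group 1
print("predicted group:", thresholded_independence_rule(X1, X2, x_new))
```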

  15. Intraoperative transfusion threshold and tissue oxygenation

    Nielsen, K; Dahl, B; Johansson, P I

    2012-01-01

    Transfusion with allogeneic red blood cells (RBCs) may be needed to maintain oxygen delivery during major surgery, but the appropriate haemoglobin (Hb) concentration threshold has not been well established. We hypothesised that a higher level of Hb would be associated with improved subcutaneous oxygen tension during major spinal surgery.

  16. Handwriting Automaticity: The Search for Performance Thresholds

    Medwell, Jane; Wray, David

    2014-01-01

    Evidence is accumulating that handwriting has an important role in written composition. In particular, handwriting automaticity appears to relate to success in composition. This relationship has been little explored in British contexts and we currently have little idea of what threshold performance levels might be. In this paper, we report on two…

  17. Grid - a fast threshold tracking procedure

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    A new procedure, called “grid”, is evaluated that allows rapid acquisition of threshold curves for psychophysics and, in particular, psychoacoustic experiments. In this method, the parameter-response space is sampled in two dimensions within a single run. This allows the procedure to focus more e...

  18. 49 CFR 80.13 - Threshold criteria.

    2010-10-01

    ... exceed $30 million); (4) Project financing shall be repayable, in whole or in part, from tolls, user fees... Transportation Office of the Secretary of Transportation CREDIT ASSISTANCE FOR SURFACE TRANSPORTATION PROJECTS... project shall meet the following five threshold criteria: (1) The project shall be consistent with the...

  19. Low-threshold conical microcavity dye lasers

    Grossmann, Tobias; Schleede, Simone; Hauser, Mario

    2010-01-01

    element simulations confirm that lasing occurs in whispering gallery modes which corresponds well to the measured multimode laser-emission. The effect of dye concentration on lasing threshold and lasing wavelength is investigated and can be explained using a standard dye laser model....

  20. Microplastic effect thresholds for freshwater benthic macroinvertebrates

    Redondo Hasselerharm, P.E.; Dede Falahudin, Dede; Peeters, E.T.H.M.; Koelmans, A.A.

    2018-01-01

    Now that microplastics have been detected in lakes, rivers and estuaries all over the globe, evaluating their effects on biota has become an urgent research priority. This is the first study that aims at determining the effect thresholds for a battery of six freshwater benthic macroinvertebrates

  1. Threshold Concepts in Finance: Conceptualizing the Curriculum

    Hoadley, Susan; Tickle, Leonie; Wood, Leigh N.; Kyng, Tim

    2015-01-01

    Graduates with well-developed capabilities in finance are invaluable to our society and in increasing demand. Universities face the challenge of designing finance programmes to develop these capabilities and the essential knowledge that underpins them. Our research responds to this challenge by identifying threshold concepts that are central to…

  2. Distribution of sensory taste thresholds for phenylthiocarbamide ...

    The ability to taste Phenylthiocarbamide (PTC), a bitter organic compound has been described as a bimodal autosomal trait in both genetic and anthropological studies. This study is based on the ability of a person to taste PTC. The present study reports the threshold distribution of PTC taste sensitivity among some Muslim ...

  3. The acoustic reflex threshold in aging ears.

    Silverman, C A; Silman, S; Miller, M H

    1983-01-01

    This study investigates the controversy regarding the influence of age on the acoustic reflex threshold for broadband noise, 500-, 1000-, 2000-, and 4000-Hz activators between Jerger et al. [Mono. Contemp. Audiol. 1 (1978)] and Jerger [J. Acoust. Soc. Am. 66 (1979)] on the one hand and Silman [J. Acoust. Soc. Am. 66 (1979)] and others on the other. The acoustic reflex thresholds for broadband noise, 500-, 1000-, 2000-, and 4000-Hz activators were evaluated under two measurement conditions. Seventy-two normal-hearing ears were drawn from 72 subjects ranging in age from 20-69 years. The results revealed that age was correlated with the acoustic reflex threshold for BBN activator but not for any of the tonal activators; the correlation was stronger under the 1-dB than under the 5-dB measurement condition. Also, the mean acoustic reflex thresholds for broadband noise activator were essentially similar to those reported by Jerger et al. (1978) but differed from those obtained in this study under the 1-dB measurement condition.

  4. Atherogenic Risk Factors and Hearing Thresholds

    Frederiksen, Thomas Winther; Ramlau-Hansen, Cecilia Høst; Stokholm, Zara Ann

    2014-01-01

    The objective of this study was to evaluate the influence of atherogenic risk factors on hearing thresholds. In a cross-sectional study we analyzed data from a Danish survey in 2009-2010 on physical and psychological working conditions. The study included 576 white- and blue-collar workers from c...

  5. Near threshold behavior of photoelectron satellite intensities

    Shirley, D.A.; Becker, U.; Heimann, P.A.; Langer, B.

    1987-09-01

    The historical background and understanding of photoelectron satellite peaks is reviewed, using He(n), Ne(1s), Ne(2p), Ar(1s), and Ar(3s) as case studies. Threshold studies are emphasized. The classification of electron correlation effects as either "intrinsic" or "dynamic" is recommended. 30 refs., 7 figs

  6. America, Linearly Cyclical

    2013-05-10

    C2C Jessica Adams; Dr. Brissett ...his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought

  7. Stanford's linear collider

    Southworth, B.

    1985-01-01

    The peak of the construction phase of the Stanford Linear Collider, SLC, to achieve 50 GeV electron-positron collisions has now been passed. The work remains on schedule to attempt colliding beams, initially at comparatively low luminosity, early in 1987. (orig./HSI).

  8. Dosimetry of linear sources

    Mafra Neto, F.

    1992-01-01

    The dose of gamma radiation from a linear source of cesium-137 is obtained, presenting two difficulties: oblique filtration of the radiation as it crosses the platinum wall in different directions, and dose correction due to scattering by the material medium of propagation. (C.G.C.)

  9. Resistors Improve Ramp Linearity

    Kleinberg, L. L.

    1982-01-01

    Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.

  10. LINEAR COLLIDERS: 1992 workshop

    Settles, Ron; Coignet, Guy

    1992-01-01

    As work on designs for future electron-positron linear colliders pushes ahead at major Laboratories throughout the world in a major international collaboration framework, the LC92 workshop held in Garmisch Partenkirchen this summer, attended by 200 machine and particle physicists, provided a timely focus

  11. Linear genetic programming

    Brameier, Markus

    2007-01-01

    Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field

  12. On Solving Linear Recurrences

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as n → ∞. This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
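
    For orientation, a standard worked instance of such a recurrence and its limit (not necessarily the note's own presentation):

```latex
% First-order linear recurrence with constant coefficients:
% x_{n+1} = a x_n + b, with x_0 given and a \neq 1.
\[
  x_n = a^n x_0 + b\,\frac{1-a^n}{1-a}
      = a^n\!\left(x_0 - \frac{b}{1-a}\right) + \frac{b}{1-a},
\]
% so for |a| < 1 the solution converges as n \to \infty:
\[
  \lim_{n\to\infty} x_n = \frac{b}{1-a}.
\]
```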

  13. Review of linear colliders

    Takeda, Seishi

    1992-01-01

    The status of R and D of future e+e- linear colliders proposed by the institutions throughout the world is described, including the JLC, NLC, VLEPP, CLIC, DESY/THD and TESLA projects. The parameters and RF sources are discussed. (G.P.) 36 refs.; 1 tab

  14. Cost–effectiveness thresholds: pros and cons

    Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-01-01

    Abstract Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization’s Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country’s per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this – in addition to uncertainty in the modelled cost–effectiveness ratios – can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations – e.g. budget impact and feasibility considerations – in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair. PMID:27994285

  15. Multimodal distribution of human cold pain thresholds.

    Lötsch, Jörn; Dimova, Violeta; Lieb, Isabel; Zimmermann, Michael; Oertel, Bruno G; Ultsch, Alfred

    2015-01-01

    It is assumed that different pain phenotypes are based on varying molecular pathomechanisms. Distinct ion channels seem to be associated with the perception of cold pain, in particular TRPM8 and TRPA1 have been highlighted previously. The present study analyzed the distribution of cold pain thresholds with focus at describing the multimodality based on the hypothesis that it reflects a contribution of distinct ion channels. Cold pain thresholds (CPT) were available from 329 healthy volunteers (aged 18 - 37 years; 159 men) enrolled in previous studies. The distribution of the pooled and log-transformed threshold data was described using a kernel density estimation (Pareto Density Estimation (PDE)) and subsequently, the log data was modeled as a mixture of Gaussian distributions using the expectation maximization (EM) algorithm to optimize the fit. CPTs were clearly multi-modally distributed. Fitting a Gaussian Mixture Model (GMM) to the log-transformed threshold data revealed that the best fit is obtained when applying a three-model distribution pattern. The modes of the identified three Gaussian distributions, retransformed from the log domain to the mean stimulation temperatures at which the subjects had indicated pain thresholds, were obtained at 23.7 °C, 13.2 °C and 1.5 °C for Gaussian #1, #2 and #3, respectively. The localization of the first and second Gaussians was interpreted as reflecting the contribution of two different cold sensors. From the calculated localization of the modes of the first two Gaussians, the hypothesis of an involvement of TRPM8, sensing temperatures from 25 - 24 °C, and TRPA1, sensing cold from 17 °C can be derived. In that case, subjects belonging to either Gaussian would possess a dominance of the one or the other receptor at the skin area where the cold stimuli had been applied. The findings therefore support a suitability of complex analytical approaches to detect mechanistically determined patterns from pain phenotype data.
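
    A minimal sketch of the mixture-fitting step described above, assuming scikit-learn's GaussianMixture as a stand-in for the authors' kernel-density and EM analysis; the synthetic data, the mode locations, and the 1-4 component range are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic log-transformed cold pain thresholds with three illustrative modes
# (loosely mimicking the reported 23.7, 13.2 and 1.5 degC groups).
log_cpt = np.concatenate([
    rng.normal(np.log(23.7), 0.10, 150),
    rng.normal(np.log(13.2), 0.20, 120),
    rng.normal(np.log(1.5), 0.30, 60),
]).reshape(-1, 1)

# Fit Gaussian mixtures with 1-4 components and keep the best fit by BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(log_cpt)
          for k in range(1, 5)]
best = min(models, key=lambda m: m.bic(log_cpt))

print("components chosen by BIC:", best.n_components)
print("modes (back-transformed, degC):", np.exp(best.means_.ravel()).round(1))
```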

  16. Identifying thresholds for ecosystem-based management.

    Jameal F Samhouri

    Full Text Available BACKGROUND: One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. METHODOLOGY/PRINCIPAL FINDINGS: To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. CONCLUSIONS/SIGNIFICANCE: For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management.

  17. Do multiple body modifications alter pain threshold?

    Yamamotová, A; Hrabák, P; Hříbek, P; Rokyta, R

    2017-12-30

    In recent years, epidemiological data has shown an increasing number of young people who deliberately self-injure. There have also been parallel increases in the number of people with tattoos and those who voluntarily undergo painful procedures associated with piercing, scarification, and tattooing. People with self-injury behaviors often say that they do not feel the pain. However, there is no information regarding pain perception in those that visit tattoo parlors and piercing studios compared to those who don't. The aim of this study was to compare nociceptive sensitivity in four groups of subjects (n=105, mean age 26 years, 48 women and 57 men) with different motivations to experience pain (i.e., with and without multiple body modifications) in two different situations; (1) in controlled, emotionally neutral conditions, and (2) at a "Hell Party" (HP), an event organized by a piercing and tattoo parlor, with a main event featuring a public demonstration of painful techniques (burn scars, hanging on hooks, etc.). Pain thresholds of the fingers of the hand were measured using a thermal stimulator and mechanical algometer. In HP participants, information about alcohol intake, self-harming behavior, and psychiatric history were used in the analysis as intervening variables. Individuals with body modifications as well as without body modifications had higher thermal pain thresholds at Hell Party, compared to thresholds measured at control neutral conditions. No such differences were found relative to mechanical pain thresholds. Increased pain threshold in all HP participants, irrespectively of body modification, cannot be simply explained by a decrease in the sensory component of pain; instead, we found that the environment significantly influenced the cognitive and affective component of pain.

  18. Electron Cloud Effect in the Linear Colliders

    Pivi, M

    2004-01-01

    Beam induced multipacting, driven by the electric field of successive positively charged bunches, may arise from a resonant motion of electrons, generated by secondary emission, bouncing back and forth between opposite walls of the vacuum chamber. The electron-cloud effect (ECE) has been observed or is expected at many storage rings [1]. In the beam pipe of the Damping Ring (DR) of a linear collider, an electron cloud is produced initially by ionization of the residual gas and photoelectrons from the synchrotron radiation. The cloud is then sustained by secondary electron emission. This electron cloud can reach equilibrium after the passage of only a few bunches. The electron-cloud effect may be responsible for collective effects as fast coupled-bunch and single-bunch instability, emittance blow-up or incoherent tune shift when the bunch current exceeds a certain threshold, accompanied by a large number of electrons in the vacuum chamber. The ECE was identified as one of the most important R and D topics in the International Linear Collider Report [2]. Systematic studies on the possible electron-cloud effect have been initiated at SLAC for the GLC/NLC and TESLA linear colliders, with particular attention to the effect in the positron main damping ring (MDR) and the positron Low Emittance Transport which includes the bunch compressor system (BCS), the main linac, and the beam delivery system (BDS). We present recent computer simulation results for the main features of the electron cloud generation in both machine designs. Thus, single and coupled-bunch instability thresholds are estimated for the GLC/NLC design

  19. Damage threshold of lithium niobate crystal under single and multiple femtosecond laser pulses: theoretical and experimental study

    Meng, Qinglong; Zhang, Bin; Zhong, Sencheng; Zhu, Liguo

    2016-01-01

    The damage threshold of lithium niobate crystal under single and multiple femtosecond laser pulses has been studied theoretically and experimentally. First, a model for predicting the damage threshold of crystal materials, based on an improved rate equation, is proposed. Then, the experimental method for measuring the damage threshold of crystal materials is described in detail. On this basis, the variation of the damage threshold of lithium niobate crystal with pulse duration is analyzed quantitatively. Finally, the damage threshold of lithium niobate crystal under multiple laser pulses is measured and compared to the theoretical results. The results show that the transmittance of lithium niobate crystal is almost constant when the laser pulse fluence is relatively low, whereas it decreases linearly with increasing laser pulse fluence below the damage threshold. The damage threshold of lithium niobate crystal increases with the duration of the femtosecond laser pulse, and the damage threshold under multiple laser pulses is clearly lower than that under a single laser pulse. The theoretical results are in good agreement with the experimental data. (orig.)

  20. Validation and evaluation of epistemic uncertainty in rainfall thresholds for regional scale landslide forecasting

    Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto

    2015-04-01

    Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on a contingency table, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by appropriately selecting the weights, and by searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may not have been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores are different for different uncertainties in events above or below the threshold. This
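
    A minimal sketch of the contingency-table bookkeeping such a validation rests on (illustrative only: the paper's specific skill scores, weights, and optimisation index are not reproduced here, and the data are invented): count hits, misses, false alarms, and correct negatives for a candidate threshold, then combine a few standard scores with user-chosen weights.

```python
def contingency(rain_events, landslide_occurred, threshold):
    """Classify rainfall events against a candidate threshold.

    rain_events        : rainfall magnitudes, one per event
    landslide_occurred : booleans, True if the event triggered landslides
    Returns (hits, misses, false_alarms, correct_negatives).
    """
    tp = fn = fp = tn = 0
    for rain, slide in zip(rain_events, landslide_occurred):
        predicted = rain >= threshold
        if predicted and slide:
            tp += 1
        elif not predicted and slide:
            fn += 1
        elif predicted and not slide:
            fp += 1
        else:
            tn += 1
    return tp, fn, fp, tn

def weighted_index(tp, fn, fp, tn, weights=(1.0, 1.0, 1.0)):
    """Combine three common skill scores with user-chosen weights
    (a generic stand-in for the paper's optimisation index)."""
    pod = tp / (tp + fn) if tp + fn else 0.0    # probability of detection
    far = fp / (tp + fp) if tp + fp else 0.0    # false alarm ratio
    pofd = fp / (fp + tn) if fp + tn else 0.0   # probability of false detection
    w1, w2, w3 = weights
    return w1 * pod + w2 * (1 - far) + w3 * (1 - pofd)

# Toy usage: pick the threshold that maximises the weighted index
rain = [10, 35, 60, 20, 80, 55, 15, 90]
slides = [False, False, True, False, True, True, False, True]
best = max(range(10, 100, 5),
           key=lambda t: weighted_index(*contingency(rain, slides, t)))
print("optimal threshold (toy data):", best)
```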

  1. Multipole surface solitons supported by the interface between linear media and nonlocal nonlinear media

    Shi, Zhiwei; Li, Huagang; Guo, Qi

    2012-01-01

    We address multipole surface solitons occurring at the interface between a linear medium and a nonlocal nonlinear medium. We show the impact of nonlocality, the propagation constant, and the linear index difference of the two media on the properties of the surface solitons. We find that there exists a threshold value of the degree of nonlocality for a given linear index difference of the two media; only when the degree of nonlocality exceeds this value can the multipole surface solitons be stable. -- Highlights: ► We show the impact of nonlocality and the linear index difference of the two media on the properties of the surface solitons. ► The surface solitons can be stable only when the degree of nonlocality exceeds a threshold value. ► The number of poles and the index difference of the two media both influence the threshold value.

  2. Next-generation phage display: integrating and comparing available molecular tools to enable cost-effective high-throughput analysis.

    Emmanuel Dias-Neto

    2009-12-01

    Full Text Available Combinatorial phage display has been used in the last 20 years in the identification of protein-ligands and protein-protein interactions, uncovering relevant molecular recognition events. Rate-limiting steps of combinatorial phage display library selection are (i) the counting of transducing units and (ii) the sequencing of the encoded displayed ligands. Here, we adapted emerging genomic technologies to minimize such challenges. We gained efficiency by applying in tandem real-time PCR for rapid quantification to enable bacteria-free phage display library screening, and added phage DNA next-generation sequencing for large-scale ligand analysis, reporting a fully integrated set of high-throughput quantitative and analytical tools. The approach is far less labor-intensive and allows rigorous quantification; for medical applications, including selections in patients, it also represents an advance for quantitative distribution analysis and ligand identification of hundreds of thousands of targeted particles from patient-derived biopsy or autopsy in a longer timeframe post library administration. Additional advantages over current methods include increased sensitivity, less variability, enhanced linearity, scalability, and accuracy at much lower cost. Sequences obtained by qPhage plus pyrosequencing were similar to a dataset produced from conventional Sanger-sequenced transducing-units (TU), with no biases due to GC content, codon usage, and amino acid or peptide frequency. These tools allow phage display selection and ligand analysis at a >1,000-fold faster rate, and reduce costs approximately 250-fold for generating 10^6 ligand sequences. Our analyses demonstrate that whereas this approach correlates with the traditional colony-counting, it is also capable of a much larger sampling, allowing a faster, less expensive, more accurate and consistent analysis of phage enrichment. Overall, qPhage plus pyrosequencing is superior to TU-counting plus Sanger

  3. Finite-dimensional linear algebra

    Gockenbach, Mark S

    2010-01-01

    Some Problems Posed on Vector Spaces; Linear equations; Best approximation; Diagonalization; Summary; Fields and Vector Spaces; Fields; Vector spaces; Subspaces; Linear combinations and spanning sets; Linear independence; Basis and dimension; Properties of bases; Polynomial interpolation and the Lagrange basis; Continuous piecewise polynomial functions; Linear Operators; Linear operators; More properties of linear operators; Isomorphic vector spaces; Linear operator equations; Existence and uniqueness of solutions; The fundamental theorem; inverse operators; Gaussian elimination; Newton's method; Linear ordinary differential eq

  4. Threshold stoichiometry for beam induced nitrogen depletion of SiN

    Timmers, H.; Weijers, T.D.M.; Elliman, R.G.; Uribasterra, J.; Whitlow, H.J.; Sarwe, E.-L.

    2002-01-01

    Measurements of the stoichiometry of silicon nitride films as a function of the number of incident ions using heavy ion elastic recoil detection (ERD) show that beam-induced nitrogen depletion depends on the projectile species, the beam energy, and the initial stoichiometry. A threshold stoichiometry exists in the range 1.3>N/Si≥1, below which the films are stable against nitrogen depletion. Above this threshold, depletion is essentially linear with incident fluence. The depletion rate correlates non-linearly with the electronic energy loss of the projectile ion in the film. Sufficiently long exposure of nitrogen-rich films renders the mechanism, which prevents depletion of nitrogen-poor films, ineffective. Compromising depth-resolution, nitrogen depletion from SiN films during ERD analysis can be reduced significantly by using projectile beams with low atomic numbers

  5. Measuring Input Thresholds on an Existing Board

    Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.

    2011-01-01

    A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in worst-case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way to measure what the exact input threshold was for this device for 64 inputs on a flight board was needed. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
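
    A hedged sketch of the inference step described above, under the simplifying assumption that each input node falls at roughly constant current, so the fall time from the supply to the switching threshold is t ≈ C·(Vdd − Vth)/I. The supply voltage, pull-down current, capacitances, and fall times below are hypothetical placeholders, not values from the article.

```python
import numpy as np

# With a constant-current pull-down, the slow fall from VDD to the threshold
# takes  t_fall = C * (VDD - Vth) / I_pd,  so each line's threshold is
#        Vth = VDD - I_pd * t_fall / C.

VDD = 2.5          # supply voltage (V), assumed
I_PD = 80e-6       # weak pull-down current (A), calibrated on the probed lines

capacitance = np.array([12e-12, 15e-12, 18e-12, 22e-12])   # per-line C (F)
t_fall = np.array([1.8e-7, 2.3e-7, 2.6e-7, 3.4e-7])        # measured fall times (s)

v_th = VDD - I_PD * t_fall / capacitance
print("estimated input thresholds (V):", v_th.round(2))
```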

  6. Linearity and Non-linearity of Photorefractive effect in Materials ...

    In this paper we have studied the Linearity and Non-linearity of Photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics while for non-linear optics the change in refractive index is directly proportional ...

  7. Linearly Refined Session Types

    Pedro Baltazar

    2012-11-01

    Full Text Available Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.

  8. Linear Water Waves

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes'.

  9. The International Linear Collider

    List Benno

    2014-04-01

    Full Text Available The International Linear Collider (ILC) is a proposed e+e− linear collider with a centre-of-mass energy of 200–500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  10. The International Linear Collider

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  11. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model or, equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are ones derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  12. Reciprocating linear motor

    Goldowsky, Michael P. (Inventor)

    1987-01-01

    A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.

  13. Duality in linearized gravity

    Henneaux, Marc; Teitelboim, Claudio

    2005-01-01

    We show that duality transformations of linearized gravity in four dimensions, i.e., rotations of the linearized Riemann tensor and its dual into each other, can be extended to the dynamical fields of the theory so as to be symmetries of the action and not just symmetries of the equations of motion. Our approach relies on the introduction of two superpotentials, one for the spatial components of the spin-2 field and the other for their canonically conjugate momenta. These superpotentials are two-index, symmetric tensors. They can be taken to be the basic dynamical fields and appear locally in the action. They are simply rotated into each other under duality. In terms of the superpotentials, the canonical generator of duality rotations is found to have a Chern-Simons-like structure, as in the Maxwell case

  14. The SLAC linear collider

    Phinney, N.

    1992-01-01

    The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs

  15. Linear waves and instabilities

    Bers, A.

    1975-01-01

    The electrodynamic equations for small-amplitude waves and their dispersion relation in a homogeneous plasma are outlined. For such waves, energy and momentum, and their flow and transformation, are described. Perturbation theory of waves is treated and applied to linear coupling of waves, and the resulting instabilities from such interactions between active and passive waves. Linear stability analysis in time and space is described where the time-asymptotic, time-space Green's function for an arbitrary dispersion relation is developed. The perturbation theory of waves is applied to nonlinear coupling, with particular emphasis on pump-driven interactions of waves. Details of the time--space evolution of instabilities due to coupling are given. (U.S.)

  16. Extended linear chain compounds

    Linear chain substances span a large cross section of contemporary chemistry ranging from covalent polymers, to organic charge transfer complexes, to nonstoichiometric transition metal coordination complexes. Their commonality, which coalesced intense interest in the theoretical and experimental solid state physics/chemistry communities, was based on the observation that these inorganic and organic polymeric substrates exhibit striking metal-like electrical and optical properties. Exploitation and extension of these systems has led to the systematic study of both the chemistry and physics of highly and poorly conducting linear chain substances. To gain a salient understanding of these complex materials rich in anomalous anisotropic electrical, optical, magnetic, and mechanical properties, the convergence of diverse skills and talents was required. The constructive blending of traditionally segregated disciplines such as synthetic and physical organic, inorganic, and polymer chemistry, crystallog...

  17. Non-linear osmosis

    Diamond, Jared M.

    1966-01-01

    1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om — Os)/[Ro′ + ½k′ (Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow—osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
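
    Restated in display form, this is the flow law given in point 4 of the abstract; the symbols follow the abstract, with $J_v$ introduced here only to name the water flow.

```latex
% Osmotic water flow as a function of the two bathing-solution osmolarities
% O_m and O_s; R_0' and k' are fitted constants (point 4 of the abstract).
\[
  J_v \;=\; \frac{O_m - O_s}{\,R_0' + \tfrac{1}{2}\,k'\,(O_m + O_s)\,},
\]
% i.e. the resistance to water flow grows linearly with the average osmolarity.
```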

  18. Fundamentals of linear algebra

    Dash, Rajani Ballav

    2008-01-01

    FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive textbook, which can be used by students and teachers of all Indian universities. The text is written in an easy, understandable form and covers all topics of the UGC curriculum. There are plenty of worked-out examples, which help students solve the problems without anybody's help. The problem sets have been designed keeping in view the questions asked in different examinations.

  19. Linear network theory

    Sander, K F

    1964-01-01

    Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies

  20. Non linear viscoelastic models

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness while the Kelvin-Voigt version does not.