WorldWideScience

Sample records for cumulative sum cusum

  1. Signal anomaly detection using modified CUSUM [cumulative sum] method

    International Nuclear Information System (INIS)

    Morgenstern, V.; Upadhyaya, B.R.; Benedetti, M.

    1988-01-01

    An important aspect of detection of anomalies in signals is the identification of changes in signal behavior caused by noise, jumps, changes in band-width, sudden pulses and signal bias. A methodology is developed to identify, isolate and characterize these anomalies using a modification of the cumulative sum (CUSUM) approach. The new algorithm performs anomaly detection at three levels and is implemented on a general purpose computer. 7 refs., 4 figs

  2. [Analgesic quality in a postoperative pain service: continuous assessment with the cumulative sum (cusum) method].

    Science.gov (United States)

    Baptista Macaroff, W M; Castroman Espasandín, P

    2007-01-01

    The aim of this study was to assess the cumulative sum (cusum) method for evaluating the performance of our hospital's acute postoperative pain service. The period of analysis was 7 months. Analgesic failure was defined as a score of 3 points or more on a simple numerical scale. Acceptable failure (p0) was set at 20% of patients upon admission to the postanesthetic recovery unit and at 7% 24 hours after surgery. Unacceptable failure was set at double the p0 rate at each time (40% and 14%, respectively). The unit's patient records were used to generate a cusum graph for each evaluation. Nine hundred four records were included. The rate of failure was 31.6% upon admission to the unit and 12.1% at the 24-hour postoperative assessment. The curve rose rapidly to the value set for p0 at both evaluation times (n = 14 and n = 17, respectively), later leveled off, and began to fall after 721 and 521 cases, respectively. Our study shows the efficacy of the cusum method for monitoring a proposed quality standard. The graph also showed periods of suboptimal performance that would not have been evident from analyzing the data en bloc. Thus the cusum method would facilitate rapid detection of periods in which quality declines.
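
    Purely for orientation, a minimal sketch of the kind of cusum graph described above, with made-up outcomes and the 20% acceptable failure rate quoted in the abstract (the hospital's actual scoring scheme may differ): each failure pushes the curve up by (1 - p0) and each success pulls it down by p0, so a sustained climb signals a failure rate above the acceptable level.

        # Cusum of observed failures against an acceptable failure rate p0.
        # A rising curve means the running failure rate exceeds p0; a falling
        # curve means performance is better than the standard.
        def cusum_failures(outcomes, p0):
            """outcomes: 1 = analgesic failure, 0 = adequate analgesia."""
            s, path = 0.0, []
            for y in outcomes:
                s += y - p0          # each failure adds (1 - p0), each success subtracts p0
                path.append(s)
            return path

        # Hypothetical sequence: early failures, then improvement.
        outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0]
        for i, s in enumerate(cusum_failures(outcomes, p0=0.20), start=1):
            print(f"patient {i:2d}  cusum = {s:+.2f}")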

  3. 7 CFR 42.132 - Determining cumulative sum values.

    Science.gov (United States)

    2010-01-01

    Determining cumulative sum values (7 CFR 42.132). (a) The parameters for the on-line cumulative sum sampling plans for AQLs ... (b) At the beginning of the basic inspection period, the CuSum value is set equal to ...

  4. Use of the cumulative sum method (CUSUM) to assess the learning curves of ultrasound-guided continuous femoral nerve block.

    Science.gov (United States)

    Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F

    2017-10-01

    Although ultrasound is a basic competence for anaesthesia residents (AR), there are few data available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures that a resident would need to perform in order to reach proficiency using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block considering 2 success criteria: a decrease of pain score >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for performance time, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early interventions in case of repeated failures, and to develop a competence-based curriculum. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.

  5. Using the cumulative sum algorithm against distributed denial of service attacks in Internet of Things

    CSIR Research Space (South Africa)

    Machaka, Pheeha

    2015-11-01

    The paper presents the threats that are present in Internet of Things (IoT) systems and how they can be used to perpetrate a large-scale DDoS attack. The paper investigates how the Cumulative Sum (CUSUM) algorithm can be used to detect a DDoS attack...

  6. Estimation of cortical silent period following transcranial magnetic stimulation using a computerised cumulative sum method.

    Science.gov (United States)

    King, Nicolas K K; Kuppuswamy, Annapoorna; Strutton, Paul H; Davey, Nick J

    2006-01-15

    The cortical silent period (CSP) following transcranial magnetic stimulation (TMS) of the motor cortex can be used to measure intra-cortical inhibition and changes in a number of important pathologies affecting the central nervous system. The main drawback of this technique has been the difficulty in accurately identifying the onset and offset of the cortical silent period leading to inter-observer variability. We developed an automated method based on the cumulative sum (Cusum) technique to improve the determination of the duration and area of the cortical silent period. This was compared with experienced raters and two other automated methods. We showed that the automated Cusum method reliably correlated with the experienced raters for both duration and area of CSP. Compared with the automated methods, the Cusum also showed the strongest correlation with the experienced raters. Our results show the Cusum method to be a simple, graphical and powerful method of detecting low-intensity CSP that can be easily automated using standard software.

  7. A risk-adjusted O-E CUSUM with monitoring bands for monitoring medical outcomes.

    Science.gov (United States)

    Sun, Rena Jie; Kalbfleisch, John D

    2013-03-01

    In order to monitor a medical center's survival outcomes using simple plots, we introduce a risk-adjusted Observed-Expected (O-E) Cumulative SUM (CUSUM) along with monitoring bands as a decision criterion. The proposed monitoring bands can be used in place of a more traditional but complicated V-shaped mask or the simultaneous use of two one-sided CUSUMs. The resulting plot is designed to simultaneously monitor for failure time outcomes that are "worse than expected" or "better than expected." The slopes of the O-E CUSUM provide direct estimates of the relative risk (as compared to a standard or expected failure rate) for the data being monitored. Appropriate rejection regions are obtained by controlling the false alarm rate (type I error) over a period of given length. Simulation studies are conducted to illustrate the performance of the proposed method. A case study is carried out for 58 liver transplant centers. The use of CUSUM methods for quality improvement is stressed. Copyright © 2013, The International Biometric Society.
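
    A minimal sketch of the O-E statistic itself, with illustrative outcomes and model-based risks (the monitoring bands and false-alarm control described in the paper are not reproduced):

        # Risk-adjusted O-E cusum: cumulative observed deaths minus the sum of
        # each patient's expected (model-based) probability of death. A sustained
        # upward slope suggests a failure rate above the expected standard; the
        # slope over a window gives a rough estimate of the excess relative risk.
        def oe_cusum(observed, expected):
            """observed: 0/1 outcomes; expected: model-predicted probabilities."""
            total, path = 0.0, []
            for y, p in zip(observed, expected):
                total += y - p
                path.append(total)
            return path

        observed = [0, 1, 0, 0, 1, 1, 0, 1]                      # hypothetical outcomes
        expected = [0.1, 0.2, 0.05, 0.1, 0.3, 0.2, 0.1, 0.25]    # hypothetical predicted risks
        print(oe_cusum(observed, expected))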

  8. Cumulative sum quality control for calibrated breast density measurements

    International Nuclear Information System (INIS)

    Heine, John J.; Cao Ke; Beam, Craig

    2009-01-01

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.

  9. Cumulative sum quality control for calibrated breast density measurements

    Energy Technology Data Exchange (ETDEWEB)

    Heine, John J.; Cao Ke; Beam, Craig [Cancer Prevention and Control Division, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, Florida 33612 (United States); Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., Chicago, Illinois 60612 (United States)

    2009-12-15

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.

  10. Evaluation of the learning curve for external cephalic version using cumulative sum analysis.

    Science.gov (United States)

    Kim, So Yun; Han, Jung Yeol; Chang, Eun Hye; Kwak, Dong Wook; Ahn, Hyun Kyung; Ryu, Hyun Mi; Kim, Moon Young

    2017-07-01

    We evaluated the learning curve for external cephalic version (ECV) using learning curve-cumulative sum (LC-CUSUM) analysis. This was a retrospective study involving 290 consecutive cases between October 2013 and March 2017. We evaluated the learning curve for ECV in the nulliparous and multiparous (para ≥ 1) groups using LC-CUSUM analysis, on the assumption that 50% and 70% of ECV procedures succeeded, by fitting a quadratic trend line with reliable R² values. The overall success rate for ECV was 64.8% (188/290), while the success rates for the nulliparous and multiparous groups were 56.2% (100/178) and 78.6% (88/112), respectively. The 'H' value, at which the actual failure rate does not differ from the acceptable failure rate, was -3.27 and -1.635 when considering ECV success rates of 50% and 70%, respectively. Consequently, in order to obtain a consistent 50% success rate, we would require 57 nullipara cases, and in order to obtain a consistent 70% success rate, we would require 130 nullipara cases. In contrast, 8 to 10 multipara cases would be required for expected success rates of 50% and 70% in the multiparous group. Even a relatively inexperienced physician can achieve success with multiparous patients and, after accumulating experience, can manage nullipara cases. Further research on LC-CUSUM involving several practitioners instead of a single practitioner is required. This will lead to the gradual implementation of standard learning curve guidelines for ECV.

  11. Application of a binomial cusum control chart to monitor one drinking water indicator

    Directory of Open Access Journals (Sweden)

    Elisa Henning

    2014-02-01

    The aim of this study is to analyze the use of a binomial cumulative sum chart (CUSUM) to monitor the presence of total coliforms, biological indicators of quality of water supplies in water treatment processes. The sample series were monthly taken from a water treatment plant and were analyzed from 2007 to 2009. The statistical treatment of the data was performed using GNU R, and routines were created for the approximation of the upper limit of the binomial CUSUM chart. Furthermore, a comparative study was conducted to investigate whether there is a significant difference in sensitivity between the use of CUSUM and the traditional Shewhart chart, the most commonly used chart in process monitoring. The results obtained demonstrate that this study was essential for making the right choice in selecting a chart for the statistical analysis of this process.
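
    As a rough sketch of a binomial CUSUM of the kind described, assuming a fixed number of water samples per period and illustrative in-control and out-of-control proportions (not the plant's actual values):

        import math

        # Upper-sided binomial cusum for the number of positive samples X_t out of
        # n per period. k is the usual likelihood-ratio reference value between
        # the in-control proportion p0 and the out-of-control proportion p1.
        def binomial_cusum(counts, n, p0, p1, h):
            k = n * math.log((1 - p0) / (1 - p1)) / math.log(p1 * (1 - p0) / (p0 * (1 - p1)))
            s, alarms = 0.0, []
            for t, x in enumerate(counts):
                s = max(0.0, s + x - k)
                if s >= h:
                    alarms.append(t)   # signal: proportion of positives has likely shifted up
                    s = 0.0            # restart after the signal
            return alarms

        # Hypothetical data: 20 samples per month, a shift after month 6.
        counts = [1, 0, 2, 1, 0, 1, 4, 5, 3, 6]
        print(binomial_cusum(counts, n=20, p0=0.05, p1=0.15, h=3.0))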

  12. Improved implementation of the risk-adjusted Bernoulli CUSUM chart to monitor surgical outcome quality.

    Science.gov (United States)

    Keefe, Matthew J; Loda, Justin B; Elhabashy, Ahmad E; Woodall, William H

    2017-06-01

    The traditional implementation of the risk-adjusted Bernoulli cumulative sum (CUSUM) chart for monitoring surgical outcome quality requires waiting a pre-specified period of time after surgery before incorporating patient outcome information. We propose a simple but powerful implementation of the risk-adjusted Bernoulli CUSUM chart that incorporates outcome information as soon as it is available, rather than waiting a pre-specified period of time after surgery. A simulation study is presented that compares the performance of the traditional implementation of the risk-adjusted Bernoulli CUSUM chart to our improved implementation. We show that incorporating patient outcome information as soon as it is available leads to quicker detection of process deterioration. Deterioration of surgical performance could be detected much sooner using our proposed implementation, which could lead to the earlier identification of problems. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
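
    For context, a sketch of the log-likelihood-ratio weights commonly used in risk-adjusted Bernoulli CUSUM charts (the usual Steiner-type formulation; the paper's improved timing of outcome updates is not reproduced, and the risks and odds ratio below are illustrative):

        import math

        # Risk-adjusted Bernoulli cusum: each patient contributes a log-likelihood
        # ratio weight that compares the in-control risk p with an alternative in
        # which the odds of death are multiplied by odds_ratio.
        def ra_cusum(outcomes, risks, odds_ratio=2.0, h=4.5):
            s, alarms = 0.0, []
            for t, (y, p) in enumerate(zip(outcomes, risks)):
                w = y * math.log(odds_ratio) - math.log(1.0 - p + odds_ratio * p)
                s = max(0.0, s + w)
                if s >= h:
                    alarms.append(t)   # evidence of deteriorated performance
                    s = 0.0
            return alarms

        outcomes = [0, 0, 1, 0, 1, 1, 0, 1, 1]                          # hypothetical outcomes
        risks    = [0.05, 0.10, 0.20, 0.08, 0.30, 0.15, 0.05, 0.25, 0.20]
        print(ra_cusum(outcomes, risks))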

  13. Perils of correlating CUSUM-transformed variables to infer ecological relationships (Breton et al. 2006; Glibert 2010)

    Science.gov (United States)

    Cloern, James E.; Jassby, Alan D.; Carstensen, Jacob; Bennett, William A.; Kimmerer, Wim; Mac Nally, Ralph; Schoellhamer, David H.; Winder, Monika

    2012-01-01

    We comment on a nonstandard statistical treatment of time-series data first published by Breton et al. (2006) in Limnology and Oceanography and, more recently, used by Glibert (2010) in Reviews in Fisheries Science. In both papers, the authors make strong inferences about the underlying causes of population variability based on correlations between cumulative sum (CUSUM) transformations of organism abundances and environmental variables. Breton et al. (2006) reported correlations between CUSUM-transformed values of diatom biomass in Belgian coastal waters and the North Atlantic Oscillation, and between meteorological and hydrological variables. Each correlation of CUSUM-transformed variables was judged to be statistically significant. On the basis of these correlations, Breton et al. (2006) developed "the first evidence of synergy between climate and human-induced river-based nitrate inputs with respect to their effects on the magnitude of spring Phaeocystis colony blooms and their dominance over diatoms."
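
    The pitfall being criticised is easy to reproduce: two independent white-noise series are essentially uncorrelated, but their CUSUM transforms behave like random walks and routinely show large spurious correlations. A small illustrative simulation (synthetic data, not the authors' series):

        import numpy as np

        rng = np.random.default_rng(0)
        n, reps = 100, 1000
        raw_r, cusum_r = [], []
        for _ in range(reps):
            x = rng.standard_normal(n)          # two independent series
            y = rng.standard_normal(n)
            raw_r.append(np.corrcoef(x, y)[0, 1])
            # CUSUM transform: cumulative sum of deviations from each series' mean
            cx, cy = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
            cusum_r.append(np.corrcoef(cx, cy)[0, 1])

        print("mean |r|, raw series  :", np.mean(np.abs(raw_r)))
        print("mean |r|, CUSUM series:", np.mean(np.abs(cusum_r)))
        # The second value is typically several times larger, even though the
        # underlying series are independent.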

  14. Dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses.

    Science.gov (United States)

    Zhang, Xiang; Loda, Justin B; Woodall, William H

    2017-07-20

    For a patient who has survived a surgery, there could be several levels of recovery. Thus, it is reasonable to consider more than two outcomes when monitoring surgical outcome quality. The risk-adjusted cumulative sum (CUSUM) chart based on multiresponses has been developed for monitoring a surgical process with three or more outcomes. However, there is a significant effect of varying risk distributions on the in-control performance of the chart when constant control limits are applied. To overcome this disadvantage, we apply the dynamic probability control limits to the risk-adjusted CUSUM charts for multiresponses. The simulation results demonstrate that the in-control performance of the charts with dynamic probability control limits can be controlled for different patient populations because these limits are determined for each specific sequence of patients. Thus, the use of dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses allows each chart to be designed for the corresponding patient sequence of a surgeon or a hospital and therefore does not require estimating or monitoring the patients' risk distribution. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  15. A simple signaling rule for variable life-adjusted display derived from an equivalent risk-adjusted CUSUM chart.

    Science.gov (United States)

    Wittenberg, Philipp; Gan, Fah Fatt; Knoth, Sven

    2018-04-17

    The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by utilizing the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic that increases with sample size. In order for the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart has similar performance as compared to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using VLAD with V-mask. Copyright © 2018 John Wiley & Sons, Ltd.
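
    The VLAD statistic itself is a one-line computation; a sketch with illustrative outcomes and predicted risks follows (the V-mask signalling rule derived in the paper is not reproduced):

        # Variable life-adjusted display: cumulative expected minus observed deaths.
        # An upward drift means fewer deaths than predicted; a downward drift
        # means more deaths than predicted.
        def vlad(outcomes, predicted_risks):
            total, path = 0.0, []
            for death, risk in zip(outcomes, predicted_risks):
                total += risk - death
                path.append(total)
            return path

        outcomes = [0, 0, 1, 0, 0, 1, 0]                          # 1 = death
        risks    = [0.08, 0.12, 0.30, 0.05, 0.10, 0.20, 0.15]     # hypothetical predicted risks
        print([round(v, 2) for v in vlad(outcomes, risks)])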

  16. Use of risk-adjusted CUSUM charts to monitor 30-day mortality in Danish hospitals

    Directory of Open Access Journals (Sweden)

    Rasmussen TB

    2018-04-01

    Thomas Bøjer Rasmussen, Sinna Pilgaard Ulrichsen, Mette Nørgaard, Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus N, Denmark. Background: Monitoring hospital outcomes and clinical processes as a measure of clinical performance is an integral part of modern health care. The risk-adjusted cumulative sum (CUSUM) chart is a frequently used sequential analysis technique that can be implemented to monitor a wide range of different types of outcomes. Objective: The aim of this study was to describe how risk-adjusted CUSUM charts based on population-based nationwide medical registers were used to monitor 30-day mortality in Danish hospitals and to give an example of how alarms of increased hospital mortality from the charts can guide further in-depth analyses. Materials and methods: We used routinely collected administrative data from the Danish National Patient Registry and the Danish Civil Registration System to create risk-adjusted CUSUM charts. We monitored 30-day mortality after hospital admission with one of 77 selected diagnoses in 24 hospital units in Denmark in 2015. The charts were set to detect a 50% increase in 30-day mortality, and control limits were determined by simulations. Results: Among 1,085,576 hospital admissions, 441,352 admissions had one of the 77 selected diagnoses as their primary diagnosis and were included in the risk-adjusted CUSUM charts. The charts yielded a total of eight alarms of increased mortality. The median of the hospitals' estimated average time to detect a 50% increase in 30-day mortality was 50 days (interquartile interval, 43-54). In the selected example of an alarm, descriptive analyses indicated performance problems with 30-day mortality following hip fracture surgery and diagnosis of chronic obstructive pulmonary disease. Conclusion: The presented implementation of risk-adjusted CUSUM charts can detect significant increases in 30-day mortality within 2 months, on average, in most

  17. Monitoring the quality of total hip replacement in a tertiary care department using a cumulative summation statistical method (CUSUM).

    Science.gov (United States)

    Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P

    2011-09-01

    The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.

  18. On the Mathematics behind the CUSUM Control Charts

    DEFF Research Database (Denmark)

    Madsen, Henrik

    1998-01-01

    This paper describes the mathematics behind CUSUM control charts. An introduction to CUSUM charts is found in (Madsen 1998). CUSUM charts are well suited for checking a measuring system in operation for any departure from some target or specified values. In general they can be used for: detecting a drift (or shift in the level) of the measuring system, and detecting a change of the precision of the measuring system. In both cases the CUSUM procedure contains methods for estimating the shift such that maintenance (e.g. recalibration) can take place. The CUSUM procedure is described in ISO/CD 7871...
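
    For reference, a minimal two-sided tabular CUSUM of the kind such charts build on, with the common default allowance k = 0.5 and decision interval h = 5 in standard-deviation units (values chosen for illustration, not taken from the paper):

        # Two-sided tabular cusum: C+ accumulates upward deviations beyond k,
        # C- accumulates downward deviations beyond k; either crossing h signals
        # a shift in the level of the measuring system.
        def tabular_cusum(data, target, sigma, k=0.5, h=5.0):
            c_plus = c_minus = 0.0
            signals = []
            for t, x in enumerate(data):
                z = (x - target) / sigma
                c_plus = max(0.0, c_plus + z - k)
                c_minus = max(0.0, c_minus - z - k)
                if c_plus > h or c_minus > h:
                    signals.append(t)
                    c_plus = c_minus = 0.0
            return signals

        data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.6, 10.8, 10.7, 11.0, 10.9, 11.2]
        print(tabular_cusum(data, target=10.0, sigma=0.3))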

  19. TREND: a program using cumulative sum methods to detect long-term trends in data

    International Nuclear Information System (INIS)

    Cranston, R.J.; Dunbar, R.M.; Jarvis, R.G.

    1976-01-01

    TREND is a computer program, in FORTRAN, to investigate data for long-term trends that are masked by short-term statistical fluctuations. To do this, it calculates and plots the cumulative sum of deviations from a chosen mean. As a further aid to diagnosis, the procedure can be repeated with a summation of the cumulative sum itself. (author)
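
    The core of such a procedure is a one-liner; a sketch with made-up data of the cumulative sum of deviations from a chosen mean, plus the second-level summation mentioned above:

        import numpy as np

        data = np.array([5.1, 4.9, 5.0, 5.2, 5.3, 5.4, 5.6, 5.5, 5.7, 5.8])
        chosen_mean = 5.0

        cusum = np.cumsum(data - chosen_mean)    # cumulative sum of deviations from the chosen mean
        cusum_of_cusum = np.cumsum(cusum)        # summation of the cusum itself, for slow trends

        print(cusum)
        print(cusum_of_cusum)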

  20. Estudio sobre el aprendizaje y efectividad de la intubación orotraqueal en paciente con inestabilidad cervical a través de mascarilla Fastrach versus videolaringoscopia McGrath mediante curvas CuSum en modelo simulado SImMan

    Directory of Open Access Journals (Sweden)

    José María Sistac Ballarin

    2016-04-01

    Conclusions: The CuSum curves obtained for the study subjects were ideal. According to the times obtained, the Fastrach laryngeal mask was slightly faster than the McGrath VL. The times obtained with the McGrath VL showed a more marked improvement as experience was acquired.

  1. Automatic cumulative sums contour detection of FBP-reconstructed multi-object nuclear medicine images.

    Science.gov (United States)

    Protonotarios, Nicholas E; Spyrou, George M; Kastis, George A

    2017-06-01

    The problem of determining the contours of objects in nuclear medicine images has been studied extensively in the past; however, most of the analysis has focused on a single object as opposed to multiple objects. The aim of this work is to develop an automated method for determining the contour of multiple objects in positron emission tomography (PET) and single photon emission computed tomography (SPECT) filtered backprojection (FBP) reconstructed images. These contours can be used for computing body edges for attenuation correction in PET and SPECT, as well as for eliminating streak artifacts outside the objects, which could be useful in compressive sensing reconstruction. Contour detection has been accomplished by applying a modified cumulative sums (CUSUM) scheme in the sinogram. Our approach automatically detects all objects in the image, without requiring a priori knowledge of the number of distinct objects in the reconstructed image. This method has been tested in simulated phantoms, such as an image-quality (IQ) phantom and two digital multi-object phantoms, as well as a real NEMA phantom and a clinical thoracic study. For this purpose, a GE Discovery PET scanner was employed. The detected contours achieved root mean square accuracy of 1.14 pixels, 1.69 pixels and 3.28 pixels and a Hausdorff distance of 3.13, 3.12 and 4.50 pixels, for the simulated image-quality phantom PET study, the real NEMA phantom and the clinical thoracic study, respectively. These results correspond to a significant improvement over recent results obtained in similar studies. Furthermore, we obtained an optimal sub-pattern assignment (OSPA) localization error of 0.94 and 1.48, for the two-object and three-object simulated phantoms, respectively. Our method performs efficiently for sets of convex objects and hence it provides a robust tool for automatic contour determination with precise results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Memory-type control charts for monitoring the process dispersion

    NARCIS (Netherlands)

    Abbas, N.; Riaz, M.; Does, R.J.M.M.

    2014-01-01

    Control charts have been broadly used for monitoring the process mean and dispersion. Cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts are memory control charts as they utilize the past information in setting up the control structure. This makes CUSUM and

  3. Identification of the period of stability in a balance test after stepping up using a simplified cumulative sum.

    Science.gov (United States)

    Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques

    2017-11-01

    Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that calculations related to balance are only calculated for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. The Use of Statistical Process Control-Charts for Person-Fit Analysis on Computerized Adaptive Testing. LSAC Research Report Series.

    Science.gov (United States)

    Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.

    In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…

  5. Application of CUSUM charts to detect lameness in a milking robot

    DEFF Research Database (Denmark)

    Pastell, Matti; Madsen, Henrik

    2008-01-01

    It has been shown that the weight distribution between limbs changes when cows become lame. In this paper we suggest CUSUM charts to automatically detect lameness based on the measurements. CUSUM charts are statistically based control charts and are well suited for checking a measuring system in operation for any...

  6. Implementation of anomaly detection algorithms for detecting transmission control protocol synchronized flooding attacks

    CSIR Research Space (South Africa)

    Mkuzangwe, NNP

    2015-08-01

    This work implements two anomaly detection algorithms for detecting Transmission Control Protocol Synchronized (TCP SYN) flooding attack. The two algorithms are an adaptive threshold algorithm and a cumulative sum (CUSUM) based algorithm...

  7. Estudio sobre el aprendizaje y efectividad de la intubación orotraqueal en paciente con inestabilidad cervical a través de mascarilla Fastrach versus videolaringoscopia McGrath mediante curvas CuSum en modelo simulado SImMan

    OpenAIRE

    José María Sistac Ballarin

    2016-01-01

    Objectives: To evaluate the learning of intubation under simulated conditions of a patient with cervical instability, comparing the results and learning curves of blind intubation through the Fastrach mask versus intubation with the McGrath videolaryngoscope (VL), using the CuSum method. Material and methods: Four Anaesthesiology residents took part, two in their 2nd year and two in their 4th year, together with two attending anaesthesiologists. The study was carried out in the ...

  8. Cumulative sum control charts for monitoring geometrically inflated Poisson processes: An application to infectious disease counts data.

    Science.gov (United States)

    Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E

    2018-02-01

    In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
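
    The geometrically inflated Poisson schemes studied in the paper are more involved; for orientation only, a plain upper-sided Poisson CUSUM for detecting an increase in a mean count (likelihood-ratio reference value; the values of lam0, lam1 and h are illustrative) looks like this:

        import math

        # Upper-sided Poisson cusum for monthly disease counts: k is the
        # likelihood-ratio reference value between the in-control mean lam0 and
        # the out-of-control mean lam1 targeted for quick detection.
        def poisson_cusum(counts, lam0, lam1, h):
            k = (lam1 - lam0) / math.log(lam1 / lam0)
            s, alarms = 0.0, []
            for t, x in enumerate(counts):
                s = max(0.0, s + x - k)
                if s >= h:
                    alarms.append(t)
                    s = 0.0
            return alarms

        counts = [2, 1, 3, 2, 0, 2, 5, 6, 4, 7]   # hypothetical monthly cases
        print(poisson_cusum(counts, lam0=2.0, lam1=4.0, h=4.0))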

  9. An application of the learning curve-cumulative summation test to evaluate training for endotracheal intubation in emergency medicine.

    Science.gov (United States)

    Je, Sangmo; Cho, Youngsuk; Choi, Hyuk Joong; Kang, Boseung; Lim, Taeho; Kang, Hyunggoo

    2015-04-01

    The learning curve-cumulative summation (LC-CUSUM) test allows for quantitative and individual assessments of the learning process. In this study, we evaluated the process of skill acquisition for performing endotracheal intubation (ETI) in three emergency medicine (EM) residents during the first 2 years of their EM residency. We evaluated 342 ETI cases performed by three EM residents using the LC-CUSUM test according to their rate of success or failure of ETI. A 90% success rate (SR) was chosen to define adequate performance and an SR of 80% was considered inadequate. After the learning phase, the standard CUSUM test was applied to ensure that performance was maintained. The mean number of ETI cases required to reach the predefined level of performance was 74.7 (95% CI 62.0 to 87.3). CUSUM tests confirmed that performance was maintained after the learning phase. By using the LC-CUSUM test, we were able to quantitatively monitor the acquisition of the skill of ETI by EM residents. The LC-CUSUM could be useful for monitoring the learning process for the training of airway management in the practice of EM. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
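
    A sketch of an LC-CUSUM of the type described, in the usual Biau-style formulation (the 90%/80% success rates quoted above correspond to failure rates of 10% and 20%; the limit h and the trainee record are illustrative): the statistic accumulates evidence that the trainee's failure rate has dropped to the adequate level and signals competence once it crosses h.

        import math

        # LC-CUSUM: H0 = performance still inadequate (failure rate p0),
        # H1 = performance adequate (failure rate p1 < p0). The statistic grows
        # with successes and signals competence once it reaches h.
        def lc_cusum(failures, p0=0.20, p1=0.10, h=2.0):
            s = 0.0
            for t, x in enumerate(failures):        # x: 1 = failed intubation, 0 = success
                w = math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
                s = max(0.0, s + w)
                if s >= h:
                    return t + 1                    # number of cases needed to signal competence
            return None                             # competence not yet demonstrated

        # Hypothetical trainee record: failures early, mostly successes later.
        record = [1, 1, 0, 1, 0, 0, 1, 0] + [0] * 10 + [1] + [0] * 15 + [1] + [0] * 20
        print(lc_cusum(record))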

  10. Evaluation of statistical control charts for on-line radiation monitoring

    International Nuclear Information System (INIS)

    Hughes, L.D.; DeVol, T.A.

    2008-01-01

    Statistical control charts are presented for the evaluation of time series radiation counter data from flow cells used for monitoring of low levels of ⁹⁹TcO₄⁻ in environmental solutions. Control chart methods consisted of the 3-sigma (3σ) chart, the cumulative sum (CUSUM) chart, and the exponentially weighted moving average (EWMA) chart. Each method involves a control limit based on the detector background which constitutes the detection limit. Both the CUSUM and EWMA charts are suitable to detect and estimate sample concentration requiring less solution volume than when using a 3σ control chart. Data presented here indicate that the overall accuracy and precision of the CUSUM method is the best. (author)

  11. A CUSUM analysis of discharge patterns by a hydroelectric dam and discussion of potential effects on the upstream migration of American eel elvers

    International Nuclear Information System (INIS)

    Jessop, B.M.; Harvie, C.J.

    2003-01-01

    American eel elvers are among the diadromous fishes native to the Saint John River in New Brunswick that have been affected by the construction of hydroelectric dams. Before 1980, large numbers of elvers were observed entering the fishway of the Mactaquac Dam in May and June for upstream migration, but their presence abruptly ceased after 1980. A study was conducted to determine why they disappeared at the Mactaquac Dam. A cumulative sum (CUSUM) analysis was performed to determine the variability in magnitude, duration, timing, frequency, and rate of change in the daily and seasonal average level of water discharge associated with the installation of the last two of six turbines in late 1979 and early 1980. It is believed that the rapid, short-term fluctuations in water discharge which is characteristic of peaking hydroelectric dam operations, could seriously affect life cycle transitions of diadromous fishes. Upstream and downstream migration of the elvers may be affected along with their abundance, diversity and productivity. Young fish in particular are at higher risk of mortality during high flows, depending on the species. 40 refs., 3 tabs., 10 figs

  12. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums...

  13. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes.

    Science.gov (United States)

    Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan

    2006-08-01

    We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.

  14. Learning curve for robotic-assisted surgery for rectal cancer: use of the cumulative sum method.

    Science.gov (United States)

    Yamaguchi, Tomohiro; Kinugasa, Yusuke; Shiomi, Akio; Sato, Sumito; Yamakawa, Yushi; Kagawa, Hiroyasu; Tomioka, Hiroyuki; Mori, Keita

    2015-07-01

    Few data are available to assess the learning curve for robotic-assisted surgery for rectal cancer. The aim of the present study was to evaluate the learning curve for robotic-assisted surgery for rectal cancer by a surgeon at a single institute. From December 2011 to August 2013, a total of 80 consecutive patients who underwent robotic-assisted surgery for rectal cancer performed by the same surgeon were included in this study. The learning curve was analyzed using the cumulative sum method. This method was used for all 80 cases, taking into account operative time. Operative procedures included anterior resections in 6 patients, low anterior resections in 46 patients, intersphincteric resections in 22 patients, and abdominoperineal resections in 6 patients. Lateral lymph node dissection was performed in 28 patients. Median operative time was 280 min (range 135-683 min), and median blood loss was 17 mL (range 0-690 mL). No postoperative complications of Clavien-Dindo classification Grade III or IV were encountered. We arranged operative times and calculated cumulative sum values, allowing differentiation of three phases: phase I, Cases 1-25; phase II, Cases 26-50; and phase III, Cases 51-80. Our data suggested three phases of the learning curve in robotic-assisted surgery for rectal cancer. The first 25 cases formed the learning phase.

  15. Automatic Threshold Determination for a Local Approach of Change Detection in Long-Term Signal Recordings

    Directory of Open Access Journals (Sweden)

    David Hewson

    2007-01-01

    CUSUM (cumulative sum) is a well-known method that can be used to detect changes in a signal when the parameters of this signal are known. This paper presents an adaptation of the CUSUM-based change detection algorithms to long-term signal recordings where the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for application to long-term electromyography (EMG) recordings. DCS has been improved in two ways. The first was a new procedure to estimate the distribution parameters to ensure the respect of the detectability property. The second was the definition of two separate, automatically determined thresholds. One of them (the lower threshold) acted to stop the estimation process; the other one (the upper threshold) was applied to the detection function. The automatic determination of the thresholds was based on the Kullback-Leibler distance, which gives information about the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements of the DCS algorithm.

  16. USO DE CURVAS CUSUM COMO MÉTODO DE ENSEÑANZA EN VIDEO LARINGOSCOPIA

    OpenAIRE

    Burbano, Mario Andrés Zamudio; Novoa, Edgar Alfonso Ramírez

    2017-01-01

    Abstract: Objective: to describe the use of cumulative sum (Cusum) learning curves as an objective teaching tool in videolaryngoscopy. Importance of the topic: an objective tool is required for teaching skills in anaesthesia in order to achieve the best success rate. Methodology: from 165 intubation procedures performed by ten residents of the airway group of the Universidad de Antioquia, Cusum curves were created, according to the standards of educ...

  17. A novel CUSUM-based approach for event detection in smart metering

    Science.gov (United States)

    Zhu, Zhicheng; Zhang, Shuai; Wei, Zhiqiang; Yin, Bo; Huang, Xianqing

    2018-03-01

    Non-intrusive load monitoring (NILM) plays a significant role in raising consumer awareness of household electricity use in order to reduce overall energy consumption in society. With regard to monitoring low-power loads, many researchers have introduced CUSUM into the NILM system, since the traditional event detection method is not as effective as expected. Because the original CUSUM faces limitations when the shift is small and below the threshold, we improve the test statistic so that the permissible deviation gradually rises as the data size increases. This paper proposes a novel event detection method and corresponding criterion that could be used in NILM systems to recognize transient states and to help the labelling task. Its performance has been tested in a real scenario where eight different appliances are connected to the main line of electric power.

  18. Cumulative Poisson Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.

  19. Using the Cusum curve to evaluate the training of orotracheal intubation with the Truview EVO2® laryngoscope

    Directory of Open Access Journals (Sweden)

    Jaqueline Betina Broenstrup Correa

    2009-06-01

    using the CUSUM cumulative addition method. RESULTS: It was calculated that 105 OTIs were necessary to achieve proficiency. The four trainees crossed the line of acceptable failure rate of 5% before completing 105 OTIs; the first trainee reached proficiency after 42 OTIs, the second and third after 56 OTIs, and the fourth after 97 OTIs, and from then on their performance remained constant. Differences in the success rate between residents and experienced anesthesiologists were not observed. CONCLUSIONS: The CUSUM learning curve is a useful instrument to demonstrate objectively the ability when performing a new task. Laryngoscopy with the Truview EVO2® in a mannequin proved to be an easy procedure for physicians with prior experience in OTI; however, one should be cautious when transposing those results to clinical practice.

  20. A Bayesian CUSUM plot: Diagnosing quality of treatment.

    Science.gov (United States)

    Rosthøj, Steen; Jacobsen, Rikke-Line

    2017-12-01

    To present a CUSUM plot based on Bayesian diagnostic reasoning displaying evidence in favour of "healthy" rather than "sick" quality of treatment (QOT), and to demonstrate a technique using Kaplan-Meier survival curves permitting application to case series with ongoing follow-up. For a case series with known final outcomes: Consider each case a diagnostic test of good versus poor QOT (expected vs. increased failure rates), determine the likelihood ratio (LR) of the observed outcome, convert LR to weight taking log to base 2, and add up weights sequentially in a plot showing how many times odds in favour of good QOT have been doubled. For a series with observed survival times and an expected survival curve: Divide the curve into time intervals, determine "healthy" and specify "sick" risks of failure in each interval, construct a "sick" survival curve, determine the LR of survival or failure at the given observation times, convert to weights, and add up. The Bayesian plot was applied retrospectively to 39 children with acute lymphoblastic leukaemia with completed follow-up, using Nordic collaborative results as reference, showing equal odds between good and poor QOT. In the ongoing treatment trial, with 22 of 37 children still at risk for event, QOT has been monitored with average survival curves as reference, odds so far favoring good QOT 2:1. QOT in small patient series can be assessed with a Bayesian CUSUM plot, retrospectively when all treatment outcomes are known, but also in ongoing series with unfinished follow-up. © 2017 John Wiley & Sons, Ltd.
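
    A sketch of the weighting step for binary outcomes (illustrative failure probabilities under "good" and "poor" QOT; the survival-curve extension described above is not reproduced): each case's likelihood ratio is converted to a base-2 logarithm, so the running total reads as the number of times the odds in favour of good QOT have doubled.

        import math

        # Bayesian cusum weight: log2 of the likelihood ratio of the observed
        # outcome under "good" versus "poor" quality of treatment. Summing the
        # weights gives how many times the odds in favour of good QOT have doubled.
        def bayesian_cusum(outcomes, p_fail_good=0.15, p_fail_poor=0.30):
            """outcomes: 1 = treatment failure, 0 = success."""
            total, path = 0.0, []
            for failed in outcomes:
                lr = (p_fail_good / p_fail_poor) if failed else ((1 - p_fail_good) / (1 - p_fail_poor))
                total += math.log2(lr)              # weight in "doublings of the odds"
                path.append(total)
            return path

        print([round(w, 2) for w in bayesian_cusum([0, 0, 1, 0, 0, 0, 1, 0])])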

  1. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  2. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.

  3. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2017-01-01

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
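
    The monitoring step reduces to a CUSUM on the residuals between measured and simulated peak power; a simplified sketch with synthetic numbers (the single diode model simulation and parameter identification are outside its scope, and the reference window, allowance and limit are illustrative):

        import numpy as np

        # One-sided (downward) cusum on standardized Pmpp residuals: a persistent
        # drop of measured power below the model prediction accumulates and
        # eventually crosses the decision limit h, flagging a DC-side fault.
        def residual_cusum(measured, simulated, k=0.5, h=5.0):
            residuals = np.asarray(measured) - np.asarray(simulated)
            ref = residuals[:20]                                  # assumed fault-free reference window
            z = (residuals - ref.mean()) / ref.std(ddof=1)
            s, alarms = 0.0, []
            for t, zt in enumerate(z):
                s = max(0.0, s - zt - k)      # accumulate downward shifts only
                if s > h:
                    alarms.append(t)
                    s = 0.0
            return alarms

        rng = np.random.default_rng(1)
        sim = np.full(60, 100.0)                                  # simulated Pmpp (kW), synthetic
        meas = sim + rng.normal(0, 1.0, 60)
        meas[35:] -= 3.0                                          # injected incipient fault after sample 35
        print(residual_cusum(meas, sim))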

  4. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi

    2017-12-14

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.

  5. Choice of the parameters of the CUSUM algorithms for parameter estimation in the Markov modulated Poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    CUSUM algorithm for controlling chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to characteristics of the process. Procedure of the process parameter estimation was described.

  6. Detecting SYN flood attacks via statistical monitoring charts: A comparative study

    KAUST Repository

    Bouyeddou, Benamar

    2017-12-14

    Accurate detection of cyber-attacks plays a central role in safeguarding computer networks and information systems. This paper addresses the problem of detecting SYN flood attacks, which are the most popular Denial of Service (DoS) attacks. Here, we compare the detection capacity of three commonly used monitoring charts, namely a Shewhart chart, a cumulative sum (CUSUM) control chart, and an exponentially weighted moving average (EWMA) chart, in detecting SYN flood attacks. The comparison study is conducted using the publicly available benchmark datasets: the 1999 DARPA Intrusion Detection Evaluation Datasets.
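
    The three charts differ mainly in how much past information they use; a side-by-side sketch on a synthetic counts-per-interval series (parameters and thresholds are illustrative, not tuned on the DARPA data):

        import numpy as np

        def detect(series, mu0, sigma0):
            shewhart, cusum, ewma = [], [], []
            s_plus, z, lam = 0.0, float(mu0), 0.25
            for t, x in enumerate(series):
                u = (x - mu0) / sigma0
                if abs(u) > 3.0:                          # Shewhart: judges each point alone
                    shewhart.append(t)
                s_plus = max(0.0, s_plus + u - 0.5)       # CUSUM: accumulates all past deviations
                if s_plus > 5.0:
                    cusum.append(t)
                    s_plus = 0.0
                z = lam * x + (1 - lam) * z               # EWMA: geometrically weighted past
                if abs(z - mu0) > 3.0 * sigma0 * np.sqrt(lam / (2 - lam)):
                    ewma.append(t)
            return shewhart, cusum, ewma

        rng = np.random.default_rng(7)
        syn = rng.normal(100, 10, 120)                    # SYN packets per interval, synthetic
        syn[80:] += 15                                    # moderate sustained increase (flood onset)
        print(detect(syn, mu0=100, sigma0=10))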

  7. Pilot simulation study using meat inspection data for syndromic surveillance: use of whole carcass condemnation of adult cattle to assess the performance of several algorithms for outbreak detection.

    Science.gov (United States)

    Dupuy, C; Morignat, E; Dorea, F; Ducrot, C; Calavas, D; Gay, E

    2015-09-01

    The objective of this study was to assess the performance of several algorithms for outbreak detection based on weekly proportions of whole carcass condemnations. Data from one French slaughterhouse over the 2005-2009 period were used (177 098 slaughtered cattle, 0.97% of whole carcass condemnations). The method involved three steps: (i) preparation of an outbreak-free historical baseline over 5 years, (ii) simulation of over 100 years of baseline time series with injection of artificial outbreak signals with several shapes, durations and magnitudes, and (iii) assessment of the performance (sensitivity, specificity, outbreak detection precocity) of several algorithms to detect these artificial outbreak signals. The algorithms tested included the Shewhart p chart, the confidence interval of the negative binomial model, the exponentially weighted moving average (EWMA), and the cumulative sum (CUSUM). The highest sensitivity was obtained using a negative binomial algorithm and the highest specificity with CUSUM or EWMA. EWMA sensitivity was too low to select this algorithm for efficient outbreak detection. CUSUM's performance was complementary to the negative binomial algorithm. The use of both algorithms on real data for a prospective investigation of the whole carcass condemnation rate as a syndromic surveillance indicator could be relevant. Shewhart could also be a good option considering its high sensitivity and simplicity of implementation.

  8. Statistical fault diagnosis of wind turbine drivetrain applied to a 5MW floating wind turbine

    DEFF Research Database (Denmark)

    Ghane, Mahdi; Nejad, Amir R.; Blanke, Mogens

    2016-01-01

    To prevent them from developing into failure, statistical change detection is used in this paper. The Cumulative Sum Method (CUSUM) is employed to detect possible defects in the downwind main bearing. A high fidelity gearbox model on a 5-MW spar-type wind turbine is used to generate data for fault-free and faulty conditions of the bearing at the rated wind speed and the associated wave condition. Acceleration measurements are utilized to find residuals used to indirectly detect damages in the bearing. Residuals are found to be non-Gaussian, following a t-distribution with multivariable characteristic parameters...
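
    A minimal sketch of a CUSUM driven by the log-likelihood ratio between a fault-free and a faulty t-distribution for the residuals, in the spirit of the abstract; the degrees of freedom, location shift, threshold and simulated residual series are all illustrative assumptions rather than the authors' model.

```python
import numpy as np
from scipy.stats import t

def llr_cusum(residuals, df, mu0=0.0, mu1=1.0, scale=1.0, h=10.0):
    """CUSUM driven by the log-likelihood ratio between a fault-free
    t-distribution (location mu0) and a faulty one (location mu1).
    df, mu1, scale and h are illustrative assumptions, not fitted values."""
    g, alarms = 0.0, []
    for r in residuals:
        llr = (t.logpdf(r, df, loc=mu1, scale=scale)
               - t.logpdf(r, df, loc=mu0, scale=scale))
        g = max(0.0, g + llr)
        alarms.append(g > h)
    return np.array(alarms)

# Hypothetical acceleration residuals: a defect shifts their location at sample 500
rng = np.random.default_rng(2)
residuals = np.concatenate([rng.standard_t(4, 500),
                            rng.standard_t(4, 200) + 1.0])
alarms = llr_cusum(residuals, df=4)
print("first alarm at sample:", int(np.argmax(alarms)) if alarms.any() else None)
```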

  9. Systems concepts for DOE facilities: analysis of PF/LASS data

    International Nuclear Information System (INIS)

    Bearse, R.C.; Shirk, D.G.; Marshall, R.S.; Thomas, C.C. Jr.

    1982-06-01

    We have analyzed Plutonium Facility/Los Alamos Safeguards System (PF/LASS) data for the Fast Flux Test Facility (FFTF) process. Highlights of the work are: the PF/LASS data base provides useful information for accountability purposes, some measurement code assignments appear to be in error, some other data are erroneous, and material in process (MIP) and cumulative sum (CUSUM) charts are powerful indicators of trouble areas. From these studies we recommend re-examination of instrument biases, adoption of new naming procedures for collection batches, improvement of measurement code assignment reliability, revision of round-off procedures, and strengthening of measurement control procedures

  10. New Results On the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza

    2015-01-01

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulant, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution to reduce such complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented.

  11. New Results on the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza

    2016-01-06

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulant, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution to reduce such complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented [1].

  12. New Results on the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2016-01-01

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulant, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution to reduce such complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented [1].

  13. Clinical and biochemical heterogeneity between patients with glycogen storage disease type IA: the added value of CUSUM for metabolic control.

    Science.gov (United States)

    Peeks, Fabian; Steunenberg, Thomas A H; de Boer, Foekje; Rubio-Gozalbo, M Estela; Williams, Monique; Burghard, Rob; Rajas, Fabienne; Oosterveer, Maaike H; Weinstein, David A; Derks, Terry G J

    2017-09-01

    To study heterogeneity between patients with glycogen storage disease type Ia (GSD Ia), a rare inherited disorder of carbohydrate metabolism caused by the deficiency of glucose-6-phosphatase (G6Pase). Descriptive retrospective study of longitudinal clinical and biochemical data and long-term complications in 20 GSD Ia patients. We included 11 patients with homozygous G6PC mutations and siblings from four families carrying identical G6PC genotypes. To display subtle variations in repeated triglyceride measurements with respect to time for individual patients, CUSUM-analysis graphs were constructed. Patients with different homozygous G6PC mutations showed important differences in height, BMI, and biochemical parameters (i.e., lactate, uric acid, triglyceride, and cholesterol concentrations). Furthermore, CUSUM analysis predicts and displays subtle changes in longitudinal blood triglyceride concentrations. Siblings in families also displayed important differences in biochemical parameters (i.e., lactate, uric acid, triglycerides, and cholesterol concentrations) and long-term complications (i.e., liver adenomas, nephropathy, and osteopenia/osteoporosis). Differences between GSD Ia patients reflect large clinical and biochemical heterogeneity. Heterogeneity between GSD Ia patients with homozygous G6PC mutations indicates an important role of the G6PC genotype/mutations. Differences between affected siblings suggest an additional role (genetic and/or environmental) of modifying factors defining the GSD Ia phenotype. CUSUM analysis can facilitate single-patient monitoring of metabolic control, and future application of this method may improve precision medicine for patients with GSD and other inherited metabolic diseases.

  14. Cumulants in perturbation expansions for non-equilibrium field theory

    International Nuclear Information System (INIS)

    Fauser, R.

    1995-11-01

    The formulation of perturbation expansions for a quantum field theory of strongly interacting systems in a general non-equilibrium state is discussed. Non-vanishing initial correlations are included in the formulation of the perturbation expansion in terms of cumulants. The cumulants are shown to be the suitable candidate for summing up the perturbation expansion. Also a linked-cluster theorem for the perturbation series with cumulants is presented. Finally a generating functional of the perturbation series with initial correlations is studied. We apply the methods to a simple model of a fermion-boson system. (orig.)

  15. On a method to detect long-latency excitations and inhibitions of single hand muscle motoneurons in man.

    Science.gov (United States)

    Awiszus, F; Feistner, H; Schäfer, S S

    1991-01-01

    The peri-stimulus-time histogram (PSTH) analysis of stimulus-related neuronal spike train data is usually regarded as a method to detect stimulus-induced excitations or inhibitions. However, for a fairly regularly discharging neuron such as the human alpha-motoneuron, long-latency modulations of a PSTH are difficult to interpret as PSTH modulations can also occur as a consequence of a modulated neuronal autocorrelation. The experiments reported here were made (i) to investigate the extent to which a PSTH of a human hand-muscle motoneuron may be contaminated by features of the autocorrelation and (ii) to develop methods that display the motoneuronal excitations and inhibitions without such contamination. Responses of 29 single motor units to electrical ulnar nerve stimulation below motor threshold were investigated in the first dorsal interosseous muscle of three healthy volunteers using an experimental protocol capable of demonstrating the presence of autocorrelative modulations in the neuronal response. It was found for all units that the PSTH as well as the cumulative sum (CUSUM) derived from these responses were severely affected by the presence of autocorrelative features. On the other hand, calculating the CUSUM in a slightly modified form yielded--for all units investigated--a neuronal output feature sensitive only to motoneuronal excitations and inhibitions induced by the afferent volley. The price that has to be paid to arrive at such a modified CUSUM (mCUSUM) was a high computational effort prohibiting the on-line availability of this output feature during the experiment. It was found, however, that an interspike interval superposition plot (IISP)--easily obtainable during the experiment--is also free of autocorrelative features.(ABSTRACT TRUNCATED AT 250 WORDS)

  16. Analysis of the learning curve for peroral endoscopic myotomy for esophageal achalasia: Single-center, two-operator experience.

    Science.gov (United States)

    Lv, Houning; Zhao, Ningning; Zheng, Zhongqing; Wang, Tao; Yang, Fang; Jiang, Xihui; Lin, Lin; Sun, Chao; Wang, Bangmao

    2017-05-01

    Peroral endoscopic myotomy (POEM) has emerged as an advanced technique for the treatment of achalasia, and defining the learning curve is mandatory. From August 2011 to June 2014, two operators in our institution (A&B) carried out POEM on 35 and 33 consecutive patients, respectively. Moving average and cumulative sum (CUSUM) methods were used to analyze the POEM learning curve for corrected operative time (cOT), defined as the operative time per centimeter of myotomy. Additionally, perioperative outcomes were compared among distinct learning curve phases. Using the moving average method, cOT reached a plateau at the 29th case and at the 24th case for operators A and B, respectively. CUSUM analysis identified three phases: initial learning period (Phase 1), efficiency period (Phase 2) and mastery period (Phase 3). The relatively smooth state in the CUSUM graph occurred at the 26th case and at the 24th case for operators A and B, respectively. Mean cOT values for the three phases were 8.32, 5.20 and 3.97 min for operator A, and 5.99, 3.06 and 3.75 min for operator B, respectively. Eckardt score and lower esophageal sphincter pressure significantly decreased during the 1-year follow-up period. Data were comparable regarding patient characteristics and perioperative outcomes. This single-center study demonstrated that expert endoscopists with experience in esophageal endoscopic submucosal dissection reached a plateau in learning of POEM after approximately 25 cases. © 2016 Japan Gastroenterological Endoscopy Society.

  17. Learning curve for the management of tyrosine kinase inhibitors as the first line of treatment for patients with metastatic renal cancer.

    Science.gov (United States)

    Lendínez-Cano, G; Osman García, I; Congregado Ruiz, C B; Conde Sánchez, J M; Medina López, R A

    2018-03-07

    To analyse the learning curve for the management of tyrosine kinase inhibitors as the first line of treatment for patients with metastatic renal cancer. We evaluated 32 consecutive patients treated in our department for metastatic renal cancer with tyrosine kinase inhibitors (pazopanib or sunitinib) as first-line treatment between September 2012 and November 2015. We retrospectively analysed this sample. We measured the time to the withdrawal of the first-line treatment, the time to progression and overall survival using Kaplan-Meier curves. The learning curve was analysed with the cumulative sum (CUSUM) methodology. In our series, the median time to the withdrawal of the first-line treatment was 11 months (95% CI 4.9-17.1). The mean time to progression was 30.4 months (95% CI 22.7-38.1), and the mean overall survival was 34.9 months (95% CI 27.8-42). By applying the CUSUM methodology, we obtained a graph for the CUSUM value of the time to withdrawal of the first-line treatment (CUSUM TW), observing 3 well-differentiated phases: phase 1 or initial learning phase (1-15), phase 2 (16-26) in which the management of the drug progressively improved and phase 3 (27-32) of maximum experience or mastery of the management of these drugs. The number of treated patients needed to achieve the proper management of these patients was estimated at 15. Despite the limitations of the sample size and follow-up time, we estimated (in 15 patients) the number needed to reach the necessary experience in the management of these patients with tyrosine kinase inhibitors. We observed no relationship between the time to the withdrawal of the first-line treatment for any cause and progression. Copyright © 2018 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.

  18. Cumulative radiation dose of multiple trauma patients during their hospitalization

    International Nuclear Information System (INIS)

    Wang Zhikang; Sun Jianzhong; Zhao Zudan

    2012-01-01

    Objective: To study the cumulative radiation dose of multiple trauma patients during their hospitalization and to analyze the dose influence factors. Methods: The DLP for CT and DR were retrospectively collected from patients between June 2009 and April 2011 at a university affiliated hospital. The cumulative radiation doses were calculated by summing typical effective doses of the anatomic regions scanned. Results: The cumulative radiation doses of 113 patients were collected. The maximum, minimum and mean values of the cumulative effective doses were 153.3 mSv, 16.48 mSv and (52.3 ± 26.6) mSv, respectively. Conclusions: Multiple trauma patients have high cumulative radiation exposure. Therefore, the management of cumulative radiation doses should be enhanced. Establishing individualized radiation exposure archives will help clinicians and technicians decide whether to image again and how to select the imaging parameters. (authors)

  19. The challenge of cumulative impacts

    Energy Technology Data Exchange (ETDEWEB)

    Masden, Elisabeth

    2011-07-01

    Full text: As governments pledge to combat climate change, wind turbines are becoming a common feature of terrestrial and marine environments. Although wind power is a renewable energy source and a means of reducing carbon emissions, there is a need to ensure that the wind farms themselves do not damage the environment. There is particular concern over the impacts of wind farms on bird populations, and with increasing numbers of wind farm proposals, the concern focuses on cumulative impacts. Individually, a wind farm, or indeed any activity/action, may have minor effects on the environment, but collectively these may be significant, potentially greater than the sum of the individual parts acting alone. Cumulative impact assessment is a legislative requirement of environmental impact assessment, but such assessments are rarely adequate, restricting the acquisition of basic knowledge about the cumulative impacts of wind farms on bird populations. Reasons for this are numerous, but a recurring theme is the lack of clear definitions and guidance on how to perform cumulative assessments. Here we present a conceptual framework and include illustrative examples to demonstrate how the framework can be used to improve the planning and execution of cumulative impact assessments. The core concept is that explicit definitions of impacts, actions and scales of assessment are required to reduce uncertainty in the process of assessment and improve communication between stakeholders. Only when it is clear what has been included within a cumulative assessment is it possible to make comparisons between developments. Our framework requires improved legislative guidance on the actions to include in assessments, and advice on the appropriate baselines against which to assess impacts. Cumulative impacts are currently considered on restricted scales (spatial and temporal) relating to individual development assessments. We propose that benefits would be gained from elevating cumulative

  20. Analysis of cumulative energy consumption in an oxy-fuel combustion power plant integrated with a CO2 processing unit

    International Nuclear Information System (INIS)

    Ziębik, Andrzej; Gładysz, Paweł

    2014-01-01

    Highlights: • Oxy-fuel combustion is a promising CCS technology. • The sum of direct and indirect energy consumption ought to be considered. • This sum is expressed by cumulative energy consumption. • Input–output analysis is an adequate method of CCS modeling. - Abstract: A balance of direct energy consumption is not a sufficient tool for an energy analysis of an oxy-fuel combustion power plant because of the indirect consumption of energy in preceding processes in the energy-technological set of interconnections. The sum of direct and indirect consumption expresses cumulative energy consumption. Based on the "input–output" model of direct energy consumption, a mathematical model of cumulative energy consumption for an integrated oxy-fuel combustion power plant has been developed. Three groups of energy carriers or materials are distinguished, viz. main products, by-products and external supplies not supplementing the main production. The mathematical model of the balance of cumulative energy consumption is based on the assumption that the indices of cumulative energy consumption of external supplies (mainly fuels and raw materials) are known a priori. This results from the weak connections between the domestic economy and an integrated oxy-fuel combustion power plant. The paper presents examples of the balances of both direct and cumulative energy consumption. The results of calculations of the indices of cumulative energy consumption for the main products are presented. A comparison of direct and cumulative energy effects between three variants has been worked out. The calculated indices of cumulative energy consumption were also subjected to a sensitivity analysis. The influence of the indices of cumulative energy consumption of external supplies (input data), as well as the assumption concerning the utilization of solid by-products of the combustion process, has been investigated.

  1. Preliminary Measures of Instructor Learning in Teaching Junctional Tourniquet Users.

    Science.gov (United States)

    Kragh, John F; Aden, James K; Shackelford, Stacy; Dubick, Michael A

    2016-01-01

    The objective of the present study was to assess the effect of instructor learning on student performance in use of junctional tourniquets. From a convenience sample of data available after another study, we used a manikin for assessment of control of bleeding from a right groin gunshot wound. Blood loss was measured by the instructor while training users. The data set represented a group of 30 persons taught one at a time. The first measure was a plot of mean blood loss volumes for the sequential users. The second measure was a plot of the cumulative sum (CUSUM) of mean blood loss (BL) volumes for users. Mean blood loss trended down as the instructor gained experience with each newly instructed user. User performance continually improved as the instructor gained more experience with teaching. No plateau effect was observed within the 30 users. The CUSUM plot illustrated a turning point or cusp at the seventh user. The prior portion of the plot (users 1-7) had the greatest improvement; performance did not improve as much thereafter. The improvement after the seventh user was the only change detected in the instructor's trend of performance. The instructor's teaching experience appeared to directly affect user performance; in a model of junctional hemorrhage, the volume of blood loss from the manikin during junctional tourniquet placement was a useful metric of instructor learning. The CUSUM technique detected a small but meaningful change in trend where the instructor learning curve was greatest while working with the first seven users.

  2. Early learning effect of residents for laparoscopic sigmoid resection.

    Science.gov (United States)

    Bosker, Robbert; Groen, Henk; Hoff, Christiaan; Totte, Eric; Ploeg, Rutger; Pierie, Jean-Pierre

    2013-01-01

    To evaluate the effect of learning the laparoscopic sigmoid resection procedure on resident surgeons; establish a minimum number of cases before a resident surgeon could be expected to achieve proficiency with the procedure; and examine if an analysis could be used to measure and support the clinical evaluation of the surgeon's competence with the procedure. Retrospective analysis of data which were prospectively entered in the database. From 2003 to 2007 all patients who underwent a laparoscopic sigmoid resection carried out by senior residents, who completed the procedure as the primary surgeon proctored by an experienced surgeon, were included in the study. A cumulative sum control chart (CUSUM) analysis was used to evaluate performance. The procedure was defined as a failure if major intra-operative complications occurred such as intra-abdominal organ injury, bleeding, or anastomotic leakage; if an inadequate number of lymph nodes (<12 nodes) were removed; or if conversion to an open surgical procedure was required. Thirteen residents performed 169 laparoscopic sigmoid resections in the period evaluated. A significant majority of the resident surgeons were able to consistently perform the procedure without failure after 11 cases and were determined to be competent. One resident was not determined to be competent and the CUSUM score supported these findings. We concluded that at least 11 cases are required for most residents to obtain the necessary competence with the laparoscopic sigmoid resection procedure. Evaluation with the CUSUM analysis can be used to measure and support the clinical evaluation of the resident surgeon's competence with the procedure. Copyright © 2013 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
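
    For illustration, a common way to build such a learning-curve CUSUM for a binary success/failure outcome is sketched below; the acceptable failure rate p0 and the outcome sequence are hypothetical, not the study's data.

```python
import numpy as np

def learning_cusum(failures, p0=0.1):
    """Learning-curve CUSUM S_n = sum_i (x_i - p0), where x_i = 1 for a
    failed case and 0 for a success, and p0 is the acceptable failure rate.
    A downward-trending curve indicates performance better than p0."""
    x = np.asarray(failures, dtype=float)
    return np.cumsum(x - p0)

# Hypothetical outcome sequence: failures cluster early, then taper off
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
for case, s in enumerate(learning_cusum(outcomes, p0=0.1), start=1):
    print(f"case {case:2d}: CUSUM = {s:+.1f}")
```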

  3. Application of classical versus bayesian statistical control charts to on-line radiological monitoring

    International Nuclear Information System (INIS)

    DeVol, T.A.; Gohres, A.A.; Williams, C.L.

    2009-01-01

    False positive and false negative incidence rates of radiological monitoring data from classical and Bayesian statistical process control chart techniques are compared. On-line monitoring for illicit radioactive material with no false positives or false negatives is the goal of homeland security monitoring, but is unrealistic. However, statistical fluctuations in the detector signal, short detection times, large source-to-detector distances, and shielding effects make distinguishing between a radiation source and natural background particularly difficult. Experimental time series data were collected using a 1' x 1' LaCl3(Ce) based scintillation detector (Scionix, Orlando, FL) under various simulated conditions. Experimental parameters include radionuclide (gamma-ray) energy, activity, density thickness (source-to-detector distance and shielding), time, and temperature. All statistical algorithms were developed using MATLAB. The Shewhart (3-σ) control chart and the cumulative sum (CUSUM) control chart are the classical procedures adopted, while the Bayesian technique is the Shiryayev-Roberts (S-R) control chart. The Shiryayev-Roberts method was the best method for controlling the number of false positive detects, followed by the CUSUM method. However, the Shiryayev-Roberts method, used without modification, resulted in one of the highest false negative incidence rates independent of the signal strength. Modification of the Shiryayev-Roberts statistical analysis method reduced the number of false negatives, but resulted in an increase in the false positive incidence rate. (author)
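
    As a rough sketch of the Bayesian-type procedure named above, the Shiryayev-Roberts statistic can be computed recursively from likelihood ratios; the Gaussian count model, shift size and alarm threshold below are illustrative assumptions, not the detector model or settings used in the study.

```python
import numpy as np

def shiryayev_roberts(x, mu0, mu1, sigma, threshold):
    """Shiryayev-Roberts statistic R_n = (1 + R_{n-1}) * L_n, where L_n is the
    likelihood ratio of observation n under a shifted Gaussian mean (mu1)
    versus background (mu0). All parameters here are illustrative."""
    R, alarms = 0.0, []
    for xi in np.asarray(x, dtype=float):
        L = np.exp((mu1 - mu0) * (xi - (mu0 + mu1) / 2.0) / sigma**2)
        R = (1.0 + R) * L
        alarms.append(R > threshold)
    return np.array(alarms)

# Hypothetical count-rate series: a weak source appears at sample 400
rng = np.random.default_rng(3)
counts = np.concatenate([rng.normal(100, 10, 400), rng.normal(115, 10, 100)])
alarms = shiryayev_roberts(counts, mu0=100, mu1=115, sigma=10, threshold=1e6)
print("first alarm at sample:", int(np.argmax(alarms)) if alarms.any() else None)
```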

  4. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms.

    Science.gov (United States)

    Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils

    2016-05-01

    Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real time. The results showed that EWMA and PSHEW had higher CumSe (when compared with the CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW, for the different scenarios. Increasing the sample size 10 fold halved the time to detection (CumSe=1), whereas increasing the sample size 100 fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster

  5. On the sum of squared η-μ random variates with application to the performance of wireless communication systems

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2013-01-01

    The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed squared η-μ variates, applicable to the output statistics of maximal ratio combining (MRC) receiver

  6. Accurately Identifying New QoS Violation Driven by High-Distributed Low-Rate Denial of Service Attacks Based on Multiple Observed Features

    Directory of Open Access Journals (Sweden)

    Jian Kang

    2015-01-01

    Full Text Available We propose using multiple observed features of network traffic to identify new high-distributed low-rate quality of service (QoS) violations so that detection accuracy may be further improved. For the multiple observed features, we choose the F feature in the TCP packet header as a microscopic feature, and the P feature and D feature of network traffic as macroscopic features. Based on these features, we establish a multistream fused hidden Markov model (MF-HMM) to detect stealthy low-rate denial of service (LDoS) attacks hidden in legitimate network background traffic. In addition, the threshold value is dynamically adjusted by using the Kaufman algorithm. Our experiments show that the additive effect of combining multiple features effectively reduces the false-positive rate. The average detection rate of MF-HMM represents a significant 23.39% and 44.64% improvement over the typical power spectrum density (PSD) algorithm and the nonparametric cumulative sum (CUSUM) algorithm, respectively.

  7. Rapid detection of pandemic influenza in the presence of seasonal influenza

    Science.gov (United States)

    2010-01-01

    Background Key to the control of pandemic influenza are surveillance systems that raise alarms rapidly and sensitively. In addition, they must minimise false alarms during a normal influenza season. We develop a method that uses historical syndromic influenza data from the existing surveillance system 'SERVIS' (Scottish Enhanced Respiratory Virus Infection Surveillance) for influenza-like illness (ILI) in Scotland. Methods We develop an algorithm based on the weekly case ratio (WCR) of reported ILI cases to generate an alarm for pandemic influenza. From the seasonal influenza data from 13 Scottish health boards, we estimate the joint probability distribution of the country-level WCR and the number of health boards showing synchronous increases in reported influenza cases over the previous week. Pandemic cases are sampled with various case reporting rates from simulated pandemic influenza infections and overlaid with seasonal SERVIS data from 2001 to 2007. Using this combined time series we test our method for speed of detection, sensitivity and specificity. Also, the 2008-09 SERVIS ILI cases are used to test the detection performance of the three methods with real pandemic data. Results We compare our method, based on our simulation study, to the moving-average Cumulative Sums (Mov-Avg Cusum) and ILI rate threshold methods and find it to be more sensitive and rapid. For 1% case reporting and detection specificity of 95%, our method is 100% sensitive and has median detection time (MDT) of 4 weeks while the Mov-Avg Cusum and ILI rate threshold methods are, respectively, 97% and 100% sensitive with MDT of 5 weeks. At 99% specificity, our method remains 100% sensitive with MDT of 5 weeks. Although the threshold method maintains its sensitivity of 100% with MDT of 5 weeks, sensitivity of Mov-Avg Cusum declines to 92% with increased MDT of 6 weeks. For a two-fold decrease in the case reporting rate (0.5%) and 99% specificity, the WCR and threshold methods

  8. The cumulative effect of consecutive winters' snow depth on moose and deer populations: a defence

    Science.gov (United States)

    McRoberts, R.E.; Mech, L.D.; Peterson, R.O.

    1995-01-01

    1. L. D. Mech et al. presented evidence that moose Alces alces and deer Odocoileus virginianus population parameters are influenced by a cumulative effect of three winters' snow depth. They postulated that snow depth affects adult ungulates cumulatively from winter to winter and results in measurable offspring effects after the third winter. 2. F. Messier challenged those findings and claimed that the population parameters studied were instead affected by ungulate density and wolf indexes. 3. This paper refutes Messier's claims by demonstrating that his results were an artifact of two methodological errors. The first was that, in his main analyses, Messier used only the first previous winter's snow depth rather than the sum of the previous three winters' snow depth, which was the primary point of Mech et al. Secondly, Messier smoothed the ungulate population data, which removed 22-51% of the variability from the raw data. 4. When we repeated Messier's analyses on the raw data and using the sum of the previous three winters' snow depth, his findings did not hold up.

  9. Development of nuclear power plant online monitoring system using statistical quality control

    International Nuclear Information System (INIS)

    An, Sang Ha

    2006-02-01

    Statistical Quality Control techniques have been applied to many aspects of industrial engineering. An application to nuclear power plant maintenance and control is also presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCP) and the fouling resistance of a heat exchanger. This research uses Shewhart X-bar and R charts, Cumulative Sum charts (CUSUM), and the Sequential Probability Ratio Test (SPRT) to analyze the process for the state of statistical control. A Control Chart Analyzer (CCA) was also developed to support these analyses and to decide whether the process is out of control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with enough time to respond to possible emergency situations and thus improve plant safety and reliability.
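
    Of the tools listed above, the SPRT is the least chart-like; a minimal Wald SPRT for a Gaussian mean shift is sketched below. The nominal and shifted means, the noise level and the error rates are illustrative assumptions, not values from the study.

```python
import numpy as np

def sprt(x, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test for H0: mean = mu0 versus
    H1: mean = mu1 with known sigma. Returns the accepted hypothesis and
    the number of samples used. alpha/beta are illustrative error rates."""
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for n, xi in enumerate(np.asarray(x, dtype=float), start=1):
        llr += (mu1 - mu0) * (xi - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", n

# Hypothetical RCP vibration readings that have drifted to the alternative mean
rng = np.random.default_rng(4)
readings = rng.normal(5.5, 0.5, 200)
print(sprt(readings, mu0=5.0, mu1=5.5, sigma=0.5))
```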

  10. Design of Nuclear Power Plant Online Monitoring System

    International Nuclear Information System (INIS)

    An, Sang-ha; Jeong, Yong-hoon; Chang, Soon-heung; Lee, Song-kyu

    2007-01-01

    Statistical Quality Control techniques have been applied to many aspects of industrial engineering. An application to nuclear power plant maintenance and control is also presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) and the fouling resistance of a heat exchanger. This research uses Shewhart X-bar and R charts, Cumulative Sum charts (CUSUM), and the Sequential Probability Ratio Test (SPRT) to analyze the process for the state of statistical control. The Control Chart Analyzer (CCA) has been developed to support these analyses and to decide whether the process is out of control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with enough time to respond to possible emergency situations and thus improve plant safety and reliability.

  11. Monitoring a robot swarm using a data-driven fault detection approach

    KAUST Repository

    Khaldi, Belkacem

    2017-06-30

    Using a swarm robotics system with one or more faulty robots to accomplish specific tasks may degrade performance relative to the target requirements. In such circumstances, robot swarms require continuous monitoring to detect abnormal events and to sustain normal operations. In this paper, an innovative exogenous fault detection method for monitoring robot swarms is presented. The method merges the flexibility of principal component analysis (PCA) models and the greater sensitivity of the exponentially-weighted moving average (EWMA) and cumulative sum (CUSUM) control charts to insidious changes. The method is tested and evaluated on a swarm of simulated foot-bot robots performing a circle formation task via the viscoelastic control model. We illustrate, using simulated data collected from the ARGoS simulator, that a significant improvement in fault detection can be obtained by using the proposed method when compared with the conventional PCA-based methods (i.e., T2 and Q).
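
    A bare-bones sketch of the general idea (a PCA residual statistic monitored with an EWMA) is given below; the feature set, fault injection, retained components and control limit are invented for illustration and are not the authors' viscoelastic model, ARGoS data, or chart design.

```python
import numpy as np

def pca_q_statistic(X_train, X_test, n_components=2):
    """Fit a PCA model on fault-free data and return the squared prediction
    error (Q statistic) of new observations with respect to that model."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_components].T                      # retained loadings
    E = (X_test - mu) - (X_test - mu) @ P @ P.T  # residual part of each sample
    return np.sum(E**2, axis=1)

def ewma(x, lam=0.2):
    """Exponentially weighted moving average of a 1-D series."""
    z, out = x[0], []
    for xi in x:
        z = lam * xi + (1 - lam) * z
        out.append(z)
    return np.array(out)

# Hypothetical swarm features (three per time step); a fault corrupts the
# correlation structure after sample 300.
rng = np.random.default_rng(5)
normal = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3) * 0.1, 400)
data = normal.copy()
data[300:] += rng.normal(0.0, 1.0, (100, 3))     # injected fault

q_calib = pca_q_statistic(normal[:200], normal[200:300])
limit = 2.0 * ewma(q_calib).max()                # crude empirical control limit
q_monitor = pca_q_statistic(normal[:200], data)
alarms = ewma(q_monitor) > limit
print("first EWMA(Q) alarm at sample:", int(np.argmax(alarms)) if alarms.any() else None)
```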

  12. Presentation of a quality management program in off-pump coronary bypass surgery.

    Science.gov (United States)

    Bougioukakis, Petros; Kluegl, Stefan J; Babin-Ebell, Joerg; Tagarakis, Giorgios I; Mandewirth, Martin; Zacher, Michael; Diegeler, Anno

    2014-01-01

    To increase the number of off-pump coronary procedures at our institution, a new surgical team was formed. The first 3 years of the "learning period" were accompanied by a quality management program aimed at controlling and adjusting the surgical process and ensuring the safety and quality of the procedure. All patients were operated on by the same surgeon between January 2004 and December 2006; all procedures were performed under the following quality management protocol. First, a flow chart regulated surgical and anesthetic details. Second, an online file, named the "disturbance file," was used to report work flow interruption, disturbance, and intraoperative events, that is, myocardial ischemia, hypotension, conversion to cardiopulmonary bypass, and any violation of the protocol. Each event was coded with 1 point and added to a score (the higher the score, the greater the disturbance). Outcome parameters known as major events (major cardiac and cerebral events: mortality within 30 days, myocardial infarction confirmed by electrocardiogram or significantly elevated total creatine kinase/myocardial muscle creatine kinase, reintervention within 30 days, stroke) and new-onset dialysis were also measured. Success was defined as freedom from any of those events and depicted in a cumulative sum control (CUSUM) chart. Outcome data and CUSUM were correlated with the intraoperative Disturbance Index. In total, 490 off-pump coronary bypass operations were performed by the named surgeon during the study period. The 30-day mortality was reduced from 4.0% to 1.9%. The proportion of cases with a Disturbance Index score greater than 1 declined from 41.6% to 23.3%. All major cardiac and cerebral events declined. The CUSUM chart showed two critical periods during the learning period, which made an adjustment of the protocol necessary. Quality management control is efficient in improving the postoperative results of a surgical procedure. A learning period is of cardinal importance for any new team wishing to engage

  13. 'Outbreak Gold Standard' selection to provide optimized threshold for infectious diseases early-alert based on China Infectious Disease Automated-alert and Response System.

    Science.gov (United States)

    Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau

    2017-12-01

    The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were initially determined using the "Mean + 2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean + 2SD" method with the performance of 5 novel algorithms to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean + 2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was then defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as the OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.

  14. Cumulative impact assessments and bird/wind farm interactions: Developing a conceptual framework

    International Nuclear Information System (INIS)

    Masden, Elizabeth A.; Fox, Anthony D.; Furness, Robert W.; Bullman, Rhys; Haydon, Daniel T.

    2010-01-01

    The wind power industry has grown rapidly in the UK to meet EU targets of sourcing 20% of energy from renewable sources by 2020. Although wind power is a renewable energy source, there are environmental concerns over increasing numbers of wind farm proposals and associated cumulative impacts. Individually, a wind farm, or indeed any action, may have minor effects on the environment, but collectively these may be significant, potentially greater than the sum of the individual parts acting alone. EU and UK legislation requires a cumulative impact assessment (CIA) as part of Environmental Impact Assessments (EIA). However, in the absence of detailed guidance and definitions, such assessments within EIA are rarely adequate, restricting the acquisition of basic knowledge about the cumulative impacts of wind farms on bird populations. Here we propose a conceptual framework to promote transparency in CIA through the explicit definition of impacts, actions and scales within an assessment. Our framework requires improved legislative guidance on the actions to include in assessments, and advice on the appropriate baselines against which to assess impacts. Cumulative impacts are currently considered on restricted scales (spatial and temporal) relating to individual development EIAs. We propose that benefits would be gained from elevating CIA to a strategic level, as a component of spatially explicit planning.

  15. Multivariate diagnostics and anomaly detection for nuclear safeguards

    International Nuclear Information System (INIS)

    Burr, T.

    1994-01-01

    For process control and other reasons, new and future nuclear reprocessing plants are expected to be increasingly automated compared with older plants. As a consequence of this automation, the quantity of data potentially available for safeguards may be much greater in future reprocessing plants than in current plants. The authors first review recent literature that applies multivariate Shewhart and multivariate cumulative sum (Cusum) tests to detect anomalous data. These tests are used to evaluate residuals obtained from a simulated three-tank problem in which five variables (volume, density, and concentrations of uranium, plutonium, and nitric acid) in each tank are modeled and measured. They then present results from several simulations involving transfers between the tanks and between the tanks and the environment. Residuals from a no-fault problem in which the measurements and model predictions are both correct are used to develop Cusum test parameters, which are then used to test for faults in several simulated anomalous situations, such as an unknown leak or diversion of material from one of the tanks. The leak can be detected by comparing measurements, which estimate the true state of the tank system, with the model predictions, which estimate the state of the tank system as it "should" be. The no-fault simulation compares false alarm behavior for the various tests, whereas the anomalous problems allow one to compare the power of the various tests to detect faults under possible diversion scenarios. For comparison with the multivariate tests, univariate tests are also applied to the residuals.

  16. 7 CFR 52.38a - Definitions of terms applicable to statistical sampling.

    Science.gov (United States)

    2010-01-01

    ... the number of defects (or defectives), which exceed the sample unit tolerance (“T”), in a series of... accumulation of defects (or defectives) allowed to exceed the sample unit tolerance (“T”) in any sample unit or consecutive group of sample units. (ii) CuSum value. The accumulated number of defects (or defectives) that...

  17. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
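
    The idea can be illustrated with a toy linear regression: cumulate the residuals over an ordered covariate and compare the observed process with realizations obtained by multiplying the residuals by independent standard normals. This sketch is a simplified stand-in for the procedure described above (it omits, for example, the correction for parameter estimation); the data, model and number of realizations are invented for illustration.

```python
import numpy as np

def cumulative_residual_process(x, resid):
    """Cumulative sum of residuals taken over the ordered values of covariate x,
    scaled by sqrt(n) as in the usual functional central limit normalization."""
    order = np.argsort(x)
    return np.cumsum(resid[order]) / np.sqrt(len(resid))

# Hypothetical check of a straight-line fit when the true relation is quadratic.
rng = np.random.default_rng(6)
n = 300
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + 0.5 * x + 0.4 * x**2 + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), x])             # fitted (misspecified) model: y ~ 1 + x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

observed = cumulative_residual_process(x, resid)
# Reference realizations: residuals multiplied by independent standard normals,
# a simplified stand-in for the zero-mean Gaussian process of the abstract
# (it ignores the extra variability from estimating beta).
sup_ref = np.array([np.abs(cumulative_residual_process(x, resid * rng.normal(size=n))).max()
                    for _ in range(1000)])
sup_obs = np.abs(observed).max()
print(f"observed supremum: {sup_obs:.2f}, approx p-value: {np.mean(sup_ref >= sup_obs):.3f}")
```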

  18. Electronuclear sum rules

    International Nuclear Information System (INIS)

    Arenhoevel, H.; Drechsel, D.; Weber, H.J.

    1978-01-01

    Generalized sum rules are derived by integrating the electromagnetic structure functions along lines of constant ratio of momentum and energy transfer. For non-relativistic systems these sum rules are related to the conventional photonuclear sum rules by a scaling transformation. The generalized sum rules are connected with the absorptive part of the forward scattering amplitude of virtual photons. The analytic structure of the scattering amplitudes and the possible existence of dispersion relations have been investigated in schematic relativistic and non-relativistic models. While for the non-relativistic case analyticity does not hold, the relativistic scattering amplitude is analytical for time-like (but not for space-like) photons and relations similar to the Gell-Mann-Goldberger-Thirring sum rule exist. (Auth.)

  19. Analytical explicit formulas of average run length for long memory process with ARFIMA model on CUSUM control chart

    Directory of Open Access Journals (Sweden)

    Wilasinee Peerajit

    2017-12-01

    Full Text Available This paper proposes explicit formulas for the exact Average Run Length (ARL) of a CUSUM control chart, derived via integral equations, when the observations are long-memory processes with exponential white noise. To verify accuracy, the ARLs obtained from the explicit formulas were compared with those from the numerical integral equation (NIE) method in terms of the percentage of absolute difference. The explicit formulas are based on the Banach fixed point theorem, which guarantees the existence and uniqueness of the solution for ARFIMA(p,d,q) processes. Results showed that the two methods are in good agreement, with a percentage of absolute difference of less than 0.23%. Therefore, the explicit formulas are an efficient alternative for implementation in real applications, because the CPU time needed to compute the ARLs from the explicit formulas is about one second, making them preferable to the NIE method.
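
    The quantity being computed here, the ARL, can also be approximated by crude Monte Carlo simulation, which is a useful sanity check even though it is far slower than the explicit formulas or the NIE method discussed above. The sketch below estimates ARLs for an upper CUSUM on i.i.d. exponential observations; the chart parameters are illustrative and the long-memory ARFIMA structure of the paper is not reproduced.

```python
import numpy as np

def mc_arl(h, k, mean, n_rep=2000, max_n=50000, seed=7):
    """Crude Monte Carlo estimate of the ARL of an upper CUSUM applied to
    i.i.d. exponential observations with the given mean (a simplification;
    the paper treats long-memory ARFIMA processes with exponential white noise)."""
    rng = np.random.default_rng(seed)
    c = np.zeros(n_rep)                      # CUSUM statistic per replication
    rl = np.full(n_rep, max_n)               # recorded run lengths
    active = np.ones(n_rep, dtype=bool)      # replications still running
    for n in range(1, max_n + 1):
        x = rng.exponential(mean, size=active.sum())
        c[active] = np.maximum(0.0, c[active] + x - k)
        crossed = active.copy()
        crossed[active] = c[active] > h
        rl[crossed] = n
        active &= ~crossed
        if not active.any():
            break
    return rl.mean()

# Illustrative parameters: in-control mean 1.0, chart tuned for a shift to 1.5
print("in-control ARL     :", round(mc_arl(h=4.0, k=1.25, mean=1.0), 1))
print("out-of-control ARL :", round(mc_arl(h=4.0, k=1.25, mean=1.5), 1))
```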

  20. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.

  1. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.

  2. Rapid detection of pandemic influenza in the presence of seasonal influenza

    Directory of Open Access Journals (Sweden)

    Robertson Chris

    2010-11-01

    Full Text Available Abstract Background Key to the control of pandemic influenza are surveillance systems that raise alarms rapidly and sensitively. In addition, they must minimise false alarms during a normal influenza season. We develop a method that uses historical syndromic influenza data from the existing surveillance system 'SERVIS' (Scottish Enhanced Respiratory Virus Infection Surveillance) for influenza-like illness (ILI) in Scotland. Methods We develop an algorithm based on the weekly case ratio (WCR) of reported ILI cases to generate an alarm for pandemic influenza. From the seasonal influenza data from 13 Scottish health boards, we estimate the joint probability distribution of the country-level WCR and the number of health boards showing synchronous increases in reported influenza cases over the previous week. Pandemic cases are sampled with various case reporting rates from simulated pandemic influenza infections and overlaid with seasonal SERVIS data from 2001 to 2007. Using this combined time series we test our method for speed of detection, sensitivity and specificity. Also, the 2008-09 SERVIS ILI cases are used to test the detection performance of the three methods with real pandemic data. Results We compare our method, based on our simulation study, to the moving-average Cumulative Sums (Mov-Avg Cusum) and ILI rate threshold methods and find it to be more sensitive and rapid. For 1% case reporting and detection specificity of 95%, our method is 100% sensitive and has median detection time (MDT) of 4 weeks while the Mov-Avg Cusum and ILI rate threshold methods are, respectively, 97% and 100% sensitive with MDT of 5 weeks. At 99% specificity, our method remains 100% sensitive with MDT of 5 weeks. Although the threshold method maintains its sensitivity of 100% with MDT of 5 weeks, sensitivity of Mov-Avg Cusum declines to 92% with increased MDT of 6 weeks. For a two-fold decrease in the case reporting rate (0.5%) and 99% specificity, the WCR and

  3. Order effect of strain applications in low-cycle cumulative fatigue at high temperatures

    International Nuclear Information System (INIS)

    Bui-Quoc, T.; Biron, A.

    1977-01-01

    Recent test results on cumulative damage with two strain levels on a stainless steel (AISI 304) at room temperature, 537 °C and 650 °C show that the sum of cycle-ratios can be significantly smaller than unity for decreasing levels; the opposite has been noted for increasing levels. As a consequence, the use of the linear damage rule (Miner's law) for life predictions is not conservative in many cases. Since the double linear damage rule (DLDR), originally developed by Manson et al. for room temperature applications, takes the order effect of cyclic loading into consideration, an extension of this rule for high temperature cases may be a potentially useful tool. The present paper is concerned with such an extension. For cumulative damage tests with several levels, according to the DLDR, the summation is applied separately for the crack initiation and crack propagation stages, and failure is then assumed to occur when the sum is equal to unity for both stages. Application of the DLDR consists in determining the crack propagation stage Np associated with a particular number of cycles at failure N, i.e. Np = P·N^a, where the exponent a and coefficient P had been assumed to be equal to 0.6 and 14, respectively, for several materials at room temperature. When the DLDR is applied (with a = 0.6 and P = 14) to predict the remaining life at the second strain level (for two-level cumulative damage) for 304 stainless steel at room temperature, 537 °C and 650 °C, the results show that the damage due to the first strain level is over-emphasized for decreasing levels when the damaging cycle-ratio is small. For increasing levels, the damage is underestimated and in some testing conditions this damage is simply ignored

  4. Simple Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.

  5. Simple Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.

  6. A branch-and-cut-and-price algorithm for the cumulative capacitated vehicle routing problem

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Lysgaard, Jens

    2014-01-01

    The paper considers the Cumulative Capacitated Vehicle Routing Problem (CCVRP), which is a variation of the well-known Capacitated Vehicle Routing Problem (CVRP). In this problem, the traditional objective of minimizing total distance or time traveled by the vehicles is replaced by minimizing the sum of arrival times at the customers. A branch-and-cut-and-price algorithm for obtaining optimal solutions to the problem is proposed. Computational results based on a set of standard CVRP benchmarks are presented.

  7. Cumulative exergy losses associated with the production of lead metal

    Energy Technology Data Exchange (ETDEWEB)

    Szargut, J [Technical Univ. of Silesia, Gliwice (PL). Inst. of Thermal-Engineering; Morris, D R [New Brunswick Univ., Fredericton, NB (Canada). Dept. of Chemical Engineering

    1990-08-01

    Cumulative exergy losses result from the irreversibility of the links of a technological network leading from raw materials and fuels extracted from nature to the product under consideration. The sum of these losses can be apportioned into partial exergy losses (associated with particular links of the technological network) or into constituent exergy losses (associated with constituent subprocesses of the network). The methods of calculation of the partial and constituent exergy losses are presented, taking into account the useful byproducts substituting the major products of other processes. Analyses of partial and constituent exergy losses are made for the technological network of lead metal production. (author).

  8. Expansion around half-integer values, binomial sums, and inverse binomial sums

    International Nuclear Information System (INIS)

    Weinzierl, Stefan

    2004-01-01

    I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms allow in addition the evaluation of binomial sums, inverse binomial sums and generalizations thereof

  9. A New Sum Analogous to Gauss Sums and Its Fourth Power Mean

    Directory of Open Access Journals (Sweden)

    Shaofeng Ru

    2014-01-01

    The main purpose of this paper is to use analytic methods and the properties of Gauss sums to study the computational problem of one kind of new sum analogous to Gauss sums, and to give an interesting fourth power mean and a sharp upper bound estimate for it.

  10. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
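
    The hazard-rate-twisting importance sampler itself is the contribution of the paper; as a point of reference, the sketch below implements only the crude Monte Carlo baseline it is compared against, estimating the CCDF of a sum of independent, non-identical Log-normal variates. The parameter values, threshold, and sample size are illustrative assumptions, and the growing relative error in the tail is exactly the inefficiency that importance sampling is meant to cure.

```python
import numpy as np

def ccdf_naive_mc(mu, sigma, threshold, n_samples=10**6, seed=0):
    """Crude Monte Carlo estimate of P(sum_i X_i > threshold) for independent,
    not identically distributed X_i ~ Lognormal(mu_i, sigma_i)."""
    rng = np.random.default_rng(seed)
    samples = rng.lognormal(mean=np.asarray(mu)[:, None],
                            sigma=np.asarray(sigma)[:, None],
                            size=(len(mu), n_samples))
    exceed = samples.sum(axis=0) > threshold
    p_hat = exceed.mean()
    # Relative error of the naive estimator; it blows up for rare events.
    rel_err = np.sqrt(p_hat * (1 - p_hat) / n_samples) / max(p_hat, 1e-300)
    return p_hat, rel_err

# Four non-identical lognormal terms; the threshold sits in the upper tail.
print(ccdf_naive_mc(mu=[0.0, 0.2, 0.4, 0.6],
                    sigma=[1.0, 0.9, 1.1, 0.8],
                    threshold=60.0))
```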

  11. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  12. Low Birth Weight, Cumulative Obesity Dose, and the Risk of Incident Type 2 Diabetes

    OpenAIRE

    Feng, Cindy; Osgood, Nathaniel D.; Dyck, Roland F.

    2018-01-01

    Background. Obesity history may provide a better understanding of the contribution of obesity to T2DM risk. Methods. 17,634 participants from the 1958 National Child Development Study were followed from birth to 50 years. Cumulative obesity dose, a measure of obesity history, was calculated by subtracting the upper cut-off of the normal BMI from the actual BMI at each follow-up and summing the areas under the obesity dose curve. Hazard ratios (HRs) for diabetes were calculated using Cox regression ...
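
    Cumulative obesity dose, as described above, is essentially the area between the BMI trajectory and the upper limit of the normal BMI range, accumulated over follow-up. The sketch below is a minimal illustration of that calculation using trapezoidal integration; the follow-up ages, BMI values, and the cut-off of 25 kg/m² are assumptions for the example, not values taken from the cohort.

```python
import numpy as np

def cumulative_obesity_dose(ages, bmi, bmi_cutoff=25.0):
    """Area (in BMI-unit-years) between the BMI trajectory and the assumed
    upper limit of the normal BMI range, accumulated over follow-up ages.
    BMI values below the cutoff contribute nothing."""
    ages = np.asarray(ages, dtype=float)
    excess = np.clip(np.asarray(bmi, dtype=float) - bmi_cutoff, 0.0, None)
    # Trapezoidal rule between consecutive follow-ups
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(ages)))

# Hypothetical participant followed at ages 23, 33, 42 and 50
print(cumulative_obesity_dose([23, 33, 42, 50], [24.0, 27.5, 30.1, 31.4]))
```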

  13. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts have been developed for autocorrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts. It has been pointed out that such charts are not suitable if the control limits derived for independent observations are retained. For this reason, it is necessary to apply a time series model when building the control chart; a classical control chart for independent variables is then usually applied to the residuals, which is permitted provided the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the EWMA mean for autocorrelated processes derived from Montgomery and Patel. Performance is investigated by examining the Average Run Length (ARL) based on the Markov chain method.
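
    As a concrete illustration of the approach sketched in the abstract, the code below fits a crude AR(1) model, applies the EWMA statistic to the residuals, and reports points outside the exact (time-varying) control limits. The smoothing constant λ = 0.2, the 3-sigma limit width, and the synthetic autocorrelated series are illustrative choices, not values from the paper.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1}, started at the sample
    mean, together with its exact time-varying control limits."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x)
    prev = mu
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu - width, mu + width

def ar1_residuals(x):
    """Remove first-order autocorrelation so the classical chart applies."""
    x = np.asarray(x, dtype=float)
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]  # crude AR(1) coefficient estimate
    return x[1:] - phi * x[:-1]

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300)) * 0.1 + rng.normal(size=300)
z, lcl, ucl = ewma_chart(ar1_residuals(series))
print("out-of-control points:", np.where((z < lcl) | (z > ucl))[0])
```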

  14. Detection of Severe Respiratory Disease Epidemic Outbreaks by CUSUM-Based Overcrowd-Severe-Respiratory-Disease-Index Model

    Directory of Open Access Journals (Sweden)

    Carlos Polanco

    2013-01-01

    A severe respiratory disease epidemic outbreak correlates with a high demand of specific supplies and specialized personnel to hold it back in a wide region or set of regions; these supplies would be beds, storage areas, hemodynamic monitors, and mechanical ventilators, as well as physicians, respiratory technicians, and specialized nurses. We describe an online cumulative sum based model named Overcrowd-Severe-Respiratory-Disease-Index based on the Modified Overcrowd Index that simultaneously monitors and informs the demand of those supplies and personnel in a healthcare network generating early warnings of severe respiratory disease epidemic outbreaks through the interpretation of such variables. A post hoc historical archive is generated, helping physicians in charge to improve the transit and future allocation of supplies in the entire hospital network during the outbreak. The model was thoroughly verified in a virtual scenario, generating multiple epidemic outbreaks in a 6-year span for a 13-hospital network. When it was superimposed over the H1N1 influenza outbreak census (2008–2010) taken by the National Institute of Medical Sciences and Nutrition Salvador Zubiran in Mexico City, it showed that it is an effective algorithm to notify early warnings of severe respiratory disease epidemic outbreaks with a minimal rate of false alerts.
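
    The Overcrowd-Severe-Respiratory-Disease-Index combines several supply and personnel variables; the sketch below shows only the generic building block, a one-sided tabular CUSUM applied to the daily demand for a single resource, with the reference value k and decision threshold h chosen purely for illustration.

```python
def tabular_cusum(demand, target, k=0.5, h=5.0):
    """One-sided (upper) tabular CUSUM: S_i = max(0, S_{i-1} + x_i - target - k).
    Returns the indices at which S_i crosses the decision threshold h."""
    alarms, s = [], 0.0
    for i, x in enumerate(demand):
        s = max(0.0, s + (x - target) - k)
        if s > h:
            alarms.append(i)
            s = 0.0  # restart the statistic after signalling
    return alarms

# Hypothetical daily demand for mechanical ventilators in one hospital
daily_ventilators = [3, 2, 4, 3, 3, 5, 6, 7, 9, 11, 12, 10, 4, 3]
print(tabular_cusum(daily_ventilators, target=3.0, k=1.0, h=6.0))
```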

  15. Detection of Severe Respiratory Disease Epidemic Outbreaks by CUSUM-Based Overcrowd-Severe-Respiratory-Disease-Index Model

    Science.gov (United States)

    Castañón-González, Jorge Alberto; Macías, Alejandro E.; Samaniego, José Lino; Buhse, Thomas; Villanueva-Martínez, Sebastián

    2013-01-01

    A severe respiratory disease epidemic outbreak correlates with a high demand of specific supplies and specialized personnel to hold it back in a wide region or set of regions; these supplies would be beds, storage areas, hemodynamic monitors, and mechanical ventilators, as well as physicians, respiratory technicians, and specialized nurses. We describe an online cumulative sum based model named Overcrowd-Severe-Respiratory-Disease-Index based on the Modified Overcrowd Index that simultaneously monitors and informs the demand of those supplies and personnel in a healthcare network generating early warnings of severe respiratory disease epidemic outbreaks through the interpretation of such variables. A post hoc historical archive is generated, helping physicians in charge to improve the transit and future allocation of supplies in the entire hospital network during the outbreak. The model was thoroughly verified in a virtual scenario, generating multiple epidemic outbreaks in a 6-year span for a 13-hospital network. When it was superimposed over the H1N1 influenza outbreak census (2008–2010) taken by the National Institute of Medical Sciences and Nutrition Salvador Zubiran in Mexico City, it showed that it is an effective algorithm to notify early warnings of severe respiratory disease epidemic outbreaks with a minimal rate of false alerts. PMID:24069063

  16. Emergency Department Chief Complaint and Diagnosis Data to Detect Influenza-Like Illness with an Electronic Medical Record

    Science.gov (United States)

    May, Larissa S.; Griffin, Beth Ann; Bauers, Nicole Maier; Jain, Arvind; Mitchum, Marsha; Sikka, Neal; Carim, Marianne; Stoto, Michael A.

    2010-01-01

    Background: The purpose of syndromic surveillance is early detection of a disease outbreak. Such systems rely on the earliest data, usually chief complaint. The growing use of electronic medical records (EMR) raises the possibility that other data, such as emergency department (ED) diagnosis, may provide more specific information without significant delay, and might be more effective in detecting outbreaks if mechanisms are in place to monitor and report these data. Objective: The purpose of this study is to characterize the added value of the primary ICD-9 diagnosis assigned at the time of ED disposition compared to the chief complaint for patients with influenza-like illness (ILI). Methods: The study was a retrospective analysis of the EMR of a single urban, academic ED with an annual census of over 60,000 patients from June 2005 through May 2006. We evaluate the objective in two ways. First, we characterize the proportion of patients whose ED diagnosis is inconsistent with their chief complaint and the variation by complaint. Second, by comparing time series and applying syndromic detection algorithms, we determine which complaints and diagnoses are the best indicators for the start of the influenza season when compared to the Centers for Disease Control regional data for Influenza-Like Illness for the 2005 to 2006 influenza season using three syndromic surveillance algorithms: univariate cumulative sum (CUSUM), exponentially weighted CUSUM, and multivariate CUSUM. Results: In the first analysis, 29% of patients had a different diagnosis at the time of disposition than suggested by their chief complaint. In the second analysis, complaints and diagnoses consistent with pneumonia, viral illness and upper respiratory infection were together found to be good indicators of the start of the influenza season based on temporal comparison with regional data. In all examples, the diagnosis data outperformed the chief-complaint data. Conclusion: Both analyses ...

  17. Production of cumulative protons in the pion-carbon interactions at 5 GeV/c

    International Nuclear Information System (INIS)

    Abdinov, O.B.; Bajramov, A.A.; Budagov, Yu.A.; Valkar, Sh.; Dvornik, A.M.; Lomakin, Yu.F.; Majlov, A.A.; Flyagin, V.B.; Kharzheev, Yu.N.

    1983-01-01

    For the π⁻ + ¹²C interactions at the incident momentum of 5 GeV/c the relation between the divergence angle and the sum of kinetic energies of two protons, one of which is emitted into the backward hemisphere, and the other into the forward hemisphere, in the laboratory system is investigated. The obtained results can be considered as an evidence to that the absorption of slow pions is a possible mechanism responsible for the cumulative production of protons in the momentum range of 0.2-0.6 GeV/c

  18. Selecting Sums in Arrays

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2008-01-01

    In an array of n numbers, each of the $\binom{n}{2}+n$ contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k'th largest sum. For this problem we prove a time bound of Θ(n · max{1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max{1, log(k/n)}) time algorithm that selects a subarray storing the k'th largest sum among all subarrays of length at least l and at most u.
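
    The algorithms in the paper handle length constraints and selection of the k'th largest sum; as a warm-up, the sketch below implements only the simplest member of the family, the classical linear-time scan for the single maximum-sum subarray, to make the notion of sums induced by contiguous subarrays concrete.

```python
def max_sum_subarray(a):
    """Classical linear-time scan (Kadane's algorithm) for the largest sum
    induced by any non-empty contiguous subarray of a."""
    best = cur = a[0]
    best_start = best_end = cur_start = 0
    for i in range(1, len(a)):
        if cur < 0:                      # better to restart the subarray at i
            cur, cur_start = a[i], i
        else:
            cur += a[i]
        if cur > best:
            best, best_start, best_end = cur, cur_start, i
    return best, (best_start, best_end)

print(max_sum_subarray([2, -5, 6, -2, 4, -9, 3]))  # (8, (2, 4)) for 6 - 2 + 4
```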

  19. Zero-Sum Flows in Designs

    International Nuclear Information System (INIS)

    Akbari, S.; Khosrovshahi, G.B.; Mofidi, A.

    2010-07-01

    Let D be a t-(v, k, λ) design and let N_i(D), for 1 ≤ i ≤ t, be the higher incidence matrix of D, a (0, 1)-matrix of size $\binom{v}{i} \times b$, where b is the number of blocks of D. A zero-sum flow of D is a nowhere-zero real vector in the null space of N_1(D). A zero-sum k-flow of D is a zero-sum flow with values in {±1, ..., ±(k−1)}. In this paper we show that every non-symmetric design admits an integral zero-sum flow, and consequently we conjecture that every non-symmetric design admits a zero-sum 5-flow. Similarly, the definition of zero-sum flow can be extended to N_i(D), 1 ≤ i ≤ t. Let D = t-(v, k, $\binom{v-t}{k-t}$) be the complete design. We conjecture that N_t(D) admits a zero-sum 3-flow and prove this conjecture for t = 2. (author)

  20. Small sum privacy and large sum utility in data publishing.

    Science.gov (United States)

    Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao

    2014-08-01

    While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  2. Prediction of the cumulated dose for external beam irradiation of prostate cancer patients with 3D-CRT technique

    Directory of Open Access Journals (Sweden)

    Giżyńska Marta

    2016-03-01

    Nowadays in radiotherapy, much effort is devoted to minimizing the irradiated volume and consequently the doses to healthy tissues. In this work, we tested the hypothesis that the mean dose distribution calculated from the first few fractions can serve as a prediction of the cumulated dose distribution representing the whole treatment. The tests were made for 25 prostate cancer patients treated with the three orthogonal fields technique. We compared the dose distribution calculated as the sum of the dose distributions from each fraction with the dose distribution calculated with the isocenter shifted by the mean setup error of the first few fractions. The cumulative dose distribution and the predicted dose distribution are similar in terms of a gamma (3 mm, 3%) analysis, under the condition that the setup error is known from the first seven fractions. We showed that the dose distribution calculated for the original plan, with the isocenter shifted to the original isocenter corrected by the mean setup error estimated from the first seven fractions, supports our hypothesis, i.e. it can serve as a prediction of the cumulative dose distribution.

  3. Using Squares to Sum Squares

    Science.gov (United States)

    DeTemple, Duane

    2010-01-01

    Purely combinatorial proofs are given for the sum of squares formula, 1² + 2² + ... + n² = n(n + 1)(2n + 1)/6, and the sum of sums of squares formula, 1² + (1² + 2²) + ... + (1² + 2² + ... + n²) = n(n + 1)² ...
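
    Both identities are easy to check numerically. The snippet below verifies them for small n, assuming the standard closed form n(n + 1)²(n + 2)/12 for the sum of sums of squares, whose right-hand side is truncated in the record above.

```python
def sum_sq(n):
    """1^2 + 2^2 + ... + n^2 by direct summation."""
    return sum(k * k for k in range(1, n + 1))

def sum_of_sums_sq(n):
    """Sum of the partial sums of squares up to n."""
    return sum(sum_sq(k) for k in range(1, n + 1))

for n in range(1, 21):
    assert sum_sq(n) == n * (n + 1) * (2 * n + 1) // 6
    assert sum_of_sums_sq(n) == n * (n + 1) ** 2 * (n + 2) // 12
print("both identities hold for n = 1..20")
```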

  4. Credal Sum-Product Networks

    NARCIS (Netherlands)

    Maua, Denis Deratani; Cozman, Fabio Gagli; Conaty, Diarmaid; de Campos, Cassio P.

    2017-01-01

    Sum-product networks are a relatively new and increasingly popular class of (precise) probabilistic graphical models that allow for marginal inference with polynomial effort. As with other probabilistic models, sum-product networks are often learned from data and used to perform classification.

  5. An analytical model for cumulative infiltration into a dual-permeability media

    Science.gov (United States)

    Peyrard, Xavier; Lassabatere, Laurent; Angulo-Jaramillo, Rafael; Simunek, Jiri

    2010-05-01

    Modeling of water infiltration into the vadose zone is important for better understanding of movement of water-transported contaminants. There is a great need to take into account the soil heterogeneity and, in particular, the presence of macropores or cracks that could generate preferential flow. Several mathematical models have been proposed to describe unsaturated flow through heterogeneous soils. The dual-permeability model assumes that flow is governed by the Richards equation in both porous regions (matrix and fractures). Water can be exchanged between the two regions following a first-order rate law. A previous study showed that the hydraulic conductivity of the matrix/macropore interface had little influence on cumulative infiltration at the soil surface. As a result, one could consider the surface infiltration for a specific case of no water exchange between the fracture and matrix regions (a case of zero interfacial hydraulic conductivity). In such a case, water infiltration can be considered to be the sum of the cumulative infiltrations into the matrix and the fractures. On the basis of analytical models for each sub-domain (matrix and fractures), an analytical model is proposed for the entire dual-porosity system. A sensitivity analysis is performed to characterize the influence of several factors, such as the saturated hydraulic conductivity ratio, the water pressure scale parameter ratio, and the saturated volumetric water content scale ratio, on the total cumulative infiltration. Such an analysis greatly helps in quantifying the impact of macroporosity and fractures on water infiltration, which can be of great interest for hydrological models.
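
    Purely as an illustration of treating total infiltration as the sum of the two sub-domain contributions, the sketch below weights two Philip two-term infiltration curves by the volumetric fractions of the matrix and fracture regions. The functional form, sorptivities, conductivities, and fractions are stand-in assumptions, not the analytical model actually derived in the paper.

```python
import numpy as np

def philip_infiltration(t, sorptivity, k_sat):
    """Philip's two-term approximation of cumulative infiltration I(t)."""
    return sorptivity * np.sqrt(t) + k_sat * t

def dual_permeability_infiltration(t, w_fracture=0.1,
                                   s_matrix=2.0, k_matrix=0.5,
                                   s_fracture=6.0, k_fracture=20.0):
    """Total cumulative infiltration when matrix and fracture regions
    infiltrate independently (zero interfacial conductivity) and are
    weighted by their volumetric fractions."""
    w_matrix = 1.0 - w_fracture
    return (w_matrix * philip_infiltration(t, s_matrix, k_matrix)
            + w_fracture * philip_infiltration(t, s_fracture, k_fracture))

t = np.linspace(0.0, 2.0, 5)              # hours (illustrative units)
print(dual_permeability_infiltration(t))  # cumulative infiltration per unit area
```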

  6. Neutrino mass sum-rule

    Science.gov (United States)

    Damanik, Asan

    2018-03-01

    The neutrino mass sum-rule is a very important research subject from the theoretical side, because neutrino oscillation experiments only give us two squared-mass differences and three mixing angles. We review the neutrino mass sum-rules reported in the literature by many authors and discuss their phenomenological implications.

  7. Adaptive strategies for cumulative cultural learning.

    Science.gov (United States)

    Ehn, Micael; Laland, Kevin

    2012-05-21

    The demographic and ecological success of our species is frequently attributed to our capacity for cumulative culture. However, it is not yet known how humans combine social and asocial learning to generate effective strategies for learning in a cumulative cultural context. Here we explore how cumulative culture influences the relative merits of various pure and conditional learning strategies, including pure asocial and social learning, critical social learning, conditional social learning and individual refiner strategies. We replicate the Rogers' paradox in the cumulative setting. However, our analysis suggests that strategies that resolved Rogers' paradox in a non-cumulative setting may not necessarily evolve in a cumulative setting, thus different strategies will optimize cumulative and non-cumulative cultural learning. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Sums of squares of integers

    CERN Document Server

    Moreno, Carlos J

    2005-01-01

    Introduction Prerequisites Outline of Chapters 2 - 8 Elementary Methods Introduction Some Lemmas Two Fundamental Identities Euler's Recurrence for Sigma(n) More Identities Sums of Two Squares Sums of Four Squares Still More Identities Sums of Three Squares An Alternate Method Sums of Polygonal Numbers Exercises Bernoulli Numbers Overview Definition of the Bernoulli Numbers The Euler-MacLaurin Sum Formula The Riemann Zeta Function Signs of Bernoulli Numbers Alternate The von Staudt-Clausen Theorem Congruences of Voronoi and Kummer Irregular Primes Fractional Parts of Bernoulli Numbers Exercises Examples of Modular Forms Introduction An Example of Jacobi and Smith An Example of Ramanujan and Mordell An Example of Wilton: τ(n) Modulo 23 An Example of Hamburger Exercises Hecke's Theory of Modular Forms Introduction Modular Group Γ and its Subgroup Γ_0(N) Fundamental Domains For Γ and Γ_0(N) Integral Modular Forms Modular Forms of Type M_k(Γ_0(N); χ) and Euler-Poincaré series Hecke Operators Dirichlet Series and ...

  9. Sum rules in classical scattering

    International Nuclear Information System (INIS)

    Bolle, D.; Osborn, T.A.

    1981-01-01

    This paper derives sum rules associated with the classical scattering of two particles. These sum rules are the analogs of Levinson's theorem in quantum mechanics which provides a relationship between the number of bound-state wavefunctions and the energy integral of the time delay of the scattering process. The associated classical relation is an identity involving classical time delay and an integral over the classical bound-state density. We show that equalities between the Nth-order energy moment of the classical time delay and the Nth-order energy moment of the classical bound-state density hold in both a local and a global form. Local sum rules involve the time delay defined on a finite but otherwise arbitrary coordinate space volume S and the bound-state density associated with this same region. Global sum rules are those that obtain when S is the whole coordinate space. Both the local and global sum rules are derived for potentials of arbitrary shape and for scattering in any space dimension. Finally the set of classical sum rules, together with the known quantum mechanical analogs, are shown to provide a unified method of obtaining the high-temperature expansion of the classical, respectively the quantum-mechanical, virial coefficients

  10. Counting Triangles to Sum Squares

    Science.gov (United States)

    DeMaio, Joe

    2012-01-01

    Counting complete subgraphs of three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers, even integers and sums of the triangular numbers.

  11. Cosmic Sum Rules

    DEFF Research Database (Denmark)

    T. Frandsen, Mads; Masina, Isabella; Sannino, Francesco

    2011-01-01

    We introduce new sum rules that allow us to determine universal properties of the unknown component of the cosmic rays, and we show how they can be used to predict the positron fraction at energies not yet explored by current experiments and to constrain specific models.

  12. Applications of Kalman Filtering to nuclear material control

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Westley, G.W.

    1977-10-01

    The feasibility of using modern state estimation techniques (specifically Kalman Filtering and Linear Smoothing) to detect losses of material from material balance areas is evaluated. It is shown that state estimation techniques are not only feasible but in most situations are superior to existing methods of analysis. The various techniques compared include Kalman Filtering, linear smoothing, standard control charts, and average cumulative summation (CUSUM) charts. Analysis results indicated that the standard control chart is the least effective method for detecting regularly occurring losses. An improvement in the detection capability over the standard control chart can be realized by use of the CUSUM chart. Even more sensitivity in the ability to detect losses can be realized by use of the Kalman Filter and the linear smoother. It was found that the error-covariance matrix can be used to establish limits of error for state estimates. It is shown that state estimation techniques represent a feasible and desirable method of theft detection. The technique is usually more sensitive than the CUSUM chart in detecting losses. One kind of loss which is difficult to detect using state estimation techniques is a single isolated loss. State estimation procedures are predicated on dynamic models and are well-suited for detecting losses which occur regularly over several accounting periods. A single isolated loss does not conform to this basic assumption and is more difficult to detect
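
    As a rough illustration of why a recursive state estimator can be more sensitive than a control chart to a small, sustained loss, the sketch below runs a one-dimensional Kalman filter that tracks a roughly constant per-period loss rate hidden in noisy material-balance (MUF) measurements. The noise variances and the injected loss are invented for the example and do not reproduce the report's models.

```python
import numpy as np

def kalman_loss_estimate(muf, q=1e-4, r=1.0):
    """Scalar Kalman filter estimating a slowly varying per-period loss rate
    from a sequence of material-unaccounted-for (MUF) measurements.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = 0.0, 1.0                      # initial loss-rate estimate and variance
    estimates = []
    for z in muf:
        p += q                           # predict: loss rate assumed ~constant
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with the new MUF measurement
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_loss = 0.3                                   # units diverted per period
muf = true_loss + rng.normal(0.0, 1.0, size=40)   # noisy balance closures
est = kalman_loss_estimate(muf)
print("estimated loss rate after 40 periods: %.2f per period" % est[-1])
```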

  13. Applications of Kalman Filtering to nuclear material control. [Kalman filtering and linear smoothing for detecting nuclear material losses

    Energy Technology Data Exchange (ETDEWEB)

    Pike, D.H.; Morrison, G.W.; Westley, G.W.

    1977-10-01

    The feasibility of using modern state estimation techniques (specifically Kalman Filtering and Linear Smoothing) to detect losses of material from material balance areas is evaluated. It is shown that state estimation techniques are not only feasible but in most situations are superior to existing methods of analysis. The various techniques compared include Kalman Filtering, linear smoothing, standard control charts, and average cumulative summation (CUSUM) charts. Analysis results indicated that the standard control chart is the least effective method for detecting regularly occurring losses. An improvement in the detection capability over the standard control chart can be realized by use of the CUSUM chart. Even more sensitivity in the ability to detect losses can be realized by use of the Kalman Filter and the linear smoother. It was found that the error-covariance matrix can be used to establish limits of error for state estimates. It is shown that state estimation techniques represent a feasible and desirable method of theft detection. The technique is usually more sensitive than the CUSUM chart in detecting losses. One kind of loss which is difficult to detect using state estimation techniques is a single isolated loss. State estimation procedures are predicated on dynamic models and are well-suited for detecting losses which occur regularly over several accounting periods. A single isolated loss does not conform to this basic assumption and is more difficult to detect.

  14. Complete cumulative index (1963-1983)

    International Nuclear Information System (INIS)

    1983-01-01

    This complete cumulative index covers all regular and special issues and supplements published by Atomic Energy Review (AER) during its lifetime (1963-1983). The complete cumulative index consists of six Indexes: the Index of Abstracts, the Subject Index, the Title Index, the Author Index, the Country Index and the Table of Elements Index. The complete cumulative index supersedes the Cumulative Indexes for Volumes 1-7: 1963-1969 (1970), and for Volumes 1-10: 1963-1972 (1972); this Index also finalizes Atomic Energy Review, the publication of which has recently been terminated by the IAEA

  15. Momentum sum rules for fragmentation functions

    International Nuclear Information System (INIS)

    Meissner, S.; Metz, A.; Pitonyak, D.

    2010-01-01

    Momentum sum rules for fragmentation functions are considered. In particular, we give a general proof of the Schaefer-Teryaev sum rule for the transverse momentum dependent Collins function. We also argue that corresponding sum rules for related fragmentation functions do not exist. Our model-independent analysis is supplemented by calculations in a simple field-theoretical model.

  16. Multiparty symmetric sum types

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Yoshida, Nobuko; Honda, Kohei

    2010-01-01

    This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others, determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes with the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient ...

  17. Current algebra sum rules for Reggeons

    CERN Document Server

    Carlitz, R

    1972-01-01

    The interplay between the constraints of chiral SU(2)×SU(2) symmetry and Regge asymptotic behaviour is investigated. The author reviews the derivation of various current algebra sum rules in a study of the reaction π + α → π + β. These sum rules imply that all particles may be classified in multiplets of SU(2)×SU(2) and that each of these multiplets may contain linear combinations of an infinite number of physical states. Extending his study to the reaction π + α → π + π + β, he derives new sum rules involving commutators of the axial charge with the reggeon coupling matrices of the ρ and f Regge trajectories. Some applications of these new sum rules are noted, and the general utility of these and related sum rules is discussed. (17 refs).

  18. A comparison between weighted sum of gray and spectral CK radiation models for heat transfer calculations in furnaces

    Energy Technology Data Exchange (ETDEWEB)

    El Ammouri, F; Plessier, R; Till, M; Marie, B; Djavdan, E [Air Liquide Centre de Recherche Claude Delorme, 78 - Jouy-en-Josas (France)

    1997-12-31

    Coupled reactive fluid dynamics and radiation calculations are performed in air and oxy-fuel furnaces using two gas radiative property models. The first one is the weighted sum of gray gases model (WSGG) and the second one is the correlated-k (CK) method which is a spectral model based on the cumulative distribution function of the absorption coefficient inside a narrow band. The WSGG model, generally used in industrial configurations, is less time consuming than the CK model. However it is found that it over-predicts radiative fluxes by about 12 % in industrial furnaces. (authors) 27 refs.

  19. A comparison between weighted sum of gray and spectral CK radiation models for heat transfer calculations in furnaces

    Energy Technology Data Exchange (ETDEWEB)

    El Ammouri, F.; Plessier, R.; Till, M.; Marie, B.; Djavdan, E. [Air Liquide Centre de Recherche Claude Delorme, 78 - Jouy-en-Josas (France)

    1996-12-31

    Coupled reactive fluid dynamics and radiation calculations are performed in air and oxy-fuel furnaces using two gas radiative property models. The first one is the weighted sum of gray gases model (WSGG) and the second one is the correlated-k (CK) method which is a spectral model based on the cumulative distribution function of the absorption coefficient inside a narrow band. The WSGG model, generally used in industrial configurations, is less time consuming than the CK model. However it is found that it over-predicts radiative fluxes by about 12 % in industrial furnaces. (authors) 27 refs.

  20. Sum rules for quasifree scattering of hadrons

    Science.gov (United States)

    Peterson, R. J.

    2018-02-01

    The areas dσ/dΩ of fitted quasifree scattering peaks from bound nucleons for continuum hadron-nucleus spectra measuring d²σ/dΩdω are converted to sum rules akin to the Coulomb sums familiar from continuum electron scattering spectra from nuclear charge. Hadronic spectra with or without charge exchange of the beam are considered. These sums are compared to the simple expectations of a nonrelativistic Fermi gas, including a Pauli blocking factor. For scattering without charge exchange, the hadronic sums are below this expectation, as also observed with Coulomb sums. For charge exchange spectra, the sums are near or above the simple expectation, with larger uncertainties. The strong role of hadron-nucleon in-medium total cross sections is noted from use of the Glauber model.

  1. Sum rules for collisional processes

    International Nuclear Information System (INIS)

    Oreg, J.; Goldstein, W.H.; Bar-Shalom, A.; Klapisch, M.

    1991-01-01

    We derive level-to-configuration sum rules for dielectronic capture and for collisional excitation and ionization. These sum rules give the total transition rate from a detailed atomic level to an atomic configuration. For each process, we show that it is possible to factor out the dependence on continuum-electron wave functions. The remaining explicit level dependence of each rate is then obtained from the matrix element of an effective operator acting on the bound orbitals only. In a large class of cases, the effective operator reduces to a one-electron monopole whose matrix element is proportional to the statistical weight of the level. We show that even in these cases, nonstatistical level dependence enters through the dependence of radial integrals on continuum orbitals. For each process, explicit analytic expressions for the level-to-configuration sum rules are given for all possible cases. Together with the well-known J-file sum rule for radiative rates [E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra (University Press, Cambridge, 1935)], the sum rules offer a systematic and efficient procedure for collapsing high-multiplicity configurations into ''effective'' levels for the purpose of modeling the population kinetics of ionized heavy atoms in plasma

  2. QCD Sum Rules, a Modern Perspective

    CERN Document Server

    Colangelo, Pietro; Colangelo, Pietro; Khodjamirian, Alexander

    2001-01-01

    An introduction to the method of QCD sum rules is given for those who want to learn how to use this method. Furthermore, we discuss various applications of sum rules, from the determination of quark masses to the calculation of hadronic form factors and structure functions. Finally, we explain the idea of the light-cone sum rules and outline the recent development of this approach.

  3. On Learning Ring-Sum-Expansions

    DEFF Research Database (Denmark)

    Fischer, Paul; Simon, H. -U.

    1992-01-01

    The problem of learning ring-sum-expansions from examples is studied. Ring-sum-expansions (RSE) are representations of Boolean functions over the basis {∧, ⊕, 1}, which reflect arithmetic operations in GF(2). k-RSE is the class of ring-sum-expansions containing only monomials of length at most k; k-term-RSE is the class of ring-sum-expansions having at most k monomials. It is shown that k-RSE, k ≥ 1, is learnable while k-term-RSE, k > 2, is not learnable if RP ≠ NP. Without using a complexity-theoretical hypothesis, it is proven that k-RSE, k ≥ 1, and k-term-RSE, k ≥ 2, cannot be learned from positive (negative) examples alone. However, if the restriction that the hypothesis which is output by the learning algorithm is also a k-RSE is suspended, then k-RSE is learnable from positive (negative) examples only. Moreover, it is proved that 2-term-RSE is learnable by a conjunction.

  4. Sums and products of sets and estimates of rational trigonometric sums in fields of prime order

    Energy Technology Data Exchange (ETDEWEB)

    Garaev, Mubaris Z [National Autonomous University of Mexico, Institute of Mathematics (Mexico)

    2010-11-16

    This paper is a survey of main results on the problem of sums and products of sets in fields of prime order and their applications to estimates of rational trigonometric sums. Bibliography: 85 titles.

  5. Sum formulas for reductive algebraic groups

    DEFF Research Database (Denmark)

    Andersen, Henning Haahr; Kulkarni, Upendra

    2008-01-01

    \\supset V^1 \\cdots \\supset V^r = 0$. The sum of the positive terms in this filtration satisfies a well known sum formula. If $T$ denotes a tilting module either for $G$ or $U_q$ then we can similarly filter the space $\\Hom_G(V,T)$, respectively $\\Hom_{U_q}(V,T)$ and there is a sum formula for the positive...... terms here as well. We give an easy and unified proof of these two (equivalent) sum formulas. Our approach is based on an Euler type identity which we show holds without any restrictions on $p$ or $l$. In particular, we get rid of previous such restrictions in the tilting module case....

  6. Cumulative effects of wind turbines. A guide to assessing the cumulative effects of wind energy development

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This guidance provides advice on how to assess the cumulative effects of wind energy developments in an area and is aimed at developers, planners, and stakeholders interested in the development of wind energy in the UK. The principles of cumulative assessment, wind energy development in the UK, cumulative assessment of wind energy development, and best practice conclusions are discussed. The identification and assessment of the cumulative effects is examined in terms of global environmental sustainability, local environmental quality and socio-economic activity. Supplementary guidance for assessing the principle cumulative effects on the landscape, on birds, and on the visual effect is provided. The consensus building approach behind the preparation of this guidance is outlined in the annexes of the report.

  7. Study of QCD medium by sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Institute of Nuclear Physics, Calcutta (India)

    1998-08-01

    Though it has no analogue in condensed matter physics, the thermal QCD sum rules can, nevertheless, answer questions of condensed matter type about the QCD medium. The ingredients needed to write such sum rules, viz. the operator product expansion and the spectral representation at finite temperature, are reviewed in detail. The sum rules are then actually written for the case of the correlation function of two vector currents. Collecting information on the thermal average of the higher dimension operators from other sources, we evaluate these sum rules for the temperature dependent ρ-meson parameters. Possibility of extracting more information from the combined set of all sum rules from different correlation functions is also discussed. (author) 30 refs., 2 figs.

  8. Coloring sums of extensions of certain graphs

    Directory of Open Access Journals (Sweden)

    Johan Kok

    2017-12-01

    We recall that the minimum number of colors that allows a proper coloring of a graph $G$ is called the chromatic number of $G$ and denoted $\chi(G)$. Motivated by the introduction of the concept of the $b$-chromatic sum of a graph, the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum are introduced in this paper. The extended graph $G^x$ of a graph $G$ was recently introduced for certain regular graphs. This paper furthers the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum to extended paths and cycles. Bipartite graphs also receive some attention. The paper concludes with patterned structured graphs. These latter graphs are typically found in chemical and biological structures.

  9. Robinson's radiation damping sum rule: Reaffirmation and extension

    International Nuclear Information System (INIS)

    Mane, S.R.

    2011-01-01

    Robinson's radiation damping sum rule is one of the classic theorems of accelerator physics. Recently Orlov has claimed to find serious flaws in Robinson's proof of his sum rule. In view of the importance of the subject, I have independently examined the derivation of the Robinson radiation damping sum rule. Orlov's criticisms are without merit: I work through Robinson's derivation and demonstrate that Orlov's criticisms violate well-established mathematical theorems and are hence not valid. I also show that Robinson's derivation, and his damping sum rule, is valid in a larger domain than that treated by Robinson himself: Robinson derived his sum rule under the approximation of a small damping rate, but I show that Robinson's sum rule applies to arbitrary damping rates. I also display more concise derivations of the sum rule using matrix differential equations. I also show that Robinson's sum rule is valid in the vicinity of a parametric resonance.

  10. Model dependence of energy-weighted sum rules

    International Nuclear Information System (INIS)

    Kirson, M.W.

    1977-01-01

    The contribution of the nucleon-nucleon interaction to energy-weighted sum rules for electromagnetic multipole transitions is investigated. It is found that only isoscalar electric transitions might have model-independent energy-weighted sum rules. For these transitions, explicit momentum and angular momentum dependence of the nuclear force give rise to corrections to the sum rule which are found to be negligibly small, thus confirming the model independence of these specific sum rules. These conclusions are unaffected by correlation effects. (author)

  11. Electronuclear sum rules for the lightest nuclei

    International Nuclear Information System (INIS)

    Efros, V.D.

    1992-01-01

    It is shown that the model-independent longitudinal electronuclear sum rules for nuclei with A = 3 and A = 4 have an accuracy on the order of a percent in the traditional single-nucleon approximation with free nucleons for the nuclear charge-density operator. This makes it possible to test this approximation by using these sum rules. The longitudinal sum rules for A = 3 and A = 4 are calculated using the wave functions of these nuclei corresponding to a large set of realistic NN interactions. The values of the model-independent sum rules lie in the range of values calculated by this method. Model-independent expressions are obtained for the transverse sum rules for nuclei with A = 3 and A = 4. These sum rules are calculated using a large set of realistic wave functions of these nuclei. The contribution of the convection current and the changes in the results for different versions of realistic NN forces are given. 29 refs., 4 tabs

  12. Extremum uncertainty product and sum states

    Energy Technology Data Exchange (ETDEWEB)

    Mehta, C L; Kumar, S [Indian Inst. of Tech., New Delhi. Dept. of Physics

    1978-01-01

    The extremum product states and sum states of the uncertainties in non-commuting observables have been examined. These are illustrated by two specific examples of harmonic oscillator and angular momentum states. It is shown that the coherent states of the harmonic oscillator are characterized by the minimum uncertainty sum ⟨(Δq)²⟩ + ⟨(Δp)²⟩. The extremum values of the sums and products of the uncertainties of the components of the angular momentum are also obtained.

  13. Inverse-moment chiral sum rules

    International Nuclear Information System (INIS)

    Golowich, E.; Kambor, J.

    1996-01-01

    A general class of inverse-moment sum rules was previously derived by the authors in a chiral perturbation theory (ChPT) study at two-loop order of the isospin and hypercharge vector-current propagators. Here, we address the evaluation of the inverse-moment sum rules in terms of existing data and theoretical constraints. Two kinds of sum rules are seen to occur: those which contain as-yet undetermined O(q⁶) counterterms and those free of such quantities. We use the former to obtain phenomenological evaluations of two O(q⁶) counterterms. Light is shed on the important but difficult issue regarding contributions of higher orders in the ChPT expansion. copyright 1996 The American Physical Society

  14. 32 CFR 651.16 - Cumulative impacts.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Cumulative impacts. 651.16 Section 651.16... § 651.16 Cumulative impacts. (a) NEPA analyses must assess cumulative effects, which are the impact on the environment resulting from the incremental impact of the action when added to other past, present...

  15. Harmonic sums and polylogarithms generated by cyclotomic polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2011-05-15

    The computation of Feynman integrals in massive higher order perturbative calculations in renormalizable Quantum Field Theories requires extensions of multiply nested harmonic sums, which can be generated as real representations by Mellin transforms of Poincaré-iterated integrals including denominators of higher cyclotomic polynomials. We derive the cyclotomic harmonic polylogarithms and harmonic sums and study their algebraic and structural relations. The analytic continuation of cyclotomic harmonic sums to complex values of N is performed using analytic representations. We also consider special values of the cyclotomic harmonic polylogarithms at argument x=1, resp., for the cyclotomic harmonic sums at N → ∞, which are related to colored multiple zeta values, deriving various of their relations, based on the stuffle and shuffle algebras and three multiple argument relations. We also consider infinite generalized nested harmonic sums at roots of unity which are related to the infinite cyclotomic harmonic sums. Basis representations are derived for weight w=1,2 sums up to cyclotomy l=20. (orig.)

  16. Environmental variability and chum salmon production at the northwestern Pacific Ocean

    Science.gov (United States)

    Kim, Suam; Kang, Sukyung; Kim, Ju Kyoung; Bang, Minkyoung

    2017-12-01

    Chum salmon, Oncorhynchus keta, are distributed widely in the North Pacific Ocean, and about 76% of chum salmon were caught from Russian, Japanese, and Korean waters of the northwestern Pacific Ocean during the last 20 years. Although it has been speculated that the recent increase in salmon production was aided by not only the enhancement program that targeted chum salmon but also by favorable ocean conditions since the early 1990s, the ecological processes for determining the yield of salmon have not been clearly delineated. To investigate the relationship between yield and the controlling factors for ocean survival of chum salmon, a time-series of climate indices, seawater temperature, and prey availability in the northwestern Pacific including Korean waters were analyzed using some statistical tools. The results of cross-correlation function (CCF) analysis and cumulative sum (CuSum) of anomalies indicated that there were significant environmental changes in the North Pacific during the last century, and each regional stock of chum salmon responded to the Pacific Decadal Oscillation (PDO) differently: for Russian stock, the correlations between PDO index and catch were significantly negative with a time-lag of 0 and 1 years; for Japanese stock, significantly positive with a timelag of 0-2 years; and for Korean stock, positive but no significant correlation. The results of statistical analyses with Korean chum salmon also revealed that a coastal seawater temperature over 14°C and the return rate of spawning adults to the natal river produced a significant negative correlation.
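
    The "CuSum of anomalies" used here is the oceanographers' regime-shift indicator, i.e. the running sum of deviations from the long-term mean, which is distinct from the control-chart CUSUM appearing elsewhere in this collection. The sketch below computes it together with a lagged correlation on synthetic series; the stand-in climate index, the lag-one response, and the noise level are assumptions for illustration only.

```python
import numpy as np

def cusum_of_anomalies(series):
    """Running sum of deviations from the long-term mean; sustained slope
    changes in this curve are read as regime shifts."""
    x = np.asarray(series, dtype=float)
    return np.cumsum(x - x.mean())

def lagged_correlation(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), for lag >= 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

rng = np.random.default_rng(2)
pdo = rng.normal(size=60)                     # stand-in climate index
catch = -0.6 * np.roll(pdo, 1) + rng.normal(scale=0.5, size=60)  # lag-1 response
print(cusum_of_anomalies(catch)[:5])
print("correlation at lag 1:", lagged_correlation(pdo, catch, lag=1))
```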

  17. QCD sum-rules for V-A spectral functions

    International Nuclear Information System (INIS)

    Chakrabarti, J.; Mathur, V.S.

    1980-01-01

    The Borel transformation technique of Shifman et al is used to obtain QCD sum-rules for V-A spectral functions. In contrast to the situation in the original Weinberg sum-rules and those of Bernard et al, the problem of saturating the sum-rules by low lying resonances is brought under control. Furthermore, the present sum-rules, on saturation, directly determine useful phenomenological parameters

  18. On the Sum of Gamma Random Variates With Application to the Performance of Maximal Ratio Combining over Nakagami-m Fading Channels

    KAUST Repository

    Ansari, Imran Shafique

    2012-09-08

    The probability distribution function (PDF) and cumulative density function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs or in other words applicable to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, is presented in closed form in terms of Meijer G-function and Fox H-bar-function for integer valued fading parameters and non-integer valued fading parameters, respectively. Further analysis, particularly on bit error rate via PDF-based approach, too is represented in closed form in terms of Meijer G-function and Fox H-bar-function for integer-order fading parameters, and extended Fox H-bar-function (H-hat) for non-integer-order fading parameters. The proposed results complement previous results that are either evolved in closed-form, or expressed in terms of infinite sums or higher order derivatives of the fading parameter m.

  19. On the Sum of Gamma Random Variates With Application to the Performance of Maximal Ratio Combining over Nakagami-m Fading Channels

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim; Kucur, Oguz

    2012-01-01

    The probability distribution function (PDF) and cumulative density function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs or in other words applicable to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, is presented in closed form in terms of Meijer G-function and Fox H-bar-function for integer valued fading parameters and non-integer valued fading parameters, respectively. Further analysis, particularly on bit error rate via PDF-based approach, too is represented in closed form in terms of Meijer G-function and Fox H-bar-function for integer-order fading parameters, and extended Fox H-bar-function (H-hat) for non-integer-order fading parameters. The proposed results complement previous results that are either evolved in closed-form, or expressed in terms of infinite sums or higher order derivatives of the fading parameter m.

  20. Cumulative risk effects for the development of behaviour difficulties in children and adolescents with special educational needs and disabilities.

    Science.gov (United States)

    Oldfield, Jeremy; Humphrey, Neil; Hebron, Judith

    2015-01-01

    Research has identified multiple risk factors for the development of behaviour difficulties. What have been less explored are the cumulative effects of exposure to multiple risks on behavioural outcomes, with no study specifically investigating these effects within a population of young people with special educational needs and disabilities (SEND). Furthermore, it is unclear whether a threshold or linear risk model better fits the data for this population. The sample included 2660 children and 1628 adolescents with SEND. Risk factors associated with increases in behaviour difficulties over an 18-month period were summed to create a cumulative risk score, with this explanatory variable being added into a multi-level model. A quadratic term was then added to test the threshold model. There was evidence of a cumulative risk effect, suggesting that exposure to higher numbers of risk factors, regardless of their exact nature, resulted in increased behaviour difficulties. The relationship between risk and behaviour difficulties was non-linear, with exposure to increasing risk having a disproportionate and detrimental impact on behaviour difficulties in child and adolescent models. Interventions aimed at reducing behaviour difficulties need to consider the impact of multiple risk variables. Tailoring interventions towards those exposed to large numbers of risks would be advantageous. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Some Finite Sums Involving Generalized Fibonacci and Lucas Numbers

    Directory of Open Access Journals (Sweden)

    E. Kılıç

    2011-01-01

    By considering Melham's sums (Melham, 2004), we compute various more general nonalternating sums, alternating sums, and sums that alternate according to $(-1)^{\binom{n+1}{2}}$, involving the generalized Fibonacci and Lucas numbers.

  2. Divergent Cumulative Cultural Evolution

    OpenAIRE

    Marriott, Chris; Chebib, Jobran

    2016-01-01

    Divergent cumulative cultural evolution occurs when the cultural evolutionary trajectory diverges from the biological evolutionary trajectory. We consider the conditions under which divergent cumulative cultural evolution can occur. We hypothesize that two conditions are necessary. First that genetic and cultural information are stored separately in the agent. Second cultural information must be transferred horizontally between agents of different generations. We implement a model with these ...

  3. Sum rules for nuclear collective excitations

    International Nuclear Information System (INIS)

    Bohigas, O.

    1978-07-01

    Characterizations of the response function and of integral properties of the strength function via a moment expansion are discussed. Sum rule expressions for the moments in the RPA are derived. The validity of these sum rules for both density independent and density dependent interactions is proved. For forces of the Skyrme type, analytic expressions for the plus one and plus three energy weighted sum rules are given for isoscalar monopole and quadrupole operators. From these, a close relationship between the monopole and quadrupole energies is shown and their dependence on incompressibility and effective mass is studied. The inverse energy weighted sum rule is computed numerically for the monopole operator, and an upper bound for the width of the monopole resonance is given. Finally the reliability of moments given by the RPA with effective interactions is discussed using simple soluble models for the hamiltonian, and also by comparison with experimental data

  4. Sum rules in the response function method

    International Nuclear Information System (INIS)

    Takayanagi, Kazuo

    1990-01-01

    Sum rules in the response function method are studied in detail. A sum rule can be obtained theoretically by integrating the imaginary part of the response function over the excitation energy with a corresponding energy weight. Generally, the response function is calculated perturbatively in terms of the residual interaction, and the expansion can be described by diagrammatic methods. In this paper, we present a classification of the diagrams so as to clarify which diagram has what contribution to which sum rule. This will allow us to get insight into the contributions to the sum rules of all the processes expressed by Goldstone diagrams. (orig.)

  5. Fault detection and diagnosis using statistical control charts and artificial neural networks

    International Nuclear Information System (INIS)

    Leger, R.P.; Garland, W.J.; Poehlman, W.F.S.

    1995-01-01

    In order to operate a successful plant or process, continuous improvement must be made in the areas of safety, quality and reliability. Central to this continuous improvement is the early or proactive detection and correct diagnosis of process faults. This research examines the feasibility of using Cumulative Summation (CUSUM) Control Charts and artificial neural networks together for fault detection and diagnosis (FDD). The proposed FDD strategy was tested on a model of the heat transport system of a CANDU nuclear reactor. The results of the investigation indicate that an FDD system using CUSUM Control Charts and a Radial Basis Function (RBF) neural network is not only feasible but also of promising potential. The control charts and neural network are linked together by using a characteristic fault signature pattern for each fault which is to be detected and diagnosed. When tested, the system was able to eliminate all false alarms at steady state, promptly detect 6 fault conditions and correctly diagnose 5 out of the 6 faults. The diagnosis for the sixth fault was inconclusive. (author). 9 refs., 6 tabs., 7 figs
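
    As a rough illustration of the CUSUM charting idea used here (not the paper's implementation), a minimal two-sided tabular CUSUM that flags shifts of a monitored signal away from its nominal mean; the reference value k and decision limit h are arbitrary illustrative choices:

    # Minimal two-sided tabular CUSUM sketch (illustrative only).
    import numpy as np

    def cusum_alarm(x, mu0, sigma, k=0.5, h=5.0):
        """Return indices where the CUSUM statistic crosses the decision limit h.

        k (reference value) and h (threshold) are in units of sigma.
        """
        s_hi = s_lo = 0.0
        alarms = []
        for i, xi in enumerate(x):
            z = (xi - mu0) / sigma
            s_hi = max(0.0, s_hi + z - k)   # accumulates positive deviations
            s_lo = max(0.0, s_lo - z - k)   # accumulates negative deviations
            if s_hi > h or s_lo > h:
                alarms.append(i)
                s_hi = s_lo = 0.0           # reset after an alarm
        return alarms

    rng = np.random.default_rng(0)
    signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 50)])
    print(cusum_alarm(signal, mu0=0.0, sigma=1.0))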

  6. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    Science.gov (United States)

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

    Early detection of infectious disease outbreaks is one of the most important issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed on 42 different time series generated to include trends, seasonality, and randomly occurring outbreaks, as well as on real-world daily and weekly data related to diarrhea infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison showed better overall performance for the EARS C3 method than for the other algorithms, regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter were less than 1.5 and 2, respectively.
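
    A small, purely illustrative sketch of how the day-level detection metrics named above (sensitivity, specificity, PPV, NPV, F1) could be computed from binary outbreak labels and algorithm alarms; it is not the KCDC evaluation code:

    # Illustrative computation of alarm-based detection metrics from binary
    # outbreak labels and binary alarms (hypothetical toy data below).
    def detection_metrics(outbreak, alarm):
        tp = sum(o and a for o, a in zip(outbreak, alarm))
        fp = sum((not o) and a for o, a in zip(outbreak, alarm))
        fn = sum(o and (not a) for o, a in zip(outbreak, alarm))
        tn = sum((not o) and (not a) for o, a in zip(outbreak, alarm))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0
        npv = tn / (tn + fn) if tn + fn else 0.0
        f1 = 2 * ppv * sens / (ppv + sens) if ppv + sens else 0.0
        return {"sensitivity": sens, "specificity": spec,
                "PPV": ppv, "NPV": npv, "F1": f1}

    print(detection_metrics([0, 0, 1, 1, 0, 1], [0, 1, 1, 1, 0, 0]))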

  7. An improved segmentation-based HMM learning method for Condition-based Maintenance

    International Nuclear Information System (INIS)

    Liu, T; Lemeire, J; Cartella, F; Meganck, S

    2012-01-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. With a good estimation of initial parameters, more accurate learning can be achieved than with regular HMM learning methods, which start from randomly chosen initial parameters; the approach is also better at avoiding local maxima. The data are segmented with a change-point analysis method which uses a combination of cumulative sum charts (CUSUM) and bootstrapping techniques. The method determines a confidence level that a state change has happened. After the data are segmented, in order to label and combine the segments corresponding to the same states, a clustering technique is used based on a low-pass filter or root mean square (RMS) values of the features. The segments with their labelled hidden state are taken as 'evidence' to estimate the parameters of an HMM. The estimated parameters then serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which is used to improve the parameters and train the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
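
    A minimal sketch, under stated assumptions, of the CUSUM-plus-bootstrap change-point idea described above: cumulative deviations from the mean give a CUSUM range, and reshuffling the data estimates a confidence level that a state change occurred. It is not the authors' implementation:

    # CUSUM range + bootstrap (permutation) confidence that a change occurred.
    import numpy as np

    def cusum_range(x):
        s = np.cumsum(x - x.mean())        # cumulative deviations from the mean
        return s.max() - s.min()

    def change_confidence(x, n_boot=1000, seed=0):
        rng = np.random.default_rng(seed)
        observed = cusum_range(x)
        smaller = sum(cusum_range(rng.permutation(x)) < observed
                      for _ in range(n_boot))
        return smaller / n_boot            # fraction of reshuffles below the observed range

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
    print(change_confidence(data))         # close to 1.0 when a state change is present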

  8. Seasonal ARMA-based SPC charts for anomaly detection: Application to emergency department systems

    KAUST Repository

    Kadri, Farid; Harrou, Fouzi; Chaabane, Sondès; Sun, Ying; Tahon, Christian

    2015-01-01

    Monitoring complex production systems is essential to ensure management, reliability and safety, as well as to maintain the desired product quality. Early detection of emergent abnormal behaviour in monitored systems allows pre-emptive action to prevent more serious consequences, to improve system operations and to reduce manufacturing and/or service costs. This study reports the design of a new methodology for the detection of abnormal situations based on the integration of time-series analysis models and statistical process control (SPC) tools for the joint development of a monitoring system to help supervise the behaviour of emergency department services (EDs). The monitoring system developed is able to provide early alerts in the event of abnormal situations. The seasonal autoregressive moving average (SARMA)-based exponentially weighted moving average (EWMA) anomaly detection scheme proposed was successfully applied to practical data collected from the database of the paediatric emergency department (PED) at Lille regional hospital centre, France. The method developed utilizes SARMA as a modelling framework and EWMA for anomaly detection. The EWMA control chart is applied to the uncorrelated residuals obtained from the SARMA model. The detection results of the EWMA chart are compared with two other commonly applied residual-based tests: a Shewhart individuals chart and a Cumulative Sum (CUSUM) control chart.
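
    A minimal EWMA control chart sketch applied to a series of (approximately uncorrelated) residuals, in the spirit of the scheme described above; the smoothing constant and control-limit width are illustrative choices, not the paper's settings:

    # EWMA control chart on residuals (illustrative parameters lam and L).
    import numpy as np

    def ewma_chart(residuals, lam=0.2, L=3.0):
        r = np.asarray(residuals, dtype=float)
        mu, sigma = r.mean(), r.std(ddof=1)
        # steady-state standard deviation of the EWMA statistic
        sigma_z = sigma * np.sqrt(lam / (2 - lam))
        z = mu
        alarms = []
        for t, x in enumerate(r):
            z = lam * x + (1 - lam) * z
            if abs(z - mu) > L * sigma_z:
                alarms.append(t)
        return alarms

    rng = np.random.default_rng(3)
    res = np.concatenate([rng.normal(0, 1, 300), rng.normal(0.8, 1, 60)])
    print(ewma_chart(res))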

  9. Seasonal ARMA-based SPC charts for anomaly detection: Application to emergency department systems

    KAUST Repository

    Kadri, Farid

    2015-10-22

    Monitoring complex production systems is essential to ensure management, reliability and safety, as well as to maintain the desired product quality. Early detection of emergent abnormal behaviour in monitored systems allows pre-emptive action to prevent more serious consequences, to improve system operations and to reduce manufacturing and/or service costs. This study reports the design of a new methodology for the detection of abnormal situations based on the integration of time-series analysis models and statistical process control (SPC) tools for the joint development of a monitoring system to help supervise the behaviour of emergency department services (EDs). The monitoring system developed is able to provide early alerts in the event of abnormal situations. The seasonal autoregressive moving average (SARMA)-based exponentially weighted moving average (EWMA) anomaly detection scheme proposed was successfully applied to practical data collected from the database of the paediatric emergency department (PED) at Lille regional hospital centre, France. The method developed utilizes SARMA as a modelling framework and EWMA for anomaly detection. The EWMA control chart is applied to the uncorrelated residuals obtained from the SARMA model. The detection results of the EWMA chart are compared with two other commonly applied residual-based tests: a Shewhart individuals chart and a Cumulative Sum (CUSUM) control chart.

  10. Statistical fault diagnosis of wind turbine drivetrain applied to a 5MW floating wind turbine

    Science.gov (United States)

    Ghane, Mahdi; Nejad, Amir R.; Blanke, Mogens; Gao, Zhen; Moan, Torgeir

    2016-09-01

    Deployment of large-scale wind turbine parks, in particular offshore, requires well-organized operation and maintenance strategies to make them as competitive as classical electric power stations. It is important to ensure systems are safe, profitable, and cost-effective. In this regard, the ability to detect, isolate, estimate, and prognose faults plays an important role. One of the critical wind turbine components is the gearbox. Failures in the gearbox are costly both due to the cost of the gearbox itself and due to high repair downtime. In order to detect faults as early as possible and prevent them from developing into failures, statistical change detection is used in this paper. The Cumulative Sum Method (CUSUM) is employed to detect possible defects in the downwind main bearing. A high-fidelity gearbox model on a 5-MW spar-type wind turbine is used to generate data for fault-free and faulty conditions of the bearing at the rated wind speed and the associated wave condition. Acceleration measurements are utilized to find residuals used to indirectly detect damage in the bearing. Residuals are found to be non-Gaussian, following a t-distribution with multivariable characteristic parameters. The results in this paper show how the diagnostic scheme can detect change with desired false alarm and detection probabilities.
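
    A hedged sketch of a log-likelihood-ratio CUSUM for residuals modelled by a t-distribution, in the spirit of the scheme described above; the fault-free and faulty parameter sets and the threshold are illustrative only:

    # CUSUM of log-likelihood ratios between "faulty" and "fault-free"
    # t-distributions for the residuals (all parameter values are illustrative).
    import numpy as np
    from scipy import stats

    def llr_cusum(residuals, df, loc0, scale0, loc1, scale1, h=10.0):
        g = 0.0
        for t, r in enumerate(residuals):
            llr = (stats.t.logpdf(r, df, loc=loc1, scale=scale1)
                   - stats.t.logpdf(r, df, loc=loc0, scale=scale0))
            g = max(0.0, g + llr)          # CUSUM of log-likelihood ratios
            if g > h:
                return t                   # first detection time
        return None

    rng = np.random.default_rng(4)
    healthy = stats.t.rvs(5, loc=0.0, scale=1.0, size=500, random_state=rng)
    faulty = stats.t.rvs(5, loc=0.5, scale=1.2, size=200, random_state=rng)
    print(llr_cusum(np.concatenate([healthy, faulty]),
                    df=5, loc0=0.0, scale0=1.0, loc1=0.5, scale1=1.2))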

  11. 'Sum rules' for preequilibrium reactions

    International Nuclear Information System (INIS)

    Hussein, M.S.

    1981-03-01

    Evidence that suggests a correct relationship between the optical transmission matrix, P, and the several correlation widths, Γ_n, found in multistep compound (preequilibrium) nuclear reactions, is presented. A second sum rule is also derived within the shell model approach to nuclear reactions. Indications of the potential usefulness of the sum rules in preequilibrium studies are given. (Author) [pt

  12. QCD sum rules in a Bayesian approach

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A novel technique is developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. The main advantage of this approach lies in its ability of directly generating the spectral function of a given operator. This is done without the need of making an assumption about the specific functional form of the spectral function, such as in the 'pole + continuum' ansatz that is frequently used in QCD sum rule studies. Therefore, with this method it should in principle be possible to distinguish narrow pole structures from continuum states. To check whether meaningful results can be extracted within this approach, we have first investigated the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results exhibit a significant peak in the region of the experimentally observed ρ-meson mass, which agrees with earlier QCD sum rules studies and shows that the Maximum Entropy Method is a useful tool for analyzing QCD sum rules.

  13. Cumulative risk assessment for plasticizer-contaminated food using the hazard index approach

    International Nuclear Information System (INIS)

    Chang, J.W.; Yan, B.R.; Chang, M.H.; Tseng, S.H.; Kao, Y.M.; Chen, J.C.; Lee, C.C.

    2014-01-01

    Phthalates strongly and adversely affect reproduction, development and liver function. We did a cumulative risk assessment for simultaneous exposure to nine phthalates using the hazard index (HI) and the levels of nine phthalates in 1200 foodstuff samples. DEHP (di-2-ethylhexyl phthalate) presented the highest level (mean: 0.443 mg/kg) in the 1200 samples, and the highest average daily dose (ADD) was found for DEHP; ΣDBP (i + n) (the sum of dibutyl phthalate [DBP] isomers [DnBP + DiBP]) posed the highest risk potential of all the phthalates. Among the seven phthalates, the 95th percentiles of the ADDs for ΣDBP (i + n) in 0–6-yr-old children accounted for 91% (79–107%) of the tolerable daily intake, and the 95th percentiles of the HIs for the anti-androgenic effects of five phthalates in 0–3-yr-old children and 4–6-yr-old girls were >1. We conclude that the health of younger Taiwanese may be adversely affected by overexposure to phthalate-contaminated foods. - Graphical abstract: In seven phthalates, the 95th percentile of the average daily dose (ADD) for ΣDBP (i + n) (the sum of dibutyl phthalate [DBP] isomers [DnBP + DiBP]) in 0–3-yr-old male (0–3 M) and female (0–3 F) children accounted for 97% and 84% of TDIs, respectively. For 4–6-yr-old and 7–12-yr-old males and 7–12-yr-old females, ADDs for ΣDBP (i + n) accounted for 79%, 72%, and 65% of TDIs, respectively. - Highlights: • A cumulative risk assessment of PAEs was used in a severe plasticizer-contaminated food episode. • ΣDBP (i + n) posed the highest risk potential of all the dietary phthalates. • Females 4–6 yr old had the highest risk for anti-androgenic effects. • Beverages, milk and dairy products were the major contributors to average daily dose of phthalate esters. - The health of young Taiwanese may be adversely affected by overexposure to plasticizer-contaminated food
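
    A minimal sketch of the hazard index calculation itself (HI as the sum of average-daily-dose to tolerable-daily-intake ratios over compounds sharing a common endpoint); the numbers below are hypothetical and are not the study's data:

    # Hazard index: HI = sum(ADD_i / TDI_i) over compounds with a shared endpoint.
    # All values below are hypothetical placeholders.
    def hazard_index(add_mg_per_kg_day, tdi_mg_per_kg_day):
        return sum(add_mg_per_kg_day[c] / tdi_mg_per_kg_day[c]
                   for c in add_mg_per_kg_day)

    add = {"DEHP": 0.030, "DBP": 0.009, "BBP": 0.004}   # hypothetical ADDs
    tdi = {"DEHP": 0.050, "DBP": 0.010, "BBP": 0.500}   # hypothetical TDIs

    hi = hazard_index(add, tdi)
    print(f"HI = {hi:.2f} ({'concern' if hi > 1 else 'no concern'}; HI > 1 flags risk)")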

  14. Total hip arthroplasty by the direct anterior approach using a neck-preserving stem: Safety, efficacy and learning curve

    Directory of Open Access Journals (Sweden)

    Aditya Khemka

    2018-01-01

    Full Text Available Background: The concept of femoral neck preservation in total hip replacement (THR) was introduced in 1993. It is postulated that retaining cortical bone of the femoral neck offers triplanar stability and uniform stress distribution, and accommodates physiological anteversion. However, data on safety, efficacy and learning curve are lacking. Materials and Methods: We prospectively assessed all patients who were operated on for a THR with a short neck-preserving stem (MiniHip) between 2012 and 2014. The safety and learning curve were assessed by recording operative time, stem size, and adverse events including periprosthetic fracture, paresthesia, and limb length discrepancy (LLD). The cohort was divided into equal groups to assess the learning curve effect, and the cumulative sum (CUSUM) test was performed to monitor intraoperative neck fractures. For assessment of efficacy, Oxford Hip Score (OHS) and Short Form-36 (SF-36) scores were compared preoperatively and postoperatively. Results: 138 patients with median age 62 years (range 35–82 years) were included, with a median followup of 42 months (range 30–56 months). The minimum followup was 2.5 years. The OHS and SF-36 (physical and mental component) scores improved by a mean of 26, 28, and 27 points, respectively. All patients had LLD of <10 mm (1.9 ± 1.3 mm). Adverse events included intraoperative neck fracture (n = 6), subsidence (n = 1), periprosthetic fracture (n = 1), paresthesia (n = 12), and trochanteric bursitis (n = 2). After early modification of the technique to use a smaller finishing broach, the CUSUM test demonstrated acceptable intraoperative neck fracture risk. The second surgery group had a reduced risk of intraoperative neck fracture (5/69 vs. 1/69, P = 0.2), reduced operative time (66 vs. 61 min, P = 0.06), and increased stem size (5 vs. 6, P = 0.09), although these differences were not statistically significant. Conclusions: The MiniHip stem is a safe alternative to standard THR with good
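
    A hedged sketch of a Bernoulli (binary-outcome) CUSUM of the kind used to monitor an adverse event such as intraoperative fracture case by case; the acceptable and unacceptable event rates and the decision limit are illustrative, not the study's values:

    # Bernoulli CUSUM for a binary adverse event; p0/p1 and h are illustrative.
    import math

    def bernoulli_cusum(outcomes, p0=0.05, p1=0.15, h=2.0):
        # Log-likelihood-ratio weights for a failure (1) and a success (0).
        w_fail = math.log(p1 / p0)
        w_success = math.log((1 - p1) / (1 - p0))
        s, path = 0.0, []
        for y in outcomes:
            s = max(0.0, s + (w_fail if y else w_success))
            path.append(s)
        signal = next((i for i, v in enumerate(path) if v >= h), None)
        return path, signal   # signal = first case at which the limit is crossed

    # 1 = intraoperative fracture, 0 = uncomplicated case (toy sequence).
    cases = [0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
    path, signal = bernoulli_cusum(cases)
    print(signal, [round(v, 2) for v in path])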

  15. Harmonic sums, polylogarithms, special numbers, and their generalizations

    International Nuclear Information System (INIS)

    Ablinger, Jakob

    2013-04-01

    In these introductory lectures we discuss classes of presently known nested sums, associated iterated integrals, and special constants which hierarchically appear in the evaluation of massless and massive Feynman diagrams at higher loops. These quantities are elements of stuffle and shuffle algebras implying algebraic relations being widely independent of the special quantities considered. They are supplemented by structural relations. The generalizations are given in terms of generalized harmonic sums, (generalized) cyclotomic sums, and sums containing in addition binomial and inverse-binomial weights. To all these quantities iterated integrals and special numbers are associated. We also discuss the analytic continuation of nested sums of different kind to complex values of the external summation bound N.

  16. Harmonic sums, polylogarithms, special numbers, and their generalizations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-04-15

    In these introductory lectures we discuss classes of presently known nested sums, associated iterated integrals, and special constants which hierarchically appear in the evaluation of massless and massive Feynman diagrams at higher loops. These quantities are elements of stuffle and shuffle algebras implying algebraic relations being widely independent of the special quantities considered. They are supplemented by structural relations. The generalizations are given in terms of generalized harmonic sums, (generalized) cyclotomic sums, and sums containing in addition binomial and inverse-binomial weights. To all these quantities iterated integrals and special numbers are associated. We also discuss the analytic continuation of nested sums of different kind to complex values of the external summation bound N.

  17. Novel Method of Weighting Cumulative Helmet Impacts Improves Correlation with Brain White Matter Changes After One Football Season of Sub-concussive Head Blows.

    Science.gov (United States)

    Merchant-Borna, Kian; Asselin, Patrick; Narayan, Darren; Abar, Beau; Jones, Courtney M C; Bazarian, Jeffrey J

    2016-12-01

    One football season of sub-concussive head blows has been shown to be associated with subclinical white matter (WM) changes on diffusion tensor imaging (DTI). Prior research analyses of helmet-based impact metrics using mean and peak linear and rotational acceleration showed relatively weak correlations to these WM changes; however, these analyses failed to account for the emerging concept that neuronal vulnerability to successive hits is inversely related to the time between hits (TBH). To develop a novel method for quantifying the cumulative effects of sub-concussive head blows during a single season of collegiate football by weighting helmet-based impact measures for time between helmet impacts. We further aim to compare correlations to changes in DTI after one season of collegiate football using weighted cumulative helmet-based impact measures to correlations using non-weighted cumulative helmet-based impact measures and non-cumulative measures. We performed a secondary analysis of DTI and helmet impact data collected on ten Division III collegiate football players during the 2011 season. All subjects underwent diffusion MR imaging before the start of the football season and within 1 week of the end of the football season. Helmet impacts were recorded at each practice and game using helmet-mounted accelerometers, which computed five helmet-based impact measures for each hit: linear acceleration (LA), rotational acceleration (RA), Gadd Severity Index (GSI), Head Injury Criterion (HIC15), and Head Impact Technology severity profile (HITsp). All helmet-based impact measures were analyzed using five methods of summary: peak and mean (non-cumulative measures), season sum-totals (cumulative unweighted measures), and season sum-totals weighted for time between hits (TBH), the interval of time from hit to post-season DTI assessment (TUA), and both TBH and TUA combined. Summarized helmet-based impact measures were correlated to statistically significant changes in

  18. Transition sum rules in the shell model

    Science.gov (United States)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    An important characterization of electromagnetic and weak transitions in atomic nuclei is given by sum rules. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
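
    A toy numerical illustration of the quantities discussed above: for transition strengths B_f at excitation energies E_f, the NEWSR is the total strength, the EWSR is the energy-weighted sum, and their ratio is the centroid; the values below are made up:

    # NEWSR = sum(B_f); EWSR = sum(E_f * B_f); centroid = EWSR / NEWSR.
    energies = [2.1, 4.3, 6.8, 9.5]       # MeV, hypothetical final-state energies
    strengths = [0.40, 0.25, 0.20, 0.15]  # hypothetical transition strengths

    newsr = sum(strengths)
    ewsr = sum(e * b for e, b in zip(energies, strengths))
    centroid = ewsr / newsr
    print(f"NEWSR = {newsr:.2f}, EWSR = {ewsr:.2f} MeV, centroid = {centroid:.2f} MeV")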

  19. The role of cumulative physical work load in symptomatic knee osteoarthritis – a case-control study in Germany

    Directory of Open Access Journals (Sweden)

    Abolmaali Nasreddin

    2008-07-01

    Full Text Available Abstract Objectives To examine the dose-response relationship between cumulative exposure to kneeling and squatting as well as to lifting and carrying of loads and symptomatic knee osteoarthritis (OA) in a population-based case-control study. Methods In five orthopedic clinics and five practices we recruited 295 male patients aged 25 to 70 with radiographically confirmed knee osteoarthritis associated with chronic complaints. A total of 327 male control subjects were recruited. Data were gathered in a structured personal interview. To calculate cumulative exposure, the self-reported duration of kneeling and squatting as well as the duration of lifting and carrying of loads were summed up over the entire working life. Results The results of our study support a dose-response relationship between kneeling/squatting and symptomatic knee osteoarthritis. For a cumulative exposure to kneeling and squatting of > 10,800 hours, the risk of having radiographically confirmed knee osteoarthritis, as measured by the odds ratio (adjusted for age, region, weight, jogging/athletics, and lifting or carrying of loads), is 2.4 (95% CI 1.1–5.0) compared to unexposed subjects. Lifting and carrying of loads is significantly associated with knee osteoarthritis independent of kneeling or similar activities. Conclusion As the knee osteoarthritis risk is strongly elevated in occupations that involve both kneeling/squatting and heavy lifting/carrying, preventive efforts should particularly focus on these "high-risk occupations".

  20. Cumulative risk, cumulative outcome: a 20-year longitudinal study.

    Directory of Open Access Journals (Sweden)

    Leslie Atkinson

    Full Text Available Cumulative risk (CR) models provide some of the most robust findings in the developmental literature, predicting numerous and varied outcomes. Typically, however, these outcomes are predicted one at a time, across different samples, using concurrent designs, longitudinal designs of short duration, or retrospective designs. We predicted that a single CR index, applied within a single sample, would prospectively predict diverse outcomes, i.e., depression, intelligence, school dropout, arrest, smoking, and physical disease from childhood to adulthood. Further, we predicted that number of risk factors would predict number of adverse outcomes (cumulative outcome; CO). We also predicted that early CR (assessed at age 5/6) explains variance in CO above and beyond that explained by subsequent risk (assessed at ages 12/13 and 19/20). The sample consisted of 284 individuals, 48% of whom were diagnosed with a speech/language disorder. Cumulative risk, assessed at 5/6-, 12/13-, and 19/20-years-old, predicted aforementioned outcomes at age 25/26 in every instance. Furthermore, number of risk factors was positively associated with number of negative outcomes. Finally, early risk accounted for variance beyond that explained by later risk in the prediction of CO. We discuss these findings in terms of five criteria posed by these data, positing a "mediated net of adversity" model, suggesting that CR may increase some central integrative factor, simultaneously augmenting risk across cognitive, quality of life, psychiatric and physical health outcomes.

  1. Fixed mass and scaling sum rules

    International Nuclear Information System (INIS)

    Ward, B.F.L.

    1975-01-01

    Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q^2 → ∞. (Auth.)

  2. Secant cumulants and toric geometry

    NARCIS (Netherlands)

    Michalek, M.; Oeding, L.; Zwiernik, P.W.

    2012-01-01

    We study the secant line variety of the Segre product of projective spaces using special cumulant coordinates adapted for secant varieties. We show that the secant variety is covered by open normal toric varieties. We prove that in cumulant coordinates its ideal is generated by binomial quadrics. We

  3. Shapley Value for Constant-sum Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.B.

    2002-01-01

    It is proved that Young's axiomatization for the Shapley value by marginalism, efficiency, and symmetry is still valid for the Shapley value defined on the class of nonnegative constant-sum games and on the entire class of constant-sum games as well. To support an interest to study the class of

  4. On the sum of squared η-μ random variates with application to the performance of wireless communication systems

    KAUST Repository

    Ansari, Imran Shafique

    2013-06-01

    The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed squared η-μ variates, applicable to the output statistics of a maximal ratio combining (MRC) receiver operating over η-μ fading channels, which include the Hoyt and the Nakagami-m models as special cases, are presented in closed form in terms of Fox's H function. Further analysis, particularly on the bit error rate via a PDF-based approach, is also presented in closed form in terms of the extended Fox's H function. The proposed new analytical results complement previous results and are illustrated by extensive numerical and Monte Carlo simulation results. © 2013 IEEE.
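
    The closed-form results involve Fox's H function, but the construction is easy to cross-check numerically; below is a Monte Carlo sketch for the Nakagami-m special case of the η-μ model (a simplifying assumption made here purely for illustration):

    # Monte Carlo estimate of the CDF of a sum of L squared fading amplitudes,
    # here for the Nakagami-m special case (squared amplitude ~ Gamma(m, omega/m)).
    import numpy as np

    def mc_cdf_sum_squared_nakagami(L=4, m=2.0, omega=1.0, x=3.0, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        r2 = rng.gamma(shape=m, scale=omega / m, size=(n, L))  # squared amplitudes
        s = r2.sum(axis=1)                                     # MRC combiner output
        return np.mean(s <= x)                                 # empirical CDF at x

    print(mc_cdf_sum_squared_nakagami())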

  5. A Bayesian analysis of QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A new technique has recently been developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. This approach has the virtue of being able to directly generate the spectral function of a given operator, without the need of making an assumption about its specific functional form. To investigate whether useful results can be extracted within this method, we have first studied the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results show a significant peak in the region of the experimentally observed ρ-meson mass, which is in agreement with earlier QCD sum rules studies and suggests that the Maximum Entropy Method is a strong tool for analyzing QCD sum rules.

  6. SUMS Counts-Related Projects

    Data.gov (United States)

    Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...

  7. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Multiparty lump sum quotations. 1726.205 Section 1726....205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of... basis of written lump sum quotations, the borrower will select the supplier or contractor based on the...

  8. Cumulative Culture and Future Thinking: Is Mental Time Travel a Prerequisite to Cumulative Cultural Evolution?

    Science.gov (United States)

    Vale, G. L.; Flynn, E. G.; Kendal, R. L.

    2012-01-01

    Cumulative culture denotes the, arguably, human capacity to build on the cultural behaviors of one's predecessors, allowing increases in cultural complexity to occur such that many of our cultural artifacts, products and technologies have progressed beyond what a single individual could invent alone. This process of cumulative cultural evolution…

  9. Adler Function, DIS sum rules and Crewther Relations

    International Nuclear Information System (INIS)

    Baikov, P.A.; Chetyrkin, K.G.; Kuehn, J.H.

    2010-01-01

    The current status of the Adler function and two closely related Deep Inelastic Scattering (DIS) sum rules, namely the Bjorken sum rule for polarized DIS and the Gross-Llewellyn Smith sum rule, is briefly reviewed. A new result is presented: an analytical calculation of the coefficient function of the latter sum rule in a generic gauge theory at order O(α_s^4). It is demonstrated that the corresponding Crewther relation allows one to fix two of three colour structures in the O(α_s^4) contribution to the singlet part of the Adler function.

  10. Sum rules for neutrino oscillations

    International Nuclear Information System (INIS)

    Kobzarev, I.Yu.; Martemyanov, B.V.; Okun, L.B.; Schepkin, M.G.

    1981-01-01

    Sum rules for neutrino oscillations are obtained. The derivation of the general form of the S matrix for the two-stage process l_i^- → ν → l_k^± (where l_i^-, i = e, μ, τ, ..., are the initial leptons with flavor i and l_k^± is the final lepton) is presented. The consideration of the two-stage process l_i^- → ν → l_k^± makes it possible to take into account neutrino masses and to obtain expressions for the oscillating cross sections. In the case of Dirac and left-handed Majorana neutrinos, a sum rule is obtained for the quantities (1/V_k)σ(l_i^- → l_k^±), where V_k is the velocity of l_k. In the left-handed Majorana neutrino case there is an additional antineutrino admixture leading to the process l_i^- → l_k^+. Both components (neutrino and antineutrino) oscillate independently. The sums Σ_k (1/V_k)σ(l_i^- → l_k^±) then oscillate due to the presence of left-handed antineutrinos and right-handed neutrinos which do not take part in weak interactions. If right-handed currents are added, sum rules analogous to those considered above may be obtained. All conclusions are valid in the general case when CP is not conserved [ru

  11. Succinct partial sums and fenwick trees

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Prezza, Nicola

    2017-01-01

    We consider the well-studied partial sums problem in succinct space where one is to maintain an array of n k-bit integers subject to updates such that partial sums queries can be efficiently answered. We present two succinct versions of the Fenwick Tree, which is known for its simplicity and practicality. Our results hold in the encoding model where one is allowed to reuse the space from the input data. Our main result is the first that only requires nk + o(n) bits of space while still supporting sum/update in O(log_b n)/O(b log_b n) time where 2 ≤ b ≤ log^{O(1)} n. The second result shows how optimal time for sum/update can be achieved while only slightly increasing the space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are primarily based on bit-packing and sampling, making them very practical, and they also allow for simple optimal parallelization.
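
    For reference, a classic (non-succinct) Fenwick Tree for prefix sums; the succinct variants in the paper build on this update/query pattern:

    # Classic Fenwick Tree (binary indexed tree) for prefix sums.
    class FenwickTree:
        def __init__(self, n):
            self.n = n
            self.tree = [0] * (n + 1)       # 1-indexed internal array

        def update(self, i, delta):
            """Add delta to element i (0-indexed)."""
            i += 1
            while i <= self.n:
                self.tree[i] += delta
                i += i & (-i)               # jump to the next responsible node

        def prefix_sum(self, i):
            """Sum of elements [0, i] (0-indexed)."""
            i += 1
            s = 0
            while i > 0:
                s += self.tree[i]
                i -= i & (-i)               # strip the lowest set bit
            return s

    ft = FenwickTree(8)
    for idx, val in enumerate([5, 3, 7, 9, 6, 4, 1, 2]):
        ft.update(idx, val)
    print(ft.prefix_sum(4))   # 5 + 3 + 7 + 9 + 6 = 30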

  12. Isospin sum rules for inclusive cross-sections

    NARCIS (Netherlands)

    Rotelli, P.; Suttorp, L.G.

    1972-01-01

    A systematic analysis of isospin sum rules is presented for the distribution functions of strong, electromagnetic and weak inclusive processes. The general expression for these sum rules is given and some new examples are presented.

  13. Gauss Sum Factorization with Cold Atoms

    International Nuclear Information System (INIS)

    Gilowski, M.; Wendrich, T.; Mueller, T.; Ertmer, W.; Rasel, E. M.; Jentsch, Ch.; Schleich, W. P.

    2008-01-01

    We report the first implementation of a Gauss sum factorization algorithm by an internal state Ramsey interferometer using cold atoms. A sequence of appropriately designed light pulses interacts with an ensemble of cold rubidium atoms. The final population in the involved atomic levels determines a Gauss sum. With this technique we factor the number N=263193
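
    A plain numerical sketch of the truncated Gauss sum underlying this kind of factoring scheme: its magnitude stays near 1 when the trial factor divides N and is suppressed otherwise. This is only the arithmetic, not a model of the atom-interferometer readout:

    # Truncated Gauss sum |A_N(l)| = |(1/(M+1)) * sum_m exp(2*pi*i*m^2*N/l)|.
    import cmath

    def gauss_sum(N, l, M=10):
        return abs(sum(cmath.exp(2j * cmath.pi * m * m * N / l)
                       for m in range(M + 1))) / (M + 1)

    N = 263193
    candidates = [(l, round(gauss_sum(N, l), 2)) for l in range(2, 20)]
    print([c for c in candidates if c[1] > 0.9])   # trial factors with |A| near 1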

  14. Where Does Latin "Sum" Come From?

    Science.gov (United States)

    Nyman, Martti A.

    1977-01-01

    The derivation of Latin "sum,""es(s),""est" from Indo-European "esmi,""est,""esti" involves methodological problems. It is claimed here that the development of "sum" from "esmi" is related to the origin of the variation "est-st" (less than"esti"). The study is primarily concerned with this process, but chronological suggestions are also made. (CHK)

  15. The End of Academia?: From "Cogito Ergo Sum" to "Consumo Ergo Sum" Germany and Malaysia in Comparison

    Science.gov (United States)

    Lim, Kim-Hui,; Har, Wai-Mun

    2008-01-01

    The lack of academic and thinking culture is getting more worried and becomes a major challenge to our academia society this 21st century. Few directions that move academia from "cogito ergo sum" to "consumo ergo sum" are actually leading us to "the end of academia". Those directions are: (1) the death of dialectic;…

  16. Gottfried sum rule and mesonic exchanges in deuteron

    International Nuclear Information System (INIS)

    Kaptari, L.P.

    1991-01-01

    Recent NMC data on the experimental value of the Gottfried Sum are discussed. It is shown that the Gottfried Sum is sensitive to the nuclear structure corrections, viz. the mesonic exchanges and binding effects. A new estimation of the Gottfried Sum is given. The obtained result is close to the quark-parton prediction of 1/3. 11 refs.; 2 figs.

  17. Statistical sums of strings on hyperelliptic surfaces

    International Nuclear Information System (INIS)

    Lebedev, D.; Morozov, A.

    1987-01-01

    Contributions of hyperelliptic surfaces to statistical sums of string theories are presented. Available results on hyperelliptic surfaces give the opportunity to check factorization of the three-loop statistical sum. Some remarks on the vanishing statistical sum are presented

  18. System-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, NEWTONP, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.

  19. A new generalization of Hardy–Berndt sums

    Indian Academy of Sciences (India)

    4,11,18]. Berndt and Goldberg [4] found analytic properties of these sums and established infinite trigonometric series representations for them. The most important properties of Hardy–Berndt sums are reciprocity theorems due to Berndt [3] ...

  20. Cumulative human impacts on marine predators.

    Science.gov (United States)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J; Halpern, Benjamin S; Breed, Greg A; Nickel, Barry; Teutschel, Nicole M; Crowder, Larry B; Benson, Scott; Dutton, Peter H; Bailey, Helen; Kappes, Michelle A; Kuhn, Carey E; Weise, Michael J; Mate, Bruce; Shaffer, Scott A; Hassrick, Jason L; Henry, Robert W; Irvine, Ladd; McDonald, Birgitte I; Robinson, Patrick W; Block, Barbara A; Costa, Daniel P

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact (CUI) on marine predators by combining electronic tracking data of eight protected predator species (n=685 individuals) in the California Current Ecosystem with data on 24 anthropogenic stressors. We show significant variation in CUI with some of the highest impacts within US National Marine Sanctuaries. High variation in underlying species and cumulative impact distributions means that neither alone is sufficient for effective spatial management. Instead, comprehensive management approaches accounting for both cumulative human impacts and trade-offs among multiple stressors must be applied in planning the use of marine resources.

  1. Zero-sum bias: perceived competition despite unlimited resources.

    Science.gov (United States)

    Meegan, Daniel V

    2010-01-01

    Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students' grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.

  2. A bayesian approach to QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2010-01-01

    QCD sum rules are analyzed with the help of the Maximum Entropy Method. We develop a new technique based on Bayesian inference theory, which allows us to directly obtain the spectral function of a given correlator from the results of the operator product expansion given in the deep Euclidean 4-momentum region. The most important advantage of this approach is that one does not have to make any a priori assumptions about the functional form of the spectral function, such as the 'pole + continuum' ansatz that has been widely used in QCD sum rule studies, but only needs to specify the asymptotic values of the spectral function at high and low energies as an input. As a first test of the applicability of this method, we have analyzed the sum rules of the ρ-meson, a case where the sum rules are known to work well. Our results show a clear peak structure in the region of the experimental mass of the ρ-meson. We thus demonstrate that the Maximum Entropy Method is successfully applied and that it is an efficient tool in the analysis of QCD sum rules. (author)

  3. The Eccentric-distance Sum of Some Graphs

    OpenAIRE

    P, Padmapriya; Mathad, Veena

    2017-01-01

    Let $G = (V,E)$ be a simple connected graph. The eccentric-distance sum of $G$ is defined as $\xi^{ds}(G) = \sum_{\{u,v\}\subseteq V(G)} [e(u)+e(v)]\, d(u,v)$, where $e(u)$ is the eccentricity of the vertex $u$ in $G$ and $d(u,v)$ is the distance between $u$ and $v$. In this paper, we establish formulae to calculate the eccentric-distance sum for some graphs, namely wheel, star, broom, lollipop, double star, friendship, multi-star graph and the join of $P_{n-2}$ and $P_2$.

  4. The eccentric-distance sum of some graphs

    Directory of Open Access Journals (Sweden)

    Padmapriya P

    2017-04-01

    Full Text Available Let $G = (V,E)$ be a simple connected graph. The eccentric-distance sum of $G$ is defined as $\xi^{ds}(G) = \sum_{\{u,v\}\subseteq V(G)} [e(u)+e(v)]\, d(u,v)$, where $e(u)$ is the eccentricity of the vertex $u$ in $G$ and $d(u,v)$ is the distance between $u$ and $v$. In this paper, we establish formulae to calculate the eccentric-distance sum for some graphs, namely wheel, star, broom, lollipop, double star, friendship, multi-star graph and the join of $P_{n-2}$ and $P_2$.
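
    A direct computation of the eccentric-distance sum from its definition, using networkx for eccentricities and shortest-path distances; the star graph used as a check is one of the families treated above:

    # Eccentric-distance sum computed directly from its definition.
    import itertools
    import networkx as nx

    def eccentric_distance_sum(G):
        dist = dict(nx.all_pairs_shortest_path_length(G))
        ecc = nx.eccentricity(G)
        return sum((ecc[u] + ecc[v]) * dist[u][v]
                   for u, v in itertools.combinations(G.nodes, 2))

    # Star K_{1,4}: the centre has eccentricity 1, the leaves have eccentricity 2.
    print(eccentric_distance_sum(nx.star_graph(4)))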

  5. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest, M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CROSSER, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), used independently of one another. Point of equality between reliability of system and common reliability of components found. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.

  6. Cumulative effects assessment: Does scale matter?

    International Nuclear Information System (INIS)

    Therivel, Riki; Ross, Bill

    2007-01-01

    Cumulative effects assessment (CEA) is (or should be) an integral part of environmental assessment at both the project and the more strategic level. CEA helps to link the different scales of environmental assessment in that it focuses on how a given receptor is affected by the totality of plans, projects and activities, rather than on the effects of a particular plan or project. This article reviews how CEAs consider, and could consider, scale issues: spatial extent, level of detail, and temporal issues. It is based on an analysis of Canadian project-level CEAs and UK strategic-level CEAs. Based on a review of literature and, especially, case studies with which the authors are familiar, it concludes that scale issues are poorly considered at both levels, with particular problems being unclear or non-existing cumulative effects scoping methodologies; poor consideration of past or likely future human activities beyond the plan or project in question; attempts to apportion 'blame' for cumulative effects; and, at the plan level, limited management of cumulative effects caused particularly by the absence of consent regimes. Scale issues are important in most of these problems. However both strategic-level and project-level CEA have much potential for managing cumulative effects through better siting and phasing of development, demand reduction and other behavioural changes, and particularly through setting development consent rules for projects. The lack of strategic resource-based thresholds constrains the robust management of strategic-level cumulative effects

  7. QCD sum rules and applications to nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, T D [Maryland Univ., College Park, MD (United States). Dept. of Physics; [Washington Univ., Seattle, WA (United States). Dept. of Physics and Inst. for Nuclear Theory; Furnstahl, R J [Ohio State Univ., Columbus, OH (United States). Dept. of Physics; Griegel, D K [Maryland Univ., College Park, MD (United States). Dept. of Physics; [TRIUMF, Vancouver, BC (Canada); Xuemin, J

    1994-12-01

    Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author). 153 refs., 8 figs.

  8. QCD sum rules and applications to nuclear physics

    International Nuclear Information System (INIS)

    Cohen, T.D.; Xuemin, J.

    1994-12-01

    Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author)

  9. Cumulative cultural learning: Development and diversity

    Science.gov (United States)

    2017-01-01

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children’s learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission—the cornerstone of human cultural diversity. PMID:28739945

  10. Cumulative cultural learning: Development and diversity.

    Science.gov (United States)

    Legare, Cristine H

    2017-07-24

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children's learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission-the cornerstone of human cultural diversity.

  11. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...
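
    A sketch of the normalized min-sum check-node update for LDPC decoding, to make the algorithm under discussion concrete; the normalization factor alpha is an illustrative choice, and this is not the paper's derivation:

    # Normalized min-sum check-node update: each outgoing message takes the sign
    # product and the scaled minimum magnitude of the other incoming messages.
    import math

    def check_node_update(incoming_llrs, alpha=0.8):
        out = []
        for i in range(len(incoming_llrs)):
            others = incoming_llrs[:i] + incoming_llrs[i + 1:]
            sign = math.prod(1 if v >= 0 else -1 for v in others)
            mag = min(abs(v) for v in others)
            out.append(alpha * sign * mag)   # alpha distinguishes it from plain min-sum
        return out

    print(check_node_update([1.2, -0.4, 2.5, -3.1]))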

  12. On poisson-stopped-sums that are mixed poisson

    OpenAIRE

    Valero Baya, Jordi; Pérez Casany, Marta; Ginebra Molins, Josep

    2013-01-01

    Maceda (1948) characterized the mixed Poisson distributions that are Poisson-stopped-sum distributions based on the mixing distribution. In an alternative characterization of the same set of distributions, the Poisson-stopped-sum distributions that are mixed Poisson distributions are here proved to be the set of Poisson-stopped-sums of either a mixture of zero-truncated Poisson distributions or a zero-modification of it. Peer Reviewed

  13. Volume sums of polar Blaschke–Minkowski homomorphisms

    Indian Academy of Sciences (India)

    In this article, we establish Minkowski and Aleksandrov–Fenchel type inequalities for the volume sum of polars of Blaschke–Minkowski homomorphisms. Keywords. Blaschke–Minkowski homomorphism; volume differences; volume sum; projection body operator. 2010 Mathematics Subject Classification. 52A40, 52A30.

  14. Gaussian sum rules for optical functions

    International Nuclear Information System (INIS)

    Kimel, I.

    1981-12-01

    A new (Gaussian) type of sum rule (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain in general, a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author) [pt

  15. The Gross-Llewellyn Smith sum rule

    International Nuclear Information System (INIS)

    Scott, W.G.

    1981-01-01

    We present the most recent data on the Gross-Llewellyn Smith sum rule obtained from the combined BEBC Narrow Band Neon and GGM-PS Freon neutrino/antineutrino experiments. The data for the Gross-Llewellyn Smith sum rule as a function of q^2 suggest a smaller value for the QCD coupling constant parameter Λ than is obtained from the analysis of the higher moments. (author)

  16. Zero-sum bias: perceived competition despite unlimited resources

    Directory of Open Access Journals (Sweden)

    Daniel V Meegan

    2010-11-01

    Full Text Available Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students’ grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.

  17. Compound sums and their applications in finance

    NARCIS (Netherlands)

    R. Helmers (Roelof); B. Tarigan

    2003-01-01

    Compound sums arise frequently in insurance (total claim size in a portfolio) and in accountancy (total error amount in audit populations). As the normal approximation for compound sums usually performs very badly, one may look for better methods for approximating the distribution of a

  18. Superconvergent sum rules for the normal reflectivity

    International Nuclear Information System (INIS)

    Furuya, K.; Zimerman, A.H.; Villani, A.

    1976-05-01

    Families of superconvergent relations for the normal reflectivity function are written. Sum rules connecting the difference of phases of the reflectivities of two materials are also considered. Finally superconvergence relations and sum rules for magneto-reflectivity in the Faraday and Voigt regimes are also studied

  19. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
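
    A sketch of the kind of calculation such a program performs: the reliability of a k-out-of-n system as a cumulative binomial sum over the number of working components (an illustrative re-implementation, not the CUMBIN/NEWTONP source):

    # k-out-of-n system reliability as a cumulative binomial probability.
    from math import comb

    def k_out_of_n_reliability(k, n, p):
        """P(at least k of n components work), each working with probability p."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    print(k_out_of_n_reliability(k=2, n=3, p=0.9))   # 2-out-of-3 system: 0.972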

  20. Singular f-sum rule for superfluid 4He

    International Nuclear Information System (INIS)

    Wong, V.K.

    1979-01-01

    The validity and applicability to inelastic neutron scattering of a singular f-sum rule for superfluid helium, proposed by Griffin to explain the ρ_s dependence in S(k, ω) as observed by Woods and Svensson, are examined in the light of similar sum rules rigorously derived for anharmonic crystals and Bose liquids. It is concluded that the singular f-sum rules are only of microscopic interest. (Auth.)

  1. Cumulative human impacts on marine predators

    DEFF Research Database (Denmark)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact...

  2. Lattice QCD evaluation of baryon magnetic moment sum rules

    International Nuclear Information System (INIS)

    Leinweber, D.B.

    1991-05-01

    Magnetic moment combinations and sum rules are evaluated using recent results for the magnetic moments of octet baryons determined in a numerical simulation of quenched QCD. The model-independent and parameter-free results of the lattice calculations remove some of the confusion and contradiction surrounding past magnetic moment sum rule analyses. The lattice results reveal the underlying quark dynamics investigated by magnetic moment sum rules and indicate the origin of magnetic moment quenching for the non-strange quarks in Σ. In contrast to previous sum rule analyses, the magnetic moments of nonstrange quarks in Ξ are seen to be enhanced in the lattice results. In most cases, the spin-dependent dynamics and center-of-mass effects giving rise to baryon dependence of the quark moments are seen to be sufficient to violate the sum rules in agreement with experimental measurements. In turn, the sum rules are used to further examine the results of the lattice simulation. The Sachs sum rule suggests that quark loop contributions not included in present lattice calculations may play a key role in removing the discrepancies between lattice and experimental ratios of magnetic moments. This is supported by other sum rules sensitive to quark loop contributions. A measure of the isospin symmetry breaking in the effective quark moments due to quark loop contributions is in agreement with model expectations. (Author) 16 refs., 2 figs., 2 tabs

  3. Unidirectional ring-laser operation using sum-frequency mixing

    DEFF Research Database (Denmark)

    Tidemand-Lichtenberg, Peter; Cheng, Haynes Pak Hay; Pedersen, Christian

    2010-01-01

    A technique enforcing unidirectional operation of ring lasers is proposed and demonstrated. The approach relies on sum-frequency mixing between a single-pass laser and one of the two counterpropagating intracavity fields of the ring laser. Sum-frequency mixing introduces a parametric loss for the … where lossless second-order nonlinear materials are available. Numerical modeling and experimental demonstration of parametric-induced unidirectional operation of a diode-pumped solid-state 1342 nm cw ring laser are presented.

  4. Chiral corrections to the Adler-Weisberger sum rule

    Science.gov (United States)

    Beane, Silas R.; Klco, Natalie

    2016-12-01

    The Adler-Weisberger sum rule for the nucleon axial-vector charge, g_A, offers a unique signature of chiral symmetry and its breaking in QCD. Its derivation relies on both algebraic aspects of chiral symmetry, which guarantee the convergence of the sum rule, and dynamical aspects of chiral symmetry breaking—as exploited using chiral perturbation theory—which allow the rigorous inclusion of explicit chiral symmetry breaking effects due to light-quark masses. The original derivations obtained the sum rule in the chiral limit and, without the benefit of chiral perturbation theory, made various attempts at extrapolating to nonvanishing pion masses. In this paper, the leading, universal, chiral corrections to the chiral-limit sum rule are obtained. Using PDG data, a recent parametrization of the pion-nucleon total cross sections in the resonance region given by the SAID group, as well as recent Roy-Steiner equation determinations of subthreshold amplitudes, threshold parameters, and correlated low-energy constants, the Adler-Weisberger sum rule is confronted with experimental data. With uncertainty estimates associated with the cross-section parametrization, the Goldberger-Treiman discrepancy, and the truncation of the sum rule at O(M_π⁴) in the chiral expansion, this work finds g_A = 1.248 ± 0.010 ± 0.007 ± 0.013.

  5. An analysis of cumulative risks based on biomonitoring data for six phthalates using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single chemical drives the cumulative risk of an individual exposed to multiple chemicals. Phthalates are a class of chemicals with ubiquitous exposures in the general population that have the potential to cause ...

  6. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Schneider, Carsten

    2013-01-01

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincare iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.

  7. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-01-15

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincare iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.

  8. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040, Linz (Austria); Blümlein, Johannes [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2013-08-15

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation with respect to the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.

  9. Premium Pricing of Liability Insurance Using Random Sum Model

    Directory of Open Access Journals (Sweden)

    Mujiati Dwi Kartikasari

    2017-03-01

    Full Text Available Premium pricing is one of the important activities in insurance. Nonlife insurance premiums are calculated from the expected value of historical claims data. The historical claims are aggregated so that they form a sum of a random number of independent claims, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. By using liability claim insurance data, we analyze premium pricing using a random sum model based on a compound distribution.
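
    The abstract does not state which claim-frequency and claim-severity distributions were fitted, so the sketch below only illustrates the random-sum (compound distribution) idea with an assumed Poisson frequency and lognormal severity; the Monte Carlo approach, the parameters and the safety loading are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_claims(lam, sev_mu, sev_sigma, n_sim=100_000):
    """Simulate S = X_1 + ... + X_N with N ~ Poisson(lam) and
    X_i ~ LogNormal(sev_mu, sev_sigma); returns mean and std of S."""
    counts = rng.poisson(lam, size=n_sim)
    totals = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                       for n in counts])
    return totals.mean(), totals.std(ddof=1)

pure_premium, sd = aggregate_claims(lam=2.0, sev_mu=7.0, sev_sigma=1.0)
loaded_premium = pure_premium + 0.1 * sd   # expected value plus a simple loading
```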

  10. Evaluation of the multi-sums for large scale problems

    International Nuclear Information System (INIS)

    Bluemlein, J.; Hasselhuhn, A.; Schneider, C.

    2012-02-01

    A big class of Feynman integrals, in particular, the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from inside to outside to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task to simplify huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  11. Evaluation of the multi-sums for large scale problems

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation

    2012-02-15

    A big class of Feynman integrals, in particular, the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from inside to outside to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task to simplify huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  12. An electrophysiological signature of summed similarity in visual working memory.

    Science.gov (United States)

    van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J

    2013-05-01

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
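
    The record does not spell out the similarity kernel or the response rule of the fitted model, so the following is only a generic sketch of a summed-similarity decision rule over feature vectors; the exponential kernel, the logistic response rule and every parameter value are assumptions made for illustration.

```python
import numpy as np

def summed_similarity(probe, study_items, c=1.0):
    """Sum of exponentially decaying similarities between a probe and
    each studied item (rows of study_items); c is a decay constant."""
    dists = np.linalg.norm(study_items - probe, axis=1)
    return np.exp(-c * dists).sum()

def p_old(probe, study_items, criterion=1.0, noise=0.25, c=1.0):
    """Probability of an 'old' response via a logistic rule applied to
    the difference between summed similarity and a response criterion."""
    s = summed_similarity(probe, study_items, c)
    return 1.0 / (1.0 + np.exp(-(s - criterion) / noise))

study = np.array([[0.2, 0.8], [0.5, 0.1], [0.9, 0.4]])
print(p_old(np.array([0.5, 0.15]), study))   # probe near a studied item -> high p
```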

  13. Extremal extensions for the sum of nonnegative selfadjoint relations

    NARCIS (Netherlands)

    Hassi, Seppo; Sandovici, Adrian; De Snoo, Henk; Winkler, Henrik

    2007-01-01

    The sum A + B of two nonnegative selfadjoint relations (multivalued operators) A and B is a nonnegative relation. The class of all extremal extensions of the sum A + B is characterized as products of relations via an auxiliary Hilbert space associated with A and B. The so-called form sum extension

  14. Compound Poisson Approximations for Sums of Random Variables

    OpenAIRE

    Serfozo, Richard F.

    1986-01-01

    We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of...

  15. Chapter 19. Cumulative watershed effects and watershed analysis

    Science.gov (United States)

    Leslie M. Reid

    1998-01-01

    Cumulative watershed effects are environmental changes that are affected by more than one land-use activity and that are influenced by processes involving the generation or transport of water. Almost all environmental changes are cumulative effects, and almost all land-use activities contribute to cumulative effects.

  16. An Analysis of Cumulative Risks Indicated by Biomonitoring Data of Six Phthalates Using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single component of a chemical mixture drives the cumulative risk of a receptor.1 This study used the MCR, the Hazard Index (HI) and Hazard Quotient (HQ) to evaluate co-exposures to six phthalates using biomonito...

  17. 28 CFR 523.16 - Lump sum awards.

    Science.gov (United States)

    2010-07-01

    ... satisfactory performance of an unusually hazardous assignment; (c) An act which protects the lives of staff or... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend... award is calculated. No seniority is accrued for such awards. Staff may recommend lump sum awards of...

  18. Systematics of strength function sum rules

    Directory of Open Access Journals (Sweden)

    Calvin W. Johnson

    2015-11-01

    Full Text Available Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Furthermore, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive).

  19. Vacuum structure and QCD sum rules

    International Nuclear Information System (INIS)

    Shifman, M.A.

    1992-01-01

    The method of the QCD sum rules was and still is one of the most productive tools in a wide range of problems associated with the hadronic phenomenology. Many heuristic ideas, computational devices, specific formulae which are useful to theorists working not only in hadronic physics, have been accumulated in this method. Some of the results and approaches which have originally been developed in connection with the QCD sum rules can be and are successfully applied in related fields, as supersymmetric gauge theories, nontraditional schemes of quarks and leptons, etc. The amount of literature on these and other more basic problems in hadronic physics has grown enormously in recent years. This volume presents a collection of papers which provide an overview of all basic elements of the sum rule approach and priority has been given to the works which seemed most useful from a pedagogical point of view

  20. Vacuum structure and QCD sum rules

    International Nuclear Information System (INIS)

    Shifman, M.A.

    1992-01-01

    The method of the QCD sum rules was and still is one of the most productive tools in a wide range of problems associated with the hadronic phenomenology. Many heuristic ideas, computational devices, specific formulae which are useful to theorists working not only in hadronic physics, have been accumulated in this method. Some of the results and approaches which have been originally developed in connection with the QCD sum rules can be and are successfully applied in related fields, such as supersymmetric gauge theories, nontraditional schemes of quarks and leptons, etc. The amount of literature on these and other more basic problems in hadronic physics has grown enormously in recent years. This collection of papers provides an overview of all basic elements of the sum rule approach. Priority has been given to those works which seemed most useful from a pedagogical point of view

  1. Moments of the weighted sum-of-digits function | Larcher ...

    African Journals Online (AJOL)

    The weighted sum-of-digits function is a slight generalization of the well known sum-of-digits function with the difference that here the digits are weighted by some weights. So for example in this concept also the alternated sum-of-digits function is included. In this paper we compute the first and the second moment of the ...

  2. Managing cumulative impacts: A key to sustainability?

    Energy Technology Data Exchange (ETDEWEB)

    Hunsaker, C.T.

    1994-12-31

    This paper addresses how science can be more effectively used in creating policy to manage cumulative effects on ecosystems. The paper focuses on the scientific techniques that we have to identify and to assess cumulative impacts on ecosystems. The term "sustainable development" was brought into common use by the World Commission on Environment and Development (The Brundtland Commission) in 1987. The Brundtland Commission report highlighted the need to address developmental and environmental imperatives simultaneously by calling for development that "meets the needs of the present generation without compromising the needs of future generations." We cannot claim to be working toward sustainable development until we can quantitatively assess cumulative impacts on the environment: the two concepts are inextricably linked in that the elusiveness of cumulative effects likely has the greatest potential of keeping us from achieving sustainability. In this paper, assessment and management frameworks relevant to cumulative impacts are discussed along with recent literature on how to improve such assessments. When possible, examples are given for marine ecosystems.

  3. Development of the modified sum-peak method and its application

    International Nuclear Information System (INIS)

    Ogata, Y.; Miyahara, H.; Ishihara, M.; Ishigure, N.; Yamamoto, S.; Kojima, S.

    2016-01-01

    As the sum-peak method requires the total count rate as well as the peak count rates and the sum-peak count rate, it meets difficulties when a sample contains radionuclides other than the one to be measured. To solve the problem, a new method using solely the peak and sum-peak count rates was developed. The method was theoretically and experimentally confirmed using ⁶⁰Co, ²²Na and ¹³⁴Cs. We demonstrate that the modified sum-peak method is quite simple and practical and is useful for measuring multiple nuclides. - Highlights: • A modified sum-peak method for simple radioactivity measurement was developed. • The method solely requires the peak count rates and the sum peak count rate. • The method is applicable to multiple radionuclides.

  4. Subset-sum phase transitions and data compression

    Science.gov (United States)

    Merhav, Neri

    2011-09-01

    We propose a rigorous analysis approach for the subset-sum problem in the context of lossless data compression, where the phase transition of the subset-sum problem is directly related to the passage between ambiguous and non-ambiguous decompression, for a compression scheme that is based on specifying the sequence composition. The proposed analysis lends itself to straightforward extensions in several directions of interest, including non-binary alphabets, incorporation of side information at the decoder (Slepian-Wolf coding), and coding schemes based on multiple subset sums. It is also demonstrated that the proposed technique can be used to analyze the critical behavior in a more involved situation where the sequence composition is not specified by the encoder.

  5. Performance analysis of nuclear materials accounting systems

    International Nuclear Information System (INIS)

    Cobb, D.D.; Shipley, J.P.

    1979-01-01

    Techniques for analyzing the level of performance of nuclear materials accounting systems in terms of the four performance measures, total amount of loss, loss-detection time, loss-detection probability, and false-alarm probability, are presented. These techniques are especially useful for analyzing the expected performance of near-real-time (dynamic) accounting systems. A conservative estimate of system performance is provided by the CUSUM (cumulative summation of materials balances) test. Graphical displays, called performance surfaces, are developed as convenient tools for representing systems performance, and examples from a recent safeguards study of a nuclear fuels reprocessing plant are given. 6 refs
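
    As a rough illustration of the CUSUM test mentioned above, and not the authors' implementation, the sketch below cumulates a sequence of materials balances and raises an alarm when the cumulative sum exceeds a multiple of its propagated standard deviation; the assumption of independent balance periods with a common standard deviation and the threshold multiplier k are simplifications.

```python
import numpy as np

def cusum_alarm(balances, sigma, k=3.0):
    """CUSUM of materials balances with an alarm whenever the cumulative
    sum exceeds k standard deviations of its own propagated error."""
    cusum = np.cumsum(balances)
    # std dev of the n-th cumulative sum grows like sigma * sqrt(n)
    limits = k * sigma * np.sqrt(np.arange(1, len(balances) + 1))
    alarms = np.abs(cusum) > limits
    return cusum, limits, alarms

cusum, limits, alarms = cusum_alarm([0.2, -0.1, 0.4, 0.5, 0.6], sigma=0.3)
```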

  6. Spectral function sum rules in quantum chromodynamics. I. Charged currents sector

    International Nuclear Information System (INIS)

    Floratos, E.G.; Narison, Stephan; Rafael, Eduardo de.

    1978-07-01

    The Weinberg sum rules of the algebra of currents are reconsidered in the light of quantum chromodynamics (QCD). The authors derive new finite energy sum rules which replace the old Weinberg sum rules. The new sum rules are convergent and the rate of convergence is explicitly calculated in perturbative QCD at the one loop approximation. Phenomenological applications of these sum rules in the charged current sector are also discussed

  7. Structural relations between nested harmonic sums

    International Nuclear Information System (INIS)

    Bluemlein, J.

    2008-07-01

    We describe the structural relations between nested harmonic sums emerging in the description of physical single scale quantities up to the 3-loop level in renormalizable gauge field theories. These are weight w=6 harmonic sums. We identify universal basic functions which allow to describe a large class of physical quantities and derive their complex analysis. For the 3-loop QCD Wilson coefficients 35 basic functions are required, whereas a subset of 15 describes the 3-loop anomalous dimensions. (orig.)

  8. Structural relations between nested harmonic sums

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.

    2008-07-15

    We describe the structural relations between nested harmonic sums emerging in the description of physical single scale quantities up to the 3-loop level in renormalizable gauge field theories. These are weight w=6 harmonic sums. We identify universal basic functions which allow to describe a large class of physical quantities and derive their complex analysis. For the 3-loop QCD Wilson coefficients 35 basic functions are required, whereas a subset of 15 describes the 3-loop anomalous dimensions. (orig.)

  9. The α_s³ corrections to the Bjorken sum rule for polarized electro-production and to the Gross-Llewellyn Smith sum rule

    International Nuclear Information System (INIS)

    Larin, S.A.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica; Vermaseren, J.A.M.

    1990-01-01

    The next-to-next-to-leading order QCD corrections to the Gross-Llewellyn Smith sum rule for deep inelastic neutrino-nucleon scattering and to the Bjorken sum rule for polarized electron-nucleon scattering have been computed. This involved the proper treatment of γ₅ inside the loop integrals with dimensional regularization. It is found that the difference between the two sum rules is entirely due to a class of 6 three-loop graphs and is of the order of 1% of the leading QCD term. Hence the Q² behavior of both sum rules should be the same if the physics is described adequately by the lower order terms of perturbative QCD. (author). 12 refs.; 2 figs.; 4 tabs.

  10. Cosmic-ray sum rules

    International Nuclear Information System (INIS)

    Frandsen, Mads T.; Masina, Isabella; Sannino, Francesco

    2011-01-01

    We introduce new sum rules allowing to determine universal properties of the unknown component of the cosmic rays; we show how they can be used to predict the positron fraction at energies not yet explored by current experiments, and to constrain specific models.

  11. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    Directory of Open Access Journals (Sweden)

    Dumičić Ksenija

    2015-03-01

    Full Text Available Background: The stock exchange, as a regulated financial market, reflects the level of economic development in modern economies. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used for generating trade signals. The open and average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis. The capabilities of statistical control charts for short-run stock trading were analysed. Results: The statistical control chart analysis pointed out too many signals to buy or sell stocks. Most of them are considered false alarms. So, the statistical control charts proved not to be very useful in stock trading or in portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems are solved, the use of statistical control charts in portfolio analysis could be greatly improved.
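
    The abstract does not report the chart parameters used in the study, so the following is only a minimal sketch of how EWMA and tabular CUSUM statistics can be computed on a price or return series to produce trade signals; the smoothing constant, the reference value k and the decision interval h are illustrative assumptions.

```python
import numpy as np

def ewma(x, lam=0.2):
    """Exponentially weighted moving average with smoothing constant lam."""
    z = np.empty(len(x), dtype=float)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    return z

def tabular_cusum(x, target, k, h):
    """One-sided tabular CUSUM statistics; a signal is raised whenever
    the upper (C+) or lower (C-) statistic exceeds the decision interval h."""
    c_pos = c_neg = 0.0
    signals = []
    for xi in x:
        c_pos = max(0.0, c_pos + xi - target - k)
        c_neg = max(0.0, c_neg + target - xi - k)
        signals.append(c_pos > h or c_neg > h)
    return signals

prices = np.array([100.0, 100.5, 101.2, 100.9, 102.3, 103.1])
returns = np.diff(np.log(prices))
signals = tabular_cusum(returns, target=returns.mean(),
                        k=0.5 * returns.std(), h=4 * returns.std())
```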

  12. El Carreto o Cumulá - Aspidosperma Dugandii Standl

    Directory of Open Access Journals (Sweden)

    Dugand Armando

    1944-03-01

    Full Text Available Common names: Carreto (Atlántico, Bolívar, Magdalena); Cumulá (Cundinamarca, Tolima). According to Dr. Emilio Robledo (Lecciones de Bot., ed. 3, 2: 544, 1939), the name Carreto is also used in Puerto Berrío (Antioquia). The same author (loc. cit.) gives the name Comulá for an undetermined species of Viburnum in Mariquita (Tolima), and J. M. Duque, referring to the same plant and locality (in Bot. Gen. Colomb. 340, 356, 1943), attributes this common name to Aspidosperma ellipticum Rusby. However, the wood samples of Cumulá or Comulá that I have examined, coming from the Mariquita region (one of which was recently sent to me by the distinguished ichthyologist Mr. Cecil Miles), belong without any doubt to A. Dugandii Standl. On the other hand, Santiago Cortés (Fl. Colomb. 206, 1898; ed. 2: 239, 1912) cites the Cumulá "of Anapoima and other places along the Magdalena river", saying that it belongs to the Leguminosae, but the very brief description this author gives of the wood, "orange-coloured and notable for its density, hardness and resistance to moisture", leads me to believe that it is the same Cumulá recently collected in Tocaima, since that town lies only a few kilometres from Anapoima.

  13. Experimental results of the betatron sum resonance

    International Nuclear Information System (INIS)

    Wang, Y.; Ball, M.; Brabson, B.

    1993-06-01

    The experimental observations of motion near the betatron sum resonance, ν_x + 2ν_z = 13, are presented. A fast quadrupole (a Panofsky-style ferrite picture-frame magnet with a pulsed power supply) producing a betatron tune shift of the order of 0.03 with a rise time of 1 μs was used. This quadrupole was used to produce betatron tunes which jumped past and then crossed back through a betatron sum resonance line. The beam response as a function of initial betatron amplitude was recorded turn by turn. The correlated growth of the action variables, J_x and J_z, was observed. The phase space plots in the resonance frame reveal the features of particle motion near the nonlinear sum resonance region.

  14. Predicting Cumulative Incidence Probability by Direct Binomial Regression

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard

  15. The Algebra of the Cumulative Percent Operation.

    Science.gov (United States)

    Berry, Andrew J.

    2002-01-01

    Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%", the additive inverse of "a%", and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
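
    Assuming that the cumulative percent operation discussed here refers to composing successive percent changes multiplicatively (the abstract is garbled at that point), a short worked example:

```python
def cumulative_percent(*changes):
    """Combine successive percent changes multiplicatively:
    +20% followed by -20% is a net -4%, not 0%."""
    factor = 1.0
    for p in changes:
        factor *= 1.0 + p / 100.0
    return (factor - 1.0) * 100.0

print(cumulative_percent(20, -20))   # -4.0
print(cumulative_percent(10, 10))    # ~21.0, not 20.0
```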

  16. Sum rules for the quarkonium systems

    International Nuclear Information System (INIS)

    Burnel, A.; Caprasse, H.

    1980-01-01

    In the framework of the radial Schroedinger equation we derive in a very simple way sum rules relating the potential to physical quantities such as the energy eigenvalues and the square of the lth derivative of the eigenfunctions at the origin. These sum rules contain as particular cases well-known results such as the quantum version of the Clausius theorem in classical mechanics as well as Kramers's relations for the Coulomb potential. Several illustrations are given and the possibilities of applying them to the quarkonium systems are considered

  17. On contribution of instantons to nucleon sum rules

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kochelev, N.I.

    1989-01-01

    The contribution of instantons to nucleon QCD sum rules is obtained. It is shown that this contribution does provide stabilization of the sum rules and leads to formation of a nucleon as a bound state of quarks in the instanton field. 17 refs.; 3 figs

  18. Compton scattering from nuclei and photo-absorption sum rules

    International Nuclear Information System (INIS)

    Gorchtein, Mikhail; Hobbs, Timothy; Londergan, J. Timothy; Szczepaniak, Adam P.

    2011-01-01

    We revisit the photo-absorption sum rule for real Compton scattering from the proton and from nuclear targets. In analogy with the Thomas-Reiche-Kuhn sum rule appropriate at low energies, we propose a new 'constituent quark model' sum rule that relates the integrated strength of hadronic resonances to the scattering amplitude on constituent quarks. We study the constituent quark model sum rule for several nuclear targets. In addition, we extract the α=0 pole contribution for both proton and nuclei. Using the modern high-energy proton data, we find that the α=0 pole contribution differs significantly from the Thomson term, in contrast with the original findings by Damashek and Gilman.

  19. Adler-Weisberger sum rule for W_LW_L → W_LW_L scattering

    International Nuclear Information System (INIS)

    Pham, T.N.

    1991-01-01

    We analyse the Adler-Weisberger sum rule for W_LW_L → W_LW_L scattering. We find that at some energy, the W_LW_L total cross section must be large to saturate the sum rule. Measurements at future colliders would be needed to check the sum rule and to obtain the decay rates Γ(H → W_LW_L, Z_LZ_L) which would be modified by the existence of a P-wave vector meson resonance in the standard model with strongly interacting Higgs sector or in technicolour models. (orig.)

  20. Parity of Θ+(1540) from QCD sum rules

    International Nuclear Information System (INIS)

    Lee, Su Houng; Kim, Hungchong; Kwon, Youngshin

    2005-01-01

    The QCD sum rule for the pentaquark Θ⁺, first analyzed by Sugiyama, Doi and Oka, is reanalyzed with a phenomenological side that explicitly includes the contribution from the two-particle reducible kaon-nucleon intermediate state. The magnitude for the overlap of the Θ⁺ interpolating current with the kaon-nucleon state is obtained by using the soft-kaon theorem and a separate sum rule for the ground state nucleon with the pentaquark nucleon interpolating current. It is found that the K-N intermediate state constitutes only 10% of the sum rule so that the original claim that the parity of Θ⁺ is negative remains valid.

  1. sumé

    African Journals Online (AJOL)

    Tracie1

    sumé. Translating is a very complicated process that requires extralinguistic knowledge on the part of the translator. This work is based on literary translation. Literary translation deals with literary texts, which comprise poetry, drama, and prose. Literary translation has some problems ...

  2. EXAFS cumulants of CdSe

    International Nuclear Information System (INIS)

    Diop, D.

    1997-04-01

    EXAFS functions had been extracted from measurements on the K edge of Se at different temperatures between 20 and 300 K. The analysis of the EXAFS of the filtered first two shells has been done in the wavevector range lying between 2 and 15.5 Å⁻¹ in terms of the cumulants of the effective distribution of distances. The cumulants C₃ and C₄ obtained from the phase difference and the amplitude ratio methods have shown the anharmonicity in the vibrations of atoms around their equilibrium position. (author). 13 refs, 3 figs

  3. Premium Pricing of Liability Insurance Using Random Sum Model

    OpenAIRE

    Kartikasari, Mujiati Dwi

    2017-01-01

    Premium pricing is one of important activities in insurance. Nonlife insurance premium is calculated from expected value of historical data claims. The historical data claims are collected so that it forms a sum of independent random number which is called random sum. In premium pricing using random sum, claim frequency distribution and claim severity distribution are combined. The combination of these distributions is called compound distribution. By using liability claim insurance data, we ...

  4. Coulomb sum rules in the relativistic Fermi gas model

    International Nuclear Information System (INIS)

    Do Dang, G.; L'Huillier, M.; Nguyen Giai, Van.

    1986-11-01

    Coulomb sum rules are studied in the framework of the Fermi gas model. A distinction is made between mathematical and observable sum rules. Differences between non-relativistic and relativistic Fermi gas predictions are stressed. A method to deduce a Coulomb response function from the longitudinal response is proposed and tested numerically. This method is applied to the ⁴⁰Ca data to obtain the experimental Coulomb sum rule as a function of momentum transfer.

  5. A 2-categorical state sum model

    Energy Technology Data Exchange (ETDEWEB)

    Baratin, Aristide, E-mail: abaratin@uwaterloo.ca [Department of Applied Mathematics, University of Waterloo, 200 University Ave W, Waterloo, Ontario N2L 3G1 (Canada); Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline Str. N, Waterloo, Ontario N2L 2Y5 (Canada)

    2015-01-15

    It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)] which was shown to lead after gauge-fixing to Korepanov’s invariant of 4-manifolds.

  6. Methods for the analysis of complex fluorescence decays: sum of Becquerel functions versus sum of exponentials

    International Nuclear Information System (INIS)

    Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard

    2013-01-01

    Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)
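
    A minimal sketch of a decay model built from a sum of Becquerel functions; the compressed-hyperbola form used below, which tends to a single exponential as β → 1, is an assumed parametrization for illustration rather than the authors' exact fitting function, and the parameter values are hypothetical.

```python
import numpy as np

def becquerel(t, amplitude, tau, beta):
    """Becquerel (compressed-hyperbola) decay term, assumed here as
    I(t) = amplitude * (1 + (1 - beta) * t / tau) ** (-1 / (1 - beta));
    it reduces to amplitude * exp(-t / tau) in the limit beta -> 1."""
    if np.isclose(beta, 1.0):
        return amplitude * np.exp(-t / tau)
    return amplitude * (1.0 + (1.0 - beta) * t / tau) ** (-1.0 / (1.0 - beta))

def decay_model(t, components):
    """Sum of Becquerel terms; components is a list of (amplitude, tau, beta)."""
    return sum(becquerel(t, a, tau, b) for a, tau, b in components)

t = np.linspace(0.0, 50.0, 500)
curve = decay_model(t, [(1.0, 2.5, 0.8), (0.3, 10.0, 0.95)])
```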

  7. Sum rules for nuclear excitations with the Skyrme-Landau interaction

    International Nuclear Information System (INIS)

    Liu Kehfei; Luo Hongde; Ma Zhongyu; Feng Man; Shen Qingbiao

    1991-01-01

    The energy-weighted sum rules for electric, magnetic, Fermi and Gamow-Teller transitions with the Skyrme-Landau interaction are derived from the double commutators and numerically calculated in a HF + RPA formalism. As a numerical check of the Thouless theorem, our self-consistent calculations show that the calculated RPA strengths exhaust more than 85% of the sum rules in most cases. The well known non-energy-weighted sum rules for Fermi and Gamow-Teller transitions are also checked numerically. The sum rules are exhausted by more than 94% in these cases. (orig.)

  8. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    Keywords. Ramanujan expansions; average order; error terms; sum-of-divisors functions; Jordan's totient functions. 2010 Mathematics Subject Classification. 11N37, 11A25, 11K65. 1. Introduction. The theory of Ramanujan sums and Ramanujan expansions has emerged from the seminal article [10] of Ramanujan. In 1918 ...

  9. Light cone sum rules in nonabelian gauge field theory

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1981-03-24

    The author examines, in the context of nonabelian gauge field theory, the derivation of the light cone sum rules which were obtained earlier on the assumption of dominance of canonical singularity in the current commutator on the light cone. The retarded scaling functions appearing in the sum rules are numbers known in terms of the charges of the quarks and the number of quarks and gluons in the theory. Possible applications of the sum rules are suggested.

  10. Light cone sum rules for single-pion electroproduction

    International Nuclear Information System (INIS)

    Mallik, S.

    1978-01-01

    Light cone dispersion sum rules (of low energy and superconvergence types) are derived for nucleon matrix elements of the commutator involving electromagnetic and divergence of axial vector currents. The superconvergence type sum rules in the fixed mass limit are rewritten without requiring the knowledge of Regge subtractions. The retarded scaling functions occurring in these sum rules are evaluated within the framework of quark light cone algebra of currents. Besides a general consistency check of the framework underlying the derivation, the author infers, on the basis of crude evaluation of scaling functions, an upper limit of 100 MeV for the bare mass of nonstrange quarks. (Auth.)

  11. Spectral sum rules for the three-body problem

    International Nuclear Information System (INIS)

    Bolle, D.; Osborn, T.A.

    1982-01-01

    This paper derives a number of sum rules for nonrelativistic three-body scattering. These rules are valid for any finite region μ in the six-dimensional coordinate space. They relate energy moments of the trace of the on-shell time-delay operator to the energy-weighted probability for finding the three-body bound-state wave functions in the region μ. If μ is all of the six-dimensional space, the global form of the sum rules is obtained. In this form the rules constitute higher-order Levinson's theorems for the three-body problem. Finally, the sum rules are extended to allow the energy moments to have complex powers.

  12. Moessbauer sum rules for use with synchrotron sources

    International Nuclear Information System (INIS)

    Lipkin, Harry J.

    1999-01-01

    The availability of tunable synchrotron radiation sources with millivolt resolution has opened new prospects for exploring dynamics of complex systems with Moessbauer spectroscopy. Early Moessbauer treatments and moment sum rules are extended to treat inelastic excitations measured in synchrotron experiments, with emphasis on the unique new conditions absent in neutron scattering and arising in resonance scattering: prompt absorption, delayed emission, recoil-free transitions and coherent forward scattering. The first moment sum rule normalizes the inelastic spectrum. New sum rules obtained for higher moments include the third moment, proportional to the second derivative of the potential acting on the Moessbauer nucleus and independent of temperature in the harmonic approximation.

  13. Faraday effect revisited: sum rules and convergence issues

    DEFF Research Database (Denmark)

    Cornean, Horia; Nenciu, Gheorghe

    2010-01-01

    This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...

  14. About the cumulants of periodic signals

    Science.gov (United States)

    Barrau, Axel; El Badaoui, Mohammed

    2018-01-01

    This note studies cumulants of time series. These functions originating from the probability theory being commonly used as features of deterministic signals, their classical properties are examined in this modified framework. We show additivity of cumulants, ensured in the case of independent random variables, requires here a different hypothesis. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate some specific periodic signals.
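
    For reference, a small sketch of how the first four cumulants can be estimated from the central moments of a sampled signal; these are the naive (biased) sample estimators, and the function name is illustrative.

```python
import numpy as np

def first_four_cumulants(x):
    """Sample cumulants from central moments:
    k1 = mean, k2 = m2, k3 = m3, k4 = m4 - 3 * m2**2."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return mu, m2, m3, m4 - 3.0 * m2**2
```

    For independent random variables these quantities are additive; as the note above points out, carrying that additivity over to deterministic periodic signals requires a different hypothesis.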

  15. On the fluctuations of sums of independent random variables.

    Science.gov (United States)

    Feller, W

    1969-07-01

    If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)² = E(S(n)²). The occasional maxima of the ratios S(n)/s(n) are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±εa(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.

  16. Statistical sum of bosonic string, compactified on an orbifold

    International Nuclear Information System (INIS)

    Morozov, A.; Ol'shanetskij, M.

    1986-01-01

    An expression for the statistical sum of a bosonic string, compactified on a singular orbifold, is presented. All the information about the orbifold is encoded in the specific combination of theta-functions through which the statistical sum is expressed.

  17. Cumulative Student Loan Debt in Minnesota, 2015

    Science.gov (United States)

    Williams-Wyche, Shaun

    2016-01-01

    To better understand student debt in Minnesota, the Minnesota Office of Higher Education (the Office) gathers information on cumulative student loan debt from Minnesota degree-granting institutions. These data detail the number of students with loans by institution, the cumulative student loan debt incurred at that institution, and the percentage…

  18. Efficient yellow beam generation by intracavity sum frequency ...

    Indian Academy of Sciences (India)

    2014-02-06

    Feb 6, 2014 ... We present our studies on dual wavelength operation using a single Nd:YVO4 crystal and its intracavity sum frequency generation by considering the influence of the thermal lensing effect on the performance of the laser. A KTP crystal cut for type-II phase matching was used for intracavity sum frequency ...

  19. A set of sums for continuous dual q-2-Hahn polynomials

    International Nuclear Information System (INIS)

    Gade, R. M.

    2009-01-01

    An infinite set {τ^(l)(y; r, z)}, r, l ∈ N₀, of linear sums of continuous dual q⁻²-Hahn polynomials with prefactors depending on a complex parameter z is studied. The sums τ^(l)(y; r, z) have an interpretation in context with tensor product representations of the quantum affine algebra U_q'(sl(2)) involving both a positive and a negative discrete series representation. For each l > 0, the sum τ^(l)(y; r, z) can be expressed in terms of the sum τ^(0)(y; r, z), continuous dual q²-Hahn polynomials, and their associated polynomials. The sum τ^(0)(y; r, z) is obtained as a combination of eight basic hypergeometric series. Moreover, an integral representation is provided for the sums τ^(l)(y; r, z) with the complex parameter restricted by |zq| … continuous dual q⁻²-Hahn polynomials.

  20. Derivation of sum rules for quark and baryon fields

    International Nuclear Information System (INIS)

    Bongardt, K.

    1978-01-01

    In an analogous way to the Weinberg sum rules, two spectral-function sum rules for quark and baryon fields are derived by means of the concept of lightlike charges. The baryon sum rules are valid for the case of SU(3) as well as for SU(4), and the one-particle approximation yields a linear mass relation. This relation is not in disagreement with the normal linear GMO formula for the baryons. The calculated masses of the first resonance states agree very well with the experimental data.

  1. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
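
    The following is only a simplified illustration of regression in a sum space of RKHSs, implemented as kernel ridge regression with a kernel that is the sum of a large-scale and a small-scale Gaussian kernel; it is not the authors' algorithm, and the scales and regularization constant are assumed values.

```python
import numpy as np

def gaussian_kernel(a, b, scale):
    """Gram matrix of a Gaussian kernel between 1-D sample vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * scale**2))

def fit_sum_kernel_ridge(x, y, scales=(2.0, 0.2), reg=1e-2):
    """Kernel ridge regression with a sum of Gaussian kernels, so that the
    large-scale kernel captures the low-frequency part of the target and
    the small-scale kernel the high-frequency part."""
    K = sum(gaussian_kernel(x, x, s) for s in scales)
    alpha = np.linalg.solve(K + reg * np.eye(len(x)), y)
    def predict(x_new):
        return sum(gaussian_kernel(x_new, x, s) for s in scales) @ alpha
    return predict

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.2 * np.sin(15.0 * x)      # low- plus high-frequency target
predict = fit_sum_kernel_ridge(x, y)
y_hat = predict(x)
```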

  2. On the Computation of Correctly Rounded Sums

    DEFF Research Database (Denmark)

    Kornerup, Peter; Lefevre, Vincent; Louvet, Nicolas

    2012-01-01

    This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of the algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker's Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also...
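
    For context, here is a sketch of the two error-free transformations the record refers to, written in Python for illustration (the algorithms themselves concern hardware radix-2 floating-point additions); the example values are arbitrary.

```python
def two_sum(a, b):
    """Knuth's 2Sum: returns (s, t) with s = fl(a + b) and a + b = s + t
    exactly, with no assumption on the relative magnitudes of a and b."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_roundoff = b - b_virtual
    a_roundoff = a - a_virtual
    return s, a_roundoff + b_roundoff

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: same result, but requires |a| >= |b|."""
    s = a + b
    t = b - (s - a)
    return s, t

s, t = two_sum(1e16, 1.0)   # s == 1e16, t == 1.0: the rounding error is recovered
```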

  3. 29 CFR 4044.75 - Other lump sum benefits.

    Science.gov (United States)

    2010-07-01

    ... sum benefits. The value of a lump sum benefit which is not covered under § 4044.73 or § 4044.74 is equal to— (a) The value under the qualifying bid, if an insurer provides the benefit; or (b) The present value of the benefit as of the date of distribution, determined using reasonable actuarial assumptions...

  4. New QCD sum rules for nucleon axial-vector coupling constants

    International Nuclear Information System (INIS)

    Lee, F.X.; Leinweber, D.B.; Jin, X.

    1997-01-01

    Two new sets of QCD sum rules for the nucleon axial-vector coupling constants are derived using the external-field technique and generalized interpolating fields. An in-depth study of the predictive ability of these sum rules is carried out using a Monte Carlo based uncertainty analysis. The results show that the standard implementation of the QCD sum rule method has only marginal predictive power for the nucleon axial-vector coupling constants, as the relative errors are large. The errors range from approximately 50% to 100%, compared to the nucleon mass obtained from the same method, which has only a 10%-25% error. The origin of the large errors is examined. Previous analyses of these coupling constants are based on sum rules that have poor operator product expansion convergence and large continuum contributions. Preferred sum rules are identified and their predictions are obtained. We also investigate the new sum rules with an alternative treatment of the problematic transitions which are not exponentially suppressed in the standard treatment. The alternative treatment provides exponential suppression of their contributions relative to the ground state. Implications for other nucleon current matrix elements are also discussed. copyright 1997 The American Physical Society

  5. High cumulants of conserved charges and their statistical uncertainties

    Science.gov (United States)

    Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu

    2017-10-01

    We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)

  6. On the Laplace transform of the Weinberg type sum rules

    International Nuclear Information System (INIS)

    Narison, S.

    1981-09-01

    We consider the Laplace transform of various sum rules of the Weinberg type, including the leading non-perturbative effects. We show from the third type of Weinberg sum rules that the A_1 coupling to the W boson lies in the range 7.5 to 8.9, while the second sum rule gives an upper bound on the A_1 mass (M_{A_1} ≲ 1.25 GeV). (author)

  7. Proximinality in generalized direct sums

    Directory of Open Access Journals (Sweden)

    Darapaneni Narayana

    2004-01-01

    We consider proximinality and transitivity of proximinality for subspaces of finite codimension in generalized direct sums of Banach spaces. We give several examples of Banach spaces where proximinality is transitive among subspaces of finite codimension.

  8. Measurement sum theory and application - Application to low level measurements

    International Nuclear Information System (INIS)

    Puydarrieux, S.; Bruel, V.; Rivier, C.; Crozet, M.; Vivier, A.; Manificat, G.; Thaurel, B.; Mokili, M.; Philippot, B.; Bohaud, E.

    2015-09-01

    In laboratories, most of the Total Sum methods implemented today use substitution or censoring methods for non-significant or negative values, and thus create biases which can sometimes be quite large. These biases are usually positive and generate, for example, 'administrative' (i.e. virtual) becquerels or quantities of material, artificially distorting the records kept by laboratories under regulatory requirements (environmental release records, waste records, etc.). This document suggests a methodology which enables the user to avoid such biases. It is based on the following two fundamental rules: - The Total Sum of measurement values must be established from all the individual measurement values, even those considered non-significant, including the negative values. Any modification of these values, under the pretext that they are not significant, will inevitably lead to biases in the accumulated result and falsify the evaluation of its uncertainty. - In Total Sum operations, the decision thresholds are arrived at in a similar way to the approach used for uncertainties. The document deals with four essential aspects of the notion of 'measurement Total Sums': - the expression of results and associated uncertainties close to Decision Thresholds and Detection or Quantification Limits, - the Total Sum of these measurements: sum or mean, - the calculation of the uncertainties associated with the Total Sums, - result presentation (particularly when preparing balance sheets or reports, etc.). Several case studies arising from different situations are used to illustrate the methodology: environmental monitoring reports, release reports, and chemical impurity Total Sums for the qualification of a finished product. The special case of material balances, in which the measurements are usually all significant and correlated (the covariance term cannot then be ignored), will be the subject of a future second document.
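
    To make the first rule above concrete, the short sketch below contrasts a total built from all raw measurement values (negative ones included) with a total in which negative values are replaced by zero, the kind of substitution the record warns against; the blank-sample scenario and the numbers are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        measurements = rng.normal(0.0, 1.0, size=1000)   # a blank: true activity is zero, results scatter around it

        full_sum = measurements.sum()                                          # negatives kept: unbiased
        censored_sum = np.where(measurements < 0.0, 0.0, measurements).sum()   # negatives set to zero: biased high

        u_sum = np.sqrt(measurements.size) * 1.0   # combined standard uncertainty of the sum (independent results, sigma = 1)

        print(f"full sum     = {full_sum:8.1f}   (consistent with 0 within about {u_sum:.0f})")
        print(f"censored sum = {censored_sum:8.1f}   (positively biased)")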

  9. Cumulative stress and autonomic dysregulation in a community sample.

    Science.gov (United States)

    Lampert, Rachel; Tuit, Keri; Hong, Kwang-Ik; Donovan, Theresa; Lee, Forrester; Sinha, Rajita

    2016-05-01

    Whether cumulative stress, including both chronic stress and adverse life events, is associated with decreased heart rate variability (HRV), a non-invasive measure of autonomic status which predicts poor cardiovascular outcomes, is unknown. Healthy community dwelling volunteers (N = 157, mean age 29 years) participated in the Cumulative Stress/Adversity Interview (CAI), a 140-item event interview measuring cumulative adversity including major life events, life trauma, recent life events and chronic stressors, and underwent 24-h ambulatory ECG monitoring. HRV was analyzed in the frequency domain and the standard deviation of NN intervals (SDNN) calculated. Initial simple regression analyses revealed that total cumulative stress score, chronic stressors and cumulative adverse life events (CALE) were all inversely associated with ultra low-frequency (ULF), very low-frequency (VLF) and low-frequency (LF) power and SDNN (all p …), with … accounting for additional appreciable variance. For VLF and LF, both total cumulative stress and chronic stress significantly contributed to the variance alone but were no longer significant after adjusting for race and health behaviors. In summary, total cumulative stress and its components of adverse life events and chronic stress were associated with decreased cardiac autonomic function as measured by HRV. Findings suggest one potential mechanism by which stress may exert adverse effects on mortality in healthy individuals. Primary preventive strategies including stress management may prove beneficial.

  10. Integrals of Lagrange functions and sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Baye, Daniel, E-mail: dbaye@ulb.ac.be [Physique Quantique, CP 165/82, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium); Physique Nucleaire Theorique et Physique Mathematique, CP 229, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium)

    2011-09-30

    Exact values are derived for some matrix elements of Lagrange functions, i.e. orthonormal cardinal functions, constructed from orthogonal polynomials. They are obtained with exact Gauss quadratures supplemented by corrections. In the particular case of Lagrange-Laguerre and shifted Lagrange-Jacobi functions, sum rules provide exact values for matrix elements of 1/x and 1/x² as well as for the kinetic energy. From these expressions, new sum rules involving Laguerre and shifted Jacobi zeros and weights are derived. (paper)

  11. Cumulative processes and quark distribution in nuclei

    International Nuclear Information System (INIS)

    Kondratyuk, L.; Shmatikov, M.

    1984-01-01

    Assuming the existence of multiquark (mainly 12q) bags in nuclei, the spectra of cumulative nucleons and mesons produced in high-energy particle-nucleus collisions are discussed. The exponential form of the quark momentum distribution in the 12q-bag (agreeing well with the experimental data on lepton-nucleus interactions at large q²) is shown to result in a quasi-exponential distribution of cumulative particles over the light-cone variable α_B. The dependence of f(α_B; p⊥) (where p⊥ is the transverse momentum of the bag) upon p⊥ is considered. The yields of cumulative resonances, as well as effects related to the difference between the u- and d-quark distributions in N > Z nuclei, are discussed.

  12. Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2005-01-01

    cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling

  13. Decision analysis with cumulative prospect theory.

    Science.gov (United States)

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
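
    The probability transformations discussed in this record are usually implemented through a weighting function applied to cumulative (rank-ordered) probabilities. A minimal sketch follows; it uses the Tversky-Kahneman one-parameter weighting function as a stand-in for whatever transformation the authors applied, and the outcomes, probabilities and gamma value are purely illustrative.

        def weight(p, gamma=0.61):
            # Tversky-Kahneman probability weighting: overweights small p, underweights large p.
            return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

        def cpt_value(outcomes, probs, gamma=0.61):
            # Rank-dependent decision weights for gains: differences of weighted cumulative
            # probabilities, with outcomes ranked from best to worst.
            ranked = sorted(zip(outcomes, probs), key=lambda t: -t[0])
            value, cum = 0.0, 0.0
            for x, p in ranked:
                w = weight(cum + p, gamma) - weight(cum, gamma)
                value += w * x
                cum += p
            return value

        # Quality-adjusted survival (years) under two hypothetical strategies.
        print(cpt_value([10, 4, 0], [0.2, 0.5, 0.3]))        # prophylaxis
        print(cpt_value([12, 3, 0], [0.15, 0.5, 0.35]))      # no prophylaxis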

  14. Luttinger and Hubbard sum rules: are they compatible?

    International Nuclear Information System (INIS)

    Matho, K.

    1992-01-01

    A so-called Hubbard sum rule determines the weight of a satellite in fermionic single-particle excitations with strong local repulsion (U→∞). Together with the Luttinger sum rule, this imposes two different energy scales on the remaining finite excitations. In the Hubbard chain, this has been identified microscopically as being due to a separation of spin and charge. (orig.)

  15. A Shuttle Upper Atmosphere Mass Spectrometer /SUMS/ experiment

    Science.gov (United States)

    Blanchard, R. C.; Duckett, R. J.; Hinson, E. W.

    1982-01-01

    A magnetic mass spectrometer is currently being adapted to the Space Shuttle Orbiter to provide repeated high altitude atmosphere data to support in situ rarefied-flow aerodynamics research, i.e., in the high velocity, low density flight regime. The experiment, called Shuttle Upper Atmosphere Mass Spectrometer (SUMS), is the first attempt to design mass spectrometer equipment for flight vehicle aerodynamic data extraction. The SUMS experiment will provide total freestream atmospheric quantities, principally total mass density, above altitudes at which conventional pressure measurements are valid. Experiment concepts, the expected flight profile, tradeoffs in the design of the total system and flight data reduction plans are discussed. Development plans are based upon a SUMS first flight after the Orbiter initial development flights.

  16. Use of exp(iS[x]) in the sum over histories

    International Nuclear Information System (INIS)

    Anderson, A.

    1994-01-01

    The use of ∑ exp(iS[x]) as the generic form for a sum over histories in configuration space is discussed critically and placed in its proper context. The standard derivation of the sum over paths by discretizing the paths is reviewed, and it is shown that the form ∑ exp(iS[x]) is justified only for Schroedinger-type systems which are at most second order in the momenta. Extending this derivation to the relativistic free particle, the causal Green's function is expressed as a sum over timelike paths, and the Feynman Green's function is expressed both as a sum over paths which only go one way in time and as a sum over paths which move forward and backward in time. The weighting of the paths is shown not to be exp(iS[x]) in any of these cases. The role of the inner product and the operator ordering of the wave equation in defining the sum over histories is discussed

  17. Inclusive sum rules and spectra of neutrons at the ISR

    International Nuclear Information System (INIS)

    Grigoryan, A.A.

    1975-01-01

    Neutron spectra in pp collisions at ISR energies are studied in the framework of sum rules for inclusive processes. The contributions of protons, π- and E- mesons to the energy sum rule are calculated at √s = 53 GeV. It is shown by means of this sum rule that the spectra of neutrons at the ISR are in contradiction with the spectra of other particles also measured at the ISR

  18. Energy-weighted sum rules for mesons in hot and dense matter

    NARCIS (Netherlands)

    Cabrera, D.; Polls, A.; Ramos, A.; Tolos Rigueiro, Laura

    2009-01-01

    We study energy-weighted sum rules of the pion and kaon propagator in nuclear matter at finite temperature. The sum rules are obtained from matching the Dyson form of the meson propagator with its spectral Lehmann representation at low and high energies. We calculate the sum rules for specific

  19. Original and cumulative prospect theory: a discussion of empirical differences

    NARCIS (Netherlands)

    Wakker, P.P.; Fennema, H.

    1997-01-01

    This note discusses differences between prospect theory and cumulative prospect theory. It shows that cumulative prospect theory is not merely a formal correction of some theoretical problems in prospect theory, but it also gives different predictions. Experiments are described that favor cumulative prospect theory.

  20. Standardization of I-125. Sum-Peak Coincidence Counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    2011-01-01

    I-125 is a nuclide which presents difficulties for standardization. The sum-peak method is one of the procedures used to standardize this radionuclide. Initially NaI(Tl) detectors and then semiconductor detectors with higher resolution have been used. This paper describes the different methods based on the sum-peak procedure, and the different expressions used to calculate the activity are deduced. We describe a general procedure for obtaining all of the above equations and many more. We analyze the influence of uncertainties in the parameters used on the uncertainty of the activity. We give a complete example of the transmission of uncertainty and the effects of correlations on the uncertainty of the activity of the sample. High-resolution spectra show an unresolved doublet of 62.0 keV and 62.8 keV. The paper presents two approaches to solve this problem. One is based on the calculation of the area ratio and the sum of peak areas obtained from atomic and nuclear data; in the other we modify the equations so that the sum of the peak areas of the doublet, rather than its components, is present. (Author) 19 refs.
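
    For orientation, the simplest sum-peak estimate can be written in one line. The sketch below uses the classical relation often attributed to Eldridge and Crowther, A = N_T + N1*N2/N12, where N1 and N2 are the two photopeak count rates, N12 the sum-peak rate and N_T the total count rate; the expressions in the record are more general, and the count rates here are invented.

        def sum_peak_activity(n1, n2, n12, n_total):
            # Classical sum-peak estimate: A = N_T + N1*N2/N12.
            # Assumes two coincident photons, no angular correlation and negligible dead time.
            return n_total + n1 * n2 / n12

        # Invented count rates (per second), for illustration only.
        print(sum_peak_activity(n1=1200.0, n2=950.0, n12=310.0, n_total=2600.0))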

  1. Standardization of I-125. Sum-Peak Coincidence Counting

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    2011-07-01

    I-125 is a nuclide which presents difficulties for standardization. The sum-peak method is one of the procedures used to standardize this radionuclide. Initially NaI(Tl) detectors and then semiconductor detectors with higher resolution have been used. This paper describes the different methods based on the sum-peak procedure, and the different expressions used to calculate the activity are deduced. We describe a general procedure for obtaining all of the above equations and many more. We analyze the influence of uncertainties in the parameters used on the uncertainty of the activity. We give a complete example of the transmission of uncertainty and the effects of correlations on the uncertainty of the activity of the sample. High-resolution spectra show an unresolved doublet of 62.0 keV and 62.8 keV. The paper presents two approaches to solve this problem. One is based on the calculation of the area ratio and the sum of peak areas obtained from atomic and nuclear data; in the other we modify the equations so that the sum of the peak areas of the doublet, rather than its components, is present. (Author) 19 refs.

  2. Perspectives on cumulative risks and impacts.

    Science.gov (United States)

    Faust, John B

    2010-01-01

    Cumulative risks and impacts have taken on different meanings in different regulatory and programmatic contexts at federal and state government levels. Traditional risk assessment methodologies, with considerable limitations, can provide a framework for the evaluation of cumulative risks from chemicals. Under an environmental justice program in California, cumulative impacts are defined to include exposures, public health effects, or environmental effects in a geographic area from the emission or discharge of environmental pollution from all sources, through all media. Furthermore, the evaluation of these effects should take into account sensitive populations and socioeconomic factors where possible and to the extent data are available. Key aspects to this potential approach include the consideration of exposures (versus risk), socioeconomic factors, the geographic or community-level assessment scale, and the inclusion of not only health effects but also environmental effects as contributors to impact. Assessments of this type extend the boundaries of the types of information that toxicologists generally provide for risk management decisions.

  3. Cumulative particle production in the quark recombination model

    International Nuclear Information System (INIS)

    Gavrilov, V.B.; Leksin, G.A.

    1987-01-01

    Production of cumulative particles in hadron-nucleus interactions at high energies is considered within the framework of the recombination quark model. Predictions are made for the inclusive cross sections of production of cumulative particles and of different resonances containing quarks in the s state

  4. Light-cone sum rules: A SCET-based formulation

    CERN Document Server

    De Fazio, F; Hurth, Tobias; Feldmann, Th.

    2007-01-01

    We describe the construction of light-cone sum rules (LCSRs) for exclusive $B$-meson decays into light energetic hadrons from correlation functions within soft-collinear effective theory (SCET). As an example, we consider the SCET sum rule for the $B \to \pi$ transition form factor at large recoil, including radiative corrections from hard-collinear loop diagrams at first order in the strong coupling constant.

  5. Complex-energy approach to sum rules within nuclear density functional theory

    Science.gov (United States)

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; Olsen, Erik

    2015-04-01

    Background: The linear response of the nucleus to an external field contains unique information about the effective interaction, the correlations governing the behavior of the many-body system, and the properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or the nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. Purpose: To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation (QRPA). Methods: To compute sum rules, we carry out contour integration of the response function in the complex-energy plane. We benchmark our results against the conventional matrix formulation of the QRPA theory, the Thouless theorem for the energy-weighted sum rule, and the dielectric theorem for the inverse-energy-weighted sum rule. Results: We derive the sum-rule expressions from the contour integration of the complex-energy FAM. We demonstrate that calculated sum-rule values agree with those obtained from the matrix formulation of the QRPA. We also discuss the applicability of both the Thouless theorem about the energy-weighted sum rule and the dielectric theorem for the inverse-energy-weighted sum rule to nuclear density functional theory in cases when the EDF is not based on a Hamiltonian. Conclusions: The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method

  6. Cumulative radiation exposure in children with cystic fibrosis.

    LENUS (Irish Health Repository)

    O'Reilly, R

    2010-02-01

    This retrospective study calculated the cumulative radiation dose for children with cystic fibrosis (CF) attending a tertiary CF centre. Information on 77 children with a mean age of 9.5 years, a follow up time of 658 person years and 1757 studies including 1485 chest radiographs, 215 abdominal radiographs and 57 computed tomography (CT) scans, of which 51 were thoracic CT scans, were analysed. The average cumulative radiation dose was 6.2 (0.04-25) mSv per CF patient. Cumulative radiation dose increased with increasing age and number of CT scans and was greater in children who presented with meconium ileus. No correlation was identified between cumulative radiation dose and either lung function or patient microbiology cultures. Radiation carries a risk of malignancy and children are particularly susceptible. Every effort must be made to avoid unnecessary radiation exposure in these patients whose life expectancy is increasing.

  7. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
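
    The upper and lower iterative value functions described above can be illustrated on a tiny finite problem. The sketch below runs min-max and max-min value iteration for a made-up discrete-time system with a handful of states and actions; it is not the authors' ADP algorithm, just a compact picture of upper/lower iterations.

        import itertools
        import numpy as np

        # Toy zero-sum dynamic game: states 0..4, control u and disturbance w in {-1, 0, +1}.
        states = range(5)
        actions = (-1, 0, 1)
        gamma = 0.9

        def step(x, u, w):
            return min(4, max(0, x + u + w))          # next state, clipped to the state set

        def cost(x, u, w):
            return x**2 + u**2 - 0.5 * w**2           # controller minimizes, disturbance maximizes

        def iterate(V, outer_min):
            Vn = np.zeros_like(V)
            for x in states:
                q = {(u, w): cost(x, u, w) + gamma * V[step(x, u, w)]
                     for u, w in itertools.product(actions, actions)}
                if outer_min:   # upper value iteration: min over u of max over w
                    Vn[x] = min(max(q[(u, w)] for w in actions) for u in actions)
                else:           # lower value iteration: max over w of min over u
                    Vn[x] = max(min(q[(u, w)] for u in actions) for w in actions)
            return Vn

        V_up, V_lo = np.zeros(5), np.zeros(5)
        for _ in range(200):
            V_up, V_lo = iterate(V_up, True), iterate(V_lo, False)

        print(V_up)   # upper iterative value function
        print(V_lo)   # lower value; the two coincide when a saddle-point equilibrium exists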

  8. The Subset Sum game.

    Science.gov (United States)

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-03-16

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
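
    The turn-based selection process described in this record is easy to simulate. The sketch below plays the game with both agents following a greedy rule (always propose your largest remaining item that still fits); the rule, item weights and capacity are illustrative choices, not taken from the paper.

        def subset_sum_game(items_a, items_b, capacity):
            # Agents alternate turns; each greedily picks its largest remaining item that still fits.
            remaining = {"A": sorted(items_a, reverse=True), "B": sorted(items_b, reverse=True)}
            packed = {"A": 0, "B": 0}
            turn, free = "A", capacity
            while True:
                choice = next((w for w in remaining[turn] if w <= free), None)
                if choice is None and all(w > free for side in remaining.values() for w in side):
                    break                      # no item of either agent fits any more
                if choice is not None:
                    remaining[turn].remove(choice)
                    packed[turn] += choice
                    free -= choice
                turn = "B" if turn == "A" else "A"
            return packed, free

        print(subset_sum_game([8, 5, 4, 1], [7, 6, 2], capacity=20))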

  9. Sums of Generalized Harmonic Series

    Indian Academy of Sciences (India)

    Sums of Generalized Harmonic Series: For Kids from Five to Fifteen. Zurab Silagadze. General Article, Resonance – Journal of Science Education, Volume 20, Issue 9, September 2015, pp 822-843.

  10. 3He electron scattering sum rules

    International Nuclear Information System (INIS)

    Kim, Y.E.; Tornow, V.

    1982-01-01

    Electron scattering sum rules for ³He are derived with a realistic ground-state wave function. The theoretical results are compared with the experimentally measured integrated cross sections. (author)

  11. Measurement of four-particle cumulants and symmetric cumulants with subevent methods in small collision systems with the ATLAS detector

    CERN Document Server

    Derendarz, Dominik; The ATLAS collaboration

    2018-01-01

    Measurements of symmetric cumulants SC(n,m) = ⟨v_n²v_m²⟩ − ⟨v_n²⟩⟨v_m²⟩ for (n,m) = (2,3) and (2,4) and asymmetric cumulant AC(n) are presented in pp, p+Pb and peripheral Pb+Pb collisions at various collision energies, aiming to probe the long-range collective nature of multi-particle production in small systems. Results are obtained using the standard cumulant method, as well as the two-subevent and three-subevent cumulant methods. Results from the standard method are found to be strongly biased by non-flow correlations as indicated by strong sensitivity to the chosen event class definition. A systematic reduction of non-flow effects is observed when using the two-subevent method and the results become independent of event class definition when the three-subevent method is used. The measured SC(n,m) shows an anti-correlation between v_2 and v_3, and a positive correlation between v_2 and v_4. The magnitude of SC(n,m) is constant with N_ch in pp collisions, but increases with N_ch in p+Pb and Pb+Pb collisions. ...

  12. A toolbox for Harmonic Sums and their analytic continuations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [RISC, J. Kepler University, Linz (Austria); Bluemlein, Johannes [DESY, Zeuthen (Germany)

    2010-07-01

    The package HarmonicSums, implemented in the computer algebra system Mathematica, is presented. It supports higher-loop calculations in QCD and QED to represent single-scale quantities like anomalous dimensions and Wilson coefficients. The package allows one to reduce general harmonic sums using their algebraic and structural relations. We provide a general framework for these reductions and the explicit representations up to weight w=8. For use in experimental analyses we also provide an analytic formalism to continue the harmonic sums from their integer arguments into the complex plane, which includes their recursions and asymptotic representations. The main ideas are illustrated by specific examples.

  13. Online Scheduling in Manufacturing A Cumulative Delay Approach

    CERN Document Server

    Suwa, Haruhiko

    2013-01-01

    Online scheduling is recognized as the crucial decision-making process of production control at a phase of “being in production" according to the released shop floor schedule. Online scheduling can be also considered as one of key enablers to realize prompt capable-to-promise as well as available-to-promise to customers along with reducing production lead times under recent globalized competitive markets. Online Scheduling in Manufacturing introduces new approaches to online scheduling based on a concept of cumulative delay. The cumulative delay is regarded as consolidated information of uncertainties under a dynamic environment in manufacturing and can be collected constantly without much effort at any points in time during a schedule execution. In this approach, the cumulative delay of the schedule has the important role of a criterion for making a decision whether or not a schedule revision is carried out. The cumulative delay approach to trigger schedule revisions has the following capabilities for the ...

  14. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m²·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
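
    For k = 1 and one dimension this is the classic maximum-sum (maximum subarray) problem, solvable in O(n) time by Kadane's algorithm; the record's O(n + k) algorithm for general k is considerably more involved. The sketch below shows only the k = 1 baseline.

        def max_sum_subvector(a):
            # Kadane's algorithm: best sum of a non-empty contiguous sub-vector, O(n) time, O(1) extra space.
            best = cur = a[0]
            start = best_start = best_end = 0
            for i in range(1, len(a)):
                if cur < 0:
                    cur, start = 0, i
                cur += a[i]
                if cur > best:
                    best, best_start, best_end = cur, start, i
            return best, (best_start, best_end)

        print(max_sum_subvector([2, -4, 3, -1, 5, -9, 4]))   # (7, (2, 4)) -> sub-vector [3, -1, 5]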

  15. Sum Rules, Classical and Quantum - A Pedagogical Approach

    Science.gov (United States)

    Karstens, William; Smith, David Y.

    2014-03-01

    Sum rules in the form of integrals over the response of a system to an external probe provide general analytical tools for both experiment and theory. For example, the celebrated f-sum rule gives a system's plasma frequency as an integral over the optical-dipole absorption spectrum regardless of the specific spectral distribution. Moreover, this rule underlies Smakula's equation for the number density of absorbers in a sample in terms of the area under their absorption bands. Commonly such rules are derived from quantum-mechanical commutation relations, but many are fundamentally classical (independent of ℏ) and so can be derived from more transparent mechanical models. We have exploited this to illustrate the fundamental role of inertia in the case of optical sum rules. Similar considerations apply to sum rules in many other branches of physics. Thus, the ``attenuation integral theorems'' of ac circuit theory reflect the ``inertial'' effect of Lenz's Law in inductors or the potential energy ``storage'' in capacitors. These considerations are closely related to the fact that the real and imaginary parts of a response function cannot be specified independently, a result that is encapsulated in the Kramers-Kronig relations. Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.

  16. Counter-ions at single charged wall: Sum rules.

    Science.gov (United States)

    Samaj, Ladislav

    2013-09-01

    For inhomogeneous classical Coulomb fluids in thermal equilibrium, like the jellium or the two-component Coulomb gas, there exists a variety of exact sum rules which relate the particle one-body and two-body densities. The necessary condition for these sum rules is that the Coulomb fluid possesses good screening properties, i.e. the particle correlation functions or the averaged charge inhomogeneity, say close to a wall, exhibit a short-range (usually exponential) decay. In this work, we study equilibrium statistical mechanics of an electric double layer with counter-ions only, i.e. a globally neutral system of equally charged point-like particles in the vicinity of a plain hard wall carrying a fixed uniform surface charge density of opposite sign. At large distances from the wall, the one-body and two-body counter-ion densities go to zero slowly according to the inverse-power law. In spite of the absence of screening, all known sum rules are shown to hold for two exactly solvable cases of the present system: in the weak-coupling Poisson-Boltzmann limit (in any spatial dimension larger than one) and at a special free-fermion coupling constant in two dimensions. This fact indicates an extended validity of the sum rules and provides a consistency check for reasonable theoretical approaches.

  17. Moessbauer sum rules for use with synchrotron sources

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1995-01-01

    The availability of tunable synchrotron radiation sources with millivolt resolution has opened prospects for exploring the dynamics of complex systems with Moessbauer spectroscopy. Early Moessbauer treatments and moment sum rules are extended to treat inelastic excitations measured in synchrotron experiments, with emphasis on the unique conditions absent in neutron scattering and arising in resonance scattering: prompt absorption, delayed emission, recoil-free transitions, and coherent forward scattering. The first moment sum rule normalizes the inelastic spectrum. Sum rules obtained for higher moments include the third moment, proportional to the second derivative of the potential acting on the Moessbauer nucleus and independent of temperature in the harmonic approximation. Interesting information may be obtained on the behavior of the potential acting on this nucleus in samples not easily investigated with neutron scattering, e.g., small samples, thin films, time-dependent structures, and amorphous-metallic high pressure phases

  18. QCD sum rule for nucleon in nuclear matter

    International Nuclear Information System (INIS)

    Mallik, S.; Sarkar, Sourav

    2010-01-01

    We consider the two-point function of the nucleon current in nuclear matter and write a QCD sum rule to analyse the residue of the nucleon pole as a function of nuclear density. The nucleon self-energy needed for the sum rule is taken as input from calculations using a phenomenological NN potential. Our result shows a decrease in the residue with increasing nuclear density, as is known to be the case with similar quantities. (orig.)

  19. GDH sum rule measurement at low Q2

    International Nuclear Information System (INIS)

    Bianchi, N.

    1996-01-01

    The Gerasimov-Drell-Hearn (GDH) sum rule is based on a general dispersive relation for forward Compton scattering. Multipole analysis suggested a possible violation of the sum rule. Some propositions have been made to modify the original GDH expression. An effort is now being made in several laboratories to shed some light on this topic. The purposes of the different planned experiments are briefly presented according to their Q² range

  20. Structural relations of harmonic sums and Mellin transforms up to weight w=5

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2009-01-15

    We derive the structural relations between the Mellin transforms of weighted Nielsen integrals emerging in the calculation of massless or massive single-scale quantities in QED and QCD, such as anomalous dimensions and Wilson coefficients, and other hard scattering cross sections depending on a single scale. The set of all multiple harmonic sums up to weight five covers the sums needed in the calculation of the 3-loop anomalous dimensions. The relations extend the set resulting from the quasi-shuffle product between harmonic sums studied earlier. Unlike the shuffle relations, they depend on the value of the quantities considered. Up to weight w=5, 242 nested harmonic sums contribute. In the present physical applications it is sufficient to consider the subset of harmonic sums not containing an index i=-1, which consists of 69 sums. The algebraic relations reduce this set to 30 sums. Due to the structural relations, a final reduction of the number of harmonic sums to 15 basic functions is obtained. These functions can be represented in terms of factorial series, supplemented by harmonic sums which are algebraically reducible. Complete analytic representations are given for these 15 meromorphic functions in the complex plane, deriving their asymptotic and recursion relations. A general outline is presented on the way nested harmonic sums and multiple zeta values emerge in higher order calculations of zero- and single-scale quantities. (orig.)
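
    Nested harmonic sums of the kind counted in this record have a simple recursive definition: S_a(N) is the sum over i from 1 to N of (sign a)^i / i^|a| times S_rest(i). The sketch below is a direct, purely illustrative implementation of that definition and has nothing to do with the reduction machinery of the paper.

        from fractions import Fraction

        def harmonic_sum(indices, N):
            # Nested harmonic sum S_{a1,a2,...}(N); a negative index ai means an alternating sign (-1)^i.
            if not indices:
                return Fraction(1)
            a, rest = indices[0], indices[1:]
            total = Fraction(0)
            for i in range(1, N + 1):
                sign = -1 if (a < 0 and i % 2 == 1) else 1
                total += Fraction(sign, i ** abs(a)) * harmonic_sum(rest, i)
            return total

        print(harmonic_sum((1,), 10))        # ordinary harmonic number H_10
        print(harmonic_sum((2, 1), 10))      # weight-3 nested sum S_{2,1}(10)
        print(harmonic_sum((-1,), 10))       # alternating weight-1 sum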

  1. 338 Résumé

    African Journals Online (AJOL)

    ISONIC

    Cardisoma armatum is a species of land crab found in West Africa, in particular in ... Light-microscope observations following histological processing made it possible to identify several criteria for identification of the species and ... In Côte d'Ivoire it is not uncommon, during the favourable seasons, to see Cardisoma ...

  2. Evaluation of the convolution sum involving the sum of divisors function for 22, 44 and 52

    Directory of Open Access Journals (Sweden)

    Ntienjem Ebénézer

    2017-04-01

    The convolution sum $\sum_{\alpha l+\beta m=n} \sigma(l)\sigma(m)$, where αβ = 22, 44, 52, is evaluated for all natural numbers n. Modular forms are used to achieve these evaluations. Since the modular space of level 22 is contained in that of level 44, we almost completely use the basis elements of the modular space of level 44 to carry out the evaluation of the convolution sums for αβ = 22. We then use these convolution sums to determine formulae for the number of representations of a positive integer by the octonary quadratic forms $a\,(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2})+b\,(x_{5}^{2}+x_{6}^{2}+x_{7}^{2}+x_{8}^{2})$, where (a, b) = (1, 11), (1, 13).

  3. Improving cumulative effects assessment in Alberta: Regional strategic assessment

    International Nuclear Information System (INIS)

    Johnson, Dallas; Lalonde, Kim; McEachern, Menzie; Kenney, John; Mendoza, Gustavo; Buffin, Andrew; Rich, Kate

    2011-01-01

    The Government of Alberta, Canada is developing a regulatory framework to better manage cumulative environmental effects from development in the province. A key component of this effort is regional planning, which will lay the primary foundation for cumulative effects management into the future. Alberta Environment has considered the information needs of regional planning and has concluded that Regional Strategic Assessment may offer significant advantages if integrated into the planning process, including the overall improvement of cumulative environmental effects assessment in the province.

  4. A quality assurance initiative for commercial-scale production in high-throughput cryopreservation of blue catfish sperm.

    Science.gov (United States)

    Hu, E; Liao, T W; Tiersch, T R

    2013-10-01

    Cryopreservation of fish sperm has been studied for decades at a laboratory (research) scale. However, high-throughput cryopreservation of fish sperm has recently been developed to enable industrial-scale production. This study treated blue catfish (Ictalurus furcatus) sperm high-throughput cryopreservation as a manufacturing production line and initiated quality assurance plan development. The main objectives were to identify: (1) the main production quality characteristics; (2) the process features for quality assurance; (3) the internal quality characteristics and their specification designs; (4) the quality control and process capability evaluation methods, and (5) the directions for further improvements and applications. The essential product quality characteristics were identified as fertility-related characteristics. Specification design which established the tolerance levels according to demand and process constraints was performed based on these quality characteristics. Meanwhile, to ensure integrity throughout the process, internal quality characteristics (characteristics at each quality control point within process) that could affect fertility-related quality characteristics were defined with specifications. Due to the process feature of 100% inspection (quality inspection of every fish), a specific calculation method, use of cumulative sum (CUSUM) control charts, was applied to monitor each quality characteristic. An index of overall process evaluation, process capacity, was analyzed based on in-control process and the designed specifications, which further integrates the quality assurance plan. With the established quality assurance plan, the process could operate stably and quality of products would be reliable. Copyright © 2013 Elsevier Inc. All rights reserved.
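
    The cumulative sum (CUSUM) control charts mentioned in this record can be illustrated with the standard tabular CUSUM. The sketch below monitors a made-up quality characteristic (say, post-thaw motility per fish) against a target mean; the reference value k and decision interval h follow the usual textbook guidance and are not the study's actual parameters.

        import numpy as np

        def tabular_cusum(x, target, k, h):
            # One-sided upper and lower CUSUM statistics; an alarm is raised when either exceeds h.
            c_plus = c_minus = 0.0
            alarms = []
            for i, xi in enumerate(x):
                c_plus = max(0.0, xi - (target + k) + c_plus)
                c_minus = max(0.0, (target - k) - xi + c_minus)
                if c_plus > h or c_minus > h:
                    alarms.append(i)
            return alarms

        rng = np.random.default_rng(7)
        motility = np.concatenate([rng.normal(50, 5, 40),    # in-control fish
                                   rng.normal(44, 5, 20)])   # quality drifts downward
        # k is typically half the shift one wants to detect; h is about 4-5 standard deviations.
        print(tabular_cusum(motility, target=50.0, k=2.5, h=25.0))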

  5. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    Shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model. ... with two types of shocks: one type is failure shock, and the other type is damage ...

  6. Finding Sums for an Infinite Class of Alternating Series

    Science.gov (United States)

    Chen, Zhibo; Wei, Sheng; Xiao, Xuerong

    2012-01-01

    Calculus II students know that many alternating series are convergent by the Alternating Series Test. However, they know few alternating series (except geometric series and some trivial ones) for which they can find the sum. In this article, we present a method that enables the students to find sums for infinitely many alternating series in the…

  7. An abstract approach to some spectral problems of direct sum differential operators

    Directory of Open Access Journals (Sweden)

    Maksim S. Sokolov

    2003-07-01

    In this paper, we study the common spectral properties of abstract self-adjoint direct sum operators, considered in a direct sum Hilbert space. Applications of such operators arise in the modelling of processes of multi-particle quantum mechanics, quantum field theory and, specifically, in multi-interval boundary problems of differential equations. We show that a direct sum operator does not depend in a straightforward manner on the separate operators involved. That is, on having a set of self-adjoint operators giving a direct sum operator, we show how the spectral representation for this operator depends on the spectral representations for the individual operators (the coordinate operators) involved in forming this sum operator. In particular it is shown that this problem is not immediately solved by taking a direct sum of the spectral properties of the coordinate operators. Primarily, these results are to be applied to operators generated by a multi-interval quasi-differential system studied in the earlier works of Ashurov, Everitt, Gesztezy, Kirsch, Markus and Zettl. The abstract approach in this paper indicates the need for further development of spectral theory for direct sum differential operators.

  8. On QCD sum rules of the Laplace transform type and light quark masses

    International Nuclear Information System (INIS)

    Narison, S.

    1981-04-01

    We discuss the relation between the usual dispersion relation sum rules and the Laplace transform type sum rules in quantum chromodynamics. Two specific examples corresponding to the S-coupling constant sum rule and the light quark masses sum rules are considered. An interpretation, within QCD, of Leutwyler's formula for the current algebra quark masses is also given

  9. Cumulative risk on the oxytocin receptor gene (OXTR) underpins empathic communication difficulties at the first stages of romantic love.

    Science.gov (United States)

    Schneiderman, Inna; Kanat-Maymon, Yaniv; Ebstein, Richard P; Feldman, Ruth

    2014-10-01

    Empathic communication between couples plays an important role in relationship quality and individual well-being and research has pointed to the role of oxytocin in providing the neurobiological substrate for pair-bonding and empathy. Here, we examined links between genetic variability on the oxytocin receptor gene (OXTR) and empathic behaviour at the initiation of romantic love. Allelic variations on five OXTR single nucleotide polymorphisms (SNPs) previously associated with susceptibility to disorders of social functioning were genotyped in 120 new lovers: OXTRrs13316193, rs2254298, rs1042778, rs2268494 and rs2268490. Cumulative genetic risk was computed by summing risk alleles on each SNP. Couples were observed in support-giving interaction and behaviour was coded for empathic communication, including affective congruence, maintaining focus on partner, acknowledging partner's distress, reciprocal exchange and non-verbal empathy. Hierarchical linear modelling indicated that individuals with high OXTR risk exhibited difficulties in empathic communication. OXTR risk predicted empathic difficulties above and beyond the couple level, relationship duration, and anxiety and depressive symptoms. Findings underscore the involvement of oxytocin in empathic behaviour during the early stages of social affiliation, and suggest the utility of cumulative risk and plasticity indices on the OXTR as potential biomarkers for research on disorders of social dysfunction and the neurobiology of empathy. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Wave functions constructed from an invariant sum over histories satisfy constraints

    International Nuclear Information System (INIS)

    Halliwell, J.J.; Hartle, J.B.

    1991-01-01

    Invariance of classical equations of motion under a group parametrized by functions of time implies constraints between canonical coordinates and momenta. In the Dirac formulation of quantum mechanics, invariance is normally imposed by demanding that physical wave functions are annihilated by the operator versions of these constraints. In the sum-over-histories quantum mechanics, however, wave functions are specified, directly, by appropriate functional integrals. It therefore becomes an interesting question whether the wave functions so specified obey the operator constraints of the Dirac theory. In this paper, we show for a wide class of theories, including gauge theories, general relativity, and first-quantized string theories, that wave functions constructed from a sum over histories are, in fact, annihilated by the constraints provided that the sum over histories is constructed in a manner which respects the invariance generated by the constraints. By this we mean a sum over histories defined with an invariant action, invariant measure, and an invariant class of paths summed over

  11. Symbolic methods for the evaluation of sum rules of Bessel functions

    International Nuclear Information System (INIS)

    Babusci, D.; Dattoli, G.; Górska, K.; Penson, K. A.

    2013-01-01

    The use of the umbral formalism allows a significant simplification of the derivation of sum rules involving products of special functions and polynomials. We rederive in this way known sum rules and addition theorems for Bessel functions. Furthermore, we obtain a set of new closed form sum rules involving various special polynomials and Bessel functions. The examples we consider are relevant for applications ranging from plasma physics to quantum optics
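
    As a concrete instance of the kind of identity meant here, one classical Bessel sum rule states that J_0(x)^2 + 2 * sum over n >= 1 of J_n(x)^2 = 1 for all real x. The quick numerical check below is a standard textbook example, not one of the new sum rules derived in the paper.

        import numpy as np
        from scipy.special import jv

        def check_bessel_sum_rule(x, n_max=60):
            # J_0(x)^2 + 2 * sum_{n>=1} J_n(x)^2 should equal 1 (series truncated at n_max).
            return jv(0, x)**2 + 2.0 * sum(jv(n, x)**2 for n in range(1, n_max + 1))

        for x in (0.5, 3.0, 10.0):
            print(x, check_bessel_sum_rule(x))   # each value should be very close to 1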

  12. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, J. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-10-15

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  13. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    International Nuclear Information System (INIS)

    Ablinger, J.; Schneider, C.; Bluemlein, J.

    2013-10-01

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  14. Renormalization group summation of Laplace QCD sum rules for scalar gluon currents

    Directory of Open Access Journals (Sweden)

    Farrukh Chishtie

    2016-03-01

    We employ renormalization group (RG) summation techniques to obtain portions of Laplace QCD sum rules for scalar gluon currents beyond the order to which they have been explicitly calculated. The first two of these sum rules are considered in some detail, and it is shown that they have significantly less dependence on the renormalization scale parameter μ² once the RG summation is used to extend the perturbative results. Using the sum rules, we then compute the bound on the scalar glueball mass and demonstrate that the 3- and 4-loop perturbative results form lower and upper bounds to their RG summed counterparts. We further demonstrate improved convergence of the RG summed expressions with respect to perturbative results.

  15. Sum-Trigger-II status and prospective physics

    Energy Technology Data Exchange (ETDEWEB)

    Dazzi, Francesco; Mirzoyan, Razmik; Schweizer, Thomas; Teshima, Masahiro [Max Planck Institut fuer Physik, Munich (Germany); Herranz, Diego; Lopez, Marcos [Universidad Complutense, Madrid (Spain); Mariotti, Mose [Universita degli Studi di Padova (Italy); Nakajima, Daisuke [The University of Tokio (Japan); Rodriguez Garcia, Jezabel [Max Planck Institut fuer Physik, Munich (Germany); Instituto Astrofisico de Canarias, Tenerife (Spain)

    2015-07-01

    MAGIC is a stereoscopic system of 2 Imaging Air Cherenkov Telescopes (IACTs) for very high energy gamma-ray astronomy, located at La Palma (Spain). Lowering the energy threshold of IACTs is crucial for the observation of pulsars, high-redshift AGNs and GRBs. A novel trigger strategy, based on the analogue sum of a patch of pixels, can lead to a lower threshold compared to conventional digital triggers. In recent years, a major upgrade of the MAGIC telescopes took place in order to optimize the performance, mainly in the low energy domain. The PMT camera and the reflective surface of MAGIC-I, as well as both readout systems, have been thoroughly renovated. The last important milestone is the implementation of a new stereoscopic analogue trigger, dubbed Sum-Trigger-II. The installation was successfully completed in 2014 and the first data set has already been taken. Currently the fine-tuning of the main parameters as well as the comparison with Monte Carlo studies is ongoing. In this talk the status of Sum-Trigger-II and the prospective physics cases at very low energy are presented.

  16. B --> K$*\\gamma$ from hybrid sum rule

    CERN Document Server

    Narison, Stéphan

    1994-01-01

    Using the {\\it hybrid} moments-Laplace sum rule (HSR), which is well-defined for M_b \\rar \\infty, in contrast with the popular double Borel (Laplace) sum rule (DLSR), which blows up in this limit when applied to the heavy-to-light processes, we show that the form factor of the B \\rar K^* \\ \\gamma radiative transition is dominated by the light-quark condensate for M_b \\rar \\infty and behaves like \\sqrt M_b. The form factor is found to be F^{B\\rar K^*}_1(0) \\simeq (30.8 \\pm 1.3 \\pm 3.6 \\pm 0.6)\\times 10^{-2}, where the errors come respectively from the procedure in the sum rule analysis, the errors in the input and in the SU(3)_f-breaking parameters. This result leads to Br(B\\rar K^* \\ \\gamma) \\simeq (4.45 \\pm 1.12) \\times 10^{-5} in agreement with the recent CLEO data. Parametrization of the M_b-dependence of the form factor including the SU(3)_f-breaking effects is given in (26), which leads to F^{B\\rar K^*}_1(0)/ F^{B\\rar \\rho}_1(0) \\simeq (1.14 \\pm 0.02).

  17. Inequalities for finite trigonometric sums. An interplay: with some series related to harmonic numbers

    Directory of Open Access Journals (Sweden)

    Omran Kouba

    2016-07-01

    An interplay between the sum of certain series related to harmonic numbers and certain finite trigonometric sums is investigated. This allows us to express the sum of these series in terms of the considered trigonometric sums, and permits us to find sharp inequalities bounding these trigonometric sums. In particular, this answers positively an open problem of Chen (Excursions in Classical Analysis, 2010).

  18. An evaluation paradigm for cumulative impact analysis

    Science.gov (United States)

    Stakhiv, Eugene Z.

    1988-09-01

    Cumulative impact analysis is examined from a conceptual decision-making perspective, focusing on its implicit and explicit purposes as suggested within the policy and procedures for environmental impact analysis of the National Environmental Policy Act of 1969 (NEPA) and its implementing regulations. In this article it is also linked to different evaluation and decision-making conventions, contrasting a regulatory context with a comprehensive planning framework. The specific problems that make the application of cumulative impact analysis a virtually intractable evaluation requirement are discussed in connection with the federal regulation of wetlands uses. The relatively familiar US Army Corps of Engineers' (the Corps) permit program, in conjunction with the Environmental Protection Agency's (EPA) responsibilities in managing its share of the Section 404 regulatory program requirements, is used throughout as the realistic context for highlighting certain pragmatic evaluation aspects of cumulative impact assessment. To understand the purposes of cumulative impact analysis (CIA), a key distinction must be made between the implied comprehensive and multiobjective evaluation purposes of CIA, promoted through the principles and policies contained in NEPA, and the more commonly conducted and limited assessment of cumulative effects (ACE), which focuses largely on the ecological effects of human actions. Based on current evaluation practices within the Corps' and EPA's permit programs, it is shown that the commonly used screening approach to regulating wetlands uses is not compatible with the purposes of CIA, nor is the environmental impact statement (EIS) an appropriate vehicle for evaluating the variety of objectives and trade-offs needed as part of CIA. A heuristic model that incorporates the basic elements of CIA is developed, including the idea of trade-offs among social, economic, and environmental protection goals carried out within the context of environmental

  19. Chain hexagonal cacti with the extremal eccentric distance sum.

    Science.gov (United States)

    Qu, Hui; Yu, Guihai

    2014-01-01

    Eccentric distance sum (EDS), which can predict biological and physical properties, is a topological index based on the eccentricity of a graph. In this paper we characterize the chain hexagonal cactus with the minimal and the maximal eccentric distance sum among all chain hexagonal cacti of length n, respectively. Moreover, we present exact formulas for EDS of two types of hexagonal cacti.
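
    The eccentric distance sum used here has a direct computational definition, EDS(G) = sum over vertices v of ecc(v)*D(v), where ecc(v) is the eccentricity of v and D(v) is the sum of distances from v to all other vertices. A small sketch using networkx follows; the example graphs are arbitrary and are not the extremal chain hexagonal cacti of the paper.

        import networkx as nx

        def eccentric_distance_sum(G):
            # EDS = sum over vertices of eccentricity(v) * (sum of distances from v to all vertices).
            ecc = nx.eccentricity(G)
            dist = dict(nx.all_pairs_shortest_path_length(G))
            return sum(ecc[v] * sum(dist[v].values()) for v in G)

        print(eccentric_distance_sum(nx.cycle_graph(6)))   # a single hexagon
        print(eccentric_distance_sum(nx.path_graph(5)))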

  20. Direct and reverse inclusions for strongly multiple summing operators

    Indian Academy of Sciences (India)

    ... and strongly multiple summing operators under the assumption that the range has finite cotype. ... multiple (q, p)-summing, if there exists a constant C ≥ 0 such that for every choice of systems $(x^{j}_{i_j})_{1 \le i_j \le m_j}$ ...

  1. Path-sum calculations for rf current drive

    International Nuclear Information System (INIS)

    Belo, Jorge H.; Bizarro, Joao P.S.; Rodrigues, Paulo

    2001-01-01

    Path sums and Gaussian short-time propagators are used to solve two-dimensional Fokker-Planck models of lower-hybrid (LH) and electron-cyclotron (EC) current drive (CD), and are shown to be well suited to the two limiting situations where the rf quasilinear diffusion coefficient is either relatively small, D_rf ≅ 0.1, or very large, D_rf → ∞, the latter case enabling a special treatment. Results are given for both LHCD and ECCD in the small D_rf case, whereas the limiting situation is illustrated only for ECCD. To check the accuracy of path-sum calculations, comparisons with finite difference solutions are provided

  2. Ordering individuals with sum scores: the introduction of the nonparametric Rasch model

    NARCIS (Netherlands)

    Zwitser, R.J.; Maris, G.

    2016-01-01

    When a simple sum or number-correct score is used to evaluate the ability of individual testees, then, from an accountability perspective, the inferences based on the sum score should be the same as the inferences based on the complete response pattern. This requirement is fulfilled if the sum score

  3. Cumulative fatigue and creep-fatigue damage at 350°C on recrystallized zircaloy 4

    International Nuclear Information System (INIS)

    Brun, G.; Pelchat, J.; Floze, J.C.; Galimberti, M.

    1985-06-01

    An experimental programme undertaken by C.E.A., E.D.F. and FRAGEMA with the aim of characterizing the fatigue and creep-fatigue behaviour of zircaloy-4 following annealing treatments (recrystallized, stress-relieved) is in progress. The results given below concern only the recrystallized material. Cyclic properties, low-cycle fatigue curves and creep behaviour laws under stress have been established. Sequential tests of pure fatigue and creep-fatigue were performed. The cumulative life fractions at fracture depend on the sequence of loading, the stress history and the number of cycles of prestressing. Miner's rule appears to be conservative for a low-high loading sequence, whereas it is not for the reverse high-low loading sequence. Fatigue and creep damage are not interchangeable. Pre-creep improves the fatigue resistance. Pre-fatigue improves the creep strength as long as the beneficial effect of cyclic hardening overcomes the damaging effect of surface cracking. The introduction of a tension hold time into the fatigue cycle slightly increases cyclic hardening and reduces the number of cycles to failure. For hold times of less than one hour, the sum of fatigue and creep life fractions is close to one
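
    As a side note, the linear damage summation referred to above (Miner's rule) is easy to state in code; a minimal sketch with purely hypothetical loading blocks, not data from the programme:

    def miner_damage(blocks):
        # Miner's linear damage rule: sum of life fractions n_i / N_i.
        # blocks: list of (applied_cycles, cycles_to_failure_at_that_stress_level).
        return sum(n / N for n, N in blocks)

    # Hypothetical low-high sequence; failure is predicted when the sum reaches 1,
    # although, as the abstract notes, the sum at fracture depends on the load ordering.
    loading = [(5000, 20000), (1000, 2500)]
    print(miner_damage(loading))  # 0.25 + 0.4 = 0.65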

  4. Maintenance hemodialysis patients have high cumulative radiation exposure.

    LENUS (Irish Health Repository)

    Kinsella, Sinead M

    2010-10-01

    Hemodialysis is associated with an increased risk of neoplasms which may result, at least in part, from exposure to ionizing radiation associated with frequent radiographic procedures. In order to estimate the average radiation exposure of those on hemodialysis, we conducted a retrospective study of 100 patients in a university-based dialysis unit followed for a median of 3.4 years. The number and type of radiological procedures were obtained from a central radiology database, and the cumulative effective radiation dose was calculated using standardized, procedure-specific radiation levels. The median annual radiation dose was 6.9 millisieverts (mSv) per patient-year. However, 14 patients had an annual cumulative effective radiation dose over 20 mSv, the upper averaged annual limit for occupational exposure. The median total cumulative effective radiation dose per patient over the study period was 21.7 mSv, and 13 patients had a total cumulative effective radiation dose over 75 mSv, a value reported to be associated with a 7% increased risk of cancer-related mortality. Two-thirds of the total cumulative effective radiation dose was due to CT scanning. The average radiation exposure was significantly associated with the cause of end-stage renal disease, history of ischemic heart disease, transplant waitlist status, number of in-patient hospital days over follow-up, and death during the study period. These results highlight the substantial exposure to ionizing radiation in hemodialysis patients.
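
    The dose bookkeeping described above is essentially a weighted sum; a minimal sketch with hypothetical per-procedure effective doses (not the standardized values used in the study):

    # Illustrative per-procedure effective doses in mSv; the values are assumptions.
    DOSE_MSV = {"chest_xray": 0.1, "abdominal_ct": 8.0, "coronary_angiogram": 7.0}

    def cumulative_effective_dose(procedures):
        # Sum of dose * count over a patient's imaging history [(name, count), ...].
        return sum(DOSE_MSV[name] * count for name, count in procedures)

    history = [("chest_xray", 12), ("abdominal_ct", 3), ("coronary_angiogram", 1)]
    print(cumulative_effective_dose(history))  # 1.2 + 24.0 + 7.0 = 32.2 mSv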

  5. Neutrino mass sum rules and symmetries of the mass matrix

    Energy Technology Data Exchange (ETDEWEB)

    Gehrlein, Julia [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Spinrath, Martin [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); National Center for Theoretical Sciences, Physics Division, Hsinchu (China)

    2017-05-15

    Neutrino mass sum rules have recently gained renewed attention as a powerful tool to discriminate and test various flavour models in the near future. A related question which has not yet been discussed fully satisfactorily is the origin of these sum rules and whether they are related to any residual or accidental symmetry. We address this open issue here systematically and find previous statements confirmed. Namely, the sum rules are not related to any enhanced symmetry of the Lagrangian after family symmetry breaking but are simply the result of a reduction of free parameters due to skillful model building. (orig.)

  6. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k components operating, given that the probability that any one component is operating is p and that the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
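
    The core quantity CUMBIN computes is just a binomial tail sum; a minimal Python sketch of the same calculation (not the original C program):

    from math import comb

    def k_out_of_n_reliability(k, n, p):
        # Probability that at least k of n independent components operate,
        # each with reliability p: sum_{i=k}^{n} C(n, i) * p^i * (1 - p)^(n - i).
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(k_out_of_n_reliability(2, 3, 0.9))  # 2-out-of-3 system of 0.9-reliable parts: 0.972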

  7. Structural relations of harmonic sums and Mellin transforms at weight w=6

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2009-01-15

    We derive the structural relations between nested harmonic sums and the corresponding Mellin transforms of Nielsen integrals and harmonic polylogarithms at weight w=6. They emerge in the calculations of massless single-scale quantities in QED and QCD, such as anomalous dimensions and Wilson coefficients, to 3- and 4-loop order. We consider the set of the multiple harmonic sums at weight six without the index {-1}. This restriction is sufficient for all known physical cases. The structural relations supplement the algebraic relations, due to the shuffle product between harmonic sums, studied earlier. The original number of 486 possible harmonic sums contributing at weight w=6 reduces to 99 sums with no index {-1}. Algebraic and structural relations lead to a further reduction to 20 basic functions. These functions supplement the set of 15 basic functions up to weight w=5 derived formerly. We outline an algorithm to obtain the analytic representation of the basic sums in the complex plane. (orig.)
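
    Nested harmonic sums themselves are simple to evaluate numerically; a small sketch assuming the usual definition S_a(N) = Σ_{i=1}^{N} sign(a)^i / i^{|a|}, nested recursively (it does not implement the structural or algebraic relations derived in the paper):

    def harmonic_sum(indices, N):
        # Nested harmonic sum S_{a1,a2,...}(N) with the standard sign convention.
        if not indices:
            return 1.0
        a, rest = indices[0], indices[1:]
        sign = -1 if a < 0 else 1
        return sum(sign**i / i**abs(a) * harmonic_sum(rest, i) for i in range(1, N + 1))

    # S_{2,1}(N) has weight 3 and tends to 2*zeta(3) ~ 2.404 as N grows.
    print(harmonic_sum((2, 1), 500))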

  8. An electrophysiological signature of summed similarity in visual working memory

    NARCIS (Netherlands)

    Van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory

  9. Spectral sum rule for time delay in R2

    International Nuclear Information System (INIS)

    Osborn, T.A.; Sinha, K.B.; Bolle, D.; Danneels, C.

    1985-01-01

    A local spectral sum rule for nonrelativistic scattering in two dimensions is derived for the potential class v ∈ L^{4/3}(R²). The sum rule relates the integral over all scattering energies of the trace of the time-delay operator for a finite region Σ ⊂ R² to the contributions in Σ of the pure point and singularly continuous spectra

  10. Cumulative effect in multiple production processes on nuclei

    International Nuclear Information System (INIS)

    Golubyatnikova, E.S.; Shmonin, V.L.; Kalinkin, B.N.

    1989-01-01

    It is shown that the cumulative effect is a natural result of the process of hadron multiple production in nuclear reactions. Interpretation is made of the universality of slopes of inclusive spectra and other characteristics of cumulative hadrons. The character of information from such reactions is discussed, which could be helpful in studying the mechanism of multiparticle production. 27 refs.; 4 figs

  11. Dichromatic State Sum Models for Four-Manifolds from Pivotal Functors

    Science.gov (United States)

    Bärenz, Manuel; Barrett, John

    2017-11-01

    A family of invariants of smooth, oriented four-dimensional manifolds is defined via handle decompositions and the Kirby calculus of framed link diagrams. The invariants are parametrised by a pivotal functor from a spherical fusion category into a ribbon fusion category. A state sum formula for the invariant is constructed via the chain-mail procedure, so a large class of topological state sum models can be expressed as link invariants. Most prominently, the Crane-Yetter state sum over an arbitrary ribbon fusion category is recovered, including the nonmodular case. It is shown that the Crane-Yetter invariant for nonmodular categories is stronger than signature and Euler invariant. A special case is the four-dimensional untwisted Dijkgraaf-Witten model. Derivations of state space dimensions of TQFTs arising from the state sum model agree with recent calculations of ground state degeneracies in Walker-Wang models. Relations to different approaches to quantum gravity such as Cartan geometry and teleparallel gravity are also discussed.

  12. Polarizability sum rules in QED

    International Nuclear Information System (INIS)

    Llanta, E.; Tarrach, R.

    1978-01-01

    The well-founded total photoproduction polarizability sum rule and the assumed subtraction-free longitudinal photoproduction polarizability sum rule are checked in QED at the lowest non-trivial order. The first one is shown to hold, whereas the second one turns out to need a subtraction, which makes its usefulness for determining the electromagnetic polarizabilities of the nucleons quite doubtful. (Auth.)

  13. A Bayesian analysis of the nucleon QCD sum rules

    International Nuclear Information System (INIS)

    Ohtani, Keisuke; Gubler, Philipp; Oka, Makoto

    2011-01-01

    QCD sum rules of the nucleon channel are reanalyzed, using the maximum-entropy method (MEM). This new approach, based on the Bayesian probability theory, does not restrict the spectral function to the usual ''pole + continuum'' form, allowing a more flexible investigation of the nucleon spectral function. Making use of this flexibility, we are able to investigate the spectral functions of various interpolating fields, finding that the nucleon ground state mainly couples to an operator containing a scalar diquark. Moreover, we formulate the Gaussian sum rule for the nucleon channel and find that it is more suitable for the MEM analysis to extract the nucleon pole in the region of its experimental value, while the Borel sum rule does not contain enough information to clearly separate the nucleon pole from the continuum. (orig.)

  14. Oscillating Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    In this chapter, we use the theory of summability of divergent series, presented earlier in Chap. 4, to derive the analogs of the Euler-Maclaurin summation formula for oscillating sums. These formulas will, in turn, be used to perform many remarkable deeds with ease. For instance, they can be used to derive analytic expressions for summable divergent series, obtain asymptotic expressions of oscillating series, and even accelerate the convergence of series by several orders of magnitude. Moreover, we will prove the notable fact that, as far as the foundational rules of summability calculus are concerned, summable divergent series behave exactly as if they were convergent.

  15. Oscillating Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    In this chapter, we use the theory of summability of divergent series, presented earlier in Chap. 4, to derive the analogs of the Euler-Maclaurin summation formula for oscillating sums. These formulas will, in turn, be used to perform many remarkable deeds with ease. For instance, they can be used to derive analytic expressions for summable divergent series, obtain asymptotic expressions of oscillating series, and even accelerate the convergence of series by several orders of magnitude. Moreover, we will prove the notable fact that, as far as the foundational rules of summability calculus are concerned, summable divergent series behave exactly as if they were convergent.

  16. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. METHODS: We argue that, whenever the disease or disorder of interest is influenced...
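
    For reference, the conventional estimator that the authors scrutinize can be sketched in a few lines; this is plain one-minus-Kaplan-Meier on toy data, ignoring competing risks and the calendar-time issues the paper addresses:

    def cumulative_incidence_km(times, events):
        # 1 - Kaplan-Meier survival, evaluated at each distinct event time.
        # times: follow-up times; events: 1 = onset observed, 0 = censored.
        data = sorted(zip(times, events))
        at_risk, surv, curve, i = len(data), 1.0, [], 0
        while i < len(data):
            t, d, n = data[i][0], 0, 0
            while i < len(data) and data[i][0] == t:
                n += 1
                d += data[i][1]
                i += 1
            if d:
                surv *= 1 - d / at_risk
                curve.append((t, 1 - surv))
            at_risk -= n
        return curve

    print(cumulative_incidence_km([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))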

  17. Geometric optimization and sums of algebraic functions

    KAUST Repository

    Vigneron, Antoine E.

    2014-01-01

    We present a new optimization technique that yields the first FPTAS for several geometric problems. These problems reduce to optimizing a sum of nonnegative, constant description complexity algebraic functions. We first give an FPTAS for optimizing such a sum of algebraic functions, and then we apply it to several geometric optimization problems. We obtain the first FPTAS for two fundamental geometric shape-matching problems in fixed dimension: maximizing the volume of overlap of two polyhedra under rigid motions and minimizing their symmetric difference. We obtain the first FPTAS for other problems in fixed dimension, such as computing an optimal ray in a weighted subdivision, finding the largest axially symmetric subset of a polyhedron, and computing minimum-area hulls.

  18. Chiral symmetry breaking parameters from QCD sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Karlsruhe Univ. (T.H.) (Germany, F.R.). Inst. fuer Theoretische Kernphysik; Bern Univ. (Switzerland). Inst. fuer Theoretische Physik)

    1982-10-04

    We obtain new QCD sum rules by considering vacuum expectation values of two-point functions, taking all the five quark bilinears into account. These sum rules are employed to extract values of different chiral symmetry breaking parameters in QCD theory. We find masses of light quarks, m = (m_u + m_d)/2 = 8.4 ± 1.2 MeV, m_s = 205 ± 65 MeV. Further, we obtain corrections to certain soft pion (kaon) PCAC relations and the violation of SU(3) flavour symmetry by the non-strange and strange quark-antiquark vacuum condensate.

  19. The Asymptotic Joint Distribution of Self-Normalized Censored Sums and Sums of Squares

    OpenAIRE

    Hahn, Marjorie G.; Kuelbs, Jim; Weiner, Daniel C.

    1990-01-01

    Empirical versions of appropriate centering and scale constants for random variables which can fail to have second or even first moments are obtainable in various ways via suitable modifications of the summands in the partial sum. This paper discusses a particular modification, called censoring (which is a kind of winsorization), where the (random) number of summands altered tends to infinity but the proportion altered tends to zero as the number of summands increases. Some analytic advantage...

  20. Sum formula for SL2 over imaginary quadratic number fields

    NARCIS (Netherlands)

    Lokvenec-Guleska, H.

    2004-01-01

    The subject of this thesis is generalization of the classical sum formula of Bruggeman and Kuznetsov to the upper half-space H3. The derivation of the preliminary sum formula involves computation of the inner product of two specially chosen Poincaré series in two different ways: the spectral

  1. An exact formulation of the time-ordered exponential using path-sums

    International Nuclear Information System (INIS)

    Giscard, P.-L.; Lui, K.; Thwaite, S. J.; Jaksch, D.

    2015-01-01

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures

  2. An exact formulation of the time-ordered exponential using path-sums

    Science.gov (United States)

    Giscard, P.-L.; Lui, K.; Thwaite, S. J.; Jaksch, D.

    2015-05-01

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures.

  3. Forward Compton scattering with weak neutral current: Constraints from sum rules

    Directory of Open Access Journals (Sweden)

    Mikhail Gorchtein

    2015-07-01

    We generalize the forward real Compton amplitude to the case of the interference of the electromagnetic and weak neutral current, formulate a low-energy theorem, relate the new amplitudes to the interference structure functions and obtain a new set of sum rules. We address a possible new sum rule that relates the product of the axial charge and magnetic moment of the nucleon to the 0th moment of the structure function g5(ν,0). For the dispersive γZ-box correction to the proton's weak charge, the application of the GDH sum rule allows us to reduce the uncertainty due to resonance contributions by a factor of two. The finite energy sum rule helps to address the uncertainty in that calculation due to possible duality violations.

  4. A Framework for Treating Cumulative Trauma with Art Therapy

    Science.gov (United States)

    Naff, Kristina

    2014-01-01

    Cumulative trauma is relatively undocumented in art therapy practice, although there is growing evidence that art therapy provides distinct benefits for resolving various traumas. This qualitative study proposes an art therapy treatment framework for cumulative trauma derived from semi-structured interviews with three art therapists and artistic…

  5. Pentaquarks in QCD Sum Rule Approach

    International Nuclear Information System (INIS)

    Rodrigues da Silva, R.; Matheus, R.D.; Navarra, F.S.; Nielsen, M.

    2004-01-01

    We estimate the masses of the recently observed pentaquark states Ξ−(1862) and Θ+(1540) using two kinds of interpolating fields, each containing two highly correlated diquarks, in the QCD sum rule approach. We obtained good agreement with the experimental values, using the standard continuum threshold

  6. Cumulative Environmental Impacts: Science and Policy to Protect Communities.

    Science.gov (United States)

    Solomon, Gina M; Morello-Frosch, Rachel; Zeise, Lauren; Faust, John B

    2016-01-01

    Many communities are located near multiple sources of pollution, including current and former industrial sites, major roadways, and agricultural operations. Populations in such locations are predominantly low-income, with a large percentage of minorities and non-English speakers. These communities face challenges that can affect the health of their residents, including limited access to health care, a shortage of grocery stores, poor housing quality, and a lack of parks and open spaces. Environmental exposures may interact with social stressors, thereby worsening health outcomes. Age, genetic characteristics, and preexisting health conditions increase the risk of adverse health effects from exposure to pollutants. There are existing approaches for characterizing cumulative exposures, cumulative risks, and cumulative health impacts. Although such approaches have merit, they also have significant constraints. New developments in exposure monitoring, mapping, toxicology, and epidemiology, especially when informed by community participation, have the potential to advance the science on cumulative impacts and to improve decision making.

  7. Baltic Sea biodiversity status vs. cumulative human pressures

    DEFF Research Database (Denmark)

    Andersen, Jesper H.; Halpern, Benjamin S.; Korpinen, Samuli

    2015-01-01

    Many studies have tried to explain spatial and temporal variations in biodiversity status of marine areas from a single-issue perspective, such as fishing pressure or coastal pollution, yet most continental seas experience a wide range of human pressures. Cumulative impact assessments have been developed to capture the consequences of multiple stressors for biodiversity, but the ability of these assessments to accurately predict biodiversity status has never been tested or ground-truthed. This relationship has similarly been assumed for the Baltic Sea, especially in areas with impaired status, but has also never been documented. Here we provide a first tentative indication that cumulative human impacts relate to ecosystem condition, i.e. biodiversity status, in the Baltic Sea. Thus, cumulative impact assessments offer a promising tool for informed marine spatial planning, designation...

  8. Conceptual models for cumulative risk assessment.

    Science.gov (United States)

    Linder, Stephen H; Sexton, Ken

    2011-12-01

    In the absence of scientific consensus on an appropriate theoretical framework, cumulative risk assessment and related research have relied on speculative conceptual models. We argue for the importance of theoretical backing for such models and discuss 3 relevant theoretical frameworks, each supporting a distinctive "family" of models. Social determinant models postulate that unequal health outcomes are caused by structural inequalities; health disparity models envision social and contextual factors acting through individual behaviors and biological mechanisms; and multiple stressor models incorporate environmental agents, emphasizing the intermediary role of these and other stressors. The conclusion is that more careful reliance on established frameworks will lead directly to improvements in characterizing cumulative risk burdens and accounting for disproportionate adverse health effects.

  9. Semiempirical search for oxide superconductors based on bond valence sums

    International Nuclear Information System (INIS)

    Tanaka, S.; Fukushima, N.; Niu, H.; Ando, K.

    1992-01-01

    Relationships between crystal structures and electronic states of layered transition-metal oxides are analyzed in the light of bond valence sums. Correlations between the superconducting transition temperature T c and the bond-valence-sum parameters are investigated for the high-T c cuprate compounds. Possibility of making nonsuperconducting oxides superconducting is discussed. (orig.)
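
    The bond valence sum itself is a one-line formula; a minimal sketch using the common Brown-Altermatt form V = Σ_j exp((R0 − R_j)/b) with b ≈ 0.37 Å. The distances and the quoted R0 below are illustrative assumptions, not data from the paper:

    from math import exp

    def bond_valence_sum(bond_lengths, r0, b=0.37):
        # V = sum over bonds of exp((R0 - R) / b); R0 is the tabulated bond-valence parameter.
        return sum(exp((r0 - r) / b) for r in bond_lengths)

    # Six hypothetical Cu-O distances (angstroms) and a commonly quoted R0(Cu2+-O) ~ 1.679 A.
    cu_o = [1.96, 1.96, 1.97, 1.97, 2.40, 2.75]
    print(bond_valence_sum(cu_o, r0=1.679))  # ~2.04, close to the formal Cu valence of +2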

  10. Closed-form summations of Dowker's and related trigonometric sums

    Science.gov (United States)

    Cvijović, Djurdje; Srivastava, H. M.

    2012-09-01

    Through a unified and relatively simple approach which uses complex contour integrals, particularly convenient integration contours and calculus of residues, closed-form summation formulas for 12 very general families of trigonometric sums are deduced. One of them is a family of cosecant sums which was first summed in closed form in a series of papers by Dowker (1987 Phys. Rev. D 36 3095-101 1989 J. Math. Phys. 30 770-3 1992 J. Phys. A: Math. Gen. 25 2641-8), whose method has inspired our work in this area. All of the formulas derived here involve the higher-order Bernoulli polynomials. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’.

  11. Quark-spin isospin sum rules and the Adler-Weisberger relation in nuclei

    International Nuclear Information System (INIS)

    Delorme, J.; Ericson, M.

    1982-01-01

    We use a quark model to extend the classical Gamow-Teller sum rule for the difference of the β - and β + strengths to excitations of the nucleon (mainly the Δ isobar). A schematic model illustrates the realization of the new sum rule when a particle-hole force is introduced. We discuss the connection of our result with the model-independent Adler-Weisberger sum rule. (orig.)

  12. Summing skyrmions

    International Nuclear Information System (INIS)

    Jackson, A.D.; Weiss, C.; Wirzba, A.

    1990-01-01

    The Skyrme model has the same high density behavior as a free quark gas. However, the inclusion of higher-order terms spoils this agreement. We consider the all-order sum of a class of chiral invariant Lagrangians of even order in L_μ suggested by Marleau. We prove Marleau's conjecture that these terms are of second order in the derivatives of the chiral angle for the hedgehog case and show the terms are unique under the additional condition that, for each order, the identity map on the 3-sphere S³(L) is a solution. The general form of the summation can be restricted by physical constraints leading to stable results. Under the assumption that the Lagrangian scales like the non-linear sigma model at low densities and like the free quark gas at high densities, we prove that a chiral phase transition must occur. (orig.)

  13. Comment on QCD sum rules and weak bottom decays

    International Nuclear Information System (INIS)

    Guberina, B.; Machet, B.

    1982-07-01

    QCD sum rules derived by Bourrely et al. are applied to B-decays to get a lower and an upper bound for the decay rate. The sum rules are shown to be essentially controlled by the large mass scales involved in the process. These bounds combined with the experimental value of BR (B→eνX) provide an upper bound for the lifetime of the B + meson. A comparison is made with D-meson decays

  14. On the general Dedekind sums and its reciprocity formula

    Indian Academy of Sciences (India)

    if x is an integer. The various properties of S(h, q) were investigated by many authors. Maybe the most famous property of Dedekind sums is the reciprocity formula (see [2–4]): S(h, q) + S(q, h) = (h² + q² + 1)/(12hq) − 1/4  (1), for all (h, q) = 1, q > 0, h > 0. The main purpose of this paper is to introduce a general Dedekind sum: ...
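
    The reciprocity formula quoted above is easy to check numerically; a small sketch assuming the classical definition S(h, q) = Σ_{a=1}^{q−1} ((a/q))((ah/q)) with ((x)) the sawtooth function:

    from fractions import Fraction
    from math import gcd

    def sawtooth(x):
        # ((x)) = x - floor(x) - 1/2 for non-integral x, and 0 for integral x.
        if x.denominator == 1:
            return Fraction(0)
        return x - (x.numerator // x.denominator) - Fraction(1, 2)

    def dedekind_sum(h, q):
        # Classical Dedekind sum S(h, q).
        return sum(sawtooth(Fraction(a, q)) * sawtooth(Fraction(a * h, q)) for a in range(1, q))

    # Check S(h, q) + S(q, h) = (h^2 + q^2 + 1)/(12hq) - 1/4 for coprime h, q.
    h, q = 5, 7
    assert gcd(h, q) == 1
    lhs = dedekind_sum(h, q) + dedekind_sum(q, h)
    rhs = Fraction(h * h + q * q + 1, 12 * h * q) - Fraction(1, 4)
    print(lhs == rhs)  # True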

  15. Examination of cumulative effects of early adolescent depression on cannabis and alcohol use disorder in late adolescence in a community-based cohort.

    Science.gov (United States)

    Rhew, Isaac C; Fleming, Charles B; Vander Stoep, Ann; Nicodimos, Semret; Zheng, Cheng; McCauley, Elizabeth

    2017-11-01

    Although they often co-occur, the longitudinal relationship between depression and substance use disorders during adolescence remains unclear. This study estimated the effects of cumulative depression during early adolescence (ages 13-15 years) on the likelihood of cannabis use disorder (CUD) and alcohol use disorder (AUD) at age 18. Prospective cohort study of youth assessed at least annually between 6th and 9th grades (~ age 12-15) and again at age 18. Marginal structural models based on a counterfactual framework that accounted for both potential fixed and time-varying confounders were used to estimate cumulative effects of depressive symptoms over early adolescence. The sample originated from four public middle schools in Seattle, Washington, USA. The sample consisted of 521 youth (48.4% female; 44.5% were non-Hispanic White). Structured in-person interviews with youth and their parents were conducted to assess diagnostic symptom counts of depression during early adolescence; diagnoses of CUD and AUD at age 18 were based on the Voice-Diagnostic Interview Schedule for Children. Cumulative depression was defined as the sum of depression symptom counts from grades 7-9. The past-year prevalences of cannabis and alcohol use disorder at the age 18 study wave were 20.9% and 19.8%, respectively. A 1 standard deviation increase in cumulative depression during early adolescence was associated with a 50% higher likelihood of CUD [prevalence ratio (PR) = 1.50; 95% confidence interval (CI) = 1.07, 2.10]. Although similar in direction, there was no statistically significant association between depression and AUD (PR = 1.41; 95% CI = 0.94, 2.11). Further, there were no differences in associations according to gender. Youth with more chronic or severe forms of depression during early adolescence may be at elevated risk for developing cannabis use disorder compared with otherwise similar youth who experience fewer depressive symptoms during early adolescence.

  16. Cumulative query method for influenza surveillance using search engine data.

    Science.gov (United States)

    Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il

    2014-12-16

    Internet search queries have become an important data source for syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development set 2 and 2011/12 for validation set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of combined queries accumulated in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but only 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but only 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than the combined queries in both the development and validation sets.
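
    A toy sketch of the selection step as described above (rank candidate query series by correlation with ILI, keep those with r ≥ .7, then accumulate them in descending order); the data are hypothetical, not the Daum queries used in the study:

    from math import sqrt

    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def cumulative_query_series(query_series, ili, threshold=0.7):
        # Keep series with r >= threshold, sort by r descending, and return the
        # running element-wise sums (top-1, top-1+2, ...), i.e. the cumulative queries.
        ranked = sorted((s for s in query_series if pearson(s, ili) >= threshold),
                        key=lambda s: pearson(s, ili), reverse=True)
        out, running = [], [0.0] * len(ili)
        for s in ranked:
            running = [r + v for r, v in zip(running, s)]
            out.append(list(running))
        return out

    ili = [1, 2, 4, 8, 5, 3, 2]                       # toy weekly ILI rates
    queries = [[2, 3, 5, 9, 6, 4, 3], [1, 1, 3, 7, 6, 2, 2], [4, 3, 2, 1, 2, 3, 4]]
    for series in cumulative_query_series(queries, ili):
        print(round(pearson(series, ili), 3))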

  17. Limiting law excess sum rule for polyelectrolytes.

    Science.gov (United States)

    Landy, Jonathan; Lee, YongJin; Jho, YongSeok

    2013-11-01

    We revisit the mean-field limiting law screening excess sum rule that holds for rodlike polyelectrolytes. We present an efficient derivation of this law that clarifies its region of applicability: The law holds in the limit of small polymer radius, measured relative to the Debye screening length. From the limiting law, we determine the individual ion excess values for single-salt electrolytes. We also consider the mean-field excess sum away from the limiting region, and we relate this quantity to the osmotic pressure of a dilute polyelectrolyte solution. Finally, we consider numerical simulations of many-body polymer-electrolyte solutions. We conclude that the limiting law often accurately describes the screening of physical charged polymers of interest, such as extended DNA.

  18. The challenges and opportunities in cumulative effects assessment

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Melissa M., E-mail: mfoley@usgs.gov [U.S. Geological Survey, Pacific Coastal and Marine Science Center, 400 Natural Bridges, Dr., Santa Cruz, CA 95060 (United States); Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Mease, Lindley A., E-mail: lamease@stanford.edu [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Martone, Rebecca G., E-mail: rmartone@stanford.edu [Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Prahler, Erin E. [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Morrison, Tiffany H., E-mail: tiffany.morrison@jcu.edu.au [ARC Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, QLD, 4811 (Australia); Murray, Cathryn Clarke, E-mail: cmurray@pices.int [WWF-Canada, 409 Granville Street, Suite 1588, Vancouver, BC V6C 1T2 (Canada); Wojcik, Deborah, E-mail: deb.wojcik@duke.edu [Nicholas School for the Environment, Duke University, 9 Circuit Dr., Durham, NC 27708 (United States)

    2017-01-15

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  19. The challenges and opportunities in cumulative effects assessment

    International Nuclear Information System (INIS)

    Foley, Melissa M.; Mease, Lindley A.; Martone, Rebecca G.; Prahler, Erin E.; Morrison, Tiffany H.; Murray, Cathryn Clarke; Wojcik, Deborah

    2017-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  20. The challenges and opportunities in cumulative effects assessment

    Science.gov (United States)

    Foley, Melissa M.; Mease, Lindley A; Martone, Rebecca G; Prahler, Erin E; Morrison, Tiffany H; Clarke Murray, Cathryn; Wojcik, Deborah

    2016-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  1. Lindhard's polarization parameter and atomic sum rules in the local plasma approximation

    DEFF Research Database (Denmark)

    Cabrera-Trujillo, R.; Apell, P.; Oddershede, J.

    2017-01-01

    In this work, we analyze the effects of the Lindhard polarization parameter, χ, on the sum rule Sp within the local plasma approximation (LPA), as well as on the logarithmic sum rule Lp = dSp/dp, in both cases for a system in an initial excited state. We show results for a hydrogenic atom in terms of a screened charge Z* for the ground state. Our study shows that by increasing χ, the sum rule for p < 0 and p > 0 increases, and the value p = 0 provides the normalization/closure relation, which remains fixed to the number of electrons for the same initial state. When p is fixed...

  2. An Algorithm to Solve the Equal-Sum-Product Problem

    OpenAIRE

    Nyblom, M. A.; Evans, C. D.

    2013-01-01

    A recursive algorithm is constructed which finds all solutions to a class of Diophantine equations connected to the problem of determining ordered n-tuples of positive integers satisfying the property that their sum is equal to their product. An examination of the use of Binary Search Trees in implementing the algorithm into a working program is given. In addition an application of the algorithm for searching possible extra exceptional values of the equal-sum-product problem is explored after...
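
    The underlying search problem is easy to state; a brute-force sketch (not the recursive algorithm of the paper, and without the Binary Search Tree machinery) that enumerates non-decreasing n-tuples with sum equal to product:

    def equal_sum_product_tuples(n, limit):
        # All non-decreasing n-tuples with entries in [1, limit] whose sum equals their product.
        results = []

        def extend(prefix, start, total, product):
            if len(prefix) == n:
                if total == product:
                    results.append(tuple(prefix))
                return
            for v in range(start, limit + 1):
                extend(prefix + [v], v, total + v, product * v)

        extend([], 1, 0, 1)
        return results

    # n = 2 gives (2, 2); n = 3 gives (1, 2, 3); n = 5 already has three solutions.
    print(equal_sum_product_tuples(5, 10))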

  3. A Quantum Approach to Subset-Sum and Similar Problems

    OpenAIRE

    Daskin, Ammar

    2017-01-01

    In this paper, we study the subset-sum problem by using a quantum heuristic approach similar to the verification circuit of quantum Arthur-Merlin games. Under certain described assumptions, we show that the exact solution of the subset-sum problem may be obtained in polynomial time and that an exponential speed-up over the classical algorithms may be possible. We give a numerical example and discuss the complexity of the approach and its further application to the knapsack problem.
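
    For orientation, the classical pseudo-polynomial baseline that such quantum heuristics are measured against can be written as a small dynamic program; this is not the quantum circuit of the paper:

    def subset_sum(values, target):
        # Classical DP over reachable sums; returns one subset summing to target, or None.
        reachable = {0: []}          # reachable sum -> one subset achieving it
        for v in values:
            for s, subset in list(reachable.items()):
                if s + v not in reachable:
                    reachable[s + v] = subset + [v]
        return reachable.get(target)

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # e.g. [4, 5]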

  4. Managing regional cumulative effects of oil sands development in Alberta, Canada

    International Nuclear Information System (INIS)

    Spaling, H.; Zwier, J.

    2000-01-01

    This paper demonstrates an approach to regional cumulative effects management using the case of oil sands development in Alberta, Canada. The 17 existing, approved, or planned projects, all concentrated in a relatively small region, pose significant challenges for conducting and reviewing cumulative effects assessment (CEA) on a project-by-project basis. In response, stakeholders have initiated a regional cumulative effects management system that is among the first such initiatives anywhere. Advantages of this system include (1) more efficient gathering and sharing of information, including a common regional database, (2) setting acceptable regional environmental thresholds for all projects, (3) collaborative assessment of similar cumulative effects from related projects, (4) co-ordinated regulatory review and approval process for overlapping CEAs, and (5) institutional empowerment from a Regional Sustainable Development Strategy administered by a public authority. This case provides a model for integrating project-based CEA with regional management of cumulative effects. (author)

  5. Closed-form summations of Dowker's and related trigonometric sums

    International Nuclear Information System (INIS)

    Cvijović, Djurdje; Srivastava, H M

    2012-01-01

    Through a unified and relatively simple approach which uses complex contour integrals, particularly convenient integration contours and calculus of residues, closed-form summation formulas for 12 very general families of trigonometric sums are deduced. One of them is a family of cosecant sums which was first summed in closed form in a series of papers by Dowker (1987 Phys. Rev. D 36 3095–101; 1989 J. Math. Phys. 30 770–3; 1992 J. Phys. A: Math. Gen. 25 2641–8), whose method has inspired our work in this area. All of the formulas derived here involve the higher-order Bernoulli polynomials. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  6. Power loss analysis in altered tooth-sum spur gearing

    Directory of Open Access Journals (Sweden)

    Sachidananda H. K.

    2018-01-01

    The main cause of power loss, or heat dissipation, in meshed gears is the friction between the meshing gear teeth, and it is a major concern in low-rotational-speed gears; at high operating speeds, the power losses due to compression of the air-lubricant mixture (churning losses) and the windage losses due to the aerodynamic trail of the air-lubricant mixture control the total efficiency and need to be considered. Therefore, in order to improve mechanical efficiency it is necessary for the gear designer to consider these energy losses during gear tooth optimization. In this research paper the power loss analysis for a tooth-sum of 100, altered by ±4% and operating between a specified center distance, is considered. The results show that negative altered tooth-sum gearing performs better as compared to standard and positive altered tooth-sum gearing.

  7. Ramanujan sums via generalized Möbius functions and applications

    Directory of Open Access Journals (Sweden)

    Vichian Laohakosol

    2006-01-01

    A generalized Ramanujan sum (GRS) is defined by replacing the usual Möbius function in the classical Ramanujan sum with the Souriau-Hsu-Möbius function. After collecting basic properties of a GRS, mostly containing existing ones, seven aspects of a GRS are studied. The first shows that the unique representation of even functions with respect to GRSs is possible. The second is a derivation of the mean value of a GRS. The third establishes analogues of the remarkable Ramanujan's formulae connecting divisor functions with Ramanujan sums. The fourth gives a formula for the inverse of a GRS. The fifth is an analysis showing when a reciprocity law exists. The sixth treats the problem of dependence. Finally, some characterizations of completely multiplicative function using GRSs are obtained and a connection of a GRS with the number of solutions of certain congruences is indicated.
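
    For comparison, the classical (unmodified) Ramanujan sum c_q(n) that the GRS generalizes can be computed from Hölder's formula c_q(n) = μ(q/g)·φ(q)/φ(q/g), g = gcd(n, q); a minimal sketch:

    from math import gcd

    def _prime_factors(n):
        # Distinct prime factors of n mapped to their exponents.
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def mobius(n):
        f = _prime_factors(n)
        return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

    def totient(n):
        result = n
        for p in _prime_factors(n):
            result = result // p * (p - 1)
        return result

    def ramanujan_sum(q, n):
        # Hoelder's formula: c_q(n) = mu(q/g) * phi(q) / phi(q/g), with g = gcd(n, q).
        g = gcd(n, q)
        return mobius(q // g) * totient(q) // totient(q // g)

    print([ramanujan_sum(6, n) for n in range(1, 7)])  # [1, -1, -2, -1, 1, 2]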

  8. Cumulative effects of planned industrial development and climate change on marine ecosystems

    Directory of Open Access Journals (Sweden)

    Cathryn Clarke Murray

    2015-07-01

    With increasing human population, large scale climate changes, and the interaction of multiple stressors, understanding cumulative effects on marine ecosystems is increasingly important. Two major drivers of change in coastal and marine ecosystems are industrial developments with acute impacts on local ecosystems, and global climate change stressors with widespread impacts. We conducted a cumulative effects mapping analysis of the marine waters of British Columbia, Canada, under different scenarios: climate change and planned developments. At the coast-wide scale, climate change drove the largest change in cumulative effects with both widespread impacts and high vulnerability scores. Where the impacts of planned developments occur, planned industrial and pipeline activities had high cumulative effects, but the footprint of these effects was comparatively localized. Nearshore habitats were at greatest risk from planned industrial and pipeline activities; in particular, the impacts of planned pipelines on rocky intertidal habitats were predicted to cause the highest change in cumulative effects. This method of incorporating planned industrial development in cumulative effects mapping allows explicit comparison of different scenarios with the potential to be used in environmental impact assessments at various scales. Its use allows resource managers to consider cumulative effect hotspots when making decisions regarding industrial developments and avoid unacceptable cumulative effects. Management needs to consider both global and local stressors in managing marine ecosystems for the protection of biodiversity and the provisioning of ecosystem services.

  9. The effects of cumulative practice on mathematics problem solving.

    Science.gov (United States)

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  10. Magnetic susceptibility and M1 transitions in 208Pb. [Sum rules]

    Energy Technology Data Exchange (ETDEWEB)

    Traini, M; Lipparini, E; Orlandini, G; Stringari, S [Dipartimento di Matematica e Fisica, Universita di Trento, Italy

    1979-04-16

    M1 transitions in 208Pb are studied by evaluating energy-weighted and inverse energy-weighted sum rules. The role of the nuclear interaction is widely discussed. It is shown that the nuclear potential increases the energy-weighted sum rule and lowers the inverse energy-weighted sum rule, with respect to the prediction of the pure shell model. Values of strengths and excitation energies are compared with experimental results and other theoretical calculations.

  11. Sum rules for the real parts of nonforward current-particle scattering amplitudes

    International Nuclear Information System (INIS)

    Abdel-Rahman, A.M.M.

    1976-01-01

    Extending previous work, using Taha's refined infinite-momentum method, new sum rules for the real parts of nonforward current-particle scattering amplitudes are derived. The sum rules are based on covariance, causality, scaling, equal-time algebra and unsubtracted dispersion relations for the amplitudes. A comparison with the corresponding light-cone approach is made, and it is shown that the light-cone sum rules would also follow from the assumptions underlying the present work

  12. Sum rule approach to nuclear vibrations

    International Nuclear Information System (INIS)

    Suzuki, T.

    1983-01-01

    Velocity field of various collective states is explored by using sum rules for the nuclear current. It is shown that an irrotational and incompressible flow model is applicable to giant resonance states. Structure of the hydrodynamical states is discussed according to Tomonaga's microscopic theory for collective motions. (author)

  13. Super-Resolution Algorithm in Cumulative Virtual Blanking

    Science.gov (United States)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base Station hearability is solved using Cumulative Virtual Blanking. A simple simulator is presented using DS-SS signals. The results show that the MUSIC algorithm improves the time delay estimation in both cases, whether or not Cumulative Virtual Blanking was carried out.

  14. QCD sum rule studies at finite density and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Youngshin

    2010-01-21

    In-medium modifications of hadronic properties have a strong connection to the restoration of chiral symmetry in hot and/or dense medium. The in-medium spectral functions for vector and axial-vector mesons are of particular interest in this context, considering the experimental dilepton production data which signal the in-medium meson properties. In this thesis, finite energy sum rules are employed to set constraints for the in-medium spectral functions of vector and axial-vector mesons. Finite energy sum rules for the first two moments of the spectral functions are investigated with emphasis on the role of a scale parameter related to the spontaneous chiral symmetry breaking in QCD. It is demonstrated that these lowest moments of vector current spectral functions do permit an accurate sum rule analysis with controlled inputs, such as the QCD condensates of lowest dimensions. In contrast, the higher moments contain uncertainties from the higher dimensional condensates. It turns out that the factorization approximation for the four-quark condensate is not applicable in any of the cases studied in this work. The accurate sum rules for the lowest two moments of the spectral functions are used to clarify and classify the properties of vector meson spectral functions in a nuclear medium. Possible connections with the Brown-Rho scaling hypothesis are also discussed. (orig.)

  15. Analysis of LDPE-ZnO-clay nanocomposites using novel cumulative rheological parameters

    Science.gov (United States)

    Kracalik, Milan

    2017-05-01

    Polymer nanocomposites exhibit complex rheological behaviour due to physical and possibly also chemical interactions between the individual phases. Up to now, the rheology of dispersive polymer systems has usually been described by evaluating the viscosity curve (shear thinning phenomenon), the storage modulus curve (formation of a secondary plateau) or plotting information about damping behaviour (e.g. the Van Gurp-Palmen plot, comparison of the loss factor tan δ). In contrast to the evaluation of damping behaviour, values of cot δ were calculated and called the "storage factor", by analogy with the loss factor. Values of the storage factor were then integrated over a specific frequency range and called the "cumulative storage factor". In this contribution, LDPE-ZnO-clay nanocomposites with different dispersion grades (physical networks) have been prepared and characterized by both the conventional and the novel analysis approach. In addition to the cumulative storage factor, further cumulative rheological parameters such as the cumulative complex viscosity, cumulative complex modulus and cumulative storage modulus have been introduced.
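
    As described above, the cumulative storage factor is an integral of cot δ over the measured frequency window; a minimal numerical sketch with hypothetical sweep data (the frequencies and loss angles are assumptions):

    from math import tan, radians

    def cumulative_storage_factor(frequencies, loss_angles_deg):
        # Trapezoidal integral of the "storage factor" cot(delta) over frequency.
        cot = [1.0 / tan(radians(d)) for d in loss_angles_deg]
        total = 0.0
        for i in range(1, len(frequencies)):
            total += 0.5 * (cot[i] + cot[i - 1]) * (frequencies[i] - frequencies[i - 1])
        return total

    omega = [0.1, 1.0, 10.0, 100.0]        # rad/s, hypothetical sweep points
    delta = [70.0, 55.0, 40.0, 30.0]       # loss angle in degrees, hypothetical
    print(cumulative_storage_factor(omega, delta))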

  16. A sum rule description of giant resonances at finite temperature

    International Nuclear Information System (INIS)

    Meyer, J.; Quentin, P.; Brack, M.

    1983-01-01

    A generalization of the sum rule approach to collective motion at finite temperature is presented. The m_1 and m_{-1} sum rules for the isovector dipole and the isoscalar monopole electric modes have been evaluated with the modified SkM force for the ^{208}Pb nucleus. The variation of the resulting giant resonance energies with temperature is discussed. (orig.)

  17. Root and Critical Point Behaviors of Certain Sums of Polynomials

    Indian Academy of Sciences (India)

    There is an extensive literature concerning roots of sums of polynomials, and many papers and books ([5], [6], [7]) have been written about them. Perhaps the most immediate question about sums of polynomials, A + B = C, is: given bounds for the roots of A and B, what bounds can be given for the roots of C? By Fell [3], if ...

  18. Sum rules for baryonic vertex functions and the proton wave function in QCD

    International Nuclear Information System (INIS)

    Lavelle, M.J.

    1985-01-01

    We consider light-cone sum rules for vertex functions involving baryon-meson couplings. These sum rules relate the non-perturbative, and experimentally known, coupling constants to the moments of the wave function of the proton state. Our results for these moments are consistent with those obtained from QCD sum rules for two-point functions. (orig.)

  19. An Efficient Algorithm to Calculate the Minkowski Sum of Convex 3D Polyhedra

    NARCIS (Netherlands)

    Bekker, Henk; Roerdink, Jos B.T.M.

    2001-01-01

    A new method is presented to calculate the Minkowski sum of two convex polyhedra A and B in 3D. The polyhedra are represented as graphs whose edges are given attributes. From these attributed graphs the attributed graph of the Minkowski sum is constructed; this graph is then transformed into the Minkowski sum of A and B. The running ...
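
    The paper's graph-based construction is not reproduced here, but the property it exploits can be checked with a brute-force baseline: for convex A and B, the Minkowski sum equals the convex hull of all pairwise vertex sums. The random polyhedra below are placeholders.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Two convex polyhedra, each taken as the hull of a random 3D point cloud
A_pts = rng.uniform(-1.0, 1.0, size=(12, 3))
B_pts = rng.uniform(-0.5, 0.5, size=(12, 3))
A = A_pts[ConvexHull(A_pts).vertices]
B = B_pts[ConvexHull(B_pts).vertices]

# Brute force: Minkowski sum of convex sets = convex hull of all pairwise vertex sums
pairwise_sums = (A[:, None, :] + B[None, :, :]).reshape(-1, 3)
msum = ConvexHull(pairwise_sums)

print("vertices of A, B, A+B:", len(A), len(B), len(msum.vertices))
```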

  20. Sum rules for the spontaneous chiral symmetry breaking parameters of QCD

    International Nuclear Information System (INIS)

    Craigie, N.S.; Stern, J.

    1981-03-01

    We discuss, in the spirit of the work of Shifman, Vainshtein and Zakharov (SVZ), sum rules involving current-current vacuum correlation functions whose Wilson expansion starts off with the operators q̄q or (q̄q)^2, and which thus provide information about the chiral symmetry breaking parameters of QCD. We point out that, under the type of crude approximations made by SVZ, a value of ⟨q̄q⟩_vac of order (250 MeV)^3 is obtained from one of these sum rules, in agreement with current expectations. Further, we show that a Borel-transformed version of the Weinberg sum rule for VV - AA current products seems only to make sense for an A_1 mass close to 1.3 GeV, and makes little sense with the current algebra mass M_A = √2 M_ρ. We also give an estimate for the chiral symmetry breaking parameters μ_1^6, proportional to ⟨(q̄_L λ^a γ_μ q_L)(q̄_R λ^a γ^μ q_R)⟩_vac, entering in the Weinberg sum rules, and μ_2^6 = g^2⟨…⟩_vac entering in a new sum rule we propose involving antisymmetric tensor currents J = q̄ σ_{μν} q. (author)

  1. Status of the new Sum-Trigger system for the MAGIC telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Garcia, Jezabel; Schweizer, Thomas; Nakajima, Daisuke [Max Planck Institute for Physics, Muenchen (Germany); Dazzi, Francesco [Dipartimento di Fisica dell' Universita di Udine (Italy); INFN, sez. di Trieste (Italy)

    2013-07-01

    MAGIC is a system of two 17 m Imaging Air Cherenkov Telescopes for gamma-ray astronomy operating in stereo mode. The telescopes are located at about 2,200 metres above sea level at the Observatorio del Roque de los Muchachos (ORM) on the Canary island of La Palma. Lowering the energy threshold of Cherenkov telescopes is crucial for the observation of pulsars, high-redshift AGNs and GRBs. The Sum-Trigger, based on the analogue sum of a patch of pixels, has a lower threshold than conventional digital triggers. The Sum-Trigger principle was proven experimentally in 2007 by decreasing the energy threshold of the first MAGIC telescope (then operating in mono mode) from 55 GeV down to 25 GeV. The first VHE detection of the Crab Pulsar was achieved thanks to this low threshold. After the upgrade of the MAGIC I and MAGIC II cameras and readout systems, we plan to install a new Sum-Trigger system in both telescopes in summer 2013. This trigger system will be operated for the first time in stereo mode. At the conference we report on the status and performance of the new Sum-Trigger-II system.

  2. The black hole interior and a curious sum rule

    International Nuclear Information System (INIS)

    Giveon, Amit; Itzhaki, Nissan; Troost, Jan

    2014-01-01

    We analyze the Euclidean geometry near non-extremal NS5-branes in string theory, including regions beyond the horizon and beyond the singularity of the black brane. The various regions have an exact description in string theory, in terms of cigar, trumpet and negative level minimal model conformal field theories. We study the worldsheet elliptic genera of these three superconformal theories, and show that their sum vanishes. We speculate on the significance of this curious sum rule for black hole physics

  3. Dispersion relations and sum rules for natural optical activity

    International Nuclear Information System (INIS)

    Thomaz, M.T.; Nussenzveig, H.M.

    1981-06-01

    Dispersion relations and sum rules are derived for the complex rotatory power of an arbitrary linear (nonmagnetic) isotropic medium showing natural optical activity. Both previously known dispersion relations and sum rules as well as new ones are obtained. It is shown that the Rosenfeld-Condon dispersion formula is inconsistent with the expected asymptotic behavior at high frequencies. A new dispersion formula based on quantum electrodynamics removes this inconsistency; however, it still requires modification in the low-frequency limit. (Author)

  4. The black hole interior and a curious sum rule

    Energy Technology Data Exchange (ETDEWEB)

    Giveon, Amit [Racah Institute of Physics, The Hebrew University, Jerusalem, 91904 (Israel); Itzhaki, Nissan [Physics Department, Tel-Aviv University, Ramat-Aviv, 69978 (Israel); Troost, Jan [Laboratoire de Physique Théorique, Unité Mixte du CNRS et de l’École Normale Supérieure, associée à l’Université Pierre et Marie Curie 6, UMR 8549 École Normale Supérieure, 24 Rue Lhomond, Paris 75005 (France)

    2014-03-12

    We analyze the Euclidean geometry near non-extremal NS5-branes in string theory, including regions beyond the horizon and beyond the singularity of the black brane. The various regions have an exact description in string theory, in terms of cigar, trumpet and negative level minimal model conformal field theories. We study the worldsheet elliptic genera of these three superconformal theories, and show that their sum vanishes. We speculate on the significance of this curious sum rule for black hole physics.

  5. Expansion formulae for characteristics of cumulative cost in finite horizon production models

    NARCIS (Netherlands)

    Ayhan, H.; Schlegel, S.

    2001-01-01

    We consider the expected value and the tail probability of cumulative shortage and holding cost (i.e. the probability that cumulative cost is more than a certain value) in finite horizon production models. An exact expression is provided for the expected value of the cumulative cost for general

  6. Cumulative Trauma Among Mayas Living in Southeast Florida.

    Science.gov (United States)

    Millender, Eugenia I; Lowe, John

    2017-06-01

    Mayas, having experienced genocide, exile, and severe poverty, are at high risk for the consequences of cumulative trauma that continually resurfaces through current fear of an uncertain future. Little is known about the mental health and alcohol use status of this population. This correlational study explored the relationship of cumulative trauma to social determinants of health (years in the United States, education, health insurance status, marital status, and employment), psychological health (depression symptoms), and health behaviors (alcohol use) among 102 Guatemalan Mayas living in Southeast Florida. The results of this study indicated that, as specific social determinants of health and cumulative trauma increased, depression symptoms (particularly among women) and the risk for harmful alcohol use (particularly among men) increased. Identifying risk factors at an early stage, before serious disease or problems become manifest, allows early screening and thus early identification, early treatment, and better outcomes.

  7. A Finer Classification of the Unit Sum Number of the Ring of Integers ...

    Indian Academy of Sciences (India)

    Here we introduce a finer classification for the unit sum number of a ring, and in this new classification we completely determine the unit sum number of the ring of integers of a quadratic field. Further, we obtain some results on cubic complex fields for which one can decide whether the unit sum number is … or ∞. Then we ...

  8. Sums of two-dimensional spectral triples

    DEFF Research Database (Denmark)

    Christensen, Erik; Ivan, Cristina

    2007-01-01

    construct a sum of two dimensional modules which reflects some aspects of the topological dimensions of the compact metric space, but this will only give the metric back approximately. At the end we make an explicit computation of the last module for the unit interval in. The metric is recovered exactly...

  9. Sums over geometries and improvements on the mean field approximation

    International Nuclear Information System (INIS)

    Sacksteder, Vincent E. IV

    2007-01-01

    The saddle points of a Lagrangian due to Efetov are analyzed. This Lagrangian was originally proposed as a tool for calculating systematic corrections to the Bethe approximation, a mean-field approximation which is important in statistical mechanics, glasses, coding theory, and combinatorial optimization. Detailed analysis shows that the trivial saddle point generates a sum over geometries reminiscent of dynamically triangulated quantum gravity, which suggests new possibilities to design sums over geometries for the specific purpose of obtaining improved mean-field approximations to D-dimensional theories. In the case of the Efetov theory, the dominant geometries are locally treelike, and the sum over geometries diverges in a way that is similar to quantum gravity's divergence when all topologies are included. Expertise from the field of dynamically triangulated quantum gravity about sums over geometries may be able to remedy these defects and fulfill the Efetov theory's original promise. The other saddle points of the Efetov Lagrangian are also analyzed; the Hessian at these points is nonnormal and pseudo-Hermitian, which is unusual for bosonic theories. The standard formula for Gaussian integrals is generalized to nonnormal kernels

  10. Limit theorems for multi-indexed sums of random variables

    CERN Document Server

    Klesov, Oleg

    2014-01-01

    Presenting the first unified treatment of limit theorems for multiple sums of independent random variables, this volume fills an important gap in the field. Several new results are introduced, even in the classical setting, as well as some new approaches that are simpler than those already established in the literature. In particular, new proofs of the strong law of large numbers and the Hajek-Renyi inequality are detailed. Applications of the described theory include Gibbs fields, spin glasses, polymer models, image analysis and random shapes. Limit theorems form the backbone of probability theory and statistical theory alike. The theory of multiple sums of random variables is a direct generalization of the classical study of limit theorems, whose importance and wide application in science is unquestionable. However, to date, the subject of multiple sums has only been treated in journals. The results described in this book will be of interest to advanced undergraduates, graduate students and researchers who ...

  11. Convergence problems of Coulomb and multipole sums in crystals

    International Nuclear Information System (INIS)

    Kholopov, Evgenii V

    2004-01-01

    Different ways of calculating Coulomb and dipole sums over crystal lattices are analyzed comparatively. It is shown that the currently alleged disagreement between various approaches originates in ignoring the requirement for the self-consistency of surface conditions, which are of fundamental importance due to the long-range nature of the bulk interactions that these sums describe. This is especially true of surfaces arising when direct sums for infinite translation-invariant structures are truncated. The charge conditions for actual surfaces being self-consistently adjusted to the bulk state are formally the same as those on the truncation surface, consistent with the concept of the thermodynamic limit for the bulk-state absolute equilibrium and with the fact that the surface energy contribution in this case is, naturally, statistically small compared to the bulk contribution. Two-point multipole expansions are briefly discussed, and the problems associated with the boundary of their convergence circle are pointed out. (reviews of topical problems)

  12. Origin of path independence between cumulative CO2 emissions and global warming

    Science.gov (United States)

    Seshadri, Ashwin K.

    2017-11-01

    Observations and GCMs exhibit approximate proportionality between cumulative carbon dioxide (CO2) emissions and global warming. Here we identify sufficient conditions for the relationship between cumulative CO2 emissions and global warming to be independent of the path of CO2 emissions; referred to as "path independence". Our starting point is a closed form expression for global warming in a two-box energy balance model (EBM), which depends explicitly on cumulative emissions, airborne fraction and time. Path independence requires that this function can be approximated as depending on cumulative emissions alone. We show that path independence arises from weak constraints, occurring if the timescale for changes in cumulative emissions (equal to ratio between cumulative emissions and emissions rate) is small compared to the timescale for changes in airborne fraction (which depends on CO2 uptake), and also small relative to a derived climate model parameter called the damping-timescale, which is related to the rate at which deep-ocean warming affects global warming. Effects of uncertainties in the climate model and carbon cycle are examined. Large deep-ocean heat capacity in the Earth system is not necessary for path independence, which appears resilient to climate modeling uncertainties. However long time-constants in the Earth system carbon cycle are essential, ensuring that airborne fraction changes slowly with timescale much longer than the timescale for changes in cumulative emissions. Therefore path independence between cumulative emissions and warming cannot arise for short-lived greenhouse gases.

  13. Sum Rules of Charm CP Asymmetries beyond the SU(3)_{F} Limit.

    Science.gov (United States)

    Müller, Sarah; Nierste, Ulrich; Schacht, Stefan

    2015-12-18

    We find new sum rules between direct CP asymmetries in D meson decays with coefficients that can be determined from a global fit to branching ratio data. Our sum rules eliminate the penguin topologies P and PA, which cannot be determined from branching ratios. In this way, we can make predictions about direct CP asymmetries in the standard model without ad hoc assumptions on the sizes of penguin diagrams. We consistently include first-order SU(3)_{F} breaking in the topological amplitudes extracted from the branching ratios. By confronting our sum rules with future precise data from LHCb and Belle II, one will identify or constrain new-physics contributions to P or PA. The first sum rule correlates the CP asymmetries a_{CP}^{dir} in D^{0}→K^{+}K^{-}, D^{0}→π^{+}π^{-}, and D^{0}→π^{0}π^{0}. We study the region of the a_{CP}^{dir}(D^{0}→π^{+}π^{-})-a_{CP}^{dir}(D^{0}→π^{0}π^{0}) plane allowed by current data and find that our sum rule excludes more than half of the allowed region at 95% C.L. Our second sum rule correlates the direct CP asymmetries in D^{+}→K̄^{0}K^{+}, D_{s}^{+}→K^{0}π^{+}, and D_{s}^{+}→K^{+}π^{0}.

  14. Cumulative effects of wind turbines. Volume 3: Report on results of consultations on cumulative effects of wind turbines on birds

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This report gives details of the consultations held in developing the consensus approach taken in assessing the cumulative effects of wind turbines. Contributions on bird issues, and views of stakeholders, the Countryside Council for Wales, electric utilities, Scottish Natural Heritage, and the National Wind Power Association are reported. The scoping of key species groups, where cumulative effects might be expected, consideration of other developments, the significance of any adverse effects, mitigation, regional capacity assessments, and predictive models are discussed. Topics considered at two stakeholder workshops are outlined in the appendices.

  15. Higher order cumulants in colorless partonic plasma

    Energy Technology Data Exchange (ETDEWEB)

    Cherif, S. [Sciences and Technologies Department, University of Ghardaia, Ghardaia, Algiers (Algeria); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ahmed, M. A. A. [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Department of Physics, Taiz University in Turba, Taiz (Yemen); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ladrem, M., E-mail: mladrem@yahoo.fr [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria)

    2016-06-10

    Any physical system considered for studying the QCD deconfinement phase transition certainly has a finite volume, so finite-size effects are inevitably present. This renders the location of the phase transition and the determination of its order an extremely difficult task, even in the simplest known cases. In order to identify and locate the colorless QCD deconfinement transition point in finite volume, T_0(V), a new approach based on the finite-size cumulant expansion of the order parameter and the ℒ_{m,n}-method is used. We have shown that both the higher-order cumulants and their ratios, associated with the thermodynamical fluctuations of the order parameter, behave in a distinctive way across the QCD deconfinement phase transition, revealing pronounced oscillations in the transition region. The sign structure and the oscillatory behavior of these quantities in the vicinity of the deconfinement phase transition point might be a sensitive probe and may allow one to elucidate their relation to the QCD phase transition point. In the context of our model, we have shown that the finite-volume transition point is always associated with the appearance of a particular point in all the higher-order cumulants under consideration.

  16. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient; the first optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest in two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling ...
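
    The two efficient estimators themselves are not sketched here; the snippet below only sets up the quantity being estimated — the tail probability of a sum of correlated lognormals — with a crude Monte Carlo baseline whose relative error degrades exactly in the rare-event regime the paper targets. The dimension, correlation and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, rho, threshold, n = 5, 0.4, 40.0, 1_000_000
mean = np.zeros(d)
cov = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)    # exchangeable correlation

Z = rng.multivariate_normal(mean, cov, size=n)            # correlated Gaussians
S = np.exp(Z).sum(axis=1)                                 # sum of correlated lognormals

p_hat = np.mean(S > threshold)
rel_err = np.sqrt(p_hat * (1.0 - p_hat) / n) / p_hat      # relative std. error of crude MC
print(f"P(S > {threshold}) ~ {p_hat:.2e}  (relative error ~ {rel_err:.1%})")
```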

  17. Cumulative effects of forest management activities: how might they occur?

    Science.gov (United States)

    R. M. Rice; R. B. Thomas

    1985-01-01

    Concerns are often voiced about possible environmental damage as the result of the cumulative sedimentation effects of logging and forest road construction. In response to these concerns, National Forests are developing procedures to reduce the possibility that their activities may lead to unacceptable cumulative effects

  18. Using neural networks to represent potential surfaces as sums of products.

    Science.gov (United States)

    Manzhos, Sergei; Carrington, Tucker

    2006-11-21

    By using exponential activation functions with a neural network (NN) method, we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product-representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
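
    The key algebraic point — that a single hidden layer with exponential activations is automatically a sum of products of one-dimensional functions — can be checked directly. The sizes and weights below are arbitrary, and no actual potential fitting is performed.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 3, 8                          # 3 coordinates, 8 hidden neurons (arbitrary)
W = 0.3 * rng.standard_normal((N, D))
b = 0.1 * rng.standard_normal(N)
c = rng.standard_normal(N)
x = rng.uniform(-1.0, 1.0, size=D)   # one configuration point

# Network form: V(x) = sum_n c_n * exp(w_n . x + b_n)
V_net = np.sum(c * np.exp(W @ x + b))

# Sum-of-products form: each term factorizes as a product of 1D functions
# g_{n,d}(x_d) = exp(w_{n,d} * x_d + b_n / D)
V_sop = np.sum(c * np.prod(np.exp(W * x + b[:, None] / D), axis=1))

print(np.isclose(V_net, V_sop))      # True: the two forms are identical
```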

  19. Application of Higher-Order Cumulant in Fault Diagnosis of Rolling Bearing

    International Nuclear Information System (INIS)

    Shen, Yongjun; Yang, Shaopu; Wang, Junfeng

    2013-01-01

    In this paper a new method of pattern recognition based on higher-order cumulants and envelope analysis is presented. The core of this new method is to construct analytic signals from the given signals and obtain the envelope signals first, and then to compute and compare the higher-order cumulants of the envelope signals. The higher-order cumulants can be used as a characteristic quantity to distinguish the given signals. As an example, this method is applied to fault diagnosis of the 197726 rolling bearing of a freight locomotive. Comparisons of the second-, third- and fourth-order cumulants of the envelope signals from different vibration signals of the rolling bearing show that this new method can discriminate the normal signal and two fault signals distinctly.
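
    A rough sketch of the pipeline described above — analytic signal, envelope, then low-order cumulants of the envelope — is given below on synthetic vibration-like signals. The "fault" is simulated by periodic impacts exciting a high-frequency resonance rather than measured bearing data, and unbiased k-statistics stand in for the cumulants.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kstat

rng = np.random.default_rng(0)
fs = 12_000
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic signals (illustrative only)
healthy = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
impacts = (np.sin(2 * np.pi * 90 * t) > 0.995).astype(float)      # short impacts, ~90 per second
faulty = healthy + impacts * np.sin(2 * np.pi * 3_000 * t)

for name, x in (("healthy", healthy), ("faulty", faulty)):
    envelope = np.abs(hilbert(x))                 # envelope of the analytic signal
    c2, c3, c4 = (kstat(envelope, n) for n in (2, 3, 4))
    print(f"{name:8s}  c2={c2:.4f}  c3={c3:.4f}  c4={c4:.4f}")
```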

  20. Second harmonic generation and sum frequency generation

    International Nuclear Information System (INIS)

    Pellin, M.J.; Biwer, B.M.; Schauer, M.W.; Frye, J.M.; Gruen, D.M.

    1990-01-01

    Second harmonic generation and sum frequency generation are increasingly being used as in situ surface probes. These techniques are coherent and inherently surface sensitive by the nature of the medium's response to intense laser light. Here we review these two techniques using aqueous corrosion as an example problem. Aqueous corrosion of technologically important materials such as Fe, Ni and Cr proceeds from a reduced metal surface, with layer-by-layer growth of oxide films mitigated by compositional changes in the chemical makeup of the growing film. Passivation of the metal surface is achieved after growth of only a few tens of atomic layers of metal oxide. Surface Second Harmonic Generation and a related nonlinear laser technique, Sum Frequency Generation, have demonstrated an ability to probe the surface composition of growing films even in the presence of aqueous solutions. 96 refs., 4 figs

  1. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    Science.gov (United States)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to the issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model formulations to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. In the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf, but the selection of the individual component weights suffers at the update step. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating ...
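
    Ref. [2]'s adaptive weight update is not reproduced here; the snippet only shows the structural ingredient it builds on — a state pdf held as a weighted sum of Gaussians, with each component pushed through a nonlinear model by an EKF-style linearization. All numbers and the scalar model are invented.

```python
import numpy as np

# Gaussian sum representation of a 1D state pdf (weights, means, variances are assumptions)
w = np.array([0.3, 0.5, 0.2])          # component weights, sum to 1
mu = np.array([-1.0, 0.0, 1.5])         # component means
P = np.array([0.20, 0.10, 0.30])        # component variances
Q = 0.05                                # process-noise variance

f = np.sin                              # illustrative nonlinear state transition
df = np.cos                             # its Jacobian

# Component-wise propagation (weights unchanged in the basic Gaussian sum filter)
mu_pred = f(mu)
F = df(mu)
P_pred = F**2 * P + Q

for wi, mi, pi in zip(w, mu_pred, P_pred):
    print(f"weight={wi:.2f}  mean={mi:+.3f}  var={pi:.3f}")
```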

  2. Hadronic final states and sum rules in deep inelastic processes

    International Nuclear Information System (INIS)

    Pal, B.K.

    1977-01-01

    In order to get maximum information on the hadronic final states and sum rules in deep inelastic processes, Regge phenomenology and the quark-parton model have been used. A unified picture for the production of hadrons of type i as a function of the Bjorken and Feynman variables, with only one adjustable parameter, is formulated. The results of neutrino experiments and the production of charm particles are discussed in the context of the sum rules. (author)

  3. A zero-sum monetary system, interest rates, and implications

    OpenAIRE

    Hanley, Brian P.

    2015-01-01

    To the knowledge of the author, this is the first time it has been shown that interest rates that are extremely high by modern standards (100% and higher) are necessary within a zero-sum monetary system, and not just driven by greed. Extreme interest rates that appeared in various places and times reinforce the idea that hard money may have contributed to high rates of interest. Here a model is presented that examines the interest rate required to succeed as an investor in a zero-sum fixed qu...

  4. 29 CFR Appendix A to Part 4022 - Lump Sum Mortality Rates

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Lump Sum Mortality Rates A Appendix A to Part 4022 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION COVERAGE AND BENEFITS BENEFITS PAYABLE IN TERMINATED SINGLE-EMPLOYER PLANS Pt. 4022, App. A Appendix A to Part 4022—Lump Sum Mortality...

  5. Elaboration of a concept for the cumulative environmental exposure assessment of biocides

    Energy Technology Data Exchange (ETDEWEB)

    Gross, Rita; Bunke, Dirk; Moch, Katja [Oeko-Institut e.V. - Institut fuer Angewandte Oekologie e.V., Freiburg im Breisgau (Germany); Gartiser, Stefan [Hydrotox GmbH, Freiburg im Breisgau (Germany)

    2011-12-15

    Article 10(1) of the EU Biocidal Products Directive 98/8/EC (BPD) requires that, for the inclusion of an active substance in Annex I, Annex IA or IB, cumulation effects from the use of biocidal products containing the same active substance shall be taken into account where relevant. The study demonstrates the feasibility of a technical realisation of Article 10(1) of the BPD and elaborates a first concept for the cumulative environmental exposure assessment of biocides. Existing requirements concerning cumulative assessments in other regulatory frameworks have been evaluated and their applicability to biocides has been examined. Technical terms and definitions used in this context were documented with the aim of harmonising terminology with other frameworks and of establishing a precise definition within the BPD. Furthermore, the application conditions of biocidal products have been analysed to find out where cumulative exposure assessments may be relevant. Different parameters were identified which might serve as indicators for the relevance of cumulative exposure assessments. These indicators were then integrated into a flow chart by means of which the relevance of cumulative exposure assessments can be checked. Finally, proposals for the technical performance of cumulative exposure assessments within the Review Programme have been elaborated, with the aim of bringing the results of the project into the upcoming development and harmonisation processes at EU level. (orig.)

  6. A Novel Noncircular MUSIC Algorithm Based on the Concept of the Difference and Sum Coarray.

    Science.gov (United States)

    Chen, Zhenhong; Ding, Yingtao; Ren, Shiwei; Chen, Zhiming

    2018-01-25

    In this paper, we propose a vectorized noncircular MUSIC (VNCM) algorithm based on the concept of the coarray, which can construct the difference and sum (diff-sum) coarray, for direction finding of the noncircular (NC) quasi-stationary sources. Utilizing both the NC property and the concept of the Khatri-Rao product, the proposed method can be applied to not only the ULA but also sparse arrays. In addition, we utilize the quasi-stationary characteristic instead of the spatial smoothing method to solve the coherent issue generated by the Khatri-Rao product operation so that the available degree of freedom (DOF) of the constructed virtual array will not be reduced by half. Compared with the traditional NC virtual array obtained in the NC MUSIC method, the diff-sum coarray achieves a higher number of DOFs as it comprises both the difference set and the sum set. Due to the complementarity between the difference set and the sum set for the coprime array, we choose the coprime array with multiperiod subarrays (CAMpS) as the array model and summarize the properties of the corresponding diff-sum coarray. Furthermore, we develop a diff-sum coprime array with multiperiod subarrays (DsCAMpS) whose diff-sum coarray has a higher DOF. Simulation results validate the effectiveness of the proposed method and the high DOF of the diff-sum coarray.
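
    The gain in virtual aperture can be illustrated with a plain coprime array (the multi-period CAMpS/DsCAMpS geometries of the paper are not reproduced): compare the number of physical sensors with the number of distinct lags in the difference set alone and in the combined diff–sum set. The coprime pair below is an arbitrary choice.

```python
import numpy as np

M, N = 3, 5                                      # coprime pair (arbitrary)
subarray_1 = M * np.arange(N)                    # 0, 3, 6, 9, 12
subarray_2 = N * np.arange(2 * M)                # 0, 5, 10, ..., 25
positions = np.unique(np.concatenate([subarray_1, subarray_2]))

diff_set = np.unique(positions[:, None] - positions[None, :])
sum_set = np.unique(positions[:, None] + positions[None, :])
diff_sum = np.unique(np.concatenate([diff_set, sum_set, -sum_set]))

print("physical sensors        :", positions.size)
print("difference-coarray lags :", diff_set.size)
print("diff-sum coarray lags   :", diff_sum.size)
```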

  7. A Novel Noncircular MUSIC Algorithm Based on the Concept of the Difference and Sum Coarray

    Science.gov (United States)

    Chen, Zhenhong; Ding, Yingtao; Chen, Zhiming

    2018-01-01

    In this paper, we propose a vectorized noncircular MUSIC (VNCM) algorithm based on the concept of the coarray, which can construct the difference and sum (diff–sum) coarray, for direction finding of the noncircular (NC) quasi-stationary sources. Utilizing both the NC property and the concept of the Khatri–Rao product, the proposed method can be applied to not only the ULA but also sparse arrays. In addition, we utilize the quasi-stationary characteristic instead of the spatial smoothing method to solve the coherent issue generated by the Khatri–Rao product operation so that the available degree of freedom (DOF) of the constructed virtual array will not be reduced by half. Compared with the traditional NC virtual array obtained in the NC MUSIC method, the diff–sum coarray achieves a higher number of DOFs as it comprises both the difference set and the sum set. Due to the complementarity between the difference set and the sum set for the coprime array, we choose the coprime array with multiperiod subarrays (CAMpS) as the array model and summarize the properties of the corresponding diff–sum coarray. Furthermore, we develop a diff–sum coprime array with multiperiod subarrays (DsCAMpS) whose diff–sum coarray has a higher DOF. Simulation results validate the effectiveness of the proposed method and the high DOF of the diff–sum coarray. PMID:29370138

  8. A Novel Noncircular MUSIC Algorithm Based on the Concept of the Difference and Sum Coarray

    Directory of Open Access Journals (Sweden)

    Zhenhong Chen

    2018-01-01

    In this paper, we propose a vectorized noncircular MUSIC (VNCM) algorithm based on the concept of the coarray, which can construct the difference and sum (diff–sum) coarray, for direction finding of the noncircular (NC) quasi-stationary sources. Utilizing both the NC property and the concept of the Khatri–Rao product, the proposed method can be applied to not only the ULA but also sparse arrays. In addition, we utilize the quasi-stationary characteristic instead of the spatial smoothing method to solve the coherent issue generated by the Khatri–Rao product operation so that the available degree of freedom (DOF) of the constructed virtual array will not be reduced by half. Compared with the traditional NC virtual array obtained in the NC MUSIC method, the diff–sum coarray achieves a higher number of DOFs as it comprises both the difference set and the sum set. Due to the complementarity between the difference set and the sum set for the coprime array, we choose the coprime array with multiperiod subarrays (CAMpS) as the array model and summarize the properties of the corresponding diff–sum coarray. Furthermore, we develop a diff–sum coprime array with multiperiod subarrays (DsCAMpS) whose diff–sum coarray has a higher DOF. Simulation results validate the effectiveness of the proposed method and the high DOF of the diff–sum coarray.

  9. Cumulative carbon as a policy framework for achieving climate stabilization

    Science.gov (United States)

    Matthews, H. Damon; Solomon, Susan; Pierrehumbert, Raymond

    2012-01-01

    The primary objective of the United Nations Framework Convention on Climate Change is to stabilize greenhouse gas concentrations at a level that will avoid dangerous climate impacts. However, greenhouse gas concentration stabilization is an awkward framework within which to assess dangerous climate change on account of the significant lag between a given concentration level and the eventual equilibrium temperature change. By contrast, recent research has shown that global temperature change can be well described by a given cumulative carbon emissions budget. Here, we propose that cumulative carbon emissions represent an alternative framework that is applicable both as a tool for climate mitigation as well as for the assessment of potential climate impacts. We show first that both atmospheric CO2 concentration at a given year and the associated temperature change are generally associated with a unique cumulative carbon emissions budget that is largely independent of the emissions scenario. The rate of global temperature change can therefore be related to first order to the rate of increase of cumulative carbon emissions. However, transient warming over the next century will also be strongly affected by emissions of shorter lived forcing agents such as aerosols and methane. Non-CO2 emissions therefore contribute to uncertainty in the cumulative carbon budget associated with near-term temperature targets, and may suggest the need for a mitigation approach that considers separately short- and long-lived gas emissions. By contrast, long-term temperature change remains primarily associated with total cumulative carbon emissions owing to the much longer atmospheric residence time of CO2 relative to other major climate forcing agents. PMID:22869803

  10. Cumulative keyboard strokes: a possible risk factor for carpal tunnel syndrome

    Directory of Open Access Journals (Sweden)

    Eleftheriou Andreas

    2012-08-01

    Background: Contradictory reports have been published regarding the association of Carpal Tunnel Syndrome (CTS) and the use of computer keyboards. Previous studies did not take into account the cumulative exposure to keyboard strokes among computer workers. The aim of the present study was to investigate the association between cumulative keyboard use (keyboard strokes) and CTS. Methods: Employees (461) from a governmental data entry & processing unit agreed to participate (response rate: 84.1%) in a cross-sectional study. A questionnaire was distributed to the participants to obtain information on socio-demographics and risk factors for CTS. The participants were examined for signs and symptoms related to CTS and were asked if they had a previous history of, or surgery for, CTS. The cumulative number of keyboard strokes per worker per year was calculated using the payroll registry. Two case definitions for CTS were used. The first included subjects with a personal history of or surgery for CTS, while the second included subjects belonging to the first case definition plus those identified through clinical examination. Results: Multivariate analysis, used for both case definitions, indicated that employees with high cumulative exposure to keyboard strokes were at increased risk of CTS (case definition A: OR = 2.23, 95% CI = 1.09-4.52; case definition B: OR = 2.41, 95% CI = 1.36-4.25). A dose-response pattern between cumulative exposure to keyboard strokes and CTS was revealed (p < …). Conclusions: The present study indicated a possible association between cumulative exposure to keyboard strokes and development of CTS. Cumulative exposure to keyboard strokes should be taken into account as an exposure indicator in the exposure assessment of computer workers. Further research is needed in order to test the results of the current study and assess causality between cumulative keyboard strokes and ...

  11. Derivation of sum rules for quark and baryon fields. [light-like charges

    Energy Technology Data Exchange (ETDEWEB)

    Bongardt, K [Karlsruhe Univ. (TH) (Germany, F.R.). Inst. fuer Theoretische Kernphysik

    1978-08-21

    In an analogous way to the Weinberg sum rules, two spectral-function sum rules for quark and baryon fields are derived by means of the concept of lightlike charges. The baryon sum rules are valid for the case of SU(3) as well as for SU(4), and the one-particle approximation yields a linear mass relation. This relation is not in disagreement with the normal linear GMO formula for the baryons. The calculated masses of the first resonance states agree very well with the experimental data.

  12. A Global Optimization Algorithm for Sum of Linear Ratios Problem

    OpenAIRE

    Yuelin Gao; Siqiao Jin

    2013-01-01

    We equivalently transform the sum-of-linear-ratios programming problem into a bilinear programming problem; then, by using the linear characteristics of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear programming problem is given, which determines a lower bound on the optimal value of the original problem. Therefore, a branch and bound algorithm for solving the sum-of-linear-ratios programming problem is put forward, and the c...

  13. Sum rules for four-spinon dynamic structure factor in XXX model

    International Nuclear Information System (INIS)

    Si Lakhal, B.; Abada, A.

    2005-01-01

    In the context of the antiferromagnetic spin-1/2 Heisenberg quantum spin chain (XXX model), we estimate the contribution of the exact four-spinon dynamic structure factor S_4 by calculating a number of sum rules that the total dynamic structure factor S is known to satisfy exactly. These sum rules are: the static susceptibility, the integrated intensity, the total integrated intensity, the first frequency moment and the nearest-neighbor correlation function. We find that the contribution of S_4 is between 1% and 2.5%, depending on the sum rule, whereas the contribution of the exact two-spinon dynamic structure factor S_2 is between 70% and 75%. The calculations are numerical and Monte Carlo based. Good statistics are obtained

  14. Cumulative effective dose associated with radiography and CT of adolescents with spinal injuries.

    Science.gov (United States)

    Lemburg, Stefan P; Peters, Soeren A; Roggenland, Daniela; Nicolas, Volkmar; Heyer, Christoph M

    2010-12-01

    The purpose of this study was to analyze the quantity and distribution of cumulative effective doses in diagnostic imaging of adolescents with spinal injuries. At a level 1 trauma center from July 2003 through June 2009, imaging procedures during initial evaluation and hospitalization and after discharge of all patients 10-20 years old with spinal fractures were retrospectively analyzed. The cumulative effective doses for all imaging studies were calculated, and the doses to patients with spinal injuries who had multiple traumatic injuries were compared with the doses to patients with spinal injuries but without multiple injuries. The significance level was set at 5%. Imaging studies of 72 patients (32 with multiple injuries; average age, 17.5 years) entailed a median cumulative effective dose of 18.89 mSv. Patients with multiple injuries had a significantly higher total cumulative effective dose (29.70 versus 10.86 mSv, p < […]) […] cumulative effective dose to multiple-injury patients during the initial evaluation (18.39 versus 2.83 mSv, p < […]) […] cumulative effective dose. Adolescents with spinal injuries receive a cumulative effective dose equal to that of adult trauma patients and nearly three times that of pediatric trauma patients. Areas of focus in lowering cumulative effective dose should be appropriate initial estimation of trauma severity and careful selection of CT scan parameters.

  15. Evolution of costly explicit memory and cumulative culture.

    Science.gov (United States)

    Nakamaru, Mayuko

    2016-06-21

    Humans can acquire new information and modify it (cumulative culture) based on their learning and memory abilities, especially explicit memory, through the processes of encoding, consolidation, storage, and retrieval. Explicit memory is categorized into semantic and episodic memories. Animals have semantic memory, while episodic memory is unique to humans and essential for innovation and the evolution of culture. As both episodic and semantic memory are needed for innovation, the evolution of explicit memory influences the evolution of culture. However, previous theoretical studies have shown that environmental fluctuations influence the evolution of imitation (social learning) and innovation (individual learning) and assume that memory is not an evolutionary trait. If individuals can store and retrieve acquired information properly, they can modify it and innovate new information. Therefore, being able to store and retrieve information is essential from the perspective of cultural evolution. However, if both storage and retrieval were too costly, forgetting and relearning would have an advantage over storing and retrieving acquired information. In this study, using mathematical analysis and individual-based simulations, we investigate whether cumulative culture can promote the coevolution of costly memory and social and individual learning, assuming that cumulative culture improves the fitness of each individual. The conclusions are: (1) without cumulative culture, a social learning cost is essential for the evolution of storage-retrieval. Costly storage-retrieval can evolve with individual learning but costly social learning does not evolve. When low-cost social learning evolves, the repetition of forgetting and learning is favored more than the evolution of costly storage-retrieval, even though a cultural trait improves the fitness. (2) When cumulative culture exists and improves fitness, storage-retrieval can evolve with social and/or individual learning, which

  16. Incorporating X–ray summing into gamma–gamma signature quantification

    International Nuclear Information System (INIS)

    Britton, R.; Jackson, M.J.; Davies, A.V.

    2016-01-01

    A method for quantifying coincidence signatures has been extended to incorporate the effects of X–ray summing, and tested using a high–efficiency γ–γ system. An X–ray library has been created, allowing all possible γ, X–ray and conversion electron cascades to be generated. The equations for calculating efficiency and cascade summing corrected coincidence signature probabilities have also been extended from a two γ, two detector ‘special case’ to an arbitrarily large system. The coincidence library generated is fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and the half–life of the cascade, allowing the user to quickly identify coincidence signatures of interest. The method and software described is inherently flexible, as it only requires evaluated nuclear data, an X–ray library, and accurate efficiency characterisations to quickly and easily calculate coincidence signature probabilities for a variety of systems. Additional uses for the software include the fast identification of γ coincidence signals with required multiplicities and branching ratios, identification of the optimal coincidence signatures to measure for a particular system, and the calculation of cascade summing corrections for single detector systems. - Highlights: • Method for incorporating X-ray summing into coincidence measurements developed. • Calculation routines have been extended to an arbitrarily large detector system, and re-written to take advantage of multiple computing cores. • Data collected in list-mode with all events timestamped for offline coincidence analysis. • Coincidence analysis of environmental samples will dramatically improve the detection sensitivity achievable.

  17. CUMULATE ROCKS ASSOCIATED WITH CARBONATE ASSIMILATION, HORTAVÆR COMPLEX, NORTH-CENTRAL NORWAY

    Science.gov (United States)

    Barnes, C. G.; Prestvik, T.; Li, Y.

    2009-12-01

    The Hortavær igneous complex intruded high-grade metamorphic rocks of the Caledonian Helgeland Nappe Complex at ca. 466 Ma. The complex is an unusual mafic-silicic layered intrusion (MASLI) because the principal felsic rock type is syenite and because the syenite formed in situ rather than by deep-seated partial melting of crustal rocks. Magma differentiation in the complex was by assimilation, primarily of calc-silicate rocks and melts with contributions from marble and semi-pelites, plus fractional crystallization. The effect of assimilation of calcite-rich rocks was to enhance stability of fassaitic clinopyroxene at the expense of olivine, which resulted in alkali-rich residual melts and lowering of silica activity. This combination of MASLI-style emplacement and carbonate assimilation produced three types of cumulate rocks: (1) Syenitic cumulates formed by liquid-crystal separation. As sheets of mafic magma were loaded on crystal-rich syenitic magma, residual liquid was expelled, penetrating the overlying mafic sheets in flame structures, and leaving a cumulate syenite. (2) Reaction cumulates. Carbonate assimilation, illustrated by a simple assimilation reaction: olivine + calcite + melt = clinopyroxene + CO2 resulted in cpx-rich cumulates such as clinopyroxenite, gabbro, and mela-monzodiorite, many of which contain igneous calcite. (3) Magmatic skarns. Calc-silicate host rocks underwent partial melting during assimilation, yielding a Ca-rich melt as the principal assimilated material and permitting extensive reaction with surrounding magma to form Kspar + cpx + garnet-rich ‘cumulate’ rocks. Cumulate types (2) and (3) do not reflect traditional views of cumulate rocks but instead result from a series of melt-present discontinuous (peritectic) reactions and partial melting of calc-silicate xenoliths. In the Hortavær complex, such cumulates are evident because of the distinctive peritectic cumulate assemblages. It is unclear whether assimilation of

  18. Simulations of charge summing and threshold dispersion effects in Medipix3

    International Nuclear Information System (INIS)

    Pennicard, D.; Ballabriga, R.; Llopart, X.; Campbell, M.; Graafsma, H.

    2011-01-01

    A novel feature of the Medipix3 photon-counting pixel readout chip is inter-pixel communication. By summing together the signals from neighbouring pixels at a series of 'summing nodes', and assigning each hit to the node with the highest signal, the chip can compensate for charge-sharing effects. However, previous experimental tests have demonstrated that the node-to-node variation in the detector's response is very large. Using computer simulations, it is shown that this variation is due to threshold dispersion, which results in many hits being assigned to whichever summing node in the vicinity has the lowest threshold level. A reduction in threshold variation would attenuate but not solve this issue. A new charge summing and hit assignment process is proposed, where the signals in individual pixels are used to determine the hit location, and then signals from neighbouring pixels are summed to determine whether the total photon energy is above threshold. In simulation, this new mode accurately assigns each hit to the pixel with the highest pulse height without any losses or double counting. - Research highlights: → Medipix3 readout chip compensates charge sharing using inter-pixel communication. → In initial production run, the flat-field response is unexpectedly nonuniform. → This effect is reproduced in simulation, and is caused by threshold dispersion. → A new inter-pixel communication process is proposed. → Simulations demonstrate the new process should give much better uniformity.

  19. EPA Workshop on Epigenetics and Cumulative Risk ...

    Science.gov (United States)

    Agenda Download the Workshop Agenda (PDF) The workshop included presentations and discussions by scientific experts pertaining to three topics (i.e., epigenetic changes associated with diverse stressors, key science considerations in understanding epigenetic changes, and practical application of epigenetic tools to address cumulative risks from environmental stressors), to address several questions under each topic, and included an opportunity for attendees to participate in break-out groups, provide comments and ask questions. Workshop Goals The workshop seeks to examine the opportunity for use of aggregate epigenetic change as an indicator in cumulative risk assessment for populations exposed to multiple stressors that affect epigenetic status. Epigenetic changes are specific molecular changes around DNA that alter expression of genes. Epigenetic changes include DNA methylation, formation of histone adducts, and changes in micro RNAs. Research today indicates that epigenetic changes are involved in many chronic diseases (cancer, cardiovascular disease, obesity, diabetes, mental health disorders, and asthma). Research has also linked a wide range of stressors including pollution and social factors with occurrence of epigenetic alterations. Epigenetic changes have the potential to reflect impacts of risk factors across multiple stages of life. Only recently receiving attention is the nexus between the factors of cumulative exposure to environmental

  20. Hyperscaling breakdown and Ising spin glasses: The Binder cumulant

    Science.gov (United States)

    Lundow, P. H.; Campbell, I. A.

    2018-02-01

    Among the Renormalization Group Theory scaling rules relating critical exponents, there are hyperscaling rules involving the dimension of the system. It is well known that in Ising models hyperscaling breaks down above the upper critical dimension. It was shown by Schwartz (1991) that the standard Josephson hyperscaling rule can also break down in Ising systems with quenched random interactions. A related Renormalization Group Theory hyperscaling rule links the critical exponents for the normalized Binder cumulant and the correlation length in the thermodynamic limit. An appropriate scaling approach for analyzing measurements from criticality to infinite temperature is first outlined. Numerical data on the scaling of the normalized correlation length and the normalized Binder cumulant are shown for the canonical Ising ferromagnet model in dimension three where hyperscaling holds, for the Ising ferromagnet in dimension five (so above the upper critical dimension) where hyperscaling breaks down, and then for Ising spin glass models in dimension three where the quenched interactions are random. For the Ising spin glasses there is a breakdown of the normalized Binder cumulant hyperscaling relation in the thermodynamic limit regime, with a return to size independent Binder cumulant values in the finite-size scaling regime around the critical region.
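
    For readers unfamiliar with the quantity itself, the normalized Binder cumulant U = 1 − ⟨m⁴⟩ / (3⟨m²⟩²) is easy to evaluate from magnetization samples; the two toy distributions below (a Gaussian and a sharp symmetric double peak) are stand-ins for disordered- and ordered-phase behaviour, not Ising or spin-glass data.

```python
import numpy as np

rng = np.random.default_rng(1)

def binder(m):
    """Normalized Binder cumulant U = 1 - <m^4> / (3 <m^2>^2)."""
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

# Toy magnetization samples (illustrative, not simulation measurements)
m_disordered = rng.normal(0.0, 0.1, size=200_000)                       # Gaussian -> U ~ 0
m_ordered = rng.choice([-0.5, 0.5], size=200_000) + rng.normal(0.0, 0.02, size=200_000)

print(f"disordered-like: U = {binder(m_disordered):+.3f}")   # close to 0
print(f"ordered-like   : U = {binder(m_ordered):+.3f}")      # close to 2/3
```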

  1. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, that parallels column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  2. Semi-direct sums of Lie algebras and continuous integrable couplings

    International Nuclear Information System (INIS)

    Ma Wenxiu; Xu Xixiang; Zhang Yufeng

    2006-01-01

    A relation between semi-direct sums of Lie algebras and integrable couplings of continuous soliton equations is presented, and correspondingly, a feasible way to construct integrable couplings is furnished. A direct application to the AKNS spectral problem leads to a novel hierarchy of integrable couplings of the AKNS hierarchy of soliton equations. It is also indicated that the study of integrable couplings using semi-direct sums of Lie algebras is an important step towards complete classification of integrable systems

  3. Summary report of a workshop on establishing cumulative effects thresholds : a suggested approach for establishing cumulative effects thresholds in a Yukon context

    International Nuclear Information System (INIS)

    2003-01-01

    Increasingly, thresholds are being used as a land and cumulative effects assessment and management tool. To assist in the management of wildlife species such as woodland caribou, the Department of Indian and Northern Affairs (DIAND) Environment Directorate, Yukon sponsored a workshop to develop and use cumulative thresholds in the Yukon. The approximately 30 participants reviewed recent initiatives in the Yukon and other jurisdictions. The workshop is expected to help formulate a strategic vision for implementing cumulative effects thresholds in the Yukon. The key to success resides in building relationships with Umbrella Final Agreement (UFA) Boards, the Development Assessment Process (DAP), and the Yukon Environmental and Socio-Economic Assessment Act (YESAA). Broad support is required within an integrated resource management framework. The workshop featured discussions on current science and theory of cumulative effects thresholds. Potential data and implementation issues were also discussed. It was concluded that thresholds are useful and scientifically defensible. The threshold research results obtained in Alberta, British Columbia and the Northwest Territories are applicable to the Yukon. One of the best tools for establishing and tracking thresholds is habitat effectiveness. Effects must be monitored and tracked. Biologists must share their information with decision makers. Interagency coordination and assistance should be facilitated through the establishment of working groups. Regional land use plans should include thresholds. 7 refs.

  4. Generalizations of some zero sum theorems

    Indian Academy of Sciences (India)

    Let G be an abelian group of order n, written additively. The Davenport constant D(G) is defined to be the smallest natural number t such that any sequence of length t over G has a non-empty subsequence whose sum is zero. Another combinatorial invariant E(G) (known as the EGZ constant) is the smallest natural number t ...

  5. A practical comparison of methods to assess sum-of-products

    International Nuclear Information System (INIS)

    Rauzy, A.; Chatelet, E.; Dutuit, Y.; Berenguer, C.

    2003-01-01

    Many methods have been proposed in the literature to assess the probability of a sum-of-products. This problem has been shown to be computationally hard (namely #P-hard). Therefore, algorithms can be compared only from a practical point of view. In this article, we propose first an efficient implementation of the pivotal decomposition method. This kind of algorithm is widely used in the Artificial Intelligence framework. It is unfortunately almost never considered in the reliability engineering framework, except as a pedagogical tool. We report experimental results that show that this method is in general much more efficient than classical methods that rewrite the sum-of-products under study into an equivalent sum of disjoint products. Then, we derive from our method a factorization algorithm to be used as a preprocessing method for binary decision diagrams. We show by means of experimental results that this latter approach outperforms the former ones
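
    The pivotal decomposition benchmarked here is, at its core, Shannon expansion: condition on one variable at a time and weight the two branches by that variable's probability. The sketch below is only an illustration of that idea under the assumption of independent Boolean variables and products of positive literals (as with minimal cut sets); it is not the implementation compared in the article, and the function name is made up.

```python
def sop_probability(products, probs):
    """Probability that a sum-of-products over independent Boolean variables is true,
    computed by pivotal (Shannon) decomposition. `products` is a list of tuples of
    variable indices (positive literals only); `probs` maps each variable to P(var = True)."""
    def prob(terms, fixed):
        live = []
        for t in terms:
            if any(fixed.get(v) is False for v in t):
                continue                                    # term already falsified
            remaining = tuple(v for v in t if fixed.get(v) is not True)
            if not remaining:
                return 1.0                                  # some term already satisfied
            live.append(remaining)
        if not live:
            return 0.0                                      # every term falsified
        x = live[0][0]                                      # pivot on one live variable
        return (probs[x] * prob(live, {**fixed, x: True})
                + (1 - probs[x]) * prob(live, {**fixed, x: False}))
    return prob(products, {})

# Pr[(x1 and x2) or (x2 and x3)] with each variable true with probability 0.9
print(sop_probability([(1, 2), (2, 3)], {1: 0.9, 2: 0.9, 3: 0.9}))   # -> 0.891
```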

  6. Dynamical local field, compressibility, and frequency sum rules for quasiparticles

    International Nuclear Information System (INIS)

    Morawetz, Klaus

    2002-01-01

    The finite temperature dynamical response function including the dynamical local field is derived within a quasiparticle picture for interacting one-, two-, and three-dimensional Fermi systems. The correlations are assumed to be given by a density-dependent effective mass, quasiparticle energy shift, and relaxation time. The latter one describes disorder or collisional effects. This parametrization of correlations includes local-density functionals as a special case and is therefore applicable for density-functional theories. With a single static local field, the third-order frequency sum rule can be fulfilled simultaneously with the compressibility sum rule by relating the effective mass and quasiparticle energy shift to the structure function or pair-correlation function. Consequently, solely local-density functionals without taking into account effective masses cannot fulfill both sum rules simultaneously with a static local field. The comparison to the Monte Carlo data seems to support such a quasiparticle picture

  7. Demonstration of a Quantum Nondemolition Sum Gate

    DEFF Research Database (Denmark)

    Yoshikawa, J.; Miwa, Y.; Huck, Alexander

    2008-01-01

    The sum gate is the canonical two-mode gate for universal quantum computation based on continuous quantum variables. It represents the natural analogue to a qubit C-NOT gate. In addition, the continuous-variable gate describes a quantum nondemolition (QND) interaction between the quadrature...

  8. A fast summation method for oscillatory lattice sums

    Science.gov (United States)

    Denlinger, Ryan; Gimbutas, Zydrunas; Greengard, Leslie; Rokhlin, Vladimir

    2017-02-01

    We present a fast summation method for lattice sums of the type which arise when solving wave scattering problems with periodic boundary conditions. While there are a variety of effective algorithms in the literature for such calculations, the approach presented here is new and leads to a rigorous analysis of Wood's anomalies. These arise when illuminating a grating at specific combinations of the angle of incidence and the frequency of the wave, for which the lattice sums diverge. They were discovered by Wood in 1902 as singularities in the spectral response. The primary tools in our approach are the Euler-Maclaurin formula and a steepest descent argument. The resulting algorithm has super-algebraic convergence and requires only milliseconds of CPU time.

  9. Aumann and Serrano’s economic index of risk for sums of gambles

    Directory of Open Access Journals (Sweden)

    Minqiang Li

    2014-12-01

    We study Aumann and Serrano's (2008) risk index for sums of gambles that are not independent. If the dependent parts are similarly ordered, then the risk index of the sum is always larger than the minimum of the risk indices of the two gambles. For negative dependence, the risk index of the sum is always smaller than the maximum. The above results agree well with our intuitions about risk diversification. These results point out another attractive property of Aumann and Serrano's risk index. These properties are potentially useful for risk assessment purposes of financial securities.
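
    For a single gamble g with positive expectation and at least one possible loss, the Aumann-Serrano index is the unique R > 0 solving E[exp(-g/R)] = 1. The snippet below is a minimal numerical sketch of that defining equation for a discrete gamble (bisection on a = 1/R); the function name, bracketing and iteration count are illustrative choices, not taken from the paper.

```python
import math

def aumann_serrano_index(outcomes, probs):
    """Aumann-Serrano (2008) riskiness R of a discrete gamble: the unique R > 0
    with E[exp(-g/R)] = 1. Requires positive expectation and at least one loss.
    Solved by bisection on a = 1/R; a sketch, not the paper's own code."""
    def excess(a):                       # E[exp(-a*g)] - 1
        return sum(p * math.exp(-a * x) for x, p in zip(outcomes, probs)) - 1.0

    worst_loss = max(-x for x in outcomes if x < 0)
    a_lo, a_hi = 1e-12, 700.0 / worst_loss      # upper end keeps exp() in floating range
    # excess < 0 for tiny a (positive mean), > 0 near a_hi (losses dominate)
    for _ in range(200):
        a_mid = 0.5 * (a_lo + a_hi)
        if excess(a_mid) < 0:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 1.0 / (0.5 * (a_lo + a_hi))

# Example: win 105 or lose 100 with equal probability
print(round(aumann_serrano_index([105, -100], [0.5, 0.5]), 1))
```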

  10. 40 CFR 35.910-5 - Additional allotments of previously withheld sums.

    Science.gov (United States)

    2010-07-01

    ...-Clean Water Act § 35.910-5 Additional allotments of previously withheld sums. (a) A total sum of $9... Kansas 53,794,200 Kentucky 90,430,800 Louisiana 71,712,250 Maine 78,495,200 Maryland 297,705,300... Montana 12,378,200 Nebraska 38,539,500 Nevada 31,839,800 New Hampshire 77,199,350 New Jersey 660,830,500...

  11. On the divergence of triangular and eccentric spherical sums of double Fourier series

    Energy Technology Data Exchange (ETDEWEB)

    Karagulyan, G A [Institute of Mathematics, National Academy of Sciences of Armenia, Yerevan (Armenia)

    2016-01-31

    We construct a continuous function on the torus with almost everywhere divergent triangular sums of double Fourier series. We also prove an analogous theorem for eccentric spherical sums. Bibliography: 14 titles.

  12. On the divergence of triangular and eccentric spherical sums of double Fourier series

    International Nuclear Information System (INIS)

    Karagulyan, G A

    2016-01-01

    We construct a continuous function on the torus with almost everywhere divergent triangular sums of double Fourier series. We also prove an analogous theorem for eccentric spherical sums. Bibliography: 14 titles

  13. Continuum contributions to dipole oscillator-strength sum rules for hydrogen in finite basis sets

    DEFF Research Database (Denmark)

    Oddershede, Jens; Ogilvie, John F.; Sauer, Stephan P. A.

    2017-01-01

    Calculations of the continuum contributions to dipole oscillator sum rules for hydrogen are performed using both exact and basis-set representations of the stick spectra of the continuum wave function. We show that the same results are obtained for the sum rules in both cases, but that the convergence towards the final results with increasing excitation energies included in the sum over states is slower in the basis-set cases when we use the best basis. We argue also that this conclusion most likely holds also for larger atoms or molecules.

  14. Mismatch or cumulative stress : Toward an integrated hypothesis of programming effects

    NARCIS (Netherlands)

    Nederhof, Esther; Schmidt, Mathias V.

    2012-01-01

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the

  15. QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering

    International Nuclear Information System (INIS)

    Ravindran, V.; Neerven, W.L. van

    2001-01-01

    In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore we will make a distinction between fundamental sum rules which can be derived from quantum field theory and those which are of a phenomenological origin. Using current algebra techniques the fundamental sum rules can be expressed in terms of expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron which are determined by the underlying flavour group SU(n)_F. In this case one can show that there exists an intimate relation between the appearance of power and QCD corrections. The above features do not hold for phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and they mostly depend on certain assumptions made for the structure functions like super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD

  16. Finite-volume cumulant expansion in QCD-colorless plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ladrem, M. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Physics Department, Algiers (Algeria); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ahmed, M.A.A. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Taiz University in Turba, Physics Department, Taiz (Yemen); Alfull, Z.Z. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Cherif, S. [ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ghardaia University, Sciences and Technologies Department, Ghardaia (Algeria)

    2015-09-15

    Due to finite-size effects, the localization of the phase transition in finite systems and the determination of its order become an extremely difficult task, even in the simplest known cases. In order to identify and locate the finite-volume transition point T_0(V) of the QCD deconfinement phase transition to a colorless QGP, we have developed a new approach using the finite-size cumulant expansion of the order parameter and the L_{mn}-method. The first six cumulants C_1,...,C_6 with the corresponding under-normalized ratios (skewness Σ, kurtosis κ, pentosis Π_±, and hexosis H_{1,2,3}) and three unnormalized combinations of them (O = σ²κΣ⁻¹, U = σ⁻²Σ⁻¹, N = σ²κ) are calculated and studied as functions of (T, V). A new approach, unifying in a clear and consistent way the definitions of cumulant ratios, is proposed. A numerical FSS analysis of the obtained results has allowed us to locate accurately the finite-volume transition point. The extracted transition temperature value T_0(V) agrees with that expected, T_0^N(V), from the order parameter and the thermal susceptibility χ_T(T, V), according to the standard procedure of localization, to within about 2%. In addition to this, a very good correlation factor is obtained, proving the validity of our cumulants method. The agreement of our results with those obtained by means of other models is remarkable. (orig.)

  17. Second-moment sum rules for correlation functions in a classical ionic mixture

    NARCIS (Netherlands)

    Suttorp, L.G.; Ebeling, W.

    1992-01-01

    The complete set of second-moment sum rules for the correlation functions of arbitrarily high order describing a classical multi-component ionic mixture in equilibrium is derived from the grand-canonical ensemble. The connection of these sum rules with the large-scale behaviour of fluctuations in an

  18. Fibonacci Identities via the Determinant Sum Property

    Science.gov (United States)

    Spivey, Michael

    2006-01-01

    We use the sum property for determinants of matrices to give a three-stage proof of an identity involving Fibonacci numbers. Cassini's and d'Ocagne's Fibonacci identities are obtained at the ends of stages one and two, respectively. Catalan's Fibonacci identity is also a special case.
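
    As a quick numerical companion to the identities named above, the loop below checks Cassini's and d'Ocagne's identities directly from the Fibonacci recurrence; it is only a verification sketch, not the determinant-based proof given in the article.

```python
def fib(n):
    """Fibonacci numbers with fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cassini's identity: F(n-1)*F(n+1) - F(n)^2 == (-1)^n
for n in range(1, 12):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n

# d'Ocagne's identity: F(m)*F(n+1) - F(m+1)*F(n) == (-1)^n * F(m-n)
for m in range(2, 10):
    for n in range(1, m):
        assert fib(m) * fib(n + 1) - fib(m + 1) * fib(n) == (-1) ** n * fib(m - n)

print("Cassini and d'Ocagne identities verified for small n")
```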

  19. Computation and theory of Euler sums of generalized hyperharmonic numbers

    OpenAIRE

    Xu, Ce

    2017-01-01

    Recently, Dil and Boyadzhiev \\cite{AD2015} proved an explicit formula for the sum of multiple harmonic numbers whose indices are the sequence $\\left( {{{\\left\\{ 0 \\right\\}}_r},1} \\right)$. In this paper we show that the sums of multiple harmonic numbers whose indices are the sequence $\\left( {{{\\left\\{ 0 \\right\\}}_r,1};{{\\left\\{ 1 \\right\\}}_{k-1}}} \\right)$ can be expressed in terms of (multiple) zeta values, multiple harmonic numbers and Stirling numbers of the first kind, and give an explic...

  20. Cumulative emission budgets and their implications: the case for SAFE carbon

    Science.gov (United States)

    Allen, Myles; Bowerman, Niel; Frame, David; Mason, Charles

    2010-05-01

    The risk of dangerous long-term climate change due to anthropogenic carbon dioxide emissions is predominantly determined by cumulative emissions over all time, not the rate of emission in any given year or commitment period. This has profound implications for climate mitigation policy: emission targets for specific years such as 2020 or 2050 provide no guarantee of meeting any overall cumulative emission budget. By focusing attention on short-term measures to reduce the flow of emissions, they may even exacerbate the overall long-term stock. Here we consider how climate policies might be designed explicitly to limit cumulative emissions to, for example, one trillion tonnes of carbon, a figure that has been estimated to give a most likely warming of two degrees above pre-industrial, with a likely range of 1.6-2.6 degrees. Three approaches are considered: tradable emission permits with the possibility of indefinite emission banking, carbon taxes explicitly linked to cumulative emissions and mandatory carbon sequestration. Framing mitigation policy around cumulative targets alleviates the apparent tension between climate protection and short-term consumption that bedevils any attempt to forge global agreement. We argue that the simplest and hence potentially the most effective approach might be a mandatory requirement on the fossil fuel industry to ensure that a steadily increasing fraction of fossil carbon extracted from the ground is artificially removed from the active carbon cycle through some form of sequestration. We define Sequestered Adequate Fraction of Extracted (SAFE) carbon as a source in which this sequestered fraction is anchored to cumulative emissions, increasing smoothly to reach 100% before we release the trillionth tonne. While adopting the use of SAFE carbon would increase the cost of fossil energy much as a system of emission permits or carbon taxes would, it could do so with much less explicit government intervention. We contrast this proposal

  1. Fast Inference with Min-Sum Matrix Product.

    Science.gov (United States)

    Felzenszwalb, Pedro F; McAuley, Julian J

    2011-12-01

    The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
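
    For reference, the min-sum ("tropical") matrix product itself is the straightforward O(n^3) operation below; the paper's contribution is an algorithm that beats this baseline in expectation for random inputs, which is not reproduced here.

```python
import random

def min_sum_product(A, B):
    """Naive min-sum (tropical) matrix product: C[i][j] = min_k (A[i][k] + B[k][j]).
    This is only the O(n^3) baseline; the faster expected-time algorithms from the
    paper are not implemented here."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Tiny example with uniform random entries, the input model assumed in the paper
n = 4
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]
print(min_sum_product(A, B))
```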

  2. Complexity and demographic explanations of cumulative culture.

    Science.gov (United States)

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  3. Sharing a quota on cumulative carbon emissions

    International Nuclear Information System (INIS)

    Raupach, Michael R.; Davis, Steven J.; Peters, Glen P.; Andrew, Robbie M.; Canadell, Josep G.; Ciais, Philippe

    2014-01-01

    Any limit on future global warming is associated with a quota on cumulative global CO₂ emissions. We translate this global carbon quota to regional and national scales, on a spectrum of sharing principles that extends from continuation of the present distribution of emissions to an equal per-capita distribution of cumulative emissions. A blend of these endpoints emerges as the most viable option. For a carbon quota consistent with a 2 °C warming limit (relative to pre-industrial levels), the necessary long-term mitigation rates are very challenging (typically over 5% per year), both because of strong limits on future emissions from the global carbon quota and also the likely short-term persistence in emissions growth in many regions. (authors)

  4. Childhood Cumulative Risk and Later Allostatic Load

    DEFF Research Database (Denmark)

    Doan, Stacey N; Dich, Nadya; Evans, Gary W

    2014-01-01

    Objective: The present study investigated the long-term impact of exposure to poverty-related stressors during childhood on allostatic load, an index of physiological dysregulation, and the potential mediating role of substance use. Method: Participants (n = 162) were rural children from New York State, followed for 8 years (between the ages 9 and 17). Poverty-related stress was computed using the cumulative risk approach, assessing stressors across 9 domains, including environmental, psychosocial, and demographic factors. Allostatic load captured a range of physiological responses, including cardiovascular, hypothalamic pituitary adrenal axis, sympathetic adrenal medullary system, and metabolic activity. Smoking and alcohol/drug use were tested as mediators of the hypothesized childhood risk-adolescent allostatic load relationship. Results: Cumulative risk exposure at age 9 predicted increases...

  5. Cumulative effect of reproductive factors on ideal cardiovascular health in postmenopausal women: a cross-sectional study in central south China.

    Science.gov (United States)

    Cao, Xia; Zhou, Jiansong; Yuan, Hong; Chen, Zhiheng

    2015-12-21

    The American Heart Association developed the Life's Simple 7 metric for defining cardiovascular health. Little is known, however, about whether co-occurring reproductive factors, which affect endogenous oestrogen levels during a woman's life, also influence ideal cardiovascular health in postmenopausal women. Using data from a cross-sectional study with a convenience sample of 1,625 postmenopausal women (median age, 60.0 years) in a medical health checkup program at a general hospital in central south China in 2013-2014, we examined the association between cumulative reproductive risk and ideal cardiovascular health in postmenopausal women. A cumulative risk score (range 0 to 4) was created by summing four reproductive risk factors (age at menarche, age at menopause, number of children, and pregnancy losses) present in each individual from binary variables in which 0 stands for favorable and 1 for less-than-favorable level. Ideal levels for each component in Life's Simple 7 (blood pressure, cholesterol, glucose, BMI, smoking, physical activity, and diet) were used to create an ideal Life's Simple 7 score [0-1 (low), 2, 3, 4, 5 and 6-7 (high)]. Participants with earlier age at menarche (odds ratio [OR] = 0.42 [95 % CI 0.26-0.48]), earlier age at menopause [0.46 (0.32-0.58)], who have more than three children (0.42 [0.38-0.56]) and have a history of pregnancy losses [0.76 (0.66-0.92)] were more likely to attain low (0-1) ideal Life's Simple 7 after adjustment for age. Participants were more likely to attain low (0-1) ideal Life's Simple 7 as exposure to the number of reproductive risk factors increased [OR (95 % CI) of 0.52 (0.42-0.66), 0.22 (0.16-0.26), and 0.16 (0.12-0.22) for cumulative reproductive risk scores of 1, 2, and 3 or 4, respectively, each versus 0]. The postmenopausal Chinese women with an increasing number of reproductive risk factors were progressively less likely to attain ideal levels of cardiovascular health factors.
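
    A minimal sketch of how such a cumulative reproductive risk score (0-4) can be assembled is shown below. The dichotomization cut-offs are illustrative assumptions only; the abstract states that each factor was coded 0 (favorable) or 1 (less than favorable) but does not give the exact values used.

```python
def reproductive_risk_score(age_menarche, age_menopause, n_children, pregnancy_loss):
    """Cumulative reproductive risk score (0-4): one point per less-than-favorable factor.
    The numeric cut-offs below are illustrative assumptions, not the study's values."""
    score = 0
    score += age_menarche < 12       # assumed cut-off for "earlier age at menarche"
    score += age_menopause < 47      # assumed cut-off for "earlier age at menopause"
    score += n_children > 3          # more than three children
    score += bool(pregnancy_loss)    # any history of pregnancy loss
    return int(score)

print(reproductive_risk_score(11, 45, 4, True))   # -> 4 (all four risk factors present)
```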

  6. Mapping cumulative environmental risks: examples from the EU NoMiracle project

    NARCIS (Netherlands)

    Pistocchi, A.; Groenwold, J.; Lahr, J.; Loos, M.; Mujica, M.; Ragas, A.M.J.; Rallo, R.; Sala, S.; Schlink, U.; Strebel, K.; Vighi, M.; Vizcaino, P.

    2011-01-01

    We present examples of cumulative chemical risk mapping methods developed within the NoMiracle project. The different examples illustrate the application of the concentration addition (CA) approach to pesticides at different scale, the integration in space of cumulative risks to individual organisms

  7. Mapping Cumulative Impacts of Human Activities on Marine Ecosystems

    OpenAIRE

    Seaplan

    2018-01-01

    Given the diversity of human uses and natural resources that converge in coastal waters, the potential independent and cumulative impacts of those uses on marine ecosystems are important to consider during ocean planning. This study was designed to support the development and implementation of the 2009 Massachusetts Ocean Management Plan. Its goal was to estimate and visualize the cumulative impacts of human activities on coastal and marine ecosystems in the state and federal waters off of Ma...

  8. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date...... by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. RESULTS: We show how a proportional hazards model may be used to extrapolate disease risk...... estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end of follow-up estimates across all strata...
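
    For background, the Kaplan-Meier estimator named above can be written in a few lines, with the cumulative incidence taken as 1 - S(t); with competing risks the Aalen-Johansen estimator is used instead. This is only a sketch of the standard estimator, not the calendar-time extrapolation the paper proposes.

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator. `events` is 1 for an observed event and 0 for
    censoring (e.g. end of follow-up). Returns (time, S(t), cumulative incidence) triples."""
    data = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # events observed at time t
        n_t = sum(1 for tt, _ in data if tt >= t)      # subjects still at risk at t
        surv *= 1.0 - d / n_t
        curve.append((t, surv, 1.0 - surv))
        i += sum(1 for tt, _ in data if tt == t)
    return curve

times  = [2, 3, 3, 5, 8, 8, 9, 12]
events = [1, 1, 0, 1, 1, 0, 1, 0]
for t, s, ci in kaplan_meier(times, events):
    print(t, round(s, 3), round(ci, 3))
```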

  9. The Relationship between Gender, Cumulative Adversities and ...

    African Journals Online (AJOL)

    The Relationship between Gender, Cumulative Adversities and Mental Health of Employees in ... CAs were measured in three forms (family adversities (CAFam), personal adversities ... Age of employees ranged between 18-65 years.

  10. A linear programming approach to max-sum problem: a review.

    Science.gov (United States)

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
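
    To make the max-sum labeling problem concrete: on a chain (no cycles) it is solved exactly by Viterbi-style dynamic programming, sketched below. This only illustrates the objective being maximized; the LP-relaxation machinery for general graphs that the review is about is not shown.

```python
def max_sum_chain(unary, pairwise):
    """Exact max-sum labeling on a chain by dynamic programming.
    unary[i][x]       : score of assigning label x to variable i
    pairwise[i][x][y] : score of labels (x, y) on edge (i, i+1)"""
    n, k = len(unary), len(unary[0])
    best = list(unary[0])              # best[x] = best score of a prefix ending in label x
    back = []                          # backpointers for recovering the argmax labeling
    for i in range(1, n):
        new_best, arg = [], []
        for y in range(k):
            scores = [best[x] + pairwise[i - 1][x][y] for x in range(k)]
            x_star = max(range(k), key=lambda x: scores[x])
            new_best.append(scores[x_star] + unary[i][y])
            arg.append(x_star)
        best = new_best
        back.append(arg)
    y = max(range(k), key=lambda v: best[v])
    labels = [y]
    for arg in reversed(back):
        y = arg[y]
        labels.append(y)
    return list(reversed(labels)), max(best)

unary = [[0, 1], [2, 0], [0, 3]]                    # three variables, two labels
pairwise = [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]     # reward equal neighbouring labels
print(max_sum_chain(unary, pairwise))               # -> ([0, 0, 1], 6)
```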

  11. The Sum of the Parts

    DEFF Research Database (Denmark)

    Gross, Fridolin; Green, Sara

    2017-01-01

    Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole”. Yet, it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies offer a more comprehensive approach. To addre......-up”. Specifically, we point out that system-level properties constrain lower-scale processes. Thus, large-scale modeling reveals how living systems at the same time are more and less than the sum of the parts.

  12. Simulation approach to coincidence summing in γ-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Dziri, S., E-mail: samir.dziri@iphc.cnrs.fr [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France); Nourreddine, A.; Sellam, A.; Pape, A.; Baussan, E. [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France)

    2012-07-15

    Some of the radionuclides used for efficiency calibration of a HPGe spectrometer are subject to coincidence-summing (CS) and account must be taken of the phenomenon to obtain quantitative results when counting samples to determine their activity. We have used MCNPX simulations, which do not take CS into account, to obtain γ-ray peak intensities that were compared to those observed experimentally. The loss or gain of a measured peak intensity relative to the simulated peak is attributed to CS. CS correction factors are compared with those of ETNA and GESPECOR. Application to a test sample prepared with known radionuclides gave values close to the published activities. - Highlights: • Coincidence summing occurs when the solid angle is increased. • The loss of counts leads to inaccurate efficiency curves and hence to wrong quantitative data. • To overcome this problem mono-energetic sources are needed; otherwise, comparison of MCNPX simulations with experimental data yields the coincidence-summing correction factors. • Multiplying the approximate efficiency by these factors gives the accurate efficiency.

  13. Cumulation of light nuclei

    International Nuclear Information System (INIS)

    Baldin, A.M.; Bondarev, V.K.; Golovanov, L.B.

    1977-01-01

    Limit fragmentation of light nuclei (deuterium, helium) bombarded with 8.6 GeV/c protons was investigated. Fragments (pions, protons and deuterons) were detected within the emission angle 50-150 deg with respect to the primary protons and within the momentum range 150-180 MeV/c. By the kinematics of collision of a primary proton with a target at rest, the fragments observed correspond to a target mass of up to 3 GeV. Thus, the data obtained correspond to cumulation up to the third order

  14. Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.

    Science.gov (United States)

    Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara

    2018-02-01

    Objective The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus test its efficacy in predicting the risk of low-back pain (LBP). Background A validation study of the VLI method was published in this journal reporting promising results. Although several studies highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method Both VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM). Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group formed by 1,028 unexposed subjects. Results Data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant ( p value = .6526). Conclusion When comparing VLI_CMM with raw VLI, the former failed to prove itself as a better predictor of LBP risk. Application To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, authors of future studies should investigate potential association between the VLI and other damage variables.

  15. Session: What do we know about cumulative or population impacts

    Energy Technology Data Exchange (ETDEWEB)

    Kerlinger, Paul; Manville, Al; Kendall, Bill

    2004-09-01

    This session at the Wind Energy and Birds/Bats workshop consisted of a panel discussion followed by a discussion/question and answer period. The panelists were Paul Kerlinger, Curry and Kerlinger, LLC, Al Manville, U.S. Fish and Wildlife Service, and Bill Kendall, US Geological Service. The panel addressed the potential cumulative impacts of wind turbines on bird and bat populations over time. Panel members gave brief presentations that touched on what is currently known, what laws apply, and the usefulness of population modeling. Topics addressed included which sources of modeling should be included in cumulative impacts, comparison of impacts from different modes of energy generation, as well as what research is still needed regarding cumulative impacts of wind energy development on bird and bat populations.

  16. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  17. Sum rule limitations of kinetic particle-production models

    International Nuclear Information System (INIS)

    Knoll, J.; CEA Centre d'Etudes Nucleaires de Grenoble, 38; Guet, C.

    1988-04-01

    Photoproduction and absorption sum rules generalized to systems at finite temperature provide a stringent check on the validity of kinetic models for the production of hard photons in intermediate energy nuclear collisions. We inspect such models for the case of nuclear matter at finite temperature employed in a kinetic regime corresponding to that encountered in energetic nuclear collisions, and find photon production rates which significantly exceed the limits imposed by the sum rule even under favourable concessions. This suggests that coherence effects are quite important and the production of photons cannot be considered as an incoherent addition of individual NNγ production processes. The deficiencies of present kinetic models may also apply to the production of probes such as the pion which do not couple perturbatively to the nuclear currents. (orig.)

  18. Cumulative irritation potential of topical retinoid formulations.

    Science.gov (United States)

    Leyden, James J; Grossman, Rachel; Nighland, Marge

    2008-08-01

    Localized irritation can limit treatment success with topical retinoids such as tretinoin and adapalene. The factors that influence irritant reactions have been shown to include individual skin sensitivity, the particular retinoid and concentration used, and the vehicle formulation. To compare the cutaneous tolerability of tretinoin 0.04% microsphere gel (TMG) with that of adapalene 0.3% gel and a standard tretinoin 0.025% cream. The results of 2 randomized, investigator-blinded studies of 2 to 3 weeks' duration, which utilized a split-face method to compare cumulative irritation scores induced by topical retinoids in subjects with healthy skin, were combined. Study 1 compared TMG 0.04% with adapalene 0.3% gel over 2 weeks, while study 2 compared TMG 0.04% with tretinoin 0.025% cream over 3 weeks. In study 1, TMG 0.04% was associated with significantly lower cumulative scores for erythema, dryness, and burning/stinging than adapalene 0.3% gel. However, in study 2, there were no significant differences in cumulative irritation scores between TMG 0.04% and tretinoin 0.025% cream. Measurements of erythema by a chromameter showed no significant differences between the test formulations in either study. Cutaneous tolerance of TMG 0.04% on the face was superior to that of adapalene 0.3% gel and similar to that of a standard tretinoin cream containing a lower concentration of the drug (0.025%).

  19. Tests of Cumulative Prospect Theory with graphical displays of probability

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2008-10-01

    Recent research reported evidence that contradicts cumulative prospect theory and the priority heuristic. The same body of research also violates two editing principles of original prospect theory: cancellation (the principle that people delete any attribute that is the same in both alternatives before deciding between them) and combination (the principle that people combine branches leading to the same consequence by adding their probabilities). This study was designed to replicate previous results and to test whether the violations of cumulative prospect theory might be eliminated or reduced by using formats for presentation of risky gambles in which cancellation and combination could be facilitated visually. Contrary to the idea that decision behavior contradicting cumulative prospect theory and the priority heuristic would be altered by use of these formats, however, data with two new graphical formats as well as fresh replication data continued to show the patterns of evidence that violate cumulative prospect theory, the priority heuristic, and the editing principles of combination and cancellation. Systematic violations of restricted branch independence also contradicted predictions of ``stripped'' prospect theory (subjectively weighted additive utility without the editing rules).

  20. CROSSER - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
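
    The quantity CROSSER computes can be sketched in a few lines: the point p at which the k-out-of-n system reliability R(p) equals the common component reliability p. CROSSER itself uses Newton's method; plain bisection is used below for brevity (and only the non-trivial case 1 < k < n is handled), so this is an illustration of the calculation rather than a port of the program.

```python
from math import comb

def kofn_reliability(p, k, n):
    """Reliability of a k-out-of-n system of independent components, each of reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def crossing_point(k, n, tol=1e-12):
    """Non-trivial root of R(p) - p = 0 for 1 < k < n, found by bisection
    (CROSSER uses Newton's method instead)."""
    lo, hi = tol, 1.0 - tol            # R(p) - p < 0 near 0 and > 0 near 1 when 1 < k < n
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kofn_reliability(mid, k, n) - mid < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(crossing_point(2, 3))   # 2-out-of-3 system: crossing at p = 0.5
```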

  1. On the coupling of statistic sum of canonical and large canonical ensemble of interacting particles

    International Nuclear Information System (INIS)

    Vall, A.N.

    2000-01-01

    The possibility of refining the known result, based on the analytic properties of the grand statistical sum as a function of the absolute activity, concerning the boundary integral contribution to the statistical sum is considered. A strict asymptotic relation between the statistical sums of the canonical and grand canonical ensembles of interacting particles is derived

  2. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
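
    A minimal sketch of the Bernoulli CUSUM chart discussed here: each trial adds the log-likelihood ratio of its outcome to a statistic that is reset at zero from below and signals when it reaches a decision limit h. The parameter values in the example are purely illustrative; in practice h is chosen to give a desired in-control average run length (the paper uses a minimax design criterion and steady-state ARL, neither of which is implemented below).

```python
import math

def bernoulli_cusum(observations, p0, p1, h):
    """Bernoulli CUSUM for detecting an increase in an incidence rate from p0 to p1.
    S_t = max(0, S_{t-1} + log-likelihood ratio of the t-th Bernoulli outcome);
    a signal is raised the first time S_t >= h."""
    llr1 = math.log(p1 / p0)                  # increment when an adverse event occurs
    llr0 = math.log((1 - p1) / (1 - p0))      # (negative) increment otherwise
    s = 0.0
    for t, x in enumerate(observations, start=1):
        s = max(0.0, s + (llr1 if x else llr0))
        if s >= h:
            return t, s                        # first signal time and statistic value
    return None, s                             # no signal

# Illustrative run: in-control rate 1%, chart tuned to detect a shift to 5%
obs = [0] * 200 + [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
print(bernoulli_cusum(obs, p0=0.01, p1=0.05, h=3.0))
```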

  3. Coincidence summing corrections for positron emitters in germanium gamma spectrometry

    International Nuclear Information System (INIS)

    Richardson, A.E.; Sallee, W.W.; New Mexico State Univ., Las Cruces

    1990-01-01

    For positron emitters, 511 keV annihilation quanta are in coincidence with other gamma rays in the decay scheme. If the positrons are not localized at the point of decay, annihilation quanta will be produced at a site some distance from the point of emission. The magnitude of the summing coincidence effect will depend upon the position of annihilation. A method for determining the magnitude of the summing effect for a single gamma of energy E in coincidence with the annihilation gammas from non-localized positrons has been developed which makes use of the counting data for the full energy peaks for both the gamma ray (E) and the 511 keV annihilation gammas. With this data and efficiency calibration data one can determine the average total efficiency for the annihilation positions from which 511 keV gammas originate, and thereby obtain the summing correction factor, SCF, for gamma ray (E). Application of the method to a 22 Na NIST standard gave excellent agreement of observed emission rates for the 1275 keV gamma with the NIST value for wide ranging degrees of positron localization having summing correction factors ranging from 1.021 to 1.505. The method was also applied successfully to 58 Co in neutron-irradiated nickel foils. The method shows promise as a check on the accuracy of the efficiency calibration for a particular detector geometry at the 511 keV energy and energies for other gammas associated with positron emission. (orig.)

  4. Lattice sums then and now

    CERN Document Server

    Borwein, J M; McPhedran, R C

    2013-01-01

    The study of lattice sums began when early investigators wanted to go from mechanical properties of crystals to the properties of the atoms and ions from which they were built (the literature of Madelung's constant). A parallel literature was built around the optical properties of regular lattices of atoms (initiated by Lord Rayleigh, Lorentz and Lorenz). For over a century many famous scientists and mathematicians have delved into the properties of lattices, sometimes unwittingly duplicating the work of their predecessors. Here, at last, is a comprehensive overview of the substantial body of

  5. Complexity and demographic explanations of cumulative culture.

    Directory of Open Access Journals (Sweden)

    Adrien Querbes

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  6. Bulk stress auto-correlation function in simple liquids-sum rules

    International Nuclear Information System (INIS)

    Tankeshwar, K.; Bhandari, R.; Pathak, K.N.

    1990-10-01

    Expressions for the zeroth, second and fourth frequency sum rules of the bulk stress auto-correlation function have been derived. The exact expressions involve static correlation functions of up to four particles. Because no information is available about the static quadruplet correlation function, we use a low-order decoupling approximation for it. In this work, we have obtained, separately, the sum rules for the different mechanisms of momentum transfer in the fluids. The results are expected to be useful in the study of the bulk viscosity of fluids. (author). 9 refs.

  7. Fast local fragment chaining using sum-of-pair gap costs

    DEFF Research Database (Denmark)

    Otto, Christian; Hoffmann, Steve; Gorodkin, Jan

    2011-01-01

    , and rank the fragments to improve the specificity. Results: Here we present a fast and flexible fragment chainer that for the first time also supports a sum-of-pair gap cost model. This model has proven to achieve a higher accuracy and sensitivity in its own field of application. Due to a highly time...... alignment heuristics alone. By providing both the linear and the sum-of-pair gap cost model, a wider range of application can be covered. The software clasp is available at http://www.bioinf.uni-leipzig.de/Software/clasp/....

  8. A Global Optimization Algorithm for Sum of Linear Ratios Problem

    Directory of Open Access Journals (Sweden)

    Yuelin Gao

    2013-01-01

    We equivalently transform the sum of linear ratios programming problem into a bilinear programming problem; then, by using the linear characteristics of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear programming problem is given, which can determine a lower bound on the optimal value of the original problem. Therefore, a branch and bound algorithm for solving the sum of linear ratios programming problem is put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.

  9. Family Resources and Effects on Child Behavior Problem Interventions: A Cumulative Risk Approach.

    Science.gov (United States)

    Tømmerås, Truls; Kjøbli, John

    2017-01-01

    Family resources have been associated with health care inequality in general and with social gradients in treatment outcomes for children with behavior problems. However, there is limited evidence concerning cumulative risk-the accumulation of social and economic disadvantages in a family-and whether cumulative risk moderates the outcomes of evidence-based parent training interventions. We used data from two randomized controlled trials evaluating high-intensity (n = 137) and low-intensity (n = 216) versions of Parent Management Training-Oregon (PMTO) with a 50:50 allocation between participants receiving PMTO interventions or regular care. A nine-item family cumulative risk index tapping socioeconomic resources and parental health was constructed to assess the family's exposure to risk. Autoregressive structural equation models (SEM) were run to investigate whether cumulative risk moderated child behaviors at post-treatment and follow-up (6 months). Our results showed opposite social gradients for the treatment conditions: the children exposed to cumulative risk in a pooled sample of both PMTO groups displayed lower levels of behavior problems, whereas children with identical risk exposures who received regular care experienced more problems. Furthermore, our results indicated that the social gradients differed between PMTO interventions: children exposed to cumulative risk in the low-intensity (five sessions) Brief Parent Training fared equally well as their high-resource counterparts, whereas children exposed to cumulative risk in the high-intensity PMTO (12 sessions) experienced vastly better treatment effects. Providing evidence-based parent training seems to be an effective way to counteract health care inequality, and the more intensive PMTO treatment seemed to be a particularly effective way to help families with cumulative risk.

  10. Analysis of QCD sum rule based on the maximum entropy method

    International Nuclear Information System (INIS)

    Gubler, Philipp

    2012-01-01

    The QCD sum rule was developed about thirty years ago and has been used up to the present to calculate various physical quantities of hadrons. In the conventional analyses, however, a 'pole + continuum' form has had to be assumed for the spectral function. Application of the method therefore runs into difficulties when this assumption is not satisfied. In order to avoid this difficulty, an analysis making use of the maximum entropy method (MEM) has been developed by the present author. It is reported here how far this new method can be successfully applied. In the first section, the general features of the QCD sum rule are introduced. In section 2, it is discussed why the analysis by the QCD sum rule based on the MEM is so effective. In section 3, the MEM analysis process is described: in subsection 3.1 the likelihood function and prior probability are considered, and in subsection 3.2 numerical analyses are presented. In section 4, some applications are described, starting with ρ mesons, then charmonium at finite temperature, and finally recent developments. Some figures of the spectral functions are shown. In section 5, a summary of the present analysis method and an outlook are given. (S. Funahashi)

  11. In-medium QCD sum rules for ω meson, nucleon and D meson

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Ronny

    2008-07-01

    The modifications of hadronic properties caused by an ambient nuclear medium are investigated within the scope of QCD sum rules. This is exemplified for the cases of the ω meson, the nucleon and the D meson. By virtue of the sum rules, integrated spectral densities of these hadrons are linked to properties of the QCD ground state, quantified in condensates. For the cases of the ω meson and the nucleon it is discussed how the sum rules allow a restriction of the parameter range of poorly known four-quark condensates by a comparison of experimental and theoretical knowledge. The catalog of independent four-quark condensates is covered and relations among these condensates are revealed. The behavior of four-quark condensates under the chiral symmetry group and the relation to order parameters of spontaneous chiral symmetry breaking are outlined. In this respect, also the QCD condensates appearing in differences of sum rules of chiral partners are investigated. Finally, the effects of an ambient nuclear medium on the D meson are discussed and relevant condensates are identified. (orig.)

  12. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ≈20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  13. One-Loop BPS amplitudes as BPS-state sums

    CERN Document Server

    Angelantonj, Carlo; Pioline, Boris

    2012-01-01

    Recently, we introduced a new procedure for computing a class of one-loop BPS-saturated amplitudes in String Theory, which expresses them as a sum of one-loop contributions of all perturbative BPS states in a manifestly T-duality invariant fashion. In this paper, we extend this procedure to all BPS-saturated amplitudes of the form $\int_F \Gamma_{d+k,d}\,\Phi$, with $\Phi$ being a weak (almost) holomorphic modular form of weight $-k/2$. We use the fact that any such $\Phi$ can be expressed as a linear combination of certain absolutely convergent Poincaré series, against which the fundamental domain $F$ can be unfolded. The resulting BPS-state sum neatly exhibits the singularities of the amplitude at points of gauge symmetry enhancement, in a chamber-independent fashion. We illustrate our method with concrete examples of interest in heterotic string compactifications.

  14. A discrete variational identity on semi-direct sums of Lie algebras

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Wenxiu [Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620-5700 (United States)

    2007-12-14

    The discrete variational identity under general bilinear forms on semi-direct sums of Lie algebras is established. The constant γ involved in the variational identity is determined through the corresponding solution to the stationary discrete zero-curvature equation. An application of the resulting variational identity to a class of semi-direct sums of Lie algebras in the Volterra lattice case furnishes Hamiltonian structures for the associated integrable couplings of the Volterra lattice hierarchy.

  15. A discrete variational identity on semi-direct sums of Lie algebras

    International Nuclear Information System (INIS)

    Ma, Wenxiu

    2007-01-01

    The discrete variational identity under general bilinear forms on semi-direct sums of Lie algebras is established. The constant γ involved in the variational identity is determined through the corresponding solution to the stationary discrete zero-curvature equation. An application of the resulting variational identity to a class of semi-direct sums of Lie algebras in the Volterra lattice case furnishes Hamiltonian structures for the associated integrable couplings of the Volterra lattice hierarchy

  16. The partially alternating ternary sum in an associative dialgebra

    International Nuclear Information System (INIS)

    Bremner, Murray R; Sanchez-Ortega, Juana

    2010-01-01

    The alternating ternary sum in an associative algebra, abc - acb - bac + bca + cab - cba, gives rise to the partially alternating ternary sum in an associative dialgebra with products ⊣ and ⊢ by making the argument a the center of each term. We use computer algebra to determine the polynomial identities in degree ≤9 satisfied by this new trilinear operation. In degrees 3 and 5, these identities define a new variety of partially alternating ternary algebras. We show that there is a 49-dimensional space of multilinear identities in degree 7, and we find equivalent nonlinear identities. We use the representation theory of the symmetric group to show that there are no new identities in degree 9.

  17. Implications of applying cumulative risk assessment to the workplace.

    Science.gov (United States)

    Fox, Mary A; Spicer, Kristen; Chosewood, L Casey; Susi, Pam; Johns, Douglas O; Dotson, G Scott

    2018-06-01

    Multiple changes are influencing work, workplaces and workers in the US including shifts in the main types of work and the rise of the 'gig' economy. Work and workplace changes have coincided with a decline in unions and associated advocacy for improved safety and health conditions. Risk assessment has been the primary method to inform occupational and environmental health policy and management for many types of hazards. Although often focused on one hazard at a time, risk assessment frameworks and methods have advanced toward cumulative risk assessment recognizing that exposure to a single chemical or non-chemical stressor rarely occurs in isolation. We explore how applying cumulative risk approaches may change the roles of workers and employers as they pursue improved health and safety and elucidate some of the challenges and opportunities that might arise. Application of cumulative risk assessment should result in better understanding of complex exposures and health risks with the potential to inform more effective controls and improved safety and health risk management overall. Roles and responsibilities of both employers and workers are anticipated to change with potential for a greater burden of responsibility on workers to address risk factors both inside and outside the workplace that affect health at work. A range of policies, guidance and training have helped develop cumulative risk assessment for the environmental health field and similar approaches are available to foster the practice in occupational safety and health. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. On the density of the sum of two independent Student t-random vectors

    DEFF Research Database (Denmark)

    Berg, Christian; Vignat, Christophe

    2010-01-01

    In this paper, we find an expression for the density of the sum of two independent d-dimensional Student t-random vectors X and Y with arbitrary degrees of freedom. As a byproduct we also obtain an expression for the density of the sum N+X, where N is normal and X is an independent Student t-vector. In both cases the density is given as an infinite series $\sum_{n=0}^\infty c_n f_n$, where f_n is a sequence of probability densities on R^d and c_n is a sequence of positive numbers of sum 1, i.e. the distribution of a non-negative integer-valued random variable C, which turns out to be infinitely divisible for d=1 and d=2. When d=1 and the degrees of freedom of the Student variables are equal, we recover an old result of Ruben.
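
    A numerical cross-check sketch (not the paper's series representation; the degrees of freedom 3 and 5 are arbitrary example values): for d = 1 the density of X + Y is the convolution of the two Student t densities, which can be approximated on a grid and compared against a Monte Carlo histogram.

      # Sketch: density of the sum of two independent Student t variables (d = 1)
      # via numerical convolution, checked against Monte Carlo samples.
      import numpy as np
      from scipy.stats import t

      nu1, nu2 = 3.0, 5.0                  # example degrees of freedom
      x = np.linspace(-20, 20, 4001)
      dx = x[1] - x[0]

      # Numerical convolution of the two densities on a symmetric grid.
      conv = np.convolve(t.pdf(x, nu1), t.pdf(x, nu2), mode='same') * dx

      # Monte Carlo check of the density of X + Y.
      rng = np.random.default_rng(0)
      samples = t.rvs(nu1, size=200_000, random_state=rng) + \
                t.rvs(nu2, size=200_000, random_state=rng)
      hist, edges = np.histogram(samples, bins=200, range=(-20, 20), density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      # Largest deviation; small, limited by grid resolution and sampling noise.
      print(np.max(np.abs(np.interp(centers, x, conv) - hist)))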

  19. Large-Nc quantum chromodynamics and harmonic sums

    Indian Academy of Sciences (India)

    In the large-N_c limit of QCD, two-point functions of local operators become harmonic sums. I review some properties which follow from this fact and which are relevant for phenomenological applications. This has led us to consider a class of analytic number theory functions as toy models of large-N_c QCD which also is ...

  20. On the Latent Variable Interpretation in Sum-Product Networks.

    Science.gov (United States)

    Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro

    2017-10-01

    One of the central themes in Sum-Product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields increased syntactic and semantic structure, allows the application of the EM algorithm, and enables efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that this is indeed a correct algorithm, when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
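
    The toy sketch below (my own illustration with made-up parameters, not the authors' code) shows the structural idea behind the LV interpretation: a sum node with normalized weights evaluates exactly like a mixture over the states z of a latent variable Z, S(x) = sum_z P(Z = z) C_z(x), while product nodes multiply children over disjoint variable scopes.

      # Toy SPN over two binary variables: a sum node as a marginalized latent variable.
      import numpy as np

      def leaf_bernoulli(p):
          # Leaf distribution over a single binary variable.
          return lambda x: p if x == 1 else 1.0 - p

      def product_node(children):
          # Product node: children have disjoint variable scopes.
          return lambda xs: float(np.prod([c(x) for c, x in zip(children, xs)]))

      def sum_node(weights, children):
          # Sum node: normalized weights play the role of P(Z = z).
          w = np.asarray(weights, dtype=float)
          assert np.isclose(w.sum(), 1.0)
          return lambda xs: float(np.dot(w, [c(xs) for c in children]))

      comp0 = product_node([leaf_bernoulli(0.9), leaf_bernoulli(0.2)])
      comp1 = product_node([leaf_bernoulli(0.1), leaf_bernoulli(0.7)])
      spn = sum_node([0.6, 0.4], [comp0, comp1])

      # The modeled distribution sums to one over all joint states.
      print(sum(spn((x1, x2)) for x1 in (0, 1) for x2 in (0, 1)))  # 1.0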

  1. Evaluating chiral symmetry restoration through the use of sum rules

    Directory of Open Access Journals (Sweden)

    Rapp Ralf

    2012-11-01

    Full Text Available We pursue the idea of assessing chiral restoration via in-medium modifications of hadronic spectral functions of chiral partners. The usefulness of sum rules in this endeavor is illustrated, focusing on the vector/axial-vector channel. We first present an update on obtaining quantitative results for pertinent vacuum spectral functions. These serve as a basis upon which the in-medium spectral functions can be constructed. A novel feature of our analysis of the vacuum spectral functions is the need to include excited resonances, dictated by satisfying the Weinberg-type sum rules. This includes excited states in both the vector and axial-vector channels. We also analyze the QCD sum rule for the finite temperature vector spectral function, based on a ρ spectral function tested in dilepton data which develops a shoulder at low energies. We find that the ρ′ peak flattens off, which may be a sign of chiral restoration, though a study of the finite temperature axial-vector spectral function remains to be carried out.
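
    As a reminder (not taken from the article, and written in one common normalization convention), the first two Weinberg-type sum rules constrain the difference of vector and axial-vector spectral functions in the chiral limit:

      % First and second Weinberg sum rules (chiral limit, schematic normalization)
      \begin{align}
        \int_0^\infty \mathrm{d}s \, \bigl[\rho_V(s) - \rho_A(s)\bigr] &= f_\pi^2 , \\
        \int_0^\infty \mathrm{d}s \, s \, \bigl[\rho_V(s) - \rho_A(s)\bigr] &= 0 .
      \end{align}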

  2. Relativistic and Nuclear Medium Effects on the Coulomb Sum Rule.

    Science.gov (United States)

    Cloët, Ian C; Bentz, Wolfgang; Thomas, Anthony W

    2016-01-22

    In light of the forthcoming high precision quasielastic electron scattering data from Jefferson Lab, it is timely for the various approaches to nuclear structure to make robust predictions for the associated response functions. With this in mind, we focus here on the longitudinal response function and the corresponding Coulomb sum rule for isospin-symmetric nuclear matter at various baryon densities. Using a quantum field-theoretic quark-level approach which preserves the symmetries of quantum chromodynamics, as well as exhibiting dynamical chiral symmetry breaking and quark confinement, we find a dramatic quenching of the Coulomb sum rule for momentum transfers |q|≳0.5  GeV. The main driver of this effect lies in changes to the proton Dirac form factor induced by the nuclear medium. Such a dramatic quenching of the Coulomb sum rule was not seen in a recent quantum Monte Carlo calculation for carbon, suggesting that the Jefferson Lab data may well shed new light on the explicit role of QCD in nuclei.
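
    For context, the Coulomb sum rule referred to above is commonly written (one frequently used convention; the notation is schematic and not taken from the Letter) as the longitudinal response integrated over the energy transfer and normalized by the nucleon electric form factors, so that it tends to unity at large |q| for free, uncorrelated, nonrelativistic nucleons:

      % Coulomb sum rule, schematic form (one common convention)
      \begin{equation}
        S_L(|\mathbf{q}|) \;=\;
        \frac{\int_{\omega > 0} \mathrm{d}\omega \, R_L(|\mathbf{q}|, \omega)}
             {Z \, G_{Ep}^{\,2}(Q^2) + N \, G_{En}^{\,2}(Q^2)}
        \;\longrightarrow\; 1
        \qquad (|\mathbf{q}| \ \text{large, free uncorrelated nucleons}) .
      \end{equation}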

  3. Endpoint behavior of the pion distribution amplitude in QCD sum rules with nonlocal condensates

    International Nuclear Information System (INIS)

    Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.

    2010-01-01

    Starting from the QCD sum rules with nonlocal condensates for the pion distribution amplitude, we derive another sum rule for its derivative and its "integral derivatives", defined in this work. We use this new sum rule to analyze the fine details of the pion distribution amplitude in the endpoint region x ∼ 0. The results for endpoint-suppressed and flattop (or flatlike) pion distribution amplitudes are compared with those we obtained with differential sum rules by employing two different models for the distribution of vacuum-quark virtualities. We determine the range of values of the derivatives of the pion distribution amplitude and show that endpoint-suppressed distribution amplitudes lie within this range, while those with endpoint enhancement (flat-type or Chernyak-Zhitnitsky-like) yield values outside this range.

  4. Transmission fidelity is the key to the build-up of cumulative culture.

    Science.gov (United States)

    Lewis, Hannah M; Laland, Kevin N

    2012-08-05

    Many animals have socially transmitted behavioural traditions, but human culture appears unique in that it is cumulative, i.e. human cultural traits increase in diversity and complexity over time. It is often suggested that high-fidelity cultural transmission is necessary for cumulative culture to occur through refinement, a process known as 'ratcheting', but this hypothesis has never been formally evaluated. We discuss processes of information transmission and loss of traits from a cognitive viewpoint alongside other cultural processes of novel invention (generation of entirely new traits), modification (refinement of existing traits) and combination (bringing together two established traits to generate a new trait). We develop a simple cultural transmission model that does not assume major evolutionary changes (e.g. in brain architecture) and show that small changes in the fidelity with which information is passed between individuals can lead to cumulative culture. In comparison, modification and combination have a lesser influence on, and novel invention appears unimportant to, the ratcheting process. Our findings support the idea that high-fidelity transmission is the key driver of human cumulative culture, and that progress in cumulative culture depends more on trait combination than novel invention or trait modification.
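
    The sketch below is a deliberately crude toy of the ratcheting argument (my own illustration with made-up parameter values, not the authors' model): each generation every existing trait survives transmission with probability fidelity, and a new trait is invented with a small probability, so the repertoire settles near invention_rate / (1 - fidelity) and a modest gain in fidelity yields a large cumulative gain.

      # Toy fidelity-vs-accumulation simulation (illustrative assumptions only).
      import random

      def simulate(fidelity, invention_rate=0.5, generations=200, seed=1):
          rng = random.Random(seed)
          n_traits = 0
          for _ in range(generations):
              # Each trait is passed on independently with probability `fidelity`.
              n_traits = sum(1 for _ in range(n_traits) if rng.random() < fidelity)
              # A new trait is invented with probability `invention_rate`.
              if rng.random() < invention_rate:
                  n_traits += 1
          return n_traits

      for f in (0.80, 0.95, 0.99):
          print(f, simulate(f))
      # Low fidelity: the repertoire hovers near the invention/loss balance.
      # High fidelity: traits ratchet up toward invention_rate / (1 - fidelity).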

  5. A paradox of cumulative culture.

    Science.gov (United States)

    Kobayashi, Yutaka; Wakano, Joe Yuichiro; Ohtsuki, Hisashi

    2015-08-21

    Culture can grow cumulatively if socially learnt behaviors are improved by individual learning before being passed on to the next generation. Previous authors showed that this kind of learning strategy is unlikely to be evolutionarily stable in the presence of a trade-off between learning and reproduction. This is because culture is a public good that is freely exploited by any member of the population in their model (cultural social dilemma). In this paper, we investigate the effect of vertical transmission (transmission from parents to offspring), which decreases the publicness of culture, on the evolution of cumulative culture in both infinite and finite population models. In the infinite population model, we confirm that culture accumulates largely as long as transmission is purely vertical. It turns out, however, that introduction of even slight oblique transmission drastically reduces the equilibrium level of culture. Even more surprisingly, if the population size is finite, culture hardly accumulates even under purely vertical transmission. This occurs because stochastic extinction due to random genetic drift prevents a learning strategy from accumulating enough culture. Overall, our theoretical results suggest that introducing vertical transmission alone does not really help solve the cultural social dilemma problem. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Numerical Radius Inequalities for Finite Sums of Operators

    Directory of Open Access Journals (Sweden)

    Mirmostafaee Alireza Kamel

    2014-12-01

    Full Text Available In this paper, we obtain some sharp inequalities for the numerical radius of finite sums of operators. Moreover, we give some applications of our results to the estimation of the spectral radius. We also compare our results with some known results.
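
    For reference (standard definitions and elementary bounds, not the paper's sharp inequalities), the numerical radius of a bounded operator A on a Hilbert space satisfies:

      % Numerical radius: definition and elementary bounds
      \begin{gather}
        w(A) = \sup_{\|x\| = 1} \bigl| \langle A x, x \rangle \bigr|, \qquad
        \tfrac{1}{2}\,\|A\| \le w(A) \le \|A\|, \\
        w\Bigl(\sum_{i=1}^{n} A_i\Bigr) \le \sum_{i=1}^{n} w(A_i), \qquad
        r(A) \le w(A),
      \end{gather}

    where r(A) denotes the spectral radius; the last inequality is what makes numerical-radius bounds useful for spectral-radius estimates.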

  7. SUMS preliminary design and data analysis development. [shuttle upper atmosphere mass spectrometer experiment

    Science.gov (United States)

    Hinson, E. W.

    1981-01-01

    The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.

  8. Structure functions and particle production in the cumulative region: two different exponentials

    International Nuclear Information System (INIS)

    Braun, M.; Vechernin, V.

    1997-01-01

    In the framework of the recently proposed QCD-based parton model for cumulative phenomena in interactions with nuclei, two mechanisms for particle production, the direct and the spectator one, are analyzed. It is shown that due to final-state interactions the leading terms of the direct mechanism contribution are cancelled and the spectator mechanism is the dominant one. It leads to a smaller slope of the cumulative particle production rates compared to the slope of the nuclear structure function in the cumulative region x ≥ 1, in agreement with the recent experimental data.

  9. The Relation between the Electric Conductance of Nanostructure Bridge and Friedel Sum Rule

    International Nuclear Information System (INIS)

    Kotani, Y; Shima, N; Makoshi, K

    2012-01-01

    We analyze the electric conductance through nanostructure bridges in terms of phase-shifts, which satisfy the Friedel sum rule. The phase-shifts are given by solving the eigenvalue equation obtained by extending the method applied to a single impurity problem in a metal. The local charge neutrality condition is introduced through the Friedel sum rule. It is analytically shown that the electric conductance can increase as the two electrodes separate, provided that the phase-shifts satisfy the Friedel sum rule. The increase in the distance between the two electrodes is modeled by a gradual increase of an interatomic distance.
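
    As a reminder of the textbook single-impurity form (not the generalized eigenvalue formulation developed in the paper), the Friedel sum rule ties the screened charge to the scattering phase-shifts at the Fermi energy, which is how a local charge neutrality condition can be imposed on them:

      % Friedel sum rule (single impurity, spin-degenerate case)
      \begin{equation}
        \Delta N \;=\; \frac{2}{\pi} \sum_{l} (2l + 1)\, \delta_l(E_F) ,
      \end{equation}

    where ΔN is the number of electrons displaced to screen the scatterer and δ_l(E_F) are the phase-shifts at the Fermi energy.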

  10. Minimizing Sum-MSE Implies Identical Downlink and Dual Uplink Power Allocations

    OpenAIRE

    Tenenbaum, Adam J.; Adve, Raviraj S.

    2009-01-01

    In the multiuser downlink, power allocation for linear precoders that minimize the sum of mean squared errors under a sum power constraint is a non-convex problem. Many existing algorithms solve an equivalent convex problem in the virtual uplink and apply a transformation based on uplink-downlink duality to find a downlink solution. In this letter, we analyze the optimality criteria for the power allocation subproblem in the virtual uplink, and demonstrate that the optimal solution leads to i...

  11. Which Basic Rules Underlie Social Judgments? Agency Follows a Zero-Sum Principle and Communion Follows a Non-Zero-Sum Principle.

    Science.gov (United States)

    Dufner, Michael; Leising, Daniel; Gebauer, Jochen E

    2016-05-01

    How are people who generally see others positively evaluated themselves? We propose that the answer to this question crucially hinges on the content domain: We hypothesize that Agency follows a "zero-sum principle" and therefore people who see others as high in Agency are perceived as low in Agency themselves. In contrast, we hypothesize that Communion follows a "non-zero-sum principle" and therefore people who see others as high in Communion are perceived as high in Communion themselves. We tested these hypotheses in a round-robin and a half-block study. Perceiving others as agentic was indeed linked to being perceived as low in Agency. To the contrary, perceiving others as communal was linked to being perceived as high in Communion, but only when people directly interacted with each other. These results help to clarify the nature of Agency and Communion and offer explanations for divergent findings in the literature. © 2016 by the Society for Personality and Social Psychology, Inc.

  12. Summing threshold logs in a parton shower

    International Nuclear Information System (INIS)

    Nagy, Zoltan; Soper, Davison E.

    2016-05-01

    When parton distributions are falling steeply as the momentum fractions of the partons increase, there are effects that occur at each order in α_s that combine to affect hard scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.

  13. Complexity and demographic explanations of cumulative culture

    NARCIS (Netherlands)

    Querbes, A.; Vaesen, K.; Houkes, W.N.

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological

  14. Fragmentation of tensor polarized deuterons into cumulative pions

    International Nuclear Information System (INIS)

    Afanas'ev, S.; Arkhipov, V.; Bondarev, V.

    1998-01-01

    The tensor analyzing power T_20 of the reaction d(polarized) + A → π⁻(0°) + X has been measured in the fragmentation of 9 GeV tensor polarized deuterons into pions with momenta from 3.5 to 5.3 GeV/c on hydrogen, beryllium and carbon targets. This kinematic range corresponds to the region of cumulative hadron production with the cumulative variable x_c from 1.08 to 1.76. The values of T_20 have been found to be small and consistent with positive values. This contradicts the predictions based on a direct mechanism assuming an NN collision between a high momentum nucleon in the deuteron and a target nucleon (NN → NNπ).

  15. Cumulants of heat transfer across nonlinear quantum systems

    Science.gov (United States)

    Li, Huanan; Agarwalla, Bijay Kumar; Li, Baowen; Wang, Jian-Sheng

    2013-12-01

    We consider thermal conduction across a general nonlinear phononic junction. Based on a two-time observation protocol and the nonequilibrium Green's function method, heat transfer in steady-state regimes is studied, and practical formulas for the calculation of the cumulant generating function are obtained. As an application, the general formalism is used to study anharmonic effects on the fluctuation of steady-state heat transfer across a single-site junction with a quartic nonlinear on-site pinning potential. An explicit nonlinear modification to the cumulant generating function, exact up to first order, is given, in which the Gallavotti-Cohen fluctuation symmetry is found to remain valid. Numerically, a self-consistent procedure is introduced, which works well for strong nonlinearity.
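
    For orientation (generic definitions in one common sign convention, not the junction-specific formulas derived in the paper), the long-time cumulant generating function of the heat Q_t transferred in time t, the cumulants it generates, and the Gallavotti-Cohen symmetry it is expected to obey with Δβ = β_R − β_L read:

      % Cumulant generating function of transferred heat and fluctuation symmetry
      \begin{gather}
        \mathcal{G}(\xi) = \lim_{t \to \infty} \frac{1}{t}
          \ln \bigl\langle e^{\, i \xi Q_t} \bigr\rangle , \qquad
        \langle\!\langle Q_t^{\, n} \rangle\!\rangle \simeq t \,
          \frac{\partial^n \mathcal{G}(\xi)}{\partial (i\xi)^n}\Big|_{\xi = 0} , \\
        \mathcal{G}(\xi) = \mathcal{G}\bigl(-\xi + i \,\Delta\beta\bigr), \qquad
        \Delta\beta = \beta_R - \beta_L .
      \end{gather}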

  16. Proof of Kochen–Specker Theorem: Conversion of Product Rule to Sum Rule

    International Nuclear Information System (INIS)

    Toh, S.P.; Zainuddin, Hishamuddin

    2009-01-01

    Valuation functions of observables in quantum mechanics are often expected to obey two constraints called the sum rule and the product rule. However, the Kochen–Specker (KS) theorem shows that for a Hilbert space of quantum mechanics of dimension d ≥ 3, these constraints contradict individually the assumption of value definiteness. The two rules are not unrelated, and Peres [Found. Phys. 26 (1996) 807] has conceived a method of converting the product rule into a sum rule for the case of two qubits. Here we apply this method to a proof provided by Mermin based on the product rule for a three-qubit system involving nine operators. We provide the conversion of this proof to one based on the sum rule involving ten operators.
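
    As usually stated (a standard formulation included here only for orientation), the two constraints apply to a hypothetical valuation v that assigns definite values to observables, and they are imposed only on commuting pairs:

      % Sum rule and product rule for a valuation v on commuting observables
      \begin{equation}
        v(A + B) = v(A) + v(B), \qquad
        v(A B) = v(A)\, v(B), \qquad \text{whenever } [A, B] = 0 .
      \end{equation}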

  17. 24 CFR 570.513 - Lump sum drawdown for financing of property rehabilitation activities.

    Science.gov (United States)

    2010-04-01

    Title 24, Housing and Urban Development (2010-04-01), Community Development Block Grants, Grant Administration. § 570.513 Lump sum drawdown for financing of property rehabilitation activities.

  18. Efficient yellow beam generation by intracavity sum frequency ...

    Indian Academy of Sciences (India)

    2014-02-06

    ... petition leading to instability in the output sum frequency power and ... Nd:YVO4 crystal has been identified as one of the promising laser materials for diode ... very important to achieve small laser mode size as well as proper ...

  19. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    For an arithmetical function f with an absolutely convergent Ramanujan expansion, we derive an asymptotic formula for $\sum_{n \le N} f(n)$ with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan's totient functions.
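
    For context (standard definitions, not the paper's specific asymptotic), a Ramanujan expansion expresses f in terms of the Ramanujan sums c_q(n); since c_1(n) = 1 while each c_q with q ≥ 2 averages to zero over a period, the partial sum is heuristically governed by the q = 1 coefficient:

      % Ramanujan sums and expansion; heuristic main term of the partial sum
      \begin{gather}
        c_q(n) = \sum_{\substack{a = 1 \\ (a, q) = 1}}^{q} e^{2\pi i a n / q}, \qquad
        f(n) = \sum_{q \ge 1} a_q \, c_q(n), \\
        \sum_{n \le N} f(n) = a_1 N + (\text{error term}) .
      \end{gather}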

  20. Playing a zero-sum game?

    DEFF Research Database (Denmark)

    Triantafillou, Peter

    2017-01-01

    Supreme audit institutions (SAIs) are fundamental institutions in liberal democracies as they enable control of the exercise of state power. In order to maintain this function, SAIs must enjoy a high level of independence. Moreover, SAIs are increasingly expected also to be relevant for government and the execution of its policies by way of performance auditing. This article examines how and why the performance auditing of the Danish SAI pursues independence and relevance. It is argued that, in general, the simultaneous pursuit of independence and relevance is highly challenging and amounts to a zero-sum or...