WorldWideScience

Sample records for maximum average correlation

  1. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, which is not true of the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for both the monthly average maximum and minimum temperature series on the basis of the minimum Bayesian information criterion. The model parameters are estimated by the maximum-likelihood method, with standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves over the histogram, and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years using the selected model.
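
    Record 1's modelling pipeline (seasonal differencing, then a low-order AR term) can be illustrated in a few lines. The sketch below uses synthetic monthly data and a plain least-squares AR(1) fit; in practice a library such as statsmodels would fit the full SARIMA(1, 0, 0) × (0, 1, 1)12 by maximum likelihood. All numbers are illustrative, not from the paper.

```python
import math
import random

def seasonal_difference(series, s=12):
    # (1 - B^s): removes the 12-month cycle, as the D=1 seasonal term does
    return [series[t] - series[t - s] for t in range(s, len(series))]

def fit_ar1(x):
    # Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(0)
# Synthetic monthly "temperature": a fixed seasonal cycle plus AR(1) noise
n, phi = 420, 0.5
noise = [0.0]
for _ in range(n - 1):
    noise.append(phi * noise[-1] + random.gauss(0, 1))
series = [10 + 8 * math.sin(2 * math.pi * t / 12) + noise[t] for t in range(n)]

d = seasonal_difference(series)          # seasonal cycle cancels exactly
print(round(fit_ar1(d), 2))              # recovers a positive AR coefficient
```

The seasonal MA(1) term of the full model is omitted here; the point is only that differencing at lag 12 strips the annual cycle so that a simple autoregressive fit on the residual series is meaningful.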

  2. Correlation between maximum isometric strength variables and specific performance of Brazilian military judokas

    Directory of Open Access Journals (Sweden)

    Michel Moraes Gonçalves

    2017-06-01

    Our objective was to correlate specific performance in the Special Judo Fitness Test (SJFT) with the maximum isometric handgrip (HGSMax), scapular traction (STSMax), and lumbar traction (LTSMax) strength tests in military judo athletes. Twenty-two military athletes from the judo team of the Brazilian Navy Almirante Alexandrino Instruction Centre, with average age of 26.14 ± 3.31 years and average body mass of 83.23 ± 14.14 kg, participated in the study. Electronic dynamometry tests for HGSMax, STSMax, and LTSMax were conducted. Then, after an interval of approximately 1 hour, the SJFT protocol was applied. All variables were adjusted to the body mass of the athletes. The Pearson correlation coefficient was used for statistical analysis. The results showed a moderate negative correlation between the SJFT index and STSMax (r = -0.550, p = 0.008), and strong negative correlations between the SJFT index and HGSMax (r = -0.706, p < 0.001), between the SJFT index and LTSMax (r = -0.721, p = 0.001), and between the sum of the three maximum isometric strength tests and the SJFT index (r = -0.786, p < 0.001). This study concludes that negative correlations occur between the SJFT index and maximum isometric handgrip, scapular, and lumbar traction strength, and the sum of the three maximum isometric strength tests, in military judokas.
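
    The body-mass-adjusted correlations in record 2 are plain Pearson coefficients. A minimal, self-contained sketch (with invented, purely illustrative athlete numbers) shows how such an r is computed and why lower (better) SJFT indices paired with higher relative strength yield a negative value:

```python
import math

def pearson_r(x, y):
    # Standard Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical athletes: handgrip strength per kg of body mass vs. SJFT index
# (a LOWER SJFT index means better performance, hence the expected negative r)
hgs_per_kg = [0.55, 0.60, 0.48, 0.70, 0.65, 0.52]
sjft_index = [14.2, 13.5, 15.1, 12.8, 13.1, 14.8]
print(round(pearson_r(hgs_per_kg, sjft_index), 3))  # strongly negative
```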

  3. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were hematocrit, hemoglobin, reticulocytes, leukocytes, platelets, and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. The middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. The time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.

  4. Correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  5. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of this principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
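
    The restraint described in record 5 couples the replica average of an observable, rather than each replica individually, to a time-dependent experimental target. A toy sketch of that penalty term follows; the harmonic form, the force constant, and all numbers are hypothetical, chosen only to illustrate the structure:

```python
# Toy sketch: a harmonic restraint on the REPLICA AVERAGE of an observable,
# with an experimental target that depends on time (the max-caliber setting).
# All names and numbers are illustrative, not from the paper.

def replica_restraint_energy(observables, target, k=100.0):
    # observables: current value of the observable in each replica
    avg = sum(observables) / len(observables)
    return 0.5 * k * (avg - target) ** 2

def target_at(t):
    # hypothetical time-resolved experimental signal
    return 1.0 - 0.5 * t

replicas = [0.9, 1.1, 1.0, 0.8]   # replica average = 0.95
energy = replica_restraint_energy(replicas, target_at(0.0))
print(energy)                      # small: the average is near the target
```

Because only the average enters the penalty, individual replicas remain free to fluctuate, which is what makes the replica-averaged scheme a practical realization of the entropy/caliber principles.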

  6. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans computed on average intensity projection and maximum intensity projection CT data sets for lung patients, and to establish correlations between the different factors influencing coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. Three GTVs were delineated on the MIP and AIP datasets (GTVaip, delineated on AIP; GTVmip, delineated on MIP; and GTVfus, delineated on each of the 10 phases and summed). From each GTV, planning target volumes (PTVs) were then created by adding additional margins. For each PTV an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP dataset and recalculated. Results: The effective depths in the AIP cases were significantly smaller than in the MIP cases (p < 0.001). A Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP dataset. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP dataset gives r = 0.830; when the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP datasets indicates that the selection of the dataset for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which dataset the plan is produced on.

  7. Correlation between maximum phonetically balanced word recognition score and pure-tone auditory threshold in elder presbycusis patients over 80 years old.

    Science.gov (United States)

    Deng, Xin-Sheng; Ji, Fei; Yang, Shi-Ming

    2014-02-01

    The maximum phonetically balanced word recognition score (PBmax) showed poor correlation with pure-tone thresholds in presbycusis patients older than 80 years. The objective was to study the characteristics of monosyllable recognition in presbycusis patients older than 80 years of age. Thirty presbycusis patients older than 80 years were included as the test group (group 80+). Another 30 patients aged 60-80 years were selected as the control group (group 80-). PBmax was tested with Mandarin monosyllable recognition test materials at a signal level 30 dB above the average of the thresholds at 0.5, 1, 2, and 4 kHz (4FA), or at the maximum comfortable level. The PBmax values of the test group and control group were compared with each other, and the correlation between PBmax and the maximum speech recognition score predicted from the 4FA (PBmax-predict) was statistically analyzed. Under the optimal test conditions, the averaged PBmax was (77.3 ± 16.7)% for group 80- and (52.0 ± 25.4)% for group 80+ (p < 0.001). The PBmax of group 80- was significantly correlated with PBmax-predict (Spearman correlation = 0.715, p < 0.001). The score for group 80+ was less strongly correlated with PBmax-predict (Spearman correlation = 0.572, p = 0.001).

  8. Correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  9. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function, or a local correlation coefficient, versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function or a local correlation coefficient for object detection in simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detecting keyhole limpet hemocyanin macromolecular views in electron micrographs.
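
    The core objects of record 9 can be sketched directly: build a rotationally symmetric reference by azimuthally averaging a template, then score a (noisy) image against it with an ordinary correlation coefficient. This is a simplified illustration on a tiny synthetic grid, not the paper's full detection pipeline:

```python
import math
import random

def azimuthal_average(img, size):
    # Replace each pixel by the mean over all pixels at (rounded) equal radius,
    # yielding a rotationally symmetric reference image
    c = (size - 1) / 2
    sums, counts = {}, {}
    for y in range(size):
        for x in range(size):
            r = round(math.hypot(x - c, y - c))
            sums[r] = sums.get(r, 0.0) + img[y][x]
            counts[r] = counts.get(r, 0) + 1
    return [[sums[round(math.hypot(x - c, y - c))] /
             counts[round(math.hypot(x - c, y - c))]
             for x in range(size)] for y in range(size)]

def corr_coeff(a, b):
    # Correlation coefficient between two images, flattened
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db)

random.seed(1)
size = 15
c = (size - 1) / 2
# Synthetic circular "particle" template and a noisy observation of it
template = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / 10)
             for x in range(size)] for y in range(size)]
reference = azimuthal_average(template, size)
noisy = [[v + random.gauss(0, 0.05) for v in row] for row in template]
print(round(corr_coeff(noisy, reference), 2))  # high: the particle is detected
```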

  10. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
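
    The estimator in record 10 is the minimum-variance weighted mean under a multivariate normal model, with weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1). For two correlated estimates the algebra closes in a few lines; the covariance numbers below are illustrative, not from the paper:

```python
# Minimum-variance (maximum-likelihood under normality) combination of two
# correlated eigenvalue estimates x1, x2 with covariance [[v1, c], [c, v2]].
def ml_combine(x1, x2, v1, v2, c):
    # For a 2x2 covariance, Sigma^{-1} 1 is proportional to (v2 - c, v1 - c)
    w1 = v2 - c
    w2 = v1 - c
    s = w1 + w2
    est = (w1 * x1 + w2 * x2) / s
    var = (v1 * v2 - c * c) / s   # variance of the combined estimate
    return est, var

est, var = ml_combine(1.002, 0.998, 4e-6, 9e-6, 2e-6)
simple_avg_var = (4e-6 + 9e-6 + 2 * 2e-6) / 4
print(est, var < simple_avg_var, var < 4e-6)
```

As in the abstract's numerical results, the combined variance beats both the simple average and the minimum-variance individual estimate (here 4e-6).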

  11. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  12. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  13. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables would avoid the need for repetitive and approximate calculations and serve as a useful ready reference for providing accurate values to solar energy scientists and engineers
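
    The two tabulated quantities in record 13 follow from standard closed-form expressions: the sunset hour angle ωs = arccos(−tan φ tan δ) gives the maximum possible sunshine duration (2/15)ωs (in hours, with ωs in degrees), and the daily extraterrestrial irradiation H0 follows from the usual daily integral. The sketch below uses Cooper's approximate declination formula rather than Spencer's more accurate series used in the paper:

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def declination_deg(n):
    # Cooper's approximation for day-of-year n (Spencer's series is more accurate)
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def day_length_hours(lat_deg, n):
    # Maximum possible sunshine duration N = (2/15) * omega_s (degrees)
    phi = math.radians(lat_deg)
    dec = math.radians(declination_deg(n))
    cos_ws = max(-1.0, min(1.0, -math.tan(phi) * math.tan(dec)))  # clamp: polar day/night
    ws = math.acos(cos_ws)  # sunset hour angle, radians
    return (2.0 / 15.0) * math.degrees(ws)

def daily_extraterrestrial_mj(lat_deg, n):
    # H0 in MJ m^-2 day^-1 on a horizontal plane
    phi = math.radians(lat_deg)
    dec = math.radians(declination_deg(n))
    e0 = 1 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))  # eccentricity factor
    cos_ws = max(-1.0, min(1.0, -math.tan(phi) * math.tan(dec)))
    ws = math.acos(cos_ws)
    h0 = (24 * 3600 / math.pi) * GSC * e0 * (
        math.cos(phi) * math.cos(dec) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(dec))
    return h0 / 1e6

print(round(day_length_hours(0.0, 80), 1))  # near 12 h at the equator
```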

  14. The classical correlation limits the ability of the measurement-induced average coherence

    Science.gov (United States)

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

    Coherence is the most fundamental quantum feature in quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, based on the measurement outcomes, will collapse to a corresponding state with some probability and hence gain the average coherence. It is shown that the average coherence is not less than the coherence of its reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence with all the possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the sufficient and necessary condition for null maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for nonzero extra average coherence within a given measurement. In addition, similar conclusions are drawn for both the basis-dependent and the basis-free coherence measures.

  15. Minimum disturbance rewards with maximum possible classical correlations

    Energy Technology Data Exchange (ETDEWEB)

    Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)

    2017-07-12

    Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system and the disturbance to the overall state. We investigate the behaviour of the cost function for families of two-qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.

  16. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  17. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H0) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, using approximate short-cut methods. Computations for these values have been made once and for all for latitude values of 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)

  18. Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech

    2015-01-01

    to overcome the above-mentioned issues. The high-speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear-mesh frequency of the last gearbox stage and a pure sinusoidal tone of known frequency and phase. The proposed algorithm utilizes vibration signals...
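
    The idea in record 18, picking the frequency whose pure tone maximizes the correlation coefficient with the measured vibration signal, can be sketched with a synthetic signal. The sampling rate, tone frequency, and noise level below are invented for illustration:

```python
import math
import random

def corr(x, y):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = math.sqrt(sum((a - mx) ** 2 for a in x))
    dy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (dx * dy)

random.seed(2)
fs = 1000.0                  # sampling rate, Hz (illustrative)
true_f = 137.0               # "gear-mesh" tone buried in noise (illustrative)
t = [i / fs for i in range(2000)]
signal = [math.sin(2 * math.pi * true_f * ti) + random.gauss(0, 1.0) for ti in t]

# Scan candidate tones; keep the one with the maximum correlation coefficient
best_f = max(range(100, 200),
             key=lambda f: corr(signal, [math.sin(2 * math.pi * f * ti) for ti in t]))
print(best_f)  # recovers the buried tone frequency
```

In the paper's setting the recovered tone frequency, divided by the known tooth count, yields the shaft speed; here the phase of the candidate tones is assumed known, as in the abstract.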

  19. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    Energy Technology Data Exchange (ETDEWEB)

    Ciorsac, Alecu, E-mail: aleciorsac@yahoo.co [Politehnica University of Timisoara, Department of Physical Education and Sport, 2 P-ta Victoriei, 300006, Timisoara (Romania); Craciun, Dana, E-mail: craciundana@gmail.co [Teacher Training Department, West University of Timisoara, 4 Boulevard V. Pirvan, Timisoara, 300223 (Romania); Ostafe, Vasile, E-mail: vostafe@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania); Isvoran, Adriana, E-mail: aisvoran@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania)

    2011-04-15

    Research highlights: → We focus our study on the glycolytic enzymes. → We reveal correlation of hydrophobicity and flexibility along their chains. → We also reveal fractal aspects of the glycolytic enzymes structures and surfaces. → The glycolytic enzyme sequences are not random. → Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were considered as spatial series and were analyzed by spectral analysis, detrended fluctuations analysis and Hurst coefficient calculation. The results agreed that there are both short range and long range correlations of hydrophobicity and average flexibility within investigated sequences, the short range correlations being stronger and indicating that local interactions are the most important for the protein folding. This correlation is also reflected by the fractal nature of the structures of investigated proteins.

  20. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria.

    Science.gov (United States)

    Brown, Gina; Imel, Brittany; Nelson, Alyssa; Hale, LaDonna S; Jansen, Nick

    2013-01-01

    The purpose of this study was to examine correlations between first-time Physician Assistant National Certifying Exam (PANCE) scores and pass/fail status, physician assistant (PA) program didactic grade point average (GPA), and specific selection criteria. This retrospective study evaluated graduating classes from 2007, 2008, and 2009 at a single program (N = 119). There was no correlation between PANCE performance and undergraduate GPA, science prerequisite GPA, or health care experience. There was a moderate correlation between PANCE pass/fail status and where students took science prerequisites (r = 0.27, P = .003) but not with the PANCE score. PANCE scores were correlated with overall PA program GPA (r = 0.67), PA pharmacology grade (r = 0.68), and PA anatomy grade (r = 0.41) but not with PANCE pass/fail status. Correlations between selection criteria and PANCE performance were limited, but further research regarding the influence of prerequisite institution type may be warranted and may improve admission decisions. PANCE score and PA program GPA correlations may guide academic advising and remediation decisions for current students.

  1. Three dimensional winds: A maximum cross-correlation application to elastic lidar data

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, William Tillman [Univ. of Texas, Austin, TX (United States)

    1996-05-01

    Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser Doppler velocimeter. The wind fields showed persistent large-scale eddies as well as general terrain-following winds in the Rio Grande valley.
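
    The maximum cross-correlation technique of this record reduces, in one dimension, to finding the lag that maximizes the correlation between two successive scans; the displacement divided by the scan interval then gives a velocity. A toy sketch with two synthetic backscatter profiles (the bin spacing, scan interval, and feature shape are all invented):

```python
import math
import random

def best_shift(a, b, max_lag):
    # Maximum cross-correlation: find the lag that best aligns b with a
    def score(lag):
        pairs = [(a[i], b[i + lag]) for i in range(len(a)) if 0 <= i + lag < len(b)]
        n = len(pairs)
        ma = sum(p[0] for p in pairs) / n
        mb = sum(p[1] for p in pairs) / n
        num = sum((x - ma) * (y - mb) for x, y in pairs)
        da = math.sqrt(sum((x - ma) ** 2 for x, _ in pairs))
        db = math.sqrt(sum((y - mb) ** 2 for _, y in pairs))
        return num / (da * db)
    return max(range(-max_lag, max_lag + 1), key=score)

random.seed(3)
# Two successive aerosol-backscatter "scans"; the feature drifts 5 bins between them
scan1 = [math.exp(-((i - 40) / 6.0) ** 2) + random.gauss(0, 0.02) for i in range(100)]
scan2 = [math.exp(-((i - 45) / 6.0) ** 2) + random.gauss(0, 0.02) for i in range(100)]
lag = best_shift(scan1, scan2, 10)
speed = lag * 1.0 / 60.0  # e.g. 1 km bins, 60 s between scans (illustrative units)
print(lag)
```

The paper's method does this in two dimensions over many subregions and then cleans the resulting vector field with a vector median filter; the core matching step is the same.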

  2. An inequality between the weighted average and the rowwise correlation coefficient for proximity matrices

    NARCIS (Netherlands)

    Krijnen, WP

    De Vries (1993) discusses Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank-correlation coefficient for assessing the association between the rows of two proximity matrices. For each of these he introduces a weighted average variant and a rowwise variant. In this

  3. An inequality between the weighted average and the rowwise correlation coefficient for proximity matrices

    NARCIS (Netherlands)

    KRIJNEN, WP

    De Vries (1993) discusses Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank-correlation coefficient for assessing the association between the rows of two proximity matrices. For each of these he introduces a weighted average variant and a rowwise variant. In this

  4. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    International Nuclear Information System (INIS)

    Ciorsac, Alecu; Craciun, Dana; Ostafe, Vasile; Isvoran, Adriana

    2011-01-01

    Research highlights: → We focus our study on the glycolytic enzymes. → We reveal correlation of hydrophobicity and flexibility along their chains. → We also reveal fractal aspects of the glycolytic enzymes structures and surfaces. → The glycolytic enzyme sequences are not random. → Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were considered as spatial series and were analyzed by spectral analysis, detrended fluctuations analysis and Hurst coefficient calculation. The results agreed that there are both short range and long range correlations of hydrophobicity and average flexibility within investigated sequences, the short range correlations being stronger and indicating that local interactions are the most important for the protein folding. This correlation is also reflected by the fractal nature of the structures of investigated proteins.

  5. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    ...continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  6. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  7. Choosing the best index for the average score intraclass correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient ICC(2) from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
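
    As a rough illustration of the quantity this record discusses, here is a minimal sketch (not the article's own code) of the average-score intraclass correlation computed from a one-way random-effects layout, using the common form ICC(1,k) = (MSB - MSW)/MSB; the rating matrices are made-up examples.

    ```python
    def icc2(ratings):
        """Average-score intraclass correlation from a one-way random-effects
        layout (rows = targets, columns = ratings per target), using the
        common form ICC(1,k) = (MSB - MSW) / MSB."""
        n = len(ratings)          # number of targets
        k = len(ratings[0])       # ratings per target
        grand = sum(sum(row) for row in ratings) / (n * k)
        row_means = [sum(row) / k for row in ratings]
        # between-target and within-target mean squares
        msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
        msw = sum((x - m) ** 2
                  for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
        return (msb - msw) / msb

    print(icc2([[1, 1], [2, 2], [3, 3]]))  # perfectly consistent raters -> 1.0
    print(icc2([[1, 2], [2, 3], [4, 3]]))  # noisy raters -> 0.75
    ```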

  8. Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum

    Science.gov (United States)

    Kostiuk, T.; Livengood, T. A.; Hewagama, T.

    2009-01-01

    Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.

  9. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE a cross-correlation analysis between consecutive RF-lines on complex form is carried out for a range...... of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  10. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
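
    The linear-versus-logarithmic bias described in this record follows from the arithmetic-geometric mean inequality: a mean taken in log space is never larger than the linear mean, and the gap grows with variability. A minimal sketch (the lognormal "retrieved abundances" are synthetic, not from the article):

    ```python
    import math
    import random

    def linear_and_log_means(samples):
        """Arithmetic mean vs. the mean taken in log space (geometric mean)."""
        lin = sum(samples) / len(samples)
        geo = math.exp(sum(math.log(x) for x in samples) / len(samples))
        return lin, geo

    # The gap between the two averages grows with the variability (sigma)
    # of the synthetic lognormal "retrievals".
    random.seed(1)
    for sigma in (0.1, 0.5, 1.0):
        vals = [math.exp(random.gauss(0.0, sigma)) for _ in range(50_000)]
        lin, geo = linear_and_log_means(vals)
        print(f"sigma={sigma}: linear mean={lin:.3f}, log-space mean={geo:.3f}")
    ```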

  11. Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument

    Science.gov (United States)

    Kishoni, Doron; Pietsch, Benjamin E.

    1989-01-01

    Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves to a degree that the noise is too high to enable meaningful interpretation of the data. In order to overcome the low Signal to Noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variable delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator, and compares it to the method of enhancing S/N ratio by averaging the signals. The similarities and differences between the two are highlighted and the potential advantage of the correlator system is explained.
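
    A minimal sketch of the correlation principle this record describes, under simplifying assumptions (an idealized circular correlation, and a made-up echo delay, amplitude, and noise level): correlating the received signal against delayed copies of a transmitted pseudo-random ±1 code recovers the echo delay even when the echo is well below the noise.

    ```python
    import random

    def circ_xcorr(rx, tx):
        """Circular cross-correlation of the received and transmitted codes."""
        n = len(tx)
        return [sum(rx[(i + lag) % n] * tx[i] for i in range(n)) / n
                for lag in range(n)]

    random.seed(0)
    n, delay, atten = 512, 40, 0.5      # made-up echo delay and amplitude
    tx = [random.choice((-1.0, 1.0)) for _ in range(n)]   # pseudo-random code
    # Received signal: attenuated, delayed echo plus noise with std twice
    # the echo amplitude.
    rx = [atten * tx[(i - delay) % n] + random.gauss(0.0, 1.0)
          for i in range(n)]

    corr = circ_xcorr(rx, tx)
    peak = max(range(n), key=lambda lag: corr[lag])
    print(peak)   # the lag of the correlation peak estimates the echo delay
    ```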

  12. A Correlation Between the Intrinsic Brightness and Average Decay Rate of Gamma-Ray Burst X-Ray Afterglow Light Curves

    Science.gov (United States)

    Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.

    2016-01-01

    We present a correlation between the average temporal decay (alphaX,avg, at times greater than 200 s) and early-time luminosity (LX,200 s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity-average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity minimizing the role of selection effects. This is a complementary correlation to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet/Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.

  13. Regional correlations of VS30 averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, David M.; Thompson, Eric M.; Cadet, Héloïse

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in logVS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
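
    The time-averaged velocity used in this record is total depth divided by the vertical shear-wave travel time through the layers above that depth. A minimal sketch with a hypothetical two-layer profile (the paper's regional VS30-VSz regression coefficients are not reproduced here):

    ```python
    def vsz(thicknesses, velocities, z):
        """Time-averaged shear-wave velocity to depth z (m):
        VSz = z / (shear-wave travel time from the surface to depth z)."""
        t, depth = 0.0, 0.0
        for h, v in zip(thicknesses, velocities):
            dz = min(h, z - depth)   # only the part of the layer above z counts
            t += dz / v
            depth += dz
            if depth >= z:
                return z / t
        raise ValueError("velocity profile is shallower than z")

    # Hypothetical two-layer profile: 10 m at 200 m/s over 30 m at 600 m/s.
    print(vsz([10, 30], [200, 600], 30))  # VS30 for this profile: 360 m/s
    ```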

  14. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach

  15. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  16. General theory for calculating disorder-averaged Green's function correlators within the coherent potential approximation

    Science.gov (United States)

    Zhou, Chenyi; Guo, Hong

    2017-01-01

    We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.

  17. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  18. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a one-dimensional pure quartic potential and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  19. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = n epsilon, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  20. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
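
    The model-averaging step this record describes is conventionally done with Akaike weights, each weight proportional to exp(-ΔAIC/2) relative to the best model. A minimal sketch with made-up AIC scores (not the article's own implementation):

    ```python
    import math

    def akaike_weights(aics):
        """Akaike weights for model averaging: w_i is proportional to
        exp(-delta_AIC_i / 2), normalized to sum to 1."""
        best = min(aics)
        rel = [math.exp(-(a - best) / 2) for a in aics]
        total = sum(rel)
        return [r / total for r in rel]

    # Three candidate clustering models with hypothetical AIC scores.
    print(akaike_weights([100.0, 102.0, 110.0]))
    ```

    A weighted profile of clustering across sites can then be formed by averaging each model's site-level predictions with these weights.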

  1. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  2. Correlation between the Physical Activity Level and Grade Point Averages of Faculty of Education Students

    Science.gov (United States)

    Imdat, Yarim

    2014-01-01

    The aim of the study is to find the correlation that exists between the physical activity level and grade point averages of faculty of education students. The subjects consisted of 359 undergraduate students (172 females and 187 males). To determine the physical activity levels of the students in this research, the International Physical Activity…

  3. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  4. The Maximum Cross-Correlation approach to detecting translational motions from sequential remote-sensing images

    Science.gov (United States)

    Gao, J.; Lythe, M. B.

    1996-06-01

    This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) data near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
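
    A minimal sketch of the MCC principle (in Python rather than the paper's C, with a toy 8x8 grid standing in for AVHRR imagery): the translation is taken as the shift that maximizes the normalized cross-correlation between the two frames.

    ```python
    def best_shift(img0, img1, max_shift):
        """Find the (dy, dx) translation maximizing the mean cross-correlation
        between two equally sized 2-D grids (lists of lists)."""
        h, w = len(img0), len(img0[0])
        best, best_score = (0, 0), float("-inf")
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                score, count = 0.0, 0
                for y in range(h):
                    for x in range(w):
                        y1, x1 = y + dy, x + dx
                        if 0 <= y1 < h and 0 <= x1 < w:
                            score += img0[y][x] * img1[y1][x1]
                            count += 1
                score /= count   # average over the overlapping region
                if score > best_score:
                    best_score, best = score, (dy, dx)
        return best

    # A bright feature moves by (1, 2) between the two "images".
    img0 = [[0] * 8 for _ in range(8)]
    img0[3][3] = 5
    img1 = [[0] * 8 for _ in range(8)]
    img1[4][5] = 5
    print(best_shift(img0, img1, 3))  # -> (1, 2)
    ```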

  5. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  6. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  7. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  8. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  9. Scale dependence of the average potential around the maximum in Φ4 theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  10. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
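
    The Fisher r → Z transform mentioned in this record gives an approximate confidence interval for a correlation coefficient. A minimal sketch (the r = 0.9, n = 30 example is made up, and 1.96 is the usual two-sided 95% normal quantile):

    ```python
    import math

    def r_confidence_interval(r, n, z_crit=1.96):
        """Approximate confidence interval for a correlation coefficient via
        the Fisher r -> Z transform (n = number of data points, n > 3)."""
        z = math.atanh(r)                    # Fisher Z of the observed r
        half = z_crit / math.sqrt(n - 3)     # standard error of Z is 1/sqrt(n-3)
        return math.tanh(z - half), math.tanh(z + half)

    lo, hi = r_confidence_interval(0.9, 30)
    print(f"r = 0.90, n = 30: 95% CI = ({lo:.3f}, {hi:.3f})")
    ```

    Note the asymmetry of the interval around r, which is why the transform is needed: the sampling distribution of r itself is skewed for large |r|.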

  11. Step Test: a method for evaluating maximum oxygen consumption to determine the ability kind of work among students of medical emergencies.

    Science.gov (United States)

    Heydari, Payam; Varmazyar, Sakineh; Nikpey, Ahmad; Variani, Ali Safari; Jafarvand, Mojtaba

    2017-03-01

    Maximum oxygen consumption indicates the maximal rate of oxygen use by muscle and is accepted in many settings as a measure of the fit between a person and the desired job. Given that medical emergency work is demanding, and emergency situations require people with high physical ability and readiness, the aim of this study was to evaluate maximum oxygen consumption to determine the type of work ability among medical emergency students in Qazvin in 2016. This descriptive-analytical, cross-sectional study was conducted among 36 volunteer medical emergency students in Qazvin in 2016. After the necessary coordination for the implementation of the study, participants completed health and demographic questionnaires and were then evaluated with the step test of the American College of Sports Medicine (ACSM). Data analysis was done in SPSS version 18 using the Mann-Whitney U and Kruskal-Wallis tests and the Pearson correlation coefficient. The average maximum oxygen consumption of the participants was estimated at 3.15±0.50 liters per minute. 91.7% of the medical emergency students were rated as appropriate in terms of maximum oxygen consumption and thus had the ability to do heavy and very heavy work. Average maximum oxygen consumption, evaluated by the Mann-Whitney U and Kruskal-Wallis tests, had a significant relationship with age (p<0.05) and weight group (p<0.001). There was a significant positive correlation between maximum oxygen consumption and both weight and body mass index (p<0.001). The results of this study showed that weight and body mass index are the demographic variables influencing maximum oxygen consumption, and most of the students had the ability to do heavy and very heavy work. Therefore, people with the ability to do only average work are not suitable for medical emergency tasks.

  12. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the prediction-error filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy extrapolation of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
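
    A minimal sketch of the Levinson recursion this record relies on (the standard Levinson-Durbin solver for the Toeplitz normal equations, not the authors' code; the autocorrelation lags are a made-up AR(1) example). The reflection coefficient computed at each step is the quantity whose magnitude stays below 1 for a valid autocorrelation sequence, which is what keeps the recursion stable.

    ```python
    def levinson_durbin(r):
        """Solve the Toeplitz normal equations for a linear prediction-error
        filter given autocorrelation lags r[0..p].
        Returns (predictor coefficients, final prediction-error power)."""
        a, err = [], r[0]
        for i in range(1, len(r)):
            # Reflection coefficient for order i.
            acc = r[i] - sum(a[j] * r[i - 1 - j] for j in range(len(a)))
            k = acc / err
            # Update predictor coefficients and error power.
            a = [aj - k * a[i - 2 - j] for j, aj in enumerate(a)] + [k]
            err *= (1.0 - k * k)
        return a, err

    # AR(1) autocorrelation (rho = 0.5): one nonzero coefficient is recovered.
    print(levinson_durbin([1.0, 0.5, 0.25]))  # -> ([0.5, 0.0], 0.75)
    ```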

  13. Stone comminution correlates with the average peak pressure incident on a stone during shock wave lithotripsy.

    Science.gov (United States)

    Smith, N; Zhong, P

    2012-10-11

    To investigate the roles of lithotripter shock wave (LSW) parameters and cavitation in stone comminution, a series of in vitro fragmentation experiments have been conducted in water and 1,3-butanediol (a cavitation-suppressive fluid) at a variety of acoustic field positions of an electromagnetic shock wave lithotripter. Using field mapping data and integrated parameters averaged over a circular stone holder area (Rh = 7 mm), close logarithmic correlations between the average peak pressure (P+(avg)) incident on the stone (D = 10 mm BegoStone) and comminution efficiency after 500 and 1000 shocks have been identified. Moreover, the correlations have demonstrated distinctive thresholds in P+(avg) (5.3 MPa and 7.6 MPa for soft and hard stones, respectively) that are required to initiate stone fragmentation, independent of the surrounding fluid medium and LSW dose. These observations, should they be confirmed using other shock wave lithotripters, may provide an important field parameter (i.e., P+(avg)) to guide appropriate application of SWL in clinics, and facilitate device comparison and design improvements in future lithotripters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Regional correlations of V s30 and velocities averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, D.M.; Thompson, E.M.; Cadet, H.

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (V S30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (V Sz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that V S30 is systematically larger for a given V Sz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating V S30 to V Sz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate V S30 from V Sz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in log V S30 of 1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to V S30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that V S30 is correlated with V Sz for z as great as 400 m for sites of the KiK-net network, providing some justification for using V S30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
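The time-averaged velocity V(Sz) used throughout this record is the averaging depth divided by the vertical shear-wave travel time through the overlying layers. A minimal sketch (the layer profile in the test is hypothetical):

```python
def time_averaged_velocity(thicknesses_m, velocities_mps, z):
    """Travel-time-averaged shear-wave velocity to depth z (V_Sz):
    z divided by the vertical travel time through the layers above z."""
    travel_time, depth = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        if depth >= z:
            break
        dz = min(h, z - depth)        # clip the last layer at depth z
        travel_time += dz / v
        depth += dz
    if depth < z:
        raise ValueError("profile shallower than averaging depth z")
    return z / travel_time
```

For a two-layer profile (10 m at 200 m/s over 30 m at 400 m/s), V(S30) is 300 m/s, while V(S10) is just the top-layer velocity, 200 m/s; regressions such as those in the abstract relate these two quantities.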

  15. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  16. Analysis of the correlation between γ-ray and radio emissions from γ-ray loud Blazar using the discrete correlation function

    International Nuclear Information System (INIS)

    Cheng Yong; Zhang Xiong; Wu Lin; Mao Weiming; You Lisha

    2006-01-01

The authors collected 119 γ-ray-loud Blazars (97 flat spectrum radio quasars (FSRQs) and 22 BL Lacertae objects (BL Lacs)), and investigated the correlation between the γ-ray emission (maximum, minimum, and average data) at 1 GeV and the radio emission at 8.4 GHz by the discrete correlation function (DCF) method. The main results are as follows: there is good correlation between the γ-ray emission in the high and average states and the radio emission for the whole sample of 119 Blazars and for the 97 FSRQs, but there is no correlation between γ-ray and radio emission in the low state. This result shows that the γ-rays are associated with the radio emission from the jet, and that the γ-ray emission likely arises from the synchrotron self-Compton (SSC) process in this case. (authors)
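The DCF method referenced here (in the spirit of Edelson and Krolik's estimator) averages unbinned cross-correlation values of all data pairs into lag bins, which makes it usable for unevenly sampled light curves. A minimal sketch; binning and normalization conventions vary between implementations:

```python
def discrete_correlation_function(t_a, a, t_b, b, lag_bins):
    """Discrete correlation function for two (possibly unevenly sampled)
    light curves: pairwise unbinned DCF values averaged within lag bins.
    lag_bins is a list of (lo, hi) lag intervals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sa = (sum((x - ma) ** 2 for x in a) / len(a)) ** 0.5
    sb = (sum((x - mb) ** 2 for x in b) / len(b)) ** 0.5
    udcf = [((ai - ma) * (bj - mb) / (sa * sb), tj - ti)
            for ti, ai in zip(t_a, a) for tj, bj in zip(t_b, b)]
    dcf = []
    for lo, hi in lag_bins:
        vals = [u for u, lag in udcf if lo <= lag < hi]
        dcf.append(sum(vals) / len(vals) if vals else None)
    return dcf
```

Cross-correlating a light curve with itself gives a DCF of 1 in the zero-lag bin, as expected.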

  17. An overview of the report: Correlation between carcinogenic potency and the maximum tolerated dose: Implications for risk assessment

    International Nuclear Information System (INIS)

    Krewski, D.; Gaylor, D.W.; Soms, A.P.; Szyszkowicz, M.

    1993-01-01

    Current practice in carcinogen bioassay calls for exposure of experimental animals at doses up to and including the maximum tolerated dose (MTD). Such studies have been used to compute measures of carcinogenic potency such as the TD 50 as well as unit risk factors such as q 1 for predicting low-dose risks. Recent studies have indicated that these measures of carcinogenic potency are highly correlated with the MTD. Carcinogenic potency has also been shown to be correlated with indicators of mutagenicity and toxicity. Correlation of the MTDs for rats and mice implies a corresponding correlation in TD 50 values for these two species. The implications of these results for cancer risk assessment are examined in light of the large variation in potency among chemicals known to induce tumors in rodents. 119 refs., 2 figs., 4 tabs

  18. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB by weakening the conditional independence assumption, averaged one-dependence estimator (AODE demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.

  19. Characterization of the spatial structure of local functional connectivity using multi-distance average correlation measures.

    Science.gov (United States)

    Macia, Didac; Pujol, Jesus; Blanco-Hinojo, Laura; Martínez-Vilavella, Gerard; Martín-Santos, Rocío; Deus, Joan

    2018-04-24

    There is ample evidence from basic research in neuroscience of the importance of local cortico-cortical networks. Millimetric resolution is achievable with current functional MRI (fMRI) scanners and sequences, and consequently a number of "local" activity similarity measures have been defined to describe patterns of segregation and integration at this spatial scale. We have introduced the use of Iso-Distant local Average Correlation (IDAC), easily defined as the average fMRI temporal correlation of a given voxel with other voxels placed at increasingly separated iso-distant intervals, to characterize the curve of local fMRI signal similarities. IDAC curves can be statistically compared using parametric multivariate statistics. Furthermore, by using RGB color-coding to display jointly IDAC values belonging to three different distance lags, IDAC curves can also be displayed as multi-distance IDAC maps. We applied IDAC analysis to a sample of 41 subjects scanned under two different conditions, a resting state and an auditory-visual continuous stimulation. Multi-distance IDAC mapping was able to discriminate between gross anatomo-functional cortical areas and, moreover, was sensitive to modulation between the two brain conditions in areas known to activate and de-activate during audio-visual tasks. Unlike previous fMRI local similarity measures already in use, our approach draws special attention to the continuous smooth pattern of local functional connectivity.

  20. Correlation between maximum voluntary contraction and endurance measured by digital palpation and manometry: An observational study

    Directory of Open Access Journals (Sweden)

    Fátima Faní Fitz

Full Text Available Summary Introduction: Digital palpation and manometry are methods that can provide information regarding maximum voluntary contraction (MVC) and endurance of the pelvic floor muscles (PFM), and a strong correlation between these variables can be expected. Objective: To investigate the correlation between MVC and endurance, measured by digital palpation and manometry. Method: Forty-two women, with mean age of 58.1 years (±10.2) and predominant symptoms of stress urinary incontinence (SUI), were included. Examination was first conducted by digital palpation and subsequently using a Peritron manometer. MVC was measured using a 0-5 score, based on the Oxford Grading Scale. Endurance was assessed based on the PERFECT scheme. Results: We found a significant positive correlation between the MVC measured by digital palpation and the peak manometric pressure (r=0.579, p<0.001), and between the measurements of endurance by the Peritron manometer and the PERFECT assessment scheme (r=0.559, p<0.001). Conclusion: Our results revealed a positive and significant correlation between the capacity and maintenance of PFM contraction using digital and manometer evaluations in women with predominant symptoms of SUI.

  1. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
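The 'stable averaging' algorithm can be read as an incremental mean: after every sweep the memory already holds the calibrated average, channel by channel, and the quoted 36 dB maximum S/N improvement is simply 10·log10(2^12) for the largest selectable sweep count. A sketch of both ideas (illustrative, not the instrument's firmware):

```python
import math

def stable_average(sweeps):
    """Running (calibrated) mean: after each sweep the stored value is
    already the average of all sweeps so far, channel by channel."""
    avg = [0.0] * len(sweeps[0])
    for n, sweep in enumerate(sweeps, start=1):
        for i, x in enumerate(sweep):
            avg[i] += (x - avg[i]) / n   # incremental-mean update
    return avg

def snr_gain_db(n_sweeps):
    """Coherent averaging of N sweeps improves S/N by sqrt(N) in
    amplitude, i.e. 10*log10(N) in dB."""
    return 10.0 * math.log10(n_sweeps)
```

For 2^12 = 4096 sweeps, snr_gain_db returns about 36.1 dB, matching the instrument's stated maximum improvement.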

  2. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    CERN Document Server

El-Nashar, H.F.

    2002-01-01

We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches maximum. This means that a time-average phase locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this strength the cluster attracts individual oscillators or a cluster to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied.

  3. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    International Nuclear Information System (INIS)

El-Nashar, Hassan F.

    2002-09-01

We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches maximum. This means that a time-average phase locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this strength the cluster attracts individual oscillators or a cluster to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied. (author)

  4. Evaluating the maximum patient radiation dose in cardiac interventional procedures

    International Nuclear Information System (INIS)

    Kato, M.; Chida, K.; Sato, T.; Oosaka, H.; Tosa, T.; Kadowaki, K.

    2011-01-01

    Many of the X-ray systems that are used for cardiac interventional radiology provide no way to evaluate the patient maximum skin dose (MSD). The authors report a new method for evaluating the MSD by using the cumulative patient entrance skin dose (ESD), which includes a back-scatter factor and the number of cine-angiography frames during percutaneous coronary intervention (PCI). Four hundred consecutive PCI patients (315 men and 85 women) were studied. The correlation between the cumulative ESD and number of cine-angiography frames was investigated. The irradiation and overlapping fields were verified using dose-mapping software. A good correlation was found between the cumulative ESD and the number of cine-angiography frames. The MSD could be estimated using the proportion of cine-angiography frames used for the main angle of view relative to the total number of cine-angiography frames and multiplying this by the cumulative ESD. The average MSD (3.0±1.9 Gy) was lower than the average cumulative ESD (4.6±2.6 Gy). This method is an easy way to estimate the MSD during PCI. (authors)
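The estimation rule described above reduces to MSD ≈ cumulative ESD × (cine frames at the main view / total cine frames). A one-line sketch; the numbers in the test reproduce the reported averages (3.0 Gy MSD from 4.6 Gy ESD) with an illustrative frame split, not patient data:

```python
def estimate_msd(cumulative_esd_gy, frames_main_view, frames_total):
    """Estimate the maximum skin dose (MSD) as the cumulative entrance
    skin dose scaled by the fraction of cine-angiography frames acquired
    at the main angle of view."""
    return cumulative_esd_gy * frames_main_view / frames_total
```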

  5. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  6. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
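The correlation-maximization objective can be illustrated with a simple grid search over rotation angles; note the paper's point is precisely that a non-linear parameter estimation routine avoids grid-search precision limits, so this sketch (with illustrative function names and a stated clockwise-from-north sign convention) only demonstrates the objective being maximized:

```python
import math

def rotate(north, east, theta_deg):
    """Project horizontal components onto an axis theta degrees
    clockwise from north (the sketch's sign convention)."""
    th = math.radians(theta_deg)
    return [n * math.cos(th) + e * math.sin(th) for n, e in zip(north, east)]

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def best_azimuth(ref_north, north, east, step=1.0):
    """Grid search for the rotation of the test sensor's horizontals
    that maximizes correlation with the reference north component."""
    best = max((correlation(ref_north, rotate(north, east, a)), a)
               for a in [i * step for i in range(int(360 / step))])
    return best[1]
```

Constructing a test pair whose 30-degree rotation exactly reproduces the reference recovers the 30-degree relative azimuth.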

  7. Determination of maximum physiologic thyroid uptake and correlation with 24-hour RAI uptake value

    International Nuclear Information System (INIS)

    Duldulao, M.; Obaldo, J.

    2007-01-01

    Full text: In hyperthyroid patients, thyroid uptake values are overestimated, sometimes approaching or exceeding 100%. This is physiologically and mathematically impossible. This study was undertaken to determine the maximum physiologic thyroid uptake value through a proposed simple method using a gamma camera. Methodology: Twenty-two patients (17 females and 5 males), with ages ranging from 19-61 y/o (mean age ± SD; 41 ± 12), with 24-hour uptake value of >50%, clinically hyperthyroid and referred for subsequent radioactive iodine therapy were studied. The computed maximum physiologic thyroid uptake was compared with the 24-hour uptake using the paired Student t-test and evaluated using linear regression analysis. Results: The computed physiologic uptake correlated poorly with the 24-hour uptake value. However, in the male subgroup, there was no statistically significant difference between the two (p=0.77). Linear regression analysis gives the following relationship: physiologic uptake (%) = 77.76 - 0.284 (24-hour RAI uptake value). Conclusion: Provided that proper regions of interest are applied with correct attenuation and background subtraction, determination of physiologic thyroid uptake may be obtained using the proposed method. This simple method may be useful prior to I-131 therapy for hyperthyroidism especially when a single uptake determination is performed. (author)
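The reported regression is a direct one-liner; the input in the test is an illustrative 24-hour uptake value, not study data:

```python
def physiologic_uptake_pct(uptake_24h_pct):
    """Regression reported in the abstract:
    physiologic uptake (%) = 77.76 - 0.284 * (24-hour RAI uptake %)."""
    return 77.76 - 0.284 * uptake_24h_pct
```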

  8. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
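The Glauber dynamics associated with a pairwise maximum-entropy (Ising-type) model resamples one unit at a time from its conditional distribution given the rest of the population. This is a generic sketch of that standard update with a uniform reference measure and hypothetical parameters, i.e. the dynamics the authors criticize, not their modified model:

```python
import math
import random

def conditional_p_on(i, s, h, J, beta=1.0):
    """P(s_i = +1 | rest) under a pairwise maximum-entropy model with
    fields h and symmetric couplings J, for units s_j in {-1, +1}."""
    field = h[i] + sum(J[i][j] * s[j] for j in range(len(s)) if j != i)
    return 1.0 / (1.0 + math.exp(-2.0 * beta * field))

def glauber_step(s, h, J, beta=1.0, rng=random):
    """One Glauber update: pick a random unit and resample it from its
    conditional distribution given the rest of the population."""
    i = rng.randrange(len(s))
    s[i] = 1 if rng.random() < conditional_p_on(i, s, h, J, beta) else -1
    return s
```

With zero field the conditional probability is exactly 1/2; a positive field biases the unit toward +1, which is how strong positive couplings can drive the whole population into the unrealistic high-activity mode discussed above.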

  9. MAXIMUM CORONAL MASS EJECTION SPEED AS AN INDICATOR OF SOLAR AND GEOMAGNETIC ACTIVITIES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Gopalswamy, N.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    We investigate the relationship between the monthly averaged maximal speeds of coronal mass ejections (CMEs), international sunspot number (ISSN), and the geomagnetic Dst and Ap indices covering the 1996-2008 time interval (solar cycle 23). Our new findings are as follows. (1) There is a noteworthy relationship between monthly averaged maximum CME speeds and sunspot numbers, Ap and Dst indices. Various peculiarities in the monthly Dst index are correlated better with the fine structures in the CME speed profile than that in the ISSN data. (2) Unlike the sunspot numbers, the CME speed index does not exhibit a double peak maximum. Instead, the CME speed profile peaks during the declining phase of solar cycle 23. Similar to the Ap index, both CME speed and the Dst indices lag behind the sunspot numbers by several months. (3) The CME number shows a double peak similar to that seen in the sunspot numbers. The CME occurrence rate remained very high even near the minimum of the solar cycle 23, when both the sunspot number and the CME average maximum speed were reaching their minimum values. (4) A well-defined peak of the Ap index between 2002 May and 2004 August was co-temporal with the excess of the mid-latitude coronal holes during solar cycle 23. The above findings suggest that the CME speed index may be a useful indicator of both solar and geomagnetic activities. It may have advantages over the sunspot numbers, because it better reflects the intensity of Earth-directed solar eruptions.

  10. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

Thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model considers a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, the pressure drop is determined through an elaborate balancing of the overall pressure drop in the core against the sum of all individual channel pressure drops, employing an iterative scheme. Using this model, an accurate estimation of various timely core-averaged hydraulic parameters, such as generated power, hydraulic diameters and flow cross-sectional area, can be made for each of the ten fuel circles in the core. Furthermore, the distributions of coolant and fuel temperatures, including the maximum fuel temperature and its location in the core, can now be determined. Correlations among the core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various schemes of cooling have been investigated for the purpose of assessing their potential benefits to the operational characteristics of the Syrian MNSR reactor. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool which surrounds the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient; hence, the maximum operating time of the reactor is extended.

  11. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  12. A comparison of the Angstrom-type correlations and the estimation of monthly average daily global irradiation

    International Nuclear Information System (INIS)

    Jain, S.; Jain, P.C.

    1985-12-01

Linear regression analysis of the monthly average daily global irradiation and the sunshine duration data of 8 Zambian locations has been performed using the least-squares technique. Good correlation (r>0.95) is obtained in all cases, showing that the Angstrom equation is valid for Zambian locations. The values of the correlation parameters thus obtained show substantial unsystematic scatter. The analysis was repeated after incorporating into the Angstrom equation the effects of (i) multiple reflections of radiation between the ground and the atmosphere, and (ii) non-burning of the sunshine recorder chart. The surface albedo measurements at Lusaka were used. The scatter in the correlation parameters was investigated by graphical representation and by regression analysis of the data of the individual stations as well as the combined data of the 8 stations. The results show that incorporating neither of the two effects reduces the scatter significantly. A single linear equation obtained from the regression analysis of the combined data of the 8 stations is found to be appropriate for estimating the global irradiation over Zambian locations with reasonable accuracy from the sunshine duration data. (author)
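The Angstrom regression described here fits H/H0 = a + b(n/N), where H/H0 is the monthly clearness index and n/N the relative sunshine duration, by ordinary least squares. A self-contained sketch; the data points in the test are made up, not the Zambian measurements:

```python
def angstrom_fit(clearness, sunshine_fraction):
    """Least-squares fit of the Angstrom equation H/H0 = a + b*(n/N):
    `clearness` holds H/H0 values, `sunshine_fraction` the n/N values."""
    x, y = sunshine_fraction, clearness
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```

Points generated exactly from a = 0.25, b = 0.5 are recovered exactly, and the fitted (a, b) can then estimate irradiation from sunshine duration alone.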

  13. Correlation of diffusion and perfusion MRI with Ki-67 in high-grade meningiomas.

    Science.gov (United States)

    Ginat, Daniel T; Mangla, Rajiv; Yeaney, Gabrielle; Wang, Henry Z

    2010-12-01

    Atypical and anaplastic meningiomas have a greater likelihood of recurrence than benign meningiomas. The risk for recurrence is often estimated using the Ki-67 labeling index. The purpose of this study was to determine the correlation between Ki-67 and regional cerebral blood volume (rCBV) and between Ki-67 and apparent diffusion coefficient (ADC) in atypical and anaplastic meningiomas. A retrospective review of the advanced imaging and immunohistochemical characteristics of atypical and anaplastic meningiomas was performed. The relative minimum ADC, relative maximum rCBV, and specimen Ki-67 index were measured. Pearson's correlation was used to compare these parameters. There were 23 cases with available ADC maps and 20 cases with available rCBV maps. The average Ki-67 among the cases with ADC maps and rCBV maps was 17.6% (range, 5-38%) and 16.7% (range, 3-38%), respectively. The mean minimum ADC ratio was 0.91 (SD, 0.26) and the mean maximum rCBV ratio was 22.5 (SD, 7.9). There was a significant positive correlation between maximum rCBV and Ki-67 (Pearson's correlation, 0.69; p = 0.00038). However, there was no significant correlation between minimum ADC and Ki-67 (Pearson's correlation, -0.051; p = 0.70). Maximum rCBV correlated significantly with Ki-67 in high-grade meningiomas.

  14. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  15. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  16. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
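The analysis machinery here is a trailing moving average of one index correlated (Pearson) against another. A minimal sketch of both pieces with toy data; the 11-year window and the misery indices themselves are the paper's, everything in the test is illustrative:

```python
def trailing_mean(series, window):
    """Trailing moving average over the previous `window` values,
    defined from index window-1 onward (e.g. window=11 for the
    previous-decade average used in the paper)."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return num / (varx * vary) ** 0.5
```

One would align the literary misery series against trailing_mean(economic_misery, 11) and scan window lengths for the best goodness of fit.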

  17. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of He+ ions in metals: V

    International Nuclear Information System (INIS)

    Kaminsky, M.; Das, S.K.; Fenske, G.

    1976-01-01

In these experiments a systematic study of the correlation of the skin thickness measured directly by scanning electron microscopy with both the calculated projected-range values and the maximum in the damage-energy distribution has been conducted for a broad helium-ion energy range (100 keV-1000 keV) in polycrystalline vanadium. (Auth.)

  18. Remote Sensing of Three-dimensional Winds with Elastic Lidar: Explanation of Maximum Cross-correlation Method

    Science.gov (United States)

    Buttler, William T.; Soriano, Cecilia; Baldasano, Jose M.; Nickel, George H.

Maximum cross-correlation provides a method to remotely determine highly resolved three-dimensional fields of horizontal winds with elastic lidar throughout large volumes of the planetary boundary layer (PBL). This paper details the technique and shows comparisons between elastic lidar winds, remotely sensed laser Doppler velocimeter (LDV) wind profiles, and radiosonde winds. Radiosonde wind data were acquired at Barcelona, Spain, during the Barcelona Air-Quality Initiative (1992), and the LDV wind data were acquired at Sunland Park, New Mexico, during the 1994 Border Area Air-Quality Study. Comparisons show good agreement between the different instruments, and demonstrate the method useful for air pollution management at the local/regional scale. Elastic lidar winds could thus offer insight into aerosol and pollution transport within the PBL. Lidar wind fields might also be used to nudge or improve initialization and evaluation of atmospheric meteorological models.
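The core of the maximum cross-correlation technique is finding the lag at which the aerosol backscatter pattern in one lidar scan best matches the next scan; displacement divided by the inter-scan time then gives the wind. A 1-D sketch of the 2-D field method (function and variable names are illustrative):

```python
def best_shift(frame_a, frame_b, max_shift):
    """Return the lag (in samples) at which frame_b best matches frame_a
    by normalized cross-correlation; the displacement of the aerosol
    pattern between two scans, divided by the scan interval, estimates
    the wind component along this axis."""
    def corr_at(lag):
        pairs = [(frame_a[i], frame_b[i + lag])
                 for i in range(len(frame_a))
                 if 0 <= i + lag < len(frame_b)]
        xa = [p[0] for p in pairs]
        xb = [p[1] for p in pairs]
        ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
        num = sum((a - ma) * (b - mb) for a, b in zip(xa, xb))
        den = (sum((a - ma) ** 2 for a in xa) *
               sum((b - mb) ** 2 for b in xb)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_shift, max_shift + 1), key=corr_at)
```

A synthetic aerosol "blob" moved 3 samples between frames is recovered as a lag of 3.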

  19. Spatial correlation in precipitation trends in the Brazilian Amazon

    Science.gov (United States)

    Buarque, Diogo Costa; Clarke, Robin T.; Mendes, Carlos Andre Bulhoes

    2010-06-01

    A geostatistical analysis of variables derived from Amazon daily precipitation records (trends in annual precipitation totals, trends in annual maximum precipitation accumulated over 1-5 days, trend in length of dry spell, trend in number of wet days per year) gave results that are consistent with those previously reported. Averaged over the Brazilian Amazon region as a whole, trends in annual maximum precipitations were slightly negative, the trend in the length of dry spell was slightly positive, and the trend in the number of wet days in the year was slightly negative. For trends in annual maximum precipitation accumulated over 1-5 days, spatial correlation between trends was found to extend up to a distance equivalent to at least half a degree of latitude or longitude, with some evidence of anisotropic correlation. Time trends in annual precipitation were found to be spatially correlated up to at least ten degrees of separation, in both W-E and S-N directions. Anisotropic spatial correlation was strongly evident in time trends in length of dry spell with much stronger evidence of spatial correlation in the W-E direction, extending up to at least five degrees of separation, than in the S-N. Because the time trends analyzed are shown to be spatially correlated, it is argued that methods at present widely used to test the statistical significance of climate trends over time lead to erroneous conclusions if spatial correlation is ignored, because records from different sites are assumed to be statistically independent.

  20. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
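The decade-averaging fit described above is easy to sketch numerically; the synthetic series below are purely illustrative assumptions, not the authors' data:

```python
import numpy as np

def trailing_mean(x, window):
    """Trailing moving average: mean of the previous `window` values."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Synthetic illustration: an economic misery series (inflation + unemployment)
# and a 'literary misery' series that echoes its 11-year trailing average.
rng = np.random.default_rng(0)
econ = rng.normal(10.0, 3.0, 90)                    # 90 years of economic misery
econ_ma = trailing_mean(econ, 11)                   # decade-scale moving average
lit = econ_ma + rng.normal(0.0, 0.2, econ_ma.size)  # literary index, aligned by construction

r = float(np.corrcoef(lit, econ_ma)[0, 1])          # goodness of fit at the 11-year window
```

Scanning such correlations over different window lengths is how a peak like the paper's 11-year optimum would be located.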

  1. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis on the time series data of the average annual values. The analysis results found that the latitude of the TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference of the average latitude between 1999 and 2013 and the average between 1977 and 1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  2. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering, but this algorithm may also fail in certain cases. To overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce a better clustering solution than those produced by some other algorithms. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and with similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on the Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. 
Two word files (included in the zip file) need to
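The core assignment rule described above, where every gene joins the cluster with which its average correlation is highest, can be sketched as follows; the initialization, iteration count, and toy data are illustrative assumptions, since the published ACCA differs in its details:

```python
import numpy as np

def acca_like(expr, k, iters=20):
    """Toy average-correlation clustering in the spirit of ACCA: repeatedly
    reassign each gene to the cluster with which it has the highest average
    correlation. `expr` is genes x conditions."""
    n = expr.shape[0]
    corr = np.corrcoef(expr)                  # gene-gene Pearson correlations
    labels = np.arange(n) % k                 # deterministic round-robin start
    for _ in range(iters):
        for g in range(n):
            scores = [corr[g, labels == c].mean() if np.any(labels == c) else -np.inf
                      for c in range(k)]      # note: includes self-correlation
            labels[g] = int(np.argmax(scores))
    return labels

# Two groups with opposite patterns of variation across 8 conditions.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))
expr = np.vstack([base + rng.normal(0, 0.05, 8) for _ in range(3)] +
                 [-base + rng.normal(0, 0.05, 8) for _ in range(3)])
labels = acca_like(expr, 2)
```

Because the rule compares correlations rather than distances, the two groups separate by their pattern of variation even if their absolute expression levels were rescaled.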

  3. The spectrum of R Cygni during its exceptionally low maximum of 1983

    International Nuclear Information System (INIS)

    Wallerstein, G.; Dominy, J.F.; Mattei, J.A.; Smith, V.V.

    1985-01-01

    In 1983 R Cygni experienced its faintest maximum ever recorded. A study of the light curve shows correlations between brightness at maximum and interval from the previous cycle, in the sense that fainter maxima occur later than normal and are followed by maxima that occur earlier than normal. Emission and absorption lines in the optical and near infrared (2.2 μm region) reveal two significant correlations. The amplitude of line doubling is independent of the magnitude at maximum for m_v(max) = 7.1 to 9.8. The velocities of the emission lines, however, correlate with the magnitude at maximum, in that during bright maxima they are negatively displaced by 15 km s^-1 with respect to the red component of absorption lines, while during the faintest maximum there is no displacement. (author)

  4. Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery

    Science.gov (United States)

    Alam, Mohammad S.; Bhuiyan, Sharif M. A.

    2014-01-01

    In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840
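The MACH filter named above averages the training-image spectra and normalizes by average power and similarity terms; a simplified frequency-domain sketch (the identity noise covariance and the α, β, γ weights are illustrative assumptions, and the published MACH variants tune these terms differently):

```python
import numpy as np

def mach_filter(train, alpha=0.1, beta=1.0, gamma=1.0):
    """Simplified MACH-type correlation filter in the frequency domain.
    train: stack of registered training images, shape (N, H, W)."""
    F = np.fft.fft2(train, axes=(-2, -1))
    m = F.mean(axis=0)                        # average training spectrum
    D = (np.abs(F) ** 2).mean(axis=0)         # average power spectral density
    S = (np.abs(F - m) ** 2).mean(axis=0)     # average similarity (variance) term
    C = np.ones_like(D)                       # white-noise (identity) covariance
    return np.conj(m) / (alpha * C + beta * D + gamma * S)

def correlate(image, H):
    """Correlation plane; a sharp peak marks the detected target location."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Train on noisy centred targets, then locate a circularly shifted target.
rng = np.random.default_rng(2)
target = np.zeros((16, 16))
target[6:10, 6:10] = 1.0
train = target[None, :, :] + rng.normal(0, 0.01, (3, 16, 16))
H = mach_filter(train)
plane = correlate(np.roll(target, (3, 2), axis=(0, 1)), H)
peak = np.unravel_index(int(np.argmax(plane)), plane.shape)
```

The correlation peak location recovers the target shift, which is the property tracking loops exploit frame to frame.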

  5. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study checks the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters like the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV) have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.

  6. The correlation between physical activity and grade point average for health science graduate students.

    Science.gov (United States)

    Gonzalez, Eugenia C; Hernandez, Erika C; Coltrane, Ambrosia K; Mancera, Jayme M

    2014-01-01

    Researchers have reported positive associations between physical activity and academic achievement. However, a common belief is that improving academic performance comes at the cost of reducing time for and resources spent on extracurricular activities that encourage physical activity. The purpose of this study was to examine the relationship between self-reported physical activity and grade point average (GPA) for health science graduate students. Graduate students in health science programs completed the International Physical Activity Questionnaire and reported their academic progress. Most participants (76%) reported moderate to vigorous physical activity levels that met or exceeded the recommended levels for adults. However, there was no significant correlation between GPA and level of physical activity. Negative findings for this study may be associated with the limited range of GPA scores for graduate students. Future studies need to consider more sensitive measures of cognitive function, as well as the impact of physical activity on occupational balance and health for graduate students in the health fields. Copyright 2014, SLACK Incorporated.

  7. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices > 1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
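A standard time-domain way to realize power-law noise 1/f^α, consistent with, though not necessarily identical to, the filter construction described above, is Hosking's fractional-difference recursion applied to white noise:

```python
import numpy as np

def powerlaw_filter(n, alpha):
    """Impulse response h_k of a fractional-integration filter whose output
    has a 1/f^alpha power spectrum (Hosking recursion). alpha = 0 gives
    white noise, alpha = 2 a random walk."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def powerlaw_noise(n, alpha, rng):
    """Filter unit white noise through h to obtain power-law noise."""
    w = rng.standard_normal(n)
    return np.convolve(powerlaw_filter(n, alpha), w)[:n]

rng = np.random.default_rng(42)
flicker = powerlaw_noise(2048, 1.0, rng)   # alpha = 1: flicker noise
```

The data covariance of such a process is F Fᵀ with F the lower-triangular Toeplitz matrix built from h, which is the structure the MLE speed-ups above manipulate.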

  8. Predicting the start and maximum amplitude of solar cycle 24 using similar phases and a cycle grouping

    International Nuclear Information System (INIS)

    Wang Jialong; Zong Weiguo; Le Guiming; Zhao Haijuan; Tang Yunqiu; Zhang Yang

    2009-01-01

    We find that the solar cycles 9, 11, and 20 are similar to cycle 23 in their respective descending phases. Using this similarity and the observed data of smoothed monthly mean sunspot numbers (SMSNs) available for the descending phase of cycle 23, we make a date calibration for the average time sequence made of the three descending phases of the three cycles, and predict the start of March or April 2008 for cycle 24. For the three cycles, we also find a linear correlation of the length of the descending phase of a cycle with the difference between the maximum epoch of this cycle and that of its next cycle. Using this relationship along with the known relationship between the rise-time and the maximum amplitude of a slowly rising solar cycle, we predict the maximum SMSN of cycle 24 of 100.2 ± 7.5 to appear during the period from May to October 2012. (letters)

  9. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ~ 2 D_ν t^β Δ^(ν-β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ~ t^ν, while β ≥ -1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ~ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
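The time-averaged mean-squared displacement discussed above is computed from a single trajectory as an average over start times; a minimal estimator:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged mean-squared displacement of one trajectory:
    TA-MSD(Δ) = mean over t of [x(t+Δ) - x(t)]^2."""
    return np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])

# Sanity check on ballistic motion: x(t) = v t gives TA-MSD(Δ) = v^2 Δ^2 exactly.
t = np.arange(1000, dtype=float)
msd = time_averaged_msd(2.0 * t, [1, 2, 4])
```

Comparing this single-trajectory curve against the ensemble-averaged ⟨x²⟩ is exactly how the weak ergodicity breaking described above is diagnosed in tracking experiments.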

  10. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it

  11. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. 
It should
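The shrinkage penalty found advantageous above can be illustrated in its simplest linear form; the actual penalty enters the likelihood, so this direct matrix blend is only a sketch:

```python
import numpy as np

def shrink_correlation(R_g, R_p, lam):
    """Shrink the genetic correlation matrix R_g towards the phenotypic
    correlation matrix R_p. `lam` is the tuning factor (0 = no penalty,
    1 = full shrinkage); the paper estimates it by cross-validation or a
    mild-penalty rule, neither of which is reproduced here."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("tuning factor must lie in [0, 1]")
    return (1.0 - lam) * np.asarray(R_g) + lam * np.asarray(R_p)

R_g = np.array([[1.0, 0.9], [0.9, 1.0]])   # noisy genetic correlations
R_p = np.array([[1.0, 0.3], [0.3, 1.0]])   # better-estimated phenotypic ones
R_shrunk = shrink_correlation(R_g, R_p, 0.5)
```

The blend keeps unit diagonals while pulling the poorly estimated off-diagonal elements towards their better-estimated phenotypic counterparts, which is the "borrowing strength" idea in miniature.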

  12. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  13. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  14. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
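For reference, both definitions compared above build on the stationary Child-Langmuir limit J_CL = (4 ε0 / 9) √(2e/m_e) V^(3/2) / d²; a quick evaluation:

```python
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg

def child_langmuir(V, d):
    """Stationary Child-Langmuir space-charge-limited current density
    (A/m^2) for a planar 1D diode with voltage V (volts) and gap d (m)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E / ME) * V ** 1.5 / d ** 2

J = child_langmuir(1.0e3, 1.0e-2)   # 1 kV across a 1 cm gap, roughly 7.4e2 A/m^2
```

With a time-varying voltage, the dispute above is whether to compare the averaged current against this expression evaluated at the average or at the maximum applied voltage.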

  15. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues, inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publically available WLS-ICE software.
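For a linear model y ≈ A p, the WLS-ICE idea above, ordinary weighted-least-squares estimates but error bars computed from the full covariance matrix, takes the standard sandwich form (a sketch under that linear-model assumption, not the authors' published software):

```python
import numpy as np

def wls_with_correlated_errors(A, y, C):
    """Weighted least squares for y ≈ A @ p, weighting by 1/variance as in
    ordinary WLS, but computing the parameter covariance with the FULL data
    covariance C, so temporal correlations enter the error estimate."""
    W = np.diag(1.0 / np.diag(C))                # weights ignore correlations
    G = np.linalg.solve(A.T @ W @ A, A.T @ W)    # WLS estimator matrix
    p = G @ y
    cov_p = G @ C @ G.T                          # correlation-aware error bars
    return p, cov_p

# Straight-line fit y = 1 + 2x under a uniformly cross-correlated noise model.
x = np.arange(10.0)
A = np.column_stack([np.ones_like(x), x])
C = 0.5 * np.eye(10) + 0.5                       # variance 1, covariance 0.5
p, cov_p = wls_with_correlated_errors(A, 1.0 + 2.0 * x, C)
```

Dropping the full C from the last step reproduces the naive weighted-least-squares errors that the paper shows to be inaccurate under temporal correlation.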

  16. Detrending moving-average cross-correlation coefficient: Measuring cross-correlations between non-stationary series

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    Vol. 406, No. 1 (2014), pp. 169-175. ISSN 0378-4371. R&D Projects: GA ČR (CZ) GP14-11402P. Grant - others: GA ČR (CZ) GAP402/11/0948. Program: GA. Institutional support: RVO:67985556. Keywords: correlations * econophysics * non-stationarity. Subject RIV: AH - Economics. Impact factor: 1.732, year: 2014. http://library.utia.cas.cz/separaty/2014/E/kristoufek-0433529.pdf

  17. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  18. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    The use of the maximum length sequence (m-sequence) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial of different orders, the selection of polynomial order can be problematic in practice. Usually, the m-sequence is repetitively delivered in a looped fashion. Ensemble averaging is carried out as the first step, followed by cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on an additive noise model, theoretical equations have been derived in the present study for measuring noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio in designing m-sequence experiments.
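An m-sequence of order n is produced by a linear-feedback shift register over a primitive polynomial and has period 2^n − 1; a sketch for order 7 (the taps below encode x^7 + x^6 + 1, a standard primitive choice, not necessarily the polynomial used in the paper):

```python
def m_sequence(taps, n):
    """One period (2**n - 1 bits) of a maximum length sequence from a
    Fibonacci LFSR whose feedback taps (1-indexed stages) correspond to a
    primitive polynomial of degree n."""
    state = [1] * n                      # any nonzero seed gives the same cycle
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])            # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # XOR of the tapped stages
        state = [fb] + state[:-1]        # shift, feeding back into stage 1
    return out

seq = m_sequence([7, 6], 7)              # x^7 + x^6 + 1, period 127
```

Consistent with the result above, the achievable noise attenuation depends on the total length of valid data (period times number of looped sweeps) rather than on the order itself, which is why the order can be chosen for stimulation-rate convenience.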

  19. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
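In its simplest form the estimator-correlator reduces to a bank of matched correlators over the hypothesized signals, with the largest statistic identifying the transmitted one; the BPSK-style codes below are an illustrative simplification of the patented MAP scheme, which additionally estimates the random phase:

```python
import numpy as np

def decide(received, templates):
    """Matched-correlator decision: index of the hypothesized signal with
    the largest correlation magnitude against the received samples."""
    scores = [np.abs(np.vdot(tmpl, received)) for tmpl in templates]
    return int(np.argmax(scores))

# Two phase-coded hypotheses and a noisy observation of the second one.
rng = np.random.default_rng(3)
n = 64
s0 = np.exp(1j * np.pi * rng.integers(0, 2, n))   # binary phase code 0
s1 = np.exp(1j * np.pi * rng.integers(0, 2, n))   # binary phase code 1
x = s1 + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
choice = decide(x, [s0, s1])
```

Taking the correlation magnitude makes the decision insensitive to a constant unknown carrier phase, a crude stand-in for the MAP phase estimation step.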

  20. On correlations between certain random variables associated with first passage Brownian motion

    International Nuclear Information System (INIS)

    Kearney, Michael J; Pye, Andrew J; Martin, Richard J

    2014-01-01

    We analyse how the area swept out by a Brownian motion up to its first passage time correlates with the first passage time itself, obtaining several exact results in the process. Additionally, we analyse the relationship between the time average of a Brownian motion during a first passage and the maximum value attained. The results, which find various applications, are in excellent agreement with simulations. (paper)

  1. Disentangling multi-level systems: averaging, correlations and memory

    International Nuclear Information System (INIS)

    Wouters, Jeroen; Lucarini, Valerio

    2012-01-01

    We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems

  2. Waveform correlation and coherence of short-period seismic noise within Gauribidanur array with implications for event detection

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Arora, S.K.

    1995-01-01

    In continuation with our effort to model the short-period microseismic noise at the seismic array at Gauribidanur (GBA), we have examined in detail the time-correlation and spectral coherence of the noise field within the array space. This has implications for the maximum possible improvement in signal-to-noise ratio (SNR) relevant to event detection. The basis of this study is about a hundred representative wide-band noise samples collected from GBA throughout the year 1992. Both the time-structured correlation and the coherence of the noise waveforms are found to be practically independent of the inter-element distances within the array, and they exhibit strong temporal and spectral stability. It turns out that the noise is largely incoherent at frequencies ranging upwards from 2 Hz; the coherency coefficient tends to increase in the lower frequency range, attaining a maximum of 0.6 close to 0.5 Hz. While the maximum absolute cross-correlation also diminishes with increasing frequency, the zero-lag cross-correlation is found to be insensitive to frequency filtering regardless of the pass band. An extremely small value of -0.01 for the zero-lag correlation and a comparatively higher year-round average estimate of 0.15 for the maximum absolute time-lagged correlation yield an SNR improvement varying between a probable high of 4.1 and a low of 2.3 for the full 20-element array. 19 refs., 6 figs
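Under a simple additive-noise model (an assumption here, not necessarily the study's exact formulation), the SNR gain from summing N sensors whose noise has average pairwise correlation ρ is √(N / (1 + (N − 1)ρ)), which reproduces the quoted low estimate of about 2.3 at ρ = 0.15:

```python
import math

def array_snr_gain(n, rho):
    """SNR improvement from summing n sensors whose noise has average
    pairwise correlation rho; rho = 0 recovers the classic sqrt(n)."""
    return math.sqrt(n / (1.0 + (n - 1) * rho))

gain_low = array_snr_gain(20, 0.15)    # about 2.3 for the 20-element array
```

The formula makes the qualitative point of the abstract explicit: the more correlated the noise across elements, the less the array stacking can improve detection.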

  3. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
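The Hargreaves model mentioned above estimates atmospheric evaporative demand from temperature extremes and extraterrestrial radiation; in its common form (the 0.0023 coefficient and radiation expressed in evaporation-equivalent mm/day follow the standard presentation, an assumption here rather than the paper's exact calibration):

```python
import math

def hargreaves_et0(t_max, t_min, ra):
    """Hargreaves reference evapotranspiration (mm/day).
    t_max, t_min: monthly average max/min air temperature (deg C);
    ra: extraterrestrial (clear-sky exoatmospheric) radiation in
    evaporation-equivalent mm/day."""
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0)) * ra

et0 = hargreaves_et0(28.0, 14.0, 15.0)   # a warm tropical month, ~5 mm/day
```

Subtracting monthly precipitation from such a demand estimate, cell by cell, gives the water-balance maps described above.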

  4. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements under the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    М.А. Zemlyanova

    2015-03-01

    We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, performed on the results of our own in-depth epidemiological studies of the children's population according to the principles of international risk assessment practice. It was established that children aged 4-7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigmentary exchange, antioxidative activity, cytolysis, immune reactivity and cytogenetic imbalance, which contribute to increased morbidity from diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proved cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which provides safety for the health of the population (1 threshold level value for continuous exposure).

  5. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Walker, William C. [ORNL

    2018-02-01

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.

  6. [In patients with Graves' disease signal-averaged P wave duration positively correlates with the degree of thyrotoxicosis].

    Science.gov (United States)

    Czarkowski, Marek; Oreziak, Artur; Radomski, Dariusz

    2006-04-01

Coexistence of goitre, proptosis and palpitations was first observed in the 19th century. Sinus tachyarrhythmias and atrial fibrillation are typical cardiac symptoms of hyperthyroidism. Atrial fibrillation occurs more often in patients with toxic goitre than in young patients with Graves' disease. These findings suggest that the causes of atrial fibrillation might be multifactorial in the elderly. The aim of our study was to evaluate correlations between the parameters of the atrial signal-averaged ECG (SAECG) and the serum concentrations of free thyroid hormones. 25 patients with untreated Graves' disease (G-B) (age 29.6 +/- 9.0 y.o.) and 26 control patients (age 29.3 +/- 6.9 y.o.) were enrolled in our study. None of them had a history of atrial fibrillation, which was confirmed by 24-hour ECG Holter monitoring. Serum fT3, fT4 and TSH were determined in venous blood by an immunoenzymatic method. Atrial SAECG recording with filtration by a zero-phase Butterworth filter (45-150 Hz) was done in all subjects. The duration of the atrial vector magnitude (hfP) and the root mean square of the terminal 20 ms of the atrial vector magnitude (RMS20) were analysed. There were no significant differences in the values of the SAECG parameters (hfP, RMS20) between the investigated groups. A positive correlation between hfP and serum fT3 concentration in group G-B was observed (Spearman's correlation coefficient R = 0.462). These results suggest that hfP duration in Graves' disease depends not only on hyperthyroidism but also on the serum concentration of fT3.
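The rank correlation reported in this record (Spearman's R between hfP and serum fT3) can be computed with a short self-contained routine. This is a generic sketch using average ranks for ties, not the study's analysis code:

```python
def _ranks(xs):
    # average ranks (1-based), assigning tied values their mean rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's coefficient = Pearson correlation of the rank vectors
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```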

  7. Correlation between impurities, defects and cell performance in semicrystalline silicon

    International Nuclear Information System (INIS)

    Doolittle, W.A.; Rohatgi, A.

    1990-01-01

This paper reports an in-depth analysis of Solarex CDS semicrystalline silicon in which correlations were made between cell efficiency and the impurities and defects present in the material. Comparisons were made between cell performance and variations in interstitial oxygen, substitutional carbon, grain size, etch pit density, and trap location as a function of position in the ingot. The oxygen concentration was found to decrease with increasing distance from the bottom of the ingot, while the carbon concentration as well as the average grain size was found to increase. The best cell performance was obtained on wafers with minimum oxygen and maximum carbon (top). No correlation was found between etch pit density and cell performance. DLTS and JVT measurements revealed that samples with higher oxygen content (bottom) gave lower cell performance due to a large number of distributed states, possibly due to extended defects such as oxygen precipitates. Low-oxygen samples (top) showed predominantly discrete states, improved cell performance, and a doping-dependent average trap density.

  8. The spleen-liver uptake ratio in liver scan: review of its measurement and correlation between hemodynamical changes of the liver in portal hypertension

    International Nuclear Information System (INIS)

    Lee, S. Y.; Chung, Y. A.; Chung, H. S.; Lee, H. G.; Kim, S. H.; Chung, S. K.

    1999-01-01

We analyzed the correlation between changes of the Spleen-Liver Ratio in the liver scintigram and hemodynamic changes of the liver across all grades of portal hypertension by a non-invasive, scintigraphic method. The methods for measurement of the Spleen-Liver Ratio were also reviewed. Hepatic scintiangiograms for 120 seconds with 250-333 MBq of 99mTc-Sn-phytate, followed by liver scintigrams, were performed in a group of 62 patients consisting of clinically proven normal subjects and cases of various diffuse hepatocellular diseases. Hepatic Perfusion Indices were calculated from the Time-Activity Curves of the hepatic scintiangiograms. Spleen-Liver Ratios of maximum, average and total counts within ROIs of the liver and spleen from both anterior and posterior liver scintigrams, and their geometric means, were calculated. Linear correlations between each Spleen-Liver Ratio and the Hepatic Perfusion Index were evaluated. There was a strong correlation (y = 0.0002x² - 0.0049x + 0.2746, R = 0.8790, p<0.0001) between Hepatic Perfusion Indices and Spleen-Liver Ratios calculated from posterior maximum counts of the liver scintigrams. Weaker correlations with either geometric means of the maximum- and average-count methods (R = 0.8101, 0.7268, p<0.0001) or average counts of posterior and anterior views (R = 0.8134, 0.6200, p<0.0001) were noted. We reconfirmed that changes of the Spleen-Liver Ratio in liver scintigrams represent hemodynamic changes in portal hypertension of diffuse hepatocellular diseases. Among these measures, the posterior Spleen-Liver Ratio measured by maximum counts gives the best information, and matching it with the Hepatic Perfusion Index will be another useful index to evaluate the characteristic splenic extraction coefficient of a given radiocolloid for liver scintigrams.
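The quadratic relation reported in this record between the Hepatic Perfusion Index and the posterior maximum-count Spleen-Liver Ratio can be reproduced in form with an ordinary least-squares fit. The x values below are hypothetical, and y is generated from the record's published polynomial purely for illustration; this is not the study's data:

```python
import numpy as np

# Hypothetical x = Hepatic Perfusion Index values (placeholders, not study data);
# y is synthesized from the quadratic reported in the record.
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
y = 0.0002 * x**2 - 0.0049 * x + 0.2746

coeffs = np.polyfit(x, y, deg=2)   # least-squares quadratic fit
y_hat = np.polyval(coeffs, x)
r = np.corrcoef(y, y_hat)[0, 1]    # correlation of fitted vs observed values
```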

  9. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
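The entropy lower bound cited in this record (minimum average depth bounded below by the entropy of the probability distribution, with a multiplier 1/log2 k for a k-valued information system) is easy to evaluate directly; a minimal sketch:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem over a k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)
```

For a uniform distribution over 8 outcomes and binary attributes (k = 2), the bound is 3, which is attained by a balanced tree of depth 3.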

  10. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  11. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Science.gov (United States)

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.

  12. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Energy Technology Data Exchange (ETDEWEB)

    Borcherdt, R.D.; Gibbs, J.F.

    1975-01-01

    The intensity data for the California earthquake of Apr 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan formation is intensity = 2.69 - 1.90 log (distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the average horizontal spectral amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is intensity increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan formation, 0.64 for the Great Valley sequence, 0.82 for Santa Clara formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
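The two empirical relations reported in this record translate directly into code; a minimal sketch (units as in the record: distance in km, AHSA dimensionless):

```python
import math

def intensity_1906_franciscan(distance_km):
    # empirical 1906 relation for sites on the Franciscan Formation
    return 2.69 - 1.90 * math.log10(distance_km)

def intensity_increment(ahsa):
    # increment relative to the Franciscan baseline, from the average
    # horizontal spectral amplification (AHSA)
    return 0.27 + 2.70 * math.log10(ahsa)
```

A site's predicted intensity is the Franciscan baseline at its fault distance plus the increment for its geologic unit; an AHSA of 1 gives an increment of 0.27, consistent with the 0.19 average reported for the Franciscan Formation itself.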

  13. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    Directory of Open Access Journals (Sweden)

    Jae-Won Choi

    2016-01-01

Full Text Available This study obtained the latitude at which tropical cyclones (TCs) reach maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the 1999–2013 average and the 1977–1998 average was analyzed. In the difference of the 500 hPa streamlines between the two periods, anomalous anticyclonic circulations were strong at 30°–50°N, while an anomalous monsoon trough was located north of the South China Sea. This anomalous monsoon trough extended eastward to 145°E. The middle-latitude region of East Asia is affected by anomalous southeasterlies due to these anomalous anticyclonic circulations and the anomalous monsoon trough. These anomalous southeasterlies act as anomalous steering flows that direct TCs toward the middle-latitude region of East Asia. As a result, TCs during 1999–2013 reached maximum intensity at higher latitudes than TCs during 1977–1998.

  14. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period through updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
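The period-estimation idea attributed to IMCKD above (autocorrelation of the envelope signal instead of a user-supplied prior period) can be illustrated on a synthetic impulse train. This simplified sketch uses the rectified signal as a crude envelope rather than a Hilbert envelope, and is not the authors' implementation:

```python
import numpy as np

def estimate_period(x, fs, min_lag_s=0.001):
    """Estimate the dominant repetition period of impulsive content in x.

    Sketch of the envelope-autocorrelation idea: take an envelope (here
    simply |x|), autocorrelate it, and return the lag of the highest
    autocorrelation peak beyond a minimum lag.
    """
    env = np.abs(x) - np.mean(np.abs(x))
    ac = np.correlate(env, env, mode="full")[len(x) - 1:]  # lags 0..N-1
    start = max(int(min_lag_s * fs), 1)                    # skip the zero-lag peak
    lag = start + int(np.argmax(ac[start:]))
    return lag / fs

# synthetic train of impulses every 0.01 s, sampled at 10 kHz
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
sig = np.zeros_like(t)
sig[::100] = 1.0  # one impulse every 100 samples = 0.01 s
```

On this clean signal the estimator recovers the 0.01 s spacing; a real bearing signal would first be band-pass filtered around a resonance before enveloping.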

  15. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands or the Lesser Antilles, subduction zones pose a significant hazard to people. To understand the behavior of subduction zones, and especially to identify their capability to produce maximum-magnitude earthquakes, various physical models have been developed, leading to a large number of datasets, e.g. from geodesy, geomagnetics, structural geology, etc. There have been various studies utilizing this data to compile subduction zone parameter databases, but mostly concentrating on only the major zones. Here, we compile the largest dataset of subduction zone parameters to date, both in parameter diversity and in the number of subduction zones considered. In total, more than 70 individual sources have been assessed, and the aforementioned parametric data have been combined with seismological data and many more sources, leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since completeness depends on the data availability and quality for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and compared and verified with results from previous studies. With such a database, a statistical study has been undertaken to identify not only correlations between those parameters, providing a parameter-driven way of identifying the potential for maximum possible magnitudes, but also similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones. Here, it could be expected that if two sources share enough common characteristics, other characteristics of interest may be similar as well. This concept

  16. Mapping Comparison and Meteorological Correlation Analysis of the Air Quality Index in Mid-Eastern China

    Directory of Open Access Journals (Sweden)

    Zhichen Yu

    2017-02-01

Full Text Available With the continuous progress of human production and life, air quality has become a focus of attention. In this paper, Beijing, Tianjin, Hebei, Shanxi, Shandong and Henan provinces were taken as the study area, where 58 air quality monitoring stations provide daily and monthly data. Firstly, the temporal characteristics of the air quality index (AQI) are explored. Then, the spatial distribution of the AQI is mapped by the inverse distance weighted (IDW) method, the ordinary kriging (OK) method and the Bayesian maximum entropy (BME) method. Cross-validation is utilized to evaluate the mapping results of these methods with two indexes: mean absolute error and root mean square interpolation error. Furthermore, correlation analysis of meteorological factors potentially affecting the AQI, including precipitation anomaly percentage, precipitation, mean wind speed, average temperature, average water vapor pressure and average relative humidity, was carried out on both daily and monthly scales. In the study area and period, the AQI shows a clear periodicity, although overall it has a downward trend. The peaks of the AQI appeared in November, December and January. BME interpolation has higher accuracy than OK; IDW has the maximum error. Overall, air quality in winter (November) and spring (February) is much worse than in summer (May) and autumn (August), and air quality improved during the study period. The most polluted areas are concentrated in Beijing, the southern part of Tianjin, the central-southern part of Hebei, the central-northern part of Henan and the western part of Shandong. Average wind speed and average relative humidity show a real correlation with the AQI. The effect of meteorological factors such as wind, precipitation and humidity on the AQI appears to lag in time to different extents. The AQI of cities with poor air quality will fluctuate more than that of others when weather
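The IDW mapping and the cross-validation indexes (mean absolute error and root mean square error) used in this record can be sketched generically; the station coordinates and values below are placeholders, not the study's data:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of z at the query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps guards against zero distance
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

def loo_errors(xy, z, power=2.0):
    """Leave-one-out cross-validation: MAE and RMSE of the IDW predictions."""
    preds = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        preds.append(idw(xy[mask], z[mask], xy[i:i + 1], power)[0])
    err = np.asarray(preds) - z
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))
```

Kriging and BME additionally model the spatial covariance of the field, which is why they can outperform this purely distance-based weighting.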

  17. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software....
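As a point of comparison for the WLS-ICE method described in this record, a plain weighted least-squares fit that ignores correlations between time points looks like the following (a generic sketch of fitting a line through the origin, such as MSD = a·t; not the authors' software):

```python
import numpy as np

def weighted_linear_fit(t, y, var):
    """Weighted least-squares fit of y = a * t (e.g. an MSD vs time curve),
    weighting each time point by the inverse of its variance.

    Note: this treats the points as independent; WLS-ICE additionally
    propagates the correlations between time points into the error estimate.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    w = 1.0 / np.asarray(var, float)
    a = np.sum(w * t * y) / np.sum(w * t * t)
    a_var = 1.0 / np.sum(w * t * t)   # slope variance under independence
    return a, a_var
```

Because trajectory-based averages at nearby times share the same underlying trajectories, the independence assumption here typically underestimates the parameter uncertainty, which is the gap WLS-ICE addresses.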

  18. Correlative investigation of dynamic contrast CT and positron emission tomography with 18-fluorodeoxy glucose standardized uptake value in non-small cell lung cancer

    International Nuclear Information System (INIS)

    Ding Qiyong; Hua Yanqing; Zhu Feng; Mao Dingbiao; Ge Xiaojun; Zhang Guozhen; Guan Yihui; Zhao Jun

    2005-01-01

Objective: To explore the correlation of dynamic enhanced CT attenuation and 18-fluorodeoxyglucose (18F-FDG) standardized uptake value (SUV) in non-small cell lung cancer (NSCLC). Methods: Twenty-eight NSCLC patients and 13 patients with benign nodules (28 male, 13 female; age range 15-79 years, median 57 years; diameter range 0.8-4.0 cm, mean 2.2 cm) were examined on a Siemens Biograph Sensation 16 PET-CT with 18F-FDG. Dynamic enhanced CT scans were performed on a Siemens Sensation 16 PET-CT or 16-slice CT in 23 patients; the other 18 patients had results of dynamic CT from other hospitals. The mean CT attenuation of the ROI on precontrast and postcontrast multi-phase images, and the maximum and average SUV of 18F-FDG, were measured. The correlation between the peak attenuation (A_PA) and SUV was analyzed with Pearson's correlation coefficient test. Results: The CT A_PA between NSCLC and benign nodules had no significant difference (t=1.374, P=0.189). The differences of maximum and average SUV between NSCLC and benign nodules were significant (t=-3.972). There was no correlation between A_PA and maximum SUV (7.23 ± 4.38) or average SUV (4.93 ± 3.53) (r=-0.040, P=0.839 and r=0.056, P=0.778). Conclusion: There is no correlation between A_PA and SUV in NSCLC. SUV is probably not suitable for the evaluation of the effects of anti-angiogenesis therapy. (authors)

  19. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    Science.gov (United States)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

One of the key points in the development of the MOTEDAS dataset (see Poster 1, MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to the following equation (1): log(r2ij) = b·dij (1), where log(r2ij) is the common variance between target (i) and neighbouring series (j), dij the distance between them, and b the slope of the ordinary least-squares linear regression model, applied taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
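Equation (1) of this record, a through-the-origin regression of log common variance on distance, can be sketched as follows (assuming natural logarithms, since the record does not state the base):

```python
import numpy as np

def cdd_slope(distances_km, r2, max_dist_km=50.0):
    """Fit log(r2_ij) = b * d_ij through the origin, using only neighbours
    within max_dist_km (the record's starting radius), and return b."""
    d = np.asarray(distances_km, float)
    v = np.asarray(r2, float)
    keep = (d <= max_dist_km) & (v > 0)
    # through-the-origin least squares: b = sum(d * log v) / sum(d^2)
    return np.sum(d[keep] * np.log(v[keep])) / np.sum(d[keep] ** 2)

def distance_for_variance(b, threshold=0.5):
    # distance at which the modelled common variance exp(b*d) drops to threshold
    return np.log(threshold) / b
```

The second function gives one natural way to turn the fitted decay slope into a threshold distance between neighbouring stations.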

  20. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.
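The closing bound can be evaluated directly; for example, the 5-cycle C5 (n = 5, m = 5) gives (4·5 − 5 − 1)/7 = 2, matching its actual independence number:

```python
def independence_lower_bound(n, m):
    """Lower bound (4n - m - 1)/7 from the record, valid for connected
    triangle-free graphs of order n and size m."""
    return (4 * n - m - 1) / 7
```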

  1. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  2. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were assessed for the presence or absence of 15 positive themes and 13 negative themes, divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (intraclass correlation coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical teachers, using the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  3. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10⁻¹⁵ s⁻¹, and temperature measurements from four drill sites within the area indicate average near-surface geothermal gradients of up to 150 °C km⁻¹ throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant at about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km⁻¹ within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km⁻¹ below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes, and therefore high average geothermal gradients, tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.

  4. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
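The construction described in this record (maximum entropy subject only to normalization and a fixed average logarithm of the observable) can be written out in a few lines; this is a standard Lagrange-multiplier sketch, not the paper's full derivation:

```latex
% Maximize Shannon entropy subject to normalization and a fixed <ln x>:
S[p] = -\sum_{x} p(x)\ln p(x),
\qquad \sum_{x} p(x) = 1,
\qquad \sum_{x} p(x)\ln x = \chi .

% Lagrangian and stationarity in p(x):
\mathcal{L} = S[p]
  + \lambda_0\Big(\sum_{x} p(x) - 1\Big)
  + \lambda\Big(\sum_{x} p(x)\ln x - \chi\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p(x)}
  = -\ln p(x) - 1 + \lambda_0 + \lambda\ln x = 0 .

% Solving gives a pure power law:
p(x) = e^{\lambda_0 - 1}\, x^{\lambda} \;\propto\; x^{-\alpha},
\qquad \alpha = -\lambda .
```

The exponent α is then fixed by requiring the constraint ⟨ln x⟩ = χ to hold, which is the sense in which a single constraint on the average logarithm suffices to produce Zipf-like behavior.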

  5. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6·10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that growth of the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases tank average productivity and reduces filling time. Growth of the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) increases tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and minimum filling time are reached for the tank of volume 6·10⁻² m³ with a central hole diameter of the horizontal ribs of 6.4·10⁻² m.

  6. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimated average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, beyond which the average cluster size decreases gradually.

  7. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power that can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save the average reading every two minutes, each based on one-second samples. The data were analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of PV modules.

  8. [Correlation analysis of major agronomic characters and the polysaccharide contents in Dendrobium officinale].

    Science.gov (United States)

    Zhang, Lei; Zheng, Xi-Long; Qiu, Dao-Shou; Cai, Shi-Ke; Luo, Huan-Ming; Deng, Rui-Yun; Liu, Xiao-Jin

    2013-10-01

In order to provide a theoretical and technological basis for germplasm innovation and variety breeding in Dendrobium officinale, a study of the correlation between polysaccharide content and agronomic characters was conducted. Based on the polysaccharide content determination and the agronomic character investigation of 30 accessions (110 individual plants) of Dendrobium officinale germplasm resources, the correlation between polysaccharide content and agronomic characters was analyzed via path and correlation analysis. Correlation analysis showed a significant negative correlation between average spacing and polysaccharide content, with a correlation coefficient of -0.695. Blade thickness was positively correlated with polysaccharide content, but the correlation was not significant. Path analysis showed that stem length was the factor with the greatest influence on polysaccharide content, and its effect was positive, with a direct path coefficient of 1.568. According to these results, the polysaccharide content can be easily and intuitively estimated from agronomic character data during germplasm resource screening and variety breeding. This therefore offers visual and practical technical guidance for quality variety breeding of Dendrobium officinale.
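For readers unfamiliar with the two analyses combined here: a simple (total) correlation ignores the other traits, while a direct path coefficient is the standardized partial regression coefficient of a trait on the response. A sketch with synthetic data (trait names, effect sizes, and sample size are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 110  # same order as the 110 individual plants

# Synthetic agronomic traits (illustrative only)
stem_length = rng.normal(20.0, 4.0, n)
spacing = rng.normal(2.0, 0.4, n)
# Response driven positively by stem length, negatively by spacing, plus noise
polysaccharide = 0.8 * stem_length - 3.0 * spacing + rng.normal(0.0, 1.0, n)

def standardize(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([standardize(stem_length), standardize(spacing)])
y = standardize(polysaccharide)

# Simple (total) correlations with the response
r_stem = np.corrcoef(stem_length, polysaccharide)[0, 1]
r_spacing = np.corrcoef(spacing, polysaccharide)[0, 1]

# Direct path coefficients = standardized partial regression coefficients
paths, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With correlated predictors the total correlation and the direct path coefficient can differ in sign, which is exactly why abstracts like this one report both.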

  9. Averaging problem in general relativity, macroscopic gravity and using Einstein's equations in cosmology.

    Science.gov (United States)

    Zalaletdinov, R. M.

    1998-04-01

The averaging problem in general relativity is briefly discussed. A new setting of the problem, as that of a macroscopic description of gravitation, is proposed. A covariant space-time averaging procedure is described. The structure of the geometry of macroscopic space-time, which follows from averaging Cartan's structure equations, is described, and the correlation tensors present in the theory are discussed. The macroscopic field equations (averaged Einstein's equations) derived in the framework of the approach are presented and their structure is analysed. The correspondence principle for macroscopic gravity is formulated and a definition of the stress-energy tensor for the macroscopic gravitational field is proposed. It is shown that using Einstein's equations with a hydrodynamic stress-energy tensor when looking for cosmological models amounts to neglecting all gravitational field correlations. The system of macroscopic gravity equations to be solved when the correlations are taken into consideration is given and described.

  10. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  11. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
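The conversion chain the abstract describes — kV-meter reading × calibration coefficient × a conversion factor that depends on tube voltage and ripple — can be sketched as below. The regression form and every numeric coefficient here are placeholders (the paper tabulates the real k_PPV,Uav regression coefficients), so this only shows the shape of the computation:

```python
def ppv_from_average(reading_kv, ripple_percent, n_cal=1.0,
                     a0=1.00, a1=0.002, a2=1e-4):
    """Convert an average-voltage kV-meter reading to the practical peak voltage.

    n_cal      -- calibration coefficient of the kV-meter (from its certificate)
    a0, a1, a2 -- placeholder coefficients of a hypothetical regression for the
                  conversion factor k_PPV,Uav(U, ripple); NOT the paper's values.
    """
    k_conv = a0 + a1 * ripple_percent + a2 * reading_kv  # hypothetical regression form
    return n_cal * k_conv * reading_kv
```

With the real tabulated coefficients, the same two-line computation converts any Ū or Ū_P reading to PPV.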

  12. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Directory of Open Access Journals (Sweden)

    Rodrigo Cofré

    2018-01-01

Full Text Available The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework, in the context of maximum entropy models, to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics.

  13. Relationship Between Selected Strength and Power Assessments to Peak and Average Velocity of the Drive Block in Offensive Line Play.

    Science.gov (United States)

    Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G

    2016-08-01

Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016-Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include 1 repetition maximum (1RM) squat and 1RM PC along with the vertical jump (VJ) for power. However, little research exists regarding the association between the strength exercises and the velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to 1RM squat, 1RM PC, the VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired from a 3-point stance into a stationary blocking dummy. Pearson product-moment analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive block effectiveness. It is recommended that these exercises remain the focus of a weight room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.

  14. Methodological aspects of crossover and maximum fat-oxidation rate point determination.

    Science.gov (United States)

    Michallet, A-S; Tonini, J; Regnier, J; Guinot, M; Favre-Juvin, A; Bricout, V; Halimi, S; Wuyam, B; Flore, P

    2008-11-01

Indirect calorimetry during exercise provides two metabolic indices of substrate oxidation balance: the crossover point (COP) and the maximum fat oxidation rate (LIPOXmax). We aimed to study the effects of the analytical device, protocol type and ventilatory response on the variability of these indices, and their relationship with lactate and ventilation thresholds. After maximum exercise testing, 14 relatively fit subjects (aged 32+/-10 years; nine men, five women) performed three submaximal graded tests: one based on a theoretical maximum power (tMAP) reference, and two based on the true maximum aerobic power (MAP). Gas exchange was measured concomitantly using a Douglas bag (D) and an ergospirometer (E). All metabolic indices were interpretable only when obtained by the D reference method and the MAP protocol. Bland and Altman analysis showed overestimation of both indices with E versus D. Despite no mean differences between COP and LIPOXmax whether tMAP or MAP was used, the individual data clearly showed disagreement between the two protocols. Ventilation explained 10-16% of the metabolic index variations. COP was correlated with ventilation (r=0.96, P<0.01) and the rate of increase in blood lactate (r=0.79, P<0.01), and LIPOXmax correlated with the ventilation threshold (r=0.95, P<0.01). This study shows that, in fit healthy subjects, the analytical device, the reference used to build the protocol, and the ventilation responses affect the metabolic indices. In this population, and particularly to obtain interpretable metabolic indices, we recommend a protocol based on the true MAP or one adapted to include the transition from fat to carbohydrate. The correlation between metabolic indices and lactate/ventilation thresholds suggests that shorter, classical maximum progressive exercise testing may be an alternative means of estimating these indices in relatively fit subjects. However, this needs to be confirmed in patients who have metabolic defects.

  15. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
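Higuchi's method estimates the fractal dimension as the slope of log mean curve length against log(1/k) over decimated copies of the series. A compact implementation sketch (kmax and the test signals are arbitrary choices, not the SIDC sunspot data):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Fractal dimension of a 1-D series by Higuchi's method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, kmax + 1)
    mean_lengths = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)           # decimated series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalization: (N-1) / (number of steps * k), then / k
            lengths.append(dist * (N - 1) / ((len(idx) - 1) * k) / k)
        mean_lengths.append(np.mean(lengths))
    # FD = slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope
```

A straight line gives FD = 1 exactly, while white noise approaches FD = 2, matching the interpretation of the dimension as a measure of irregularity.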

  16. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
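For the unitary (Haar) invariant class, the exact average von Neumann entropy has the known closed form S̄(m, n) = Σ_{k=n+1}^{mn} 1/k − (m−1)/(2n) (Page's result, m ≤ n). A quick Monte-Carlo cross-check of that closed form — not the paper's derivation; sample count and seed are arbitrary:

```python
import numpy as np

def page_entropy(m, n):
    """Exact average entanglement entropy of a Haar-random pure state
    on an m x n bipartition (Page's formula, m <= n)."""
    return sum(1.0 / k for k in range(n + 1, m * n + 1)) - (m - 1) / (2.0 * n)

def sampled_avg_entropy(m, n, samples=2000, seed=0):
    """Monte-Carlo estimate from Haar-random pure states."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        # Haar-random pure state: normalized complex Gaussian amplitudes
        psi = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        psi /= np.linalg.norm(psi)
        # Schmidt eigenvalues = squared singular values of the coefficient matrix
        lam = np.linalg.svd(psi, compute_uv=False) ** 2
        lam = lam[lam > 1e-15]
        total += float(-(lam * np.log(lam)).sum())
    return total / samples
```

The sampled average converges to the Page value as the number of samples grows, illustrating how the Schmidt spectrum encodes the entanglement content.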

  17. Interference Cancellation Technique Based on Discovery of Spreading Codes of Interference Signals and Maximum Correlation Detection for DS-CDMA System

    Science.gov (United States)

    Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki

This paper presents a novel interference cancellation (IC) scheme for both synchronous and asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless channels. In the DS-CDMA system, multiple access interference (MAI) and the near-far problem (NFP) are the two factors which reduce the capacity of the system. In this paper, we propose a new algorithm that is able to detect all interference signals as individual MAI signals by maximum correlation detection. It is based on the discovery of all the unknown spreading codes of the interference signals. All possible MAI patterns, so-called replicas, are then generated as summations of interference signals, and the true MAI pattern is found by taking the correlation between the received signal and the replicas. Moreover, the receiver executes MAI cancellation in a successive manner, removing all interference signals in a single stage. Numerical results show that the proposed IC strategy, which alleviates the detrimental effects of MAI and the near-far problem, can significantly improve the system performance. In particular, we obtain almost the same receiving characteristics as in the absence of interference for the asynchronous system when the received powers are equal. The same performance can be seen under any received power state for the synchronous system.
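The replica search at the heart of the scheme — form candidate sums of discovered spreading codes and keep the one with maximum correlation to the received signal — can be sketched in a chip-synchronous, noise-free toy form (code length, number of users, and seed are illustrative, not from the paper):

```python
import itertools
import numpy as np

def detect_mai(received, codes):
    """Find the subset of spreading codes whose sum (the 'replica')
    has maximum normalized correlation with the received signal."""
    best_subset, best_corr = (), -np.inf
    for r in range(1, len(codes) + 1):
        for subset in itertools.combinations(range(len(codes)), r):
            replica = np.sum([codes[i] for i in subset], axis=0)
            norm = np.linalg.norm(replica)
            if norm == 0:
                continue
            corr = float(received @ replica) / norm  # matched-filter correlation
            if corr > best_corr:
                best_subset, best_corr = subset, corr
    return best_subset

# Toy example: three random ±1 spreading codes of length 63
rng = np.random.default_rng(7)
codes = rng.choice([-1.0, 1.0], size=(3, 63))
received = codes[0] + codes[2]   # users 0 and 2 are active, equal powers, no noise
```

By Cauchy-Schwarz the true replica attains the maximum normalized correlation, so the active-user subset is recovered before cancellation proceeds.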

  18. Estimating genetic covariance functions assuming a parametric correlation structure for environmental effects

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2001-11-01

Full Text Available Abstract A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and results are contrasted with those from analyses fitting sets of random regression coefficients for permanent environmental effects.

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
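The flavour of trajectory averaging can be seen on the simplest Robbins-Monro recursion, estimating a mean from noisy draws: the averaged trajectory is a better estimator than the last iterate for slowly decaying step sizes. A toy sketch (step-size exponent, burn-in, and distribution are arbitrary illustrative choices):

```python
import numpy as np

def sa_mean_estimate(samples, gamma=0.7, burn_in=100):
    """Robbins-Monro recursion theta <- theta + t^(-gamma) * (x_t - theta)
    with Polyak-Ruppert (trajectory) averaging after a burn-in."""
    theta = 0.0
    trajectory = []
    for t, x in enumerate(samples, start=1):
        theta += t ** (-gamma) * (x - theta)   # stochastic approximation step
        trajectory.append(theta)
    # Return the last iterate and the trajectory average
    return theta, float(np.mean(trajectory[burn_in:]))
```

With gamma strictly between 1/2 and 1, the averaged estimator attains the optimal O(1/t) variance even though the raw iterate does not, which is the asymptotic-efficiency phenomenon the paper establishes for SAMCMC.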

  20. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  1. The concept of the average stress in the fracture process zone for the search of the crack path

    Directory of Open Access Journals (Sweden)

    Yu.G. Matvienko

    2015-10-01

Full Text Available The concept of the average stress has been employed to propose the maximum average tangential stress (MATS) criterion for predicting the direction of the fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value, and that the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress) terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, the criterion is directly applied to experiments reported in the literature on the mixed-mode I/II crack growth behavior of Guiting limestone. The predicted directions of the fracture angle are consistent with the experimental data. The concept of the average stress has also been employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling-sliding contact, as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to an equivalent contact model for surface crack growth on a gear tooth flank.
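The criterion can be exercised numerically with the standard two-term Williams expansion: average σ_θθ over the process zone 0 < r ≤ d, then pick the angle maximizing that average. The zone size and loading below are illustrative, not taken from the paper:

```python
import numpy as np

def mats_angle(KI, KII, T=0.0, d=1e-3):
    """Direction maximizing the average tangential stress over a process
    zone of size d ahead of the crack tip (two-term Williams expansion)."""
    theta = np.linspace(-np.pi * 0.999, np.pi * 0.999, 200001)
    # (1/d) * integral of r^(-1/2) from 0 to d equals 2/sqrt(d):
    # averaging rescales the singular term but keeps its angular shape
    sing = (2.0 / np.sqrt(2.0 * np.pi * d)) * np.cos(theta / 2.0) * (
        KI * np.cos(theta / 2.0) ** 2 - 1.5 * KII * np.sin(theta))
    avg_stress = sing + T * np.sin(theta) ** 2
    return theta[np.argmax(avg_stress)]
```

For pure mode I the maximum sits at θ = 0, and for pure mode II at θ ≈ −70.5°, the classical maximum tangential stress angles; a nonzero T-stress shifts the predicted direction, which is the effect the MATS criterion exploits.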

  2. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

For most schools of nursing, grade point average is the most important criterion for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average, and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with both performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and grade point average of first year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. Total emotional intelligence scores (r = .24) and one subscore, experiential emotional intelligence (r = .25), correlated significantly (p < .05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower than average range scores in several emotional intelligence scores. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional measures of success is warranted.

  3. Reliability of one-repetition maximum performance in people with chronic heart failure.

    Science.gov (United States)

    Ellis, Rachel; Holland, Anne E; Dodd, Karen; Shields, Nora

    2018-02-24

Evaluate intra-rater and inter-rater reliability of the one-repetition maximum strength test in people with chronic heart failure. Intra-rater and inter-rater reliability study. A public tertiary hospital in northern metropolitan Melbourne. Twenty-four participants (nine female, mean age 71.8 ± 13.1 years) with mild to moderate heart failure of any aetiology. Lower limb strength was assessed by determining the maximum weight that could be lifted using a leg press. Intra-rater reliability was tested by one assessor on two separate occasions. Inter-rater reliability was tested by two assessors in random order. Intra-class correlation coefficients and 95% confidence intervals were calculated. Bland and Altman analyses were also conducted, including calculation of mean differences between measures and limits of agreement. Ten intra-rater and 21 inter-rater assessments were completed. Excellent intra-rater (ICC(2,1) = 0.96) and inter-rater (ICC(2,1) = 0.93) reliability was found. Intra-rater assessment showed less variability (mean difference 4.5 kg, limits of agreement -8.11 to 17.11 kg) than inter-rater agreement (mean difference -3.81 kg, limits of agreement -23.39 to 15.77 kg). A one-repetition maximum determined using a leg press is a reliable measure in people with heart failure. Given its smaller limits of agreement, intra-rater testing is recommended. Implications for Rehabilitation: Using a leg press to determine a one-repetition maximum, we were able to demonstrate excellent inter-rater and intra-rater reliability using an intra-class correlation coefficient. The Bland and Altman limits of agreement were wide for inter-rater reliability, so we recommend using one assessor when measuring change in strength within an individual over time.
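The two statistics reported — ICC(2,1) from a two-way random-effects ANOVA and Bland-Altman limits of agreement — are straightforward to compute directly. A sketch with synthetic leg-press data (sample sizes, units, and noise level are invented for illustration):

```python
import numpy as np

def icc_2_1(Y):
    """Shrout-Fleiss ICC(2,1): two-way random effects, single measures.
    Y is an n_subjects x k_raters matrix."""
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)   # between-raters MS
    mse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement."""
    d = np.asarray(a) - np.asarray(b)
    bias, spread = d.mean(), 1.96 * d.std(ddof=1)
    return bias, (bias - spread, bias + spread)

# Synthetic example: 30 subjects, 2 raters, strong agreement (illustrative)
rng = np.random.default_rng(3)
true_1rm = rng.normal(50.0, 10.0, 30)                  # 'true' leg-press 1RM, kg
Y = np.column_stack([true_1rm + rng.normal(0, 1, 30),
                     true_1rm + rng.normal(0, 1, 30)])
```

The ICC summarizes relative agreement across subjects, while the Bland-Altman limits quantify the absolute disagreement an individual retest could show — which is why the abstract reports both.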

  4. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  5. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    International Nuclear Information System (INIS)

    Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error

  6. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

Mid-infrared supercontinuum generation with up to 54.8 mW of average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm.

  7. An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2015-01-01

Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built using a modified conjugate gradient method and a global optimization method. The new correlation is a uniform empirical correlation for calculating the MMP of both thin oil and heavy oil and is expressed as a function of reservoir temperature, the C7+ molecular weight of the crude oil, and the mole fractions of the volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of the crude oil. Compared with the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, using another nine groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%.
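The two reported error measures are simply the mean and the maximum of the per-sample absolute relative errors, in percent; a two-line sketch with invented numbers:

```python
def aare_mare(predicted, observed):
    """Percentage average (%AARE) and maximum (%MARE) absolute relative errors."""
    rel = [abs(p - o) / abs(o) * 100.0 for p, o in zip(predicted, observed)]
    return sum(rel) / len(rel), max(rel)
```

Reporting both is informative because a correlation can have a small average error while still missing badly on a single fluid system.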

  8. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  9. Linear intra-bone geometry dependencies of the radius: Radius length determination by maximum distal width

    International Nuclear Information System (INIS)

    Baumbach, S.F.; Krusche-Mandl, I.; Huf, W.; Mall, G.; Fialka, C.

    2012-01-01

    Purpose: The aim of the study was to investigate possible linear intra-bone geometry dependencies by determining the relation between the maximum radius length and maximum distal width in two independent populations and to test for possible gender or age effects. A strong correlation can help develop more representative fracture models and osteosynthetic devices as well as aid gender and height estimation in anthropologic/forensic cases. Methods: First, maximum radius length and distal width of 100 consecutive patients, aged 20–70 years, were digitally measured on standard lower arm radiographs by two independent investigators. Second, the same measurements were performed ex vivo on a second cohort of 135 isolated, formalin-fixed radii. Standard descriptive statistics as well as correlations were calculated, and possible gender and age influences were tested for both populations separately. Results: The radiographic dataset resulted in a correlation of radius length and width of r = 0.753 (adj. R² = 0.563, p < 0.001); gender had an influence on the correlation (adj. R² = 0.592) and side no influence on the correlation. Radius length–width correlation for the isolated radii was r = 0.621 (adj. R² = 0.381, p < 0.001; adj. R² = 0.598 with gender included). Conclusion: A relatively strong radius length–distal width correlation was found in two different populations, indicating that linear body proportions might not only apply to body height and axial length measurements of long bones but also to proportional dependency of bone shapes in general.

  10. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.

  11. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  12. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. 

  13. Dependence of US hurricane economic loss on maximum wind speed and storm size

    International Nuclear Information System (INIS)

    Zhai, Alice R; Jiang, Jonathan H

    2014-01-01

    Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependences of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power law relation with maximum wind speed (V_max) and size (R), L = 10^c V_max^a R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4–12 and 2–4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy's size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy's loss would be approximately 20 times smaller if its size were of the average size with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that depend only on maximum wind speed to include both maximum wind speed and size as predictors.
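Because the power law above is linear in log space, its coefficients can be recovered by ordinary least squares on log-transformed data. A minimal sketch with synthetic, noise-free data (the coefficient values and sample ranges below are illustrative, not those fitted in the study):

```python
import numpy as np

# Hypothetical "true" coefficients for L = 10^c * Vmax^a * R^b (illustrative only)
a_true, b_true, c_true = 6.0, 3.0, -8.0

rng = np.random.default_rng(0)
vmax = rng.uniform(30, 80, 50)    # maximum wind speed samples, m/s
r = rng.uniform(50, 300, 50)      # storm size samples, km
loss = 10**c_true * vmax**a_true * r**b_true

# log10(L) = c + a*log10(Vmax) + b*log10(R): ordinary least squares
X = np.column_stack([np.log10(vmax), np.log10(r), np.ones_like(vmax)])
coef, *_ = np.linalg.lstsq(X, np.log10(loss), rcond=None)
a_fit, b_fit, c_fit = coef
```

With real loss data the fit is noisy, which is why the abstract reports ranges for the exponents rather than single values.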

  14. A spectral measurement method for determining white OLED average junction temperatures

    Science.gov (United States)

    Zhu, Yiting; Narendran, Nadarajah

    2016-09-01

    The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature sensitivity differences in the radiant power emitted by individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED as a function of temperature showed amplitude decrease as a function of temperature in the different spectral bands, red, green, and blue. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the spectral band green compared to red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.
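The calibration implied by this approach amounts to a linear fit of the G/R ratio against panel temperature, then inverting the fit to estimate temperature. A minimal sketch with made-up calibration numbers (the slope, intercept, and temperatures are illustrative assumptions, not the study's measurements):

```python
import numpy as np

# Hypothetical calibration: panel temperature (degC) vs measured G/R radiance ratio
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
g_over_r = 1.20 - 0.004 * temps        # assumed linear droop of green relative to red

# Fit G/R = m*T + b, then invert to estimate junction temperature from a G/R reading
m, b = np.polyfit(temps, g_over_r, 1)

def estimate_temperature(ratio):
    return (ratio - b) / m

t_est = estimate_temperature(1.20 - 0.004 * 50.0)   # ratio as would be measured at 50 degC
```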

  15. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. 
Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  16. Averages of ratios of the Riemann zeta-function and correlations of divisor sums

    Science.gov (United States)

    Conrey, Brian; Keating, Jonathan P.

    2017-10-01

    Nonlinearity has published articles containing a significant number-theoretic component since the journal was first established. We examine one thread, concerning the statistics of the zeros of the Riemann zeta function. We extend this by establishing a connection between the ratios conjecture for the Riemann zeta-function and a conjecture concerning correlations of convolutions of Möbius and divisor functions. Specifically, we prove that the ratios conjecture and an arithmetic correlations conjecture imply the same result. This provides new support for the ratios conjecture, which previously had been motivated by analogy with formulae in random matrix theory and by a heuristic recipe. Our main theorem generalises a recent calculation pertaining to the special case of two-over-two ratios.

  17. A New MPPT Control for Photovoltaic Panels by Instantaneous Maximum Power Point Tracking

    Science.gov (United States)

    Tokushima, Daiki; Uchida, Masato; Kanbei, Satoshi; Ishikawa, Hiroki; Naitoh, Haruo

    This paper presents a new maximum power point tracking control for photovoltaic (PV) panels. The control can be categorized into the Perturb and Observe (P & O) method. It utilizes instantaneous voltage ripples at PV panel output terminals caused by the switching of a chopper connected to the panel in order to identify the direction for the maximum power point (MPP). The tracking for the MPP is achieved by a feedback control of the average terminal voltage of the panel. Appropriate use of the instantaneous and the average values of the PV voltage for the separate purposes enables both the quick transient response and the good convergence with almost no ripples simultaneously. The tracking capability is verified experimentally with a 2.8 W PV panel under a controlled experimental setup. A numerical comparison with a conventional P & O confirms that the proposed control extracts much more power from the PV panel.
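The perturb-and-observe decision rule underlying this family of controllers can be sketched in a few lines; the power-voltage curve and step size below are idealized assumptions, not the authors' converter model:

```python
def pv_power(v):
    # Idealized PV power-voltage curve (illustrative): open-circuit voltage near 21 V
    i = 5.0 * (1.0 - (v / 21.0) ** 8)   # crude current model, not a real panel
    return max(v * i, 0.0)

def perturb_and_observe(v0=12.0, step=0.05, iters=500):
    """Classic P&O: perturb the operating voltage; if power drops, reverse direction."""
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power decreased: we stepped past the MPP, turn around
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into a small oscillation around the MPP
```

The steady-state oscillation around the MPP visible in this sketch is exactly what the proposed instantaneous-ripple method aims to suppress.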

  18. 25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes

    Directory of Open Access Journals (Sweden)

    Książek Anna

    2016-04-01

    Full Text Available Vitamin D is mainly known for its effects on bone and calcium metabolism. The discovery of Vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower limb isokinetic strength and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players. The mean age was 22.7±5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower limb muscle strength, with the exception of peak torque of the left knee extensors at an angular velocity of 150°/s (r=0.41). No significant correlations were found between hand grip strength and maximum oxygen uptake. Based on our study we concluded that in well-trained professional soccer players, there was no correlation between serum levels of 25(OH)D3 and muscle strength or maximum oxygen uptake.

  19. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Science.gov (United States)

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela

    2017-04-01

    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or uncertain, due to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location on a second image, separated from the first image by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy-efficient and comfortable ship navigation.
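The maximum cross correlation step itself is straightforward to sketch: slide a small template from the first image across a search window in the second and keep the displacement with the highest normalized correlation. A minimal numpy version on a synthetic field (the pattern and window sizes are illustrative and given in pixels rather than kilometres):

```python
import numpy as np

def mcc_displacement(img1, img2, y, x, tpl=5, search=15):
    """Integer (dy, dx) that moves the tpl x tpl patch of img1 at (y, x)
    to its best-matching location in img2, by maximum cross correlation."""
    t = img1[y:y + tpl, x:x + tpl]
    t = (t - t.mean()) / t.std()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = img2[y + dy:y + dy + tpl, x + dx:x + dx + tpl]
            if w.shape != t.shape:
                continue                      # window fell off the image edge
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * w).mean()            # normalized cross correlation
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

rng = np.random.default_rng(1)
sst = rng.normal(15.0, 1.0, (80, 80))            # synthetic SST field
sst2 = np.roll(sst, shift=(3, -2), axis=(0, 1))  # field "advected" by (3, -2) pixels
dy, dx = mcc_displacement(sst, sst2, 30, 30)
```

Dividing the recovered displacement by the time separation between the two images gives the surface current estimate.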

  20. Geotail observations of plasma sheet ion composition over 16 years: On variations of average plasma ion mass and O+ triggering substorm model

    Science.gov (United States)

    Nosé, M.; Ieda, A.; Christon, S. P.

    2009-07-01

    We examined long-term variations of ion composition in the plasma sheet, using energetic (9.4-212.1 keV/e) ion flux data obtained by the suprathermal ion composition spectrometer (STICS) sensor of the energetic particle and ion composition (EPIC) instrument on board the Geotail spacecraft. EPIC/STICS observations are available from 17 October 1992 for more than 16 years, covering the declining phase of solar cycle 22, all of solar cycle 23, and the early phase of solar cycle 24. This unprecedented long-term data set revealed that (1) the He+/H+ and O+/H+ flux ratios in the plasma sheet were dependent on the F10.7 index; (2) the F10.7 index dependence is stronger for O+/H+ than He+/H+; (3) the O+/H+ flux ratio is also weakly correlated with the ΣKp index; and (4) the He2+/H+ flux ratio in the plasma sheet appeared to show no long-term trend. From these results, we derived empirical equations related to plasma sheet ion composition and the F10.7 index and estimated that the average plasma ion mass changes from ˜1.1 amu during solar minimum to ˜2.8 amu during solar maximum. In such a case, the Alfvén velocity during solar maximum decreases to ˜60% of the solar minimum value. Thus, physical processes in the plasma sheet are considered to be much different between solar minimum and solar maximum. We also compared long-term variation of the plasma sheet ion composition with that of the substorm occurrence rate, which is evaluated by the number of Pi2 pulsations. No correlation or negative correlation was found between them. This result contradicts the O+ triggering substorm model, in which heavy ions in the plasma sheet increase the growth rate of the linear ion tearing mode and play an important role in localization and initiation of substorms. In contrast, O+ ions in the plasma sheet may prevent occurrence of substorms.
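The quoted average plasma ion mass follows from the composition ratios by a number-density-weighted mean over H+ (1 amu), He+ (4 amu), and O+ (16 amu). A sketch of the arithmetic with illustrative ratio values (not the paper's fitted numbers):

```python
def average_ion_mass(r_he, r_o):
    """Mean ion mass (amu) of an H+/He+/O+ plasma given the He+/H+
    and O+/H+ density ratios; ion masses taken as 1, 4, and 16 amu."""
    return (1.0 + 4.0 * r_he + 16.0 * r_o) / (1.0 + r_he + r_o)

m_min = average_ion_mass(0.01, 0.005)   # illustrative solar-minimum ratios -> ~1.1 amu
m_max = average_ion_mass(0.05, 0.12)    # illustrative solar-maximum ratios -> ~2.7 amu
```

Since the Alfvén velocity scales as the inverse square root of the mass density, roughly doubling the mean ion mass at fixed number density lowers it to about 60% of its solar-minimum value, consistent with the estimate above.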

  1. Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  2. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
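In symbols, the equality discussed above reads:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{\mathrm{B}} T},
```

where the average is taken over the ensemble of paths connecting the two equilibrium states. By Jensen's inequality it implies the second-law statement $\langle W \rangle \geq \Delta F$, with equality only for reversible processes.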

  3. Analysis for average heat transfer empirical correlation of natural convection on the concentric vertical cylinder modelling of APWR

    International Nuclear Information System (INIS)

    Daddy Setyawan

    2011-01-01

    There are several passive safety systems in the APWR reactor design. One of the passive safety systems is cooling by natural circulation of air over the surface of the concentric vertical cylinder containment wall. Since the performance of natural air circulation in the Passive Containment Cooling System (PCCS) application is related to safety, the cooling characteristics of natural air circulation on the concentric vertical cylinder containment wall should be studied experimentally. This paper focuses on the experimental study of the heat transfer coefficient of natural air circulation, with the heat flux level varied, on the characteristics of the APWR concentric vertical cylinder containment wall. The procedure of this experimental study is composed of 4 stages: the design of the APWR containment at 1:40 scale, the assembly of the APWR containment with its instrumentation, calibration, and experimentation. The experiments were conducted in transient and steady state with the heat flux varied from 119 W/m² to 575 W/m². From the experimental results, the average heat transfer empirical correlation of natural convection Nu_L = 0.008(Ra*_L)^0.68 was obtained for the concentric vertical cylinder geometry modelling of the APWR. (author)
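To see the scale of the correlation's predictions, the Nusselt number and the implied average heat transfer coefficient can be evaluated directly; the Rayleigh number, air conductivity, and characteristic length below are illustrative assumptions, not values from the paper:

```python
# Empirical correlation from the study: Nu_L = 0.008 * (Ra*_L)^0.68
def nusselt(ra_star):
    return 0.008 * ra_star ** 0.68

# Illustrative inputs (not from the paper)
ra_star = 1.0e6           # modified Rayleigh number Ra*_L, assumed
k_air = 0.026             # thermal conductivity of air, W/(m K), near room temperature
L = 1.0                   # characteristic height of the scale model, m, assumed

nu = nusselt(ra_star)     # average Nusselt number
h = nu * k_air / L        # implied average heat transfer coefficient, W/(m^2 K)
```

Values of h of a few W/(m² K) are typical of natural convection in air, which is why a containment relying on it needs a large wall area.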

  4. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  5. Efficacy of spatial averaging of infrasonic pressure in varying wind speeds

    International Nuclear Information System (INIS)

    DeWolf, Scott; Walker, Kristoffer T.; Zumberge, Mark A.; Denis, Stephane

    2013-01-01

    Wind noise reduction (WNR) is important in the measurement of infrasound. Spatial averaging theory led to the development of rosette pipe arrays. The efficacy of rosettes decreases with increasing wind speed, and they provide a maximum of only 20 dB WNR due to a maximum size limitation. An Optical Fiber Infrasound Sensor (OFIS) reduces wind noise by instantaneously averaging infrasound along the sensor's length. In this study two experiments quantify the WNR achieved by rosettes and OFISs of various sizes and configurations. Specifically, it is shown that the WNR for a circular OFIS 18 m in diameter is the same as that of a collocated 32-inlet pipe array of the same diameter. However, linear OFISs ranging in length from 30 to 270 m provide a WNR of up to 30 dB in winds up to 5 m/s. The measured WNR is a logarithmic function of the OFIS length and depends on the orientation of the OFIS with respect to wind direction. OFISs oriented parallel to the wind direction achieve 4 dB greater WNR than those oriented perpendicular to the wind. Analytical models for the rosette and OFIS are developed that predict the general observed relationships between wind noise reduction, frequency, and wind speed. (authors)

  6. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y. [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France); Banerjee, S. [University of Louisville, Louisville, KY (United States); Ben-Haim, E. [Universite Paris Diderot, CNRS/IN2P3, LPNHE, Universite Pierre et Marie Curie, Paris (France); Bernlochner, F.; Dingfelder, J.; Duell, S. [University of Bonn, Bonn (Germany); Bozek, A. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Bozzi, C. [INFN, Sezione di Ferrara, Ferrara (Italy); Chrzaszcz, M. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Gersabeck, M. [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Gershon, T. [University of Warwick, Department of Physics, Coventry (United Kingdom); Gerstel, D.; Serrano, J. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Goldenzweig, P. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Harr, R. [Wayne State University, Detroit, MI (United States); Hayasaka, K. [Niigata University, Niigata (Japan); Hayashii, H. [Nara Women' s University, Nara (Japan); Kenzie, M. [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); Kuhr, T. [Ludwig-Maximilians-University, Munich (Germany); Leroy, O. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Lusiani, A. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Lyu, X.R. [University of Chinese Academy of Sciences, Beijing (China); Miyabayashi, K. [Niigata University, Niigata (Japan); Naik, P. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Nanut, T. [J. Stefan Institute, Ljubljana (Slovenia); Oyanguren Campos, A. [Centro Mixto Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Patel, M. [Imperial College London, London (United Kingdom); Pedrini, D. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Petric, M. 
[European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rama, M. [INFN, Sezione di Pisa, Pisa (Italy); Roney, M. [University of Victoria, Victoria, BC (Canada); Rotondo, M. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Schneider, O. [Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne (Switzerland); Schwanda, C. [Institute of High Energy Physics, Vienna (Austria); Schwartz, A.J. [University of Cincinnati, Cincinnati, OH (United States); Shwartz, B. [Budker Institute of Nuclear Physics (SB RAS), Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Tesarek, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Tonelli, D. [INFN, Sezione di Pisa, Pisa (Italy); Trabelsi, K. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama (Japan); Urquijo, P. [School of Physics, University of Melbourne, Melbourne, VIC (Australia); Van Kooten, R. [Indiana University, Bloomington, IN (United States); Yelton, J. [University of Florida, Gainesville, FL (US); Zupanc, A. [J. Stefan Institute, Ljubljana (SI); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (SI); Collaboration: Heavy Flavor Averaging Group (HFLAV)

    2017-12-15

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo-Kobayashi-Maskawa matrix elements. (orig.)

  7. Efficiency of Photovoltaic Maximum Power Point Tracking Controller Based on a Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Ammar Al-Gizi

    2017-07-01

    Full Text Available This paper examines the efficiency of a fuzzy logic control (FLC) based maximum power point tracking (MPPT) of a photovoltaic (PV) system under variable climate conditions and connected load requirements. The PV system, including a PV module BP SX150S, a buck-boost DC-DC converter, MPPT, and a resistive load, is modeled and simulated using the Matlab/Simulink package. In order to compare the performance of the FLC-based MPPT controller with the conventional perturb and observe (P&O) method at different irradiation (G), temperature (T) and connected load (RL) variations, the rise time (tr), recovery time, total average power, and MPPT efficiency are calculated. The simulation results show that the FLC-based MPPT method can quickly track the maximum power point (MPP) of the PV module in the transient state and effectively eliminates the power oscillation around the MPP of the PV module at steady state; hence more average power can be extracted, in comparison with the conventional P&O method.

  8. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base determined from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients; 98 patients were treated for cervix carcinomas and 54 for endometrial carcinomas. Methods: Bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using the non-parametric t test. Results: On average, IRD is 21 Gy ± 12 Gy, Dmax is 51 Gy ± 21 Gy, and Dmean is 40 Gy ± 16 Gy. On average, Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: Bladder mucosa seems to tolerate well much higher doses than previously recorded without increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter.

  9. Acumulación de hojarasca en un pastizal de Panicum maximum y en un sistema silvopastoril de Panicum maximum y Leucaena leucocephala Litter accumulation in a Panicum maximum grassland and in a silvopastoral system of Panicum maximum and Leucaena leucocephala

    Directory of Open Access Journals (Sweden)

    Saray Sánchez

    2007-09-01

    Full Text Available Se realizó un estudio en la Estación Experimental de Pastos y Forrajes "Indio Hatuey", Matanzas, Cuba, con el objetivo de determinar la acumulación de la hojarasca en un pastizal de Panicum maximum Jacq cv. Likoni y en un sistema silvopastoril de Panicum maximum y Leucaena leucocephala (Lam) de Wit cv. Cunningham. En los pastizales de P. maximum de ambos sistemas se determinó la acumulación de la hojarasca según la técnica propuesta por Bruce y Ebershon (1982), mientras que la hojarasca de L. leucocephala acumulada en el sistema silvopastoril se determinó según Santa Regina et al. (1997). De forma general, los resultados demostraron que en ambos pastizales la guinea acumuló una menor cantidad de hojarasca durante el período junio-diciembre, etapa en la que se produce su mayor desarrollo vegetativo. En la leucaena la mayor producción de hojarasca ocurrió en el período de diciembre a enero, asociada con la caída natural de sus hojas que se produce por efecto de las temperaturas más bajas y la escasa humedad en el suelo. En el sistema silvopastoril la hojarasca de leucaena representó el mayor porcentaje de peso dentro de la producción total, con un contenido más alto de nitrógeno y de calcio que el de la hojarasca del estrato herbáceo. En la guinea la lluvia fue el factor climático que mayor correlación negativa presentó con la producción de hojarasca en ambos sistemas, y en la leucaena la mayor correlación negativa se encontró con la temperatura mínima. A study was carried out at the Experimental Station of Pastures and Forages "Indio Hatuey", Matanzas, Cuba, with the objective of determining the litter accumulation in a pastureland of Panicum maximum Jacq cv. Likoni and in a silvopastoral system of Panicum maximum and Leucaena leucocephala (Lam) de Wit cv. Cunningham. In the P. maximum pasturelands of both systems the litter accumulation was determined by means of the technique proposed by Bruce and Ebershon (1982), while the litter of L. leucocephala accumulated in the silvopastoral system was determined according to Santa Regina et al. (1997).

  10. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/(s·m²) defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
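
    The net-gain cost function described above lends itself to a simple perturb-and-observe tracker. The sketch below is a minimal illustration, not the authors' controller (they use a PI loop with a model-derived tracking filter): the plant curves `q_solar` and `pump_power` are invented for illustration, and the tracker merely nudges the pump flow rate, keeping whichever direction increases Q_net = Q_s - W_p/η_e.

```python
import math

# Illustrative plant model (hypothetical curves, not from the paper):
# collected solar heat saturates with flow, pumping power grows roughly cubically.
def q_solar(flow_kg_min):
    return 14.0 * (1.0 - math.exp(-flow_kg_min / 8.0))   # kW

def pump_power(flow_kg_min):
    return 0.45 * (flow_kg_min / 31.0) ** 3              # kW

def q_net(flow, eta_e=0.35):
    """Cost function from the abstract: Q_net = Q_s - W_p / eta_e."""
    return q_solar(flow) - pump_power(flow) / eta_e

def track_mpp(flow=5.0, step=0.5, iters=200):
    """Perturb-and-observe: keep stepping while Q_net improves, else reverse."""
    direction, best = 1.0, q_net(flow)
    for _ in range(iters):
        trial = max(0.1, flow + direction * step)
        q = q_net(trial)
        if q >= best:
            flow, best = trial, q
        else:
            direction = -direction
    return flow, best
```

    With these illustrative curves the tracker settles on the flat optimum (roughly 23-26 kg/min here), mirroring the paper's observation that flow rates well below the standard 31 kg/min maximize the net gain.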

  11. Maximum Diameter Measurements of Aortic Aneurysms on Axial CT Images After Endovascular Aneurysm Repair: Sufficient for Follow-up?

    International Nuclear Information System (INIS)

    Baumueller, Stephan; Nguyen, Thi Dan Linh; Goetti, Robert Paul; Lachat, Mario; Seifert, Burkhardt; Pfammatter, Thomas; Frauenfelder, Thomas

    2011-01-01

    Purpose: To assess the accuracy of maximum diameter measurements of aortic aneurysms after endovascular aneurysm repair (EVAR) on axial computed tomographic (CT) images in comparison to maximum diameter measurements perpendicular to the intravascular centerline for follow-up by using three-dimensional (3D) volume measurements as the reference standard. Materials and Methods: Forty-nine consecutive patients (73 ± 7.5 years, range 51–88 years), who underwent EVAR of an infrarenal aortic aneurysm were retrospectively included. Two blinded readers twice independently measured the maximum aneurysm diameter on axial CT images performed at discharge, and at 1 and 2 years after intervention. The maximum diameter perpendicular to the centerline was automatically measured. Volumes of the aortic aneurysms were calculated by dedicated semiautomated 3D segmentation software (3surgery, 3mensio, the Netherlands). Changes in diameter of 0.5 cm and in volume of 10% were considered clinically significant. Intra- and interobserver agreements were calculated by intraclass correlations (ICC) in a random effects analysis of variance. The two unidimensional measurement methods were correlated to the reference standard. Results: Intra- and interobserver agreements for maximum aneurysm diameter measurements were excellent (ICC = 0.98 and ICC = 0.96, respectively). There was an excellent correlation between maximum aneurysm diameters measured on axial CT images and 3D volume measurements (r = 0.93, P < 0.001) as well as between maximum diameter measurements perpendicular to the centerline and 3D volume measurements (r = 0.93, P < 0.001). Conclusion: Measurements of maximum aneurysm diameters on axial CT images are an accurate, reliable, and robust method for follow-up after EVAR and can be used in daily routine.

  12. FDG-PET/CT and diffusion-weighted imaging for resected lung cancer: correlation of maximum standardized uptake value and apparent diffusion coefficient value with prognostic factors.

    Science.gov (United States)

    Usuda, Katsuo; Funasaki, Aika; Sekimura, Atsushi; Motono, Nozomu; Matoba, Munetaka; Doai, Mariko; Yamada, Sohsuke; Ueda, Yoshimichi; Uramoto, Hidetaka

    2018-04-09

    Diffusion-weighted magnetic resonance imaging (DWI) is useful for detecting malignant tumors and assessing lymph nodes, as FDG-PET/CT is, but it is not clear how DWI relates to the prognosis of lung cancer patients. The focus of this study is to evaluate the correlations of the maximum standardized uptake value (SUVmax) of FDG-PET/CT and the apparent diffusion coefficient (ADC) of DWI with known prognostic factors in resected lung cancer. A total of 227 patients with resected lung cancers were enrolled. FDG-PET/CT and DWI were performed in each patient before surgery. There were 168 patients with adenocarcinoma, 44 with squamous cell carcinoma, and 15 with other cell types. SUVmax was correlated with T factor, N factor, and cell differentiation, whereas ADC was not correlated with T factor or N factor. There was a significant but weak inverse relationship between SUVmax and ADC (correlation coefficient r = -0.227). In the analysis of survival, there were significant differences between the categories of sex, age, pT factor, pN factor, cell differentiation, cell type, and SUVmax. Univariate analysis revealed that SUVmax, pN factor, age, cell differentiation, cell type, sex, and pT factor were significant factors. Multivariate analysis revealed that SUVmax and pN factor were independent significant prognostic factors. SUVmax was a significant prognostic factor correlated with T factor, N factor, and cell differentiation, but ADC was not. SUVmax may therefore be more useful than ADC for predicting the prognosis of lung cancer.

  13. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  14. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
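
    As a toy illustration of the KARMA idea (a conditional median driven through a link function by an ARMA-type recursion), the sketch below simulates a Kumaraswamy series whose median follows an AR(1) on the logit scale. This is a simplified generator under assumed parameters, not the authors' estimation machinery (no conditional maximum likelihood fitting is shown); the shape parameter `a` is held fixed and `b` is solved from the target median, both choices being illustrative assumptions.

```python
import math
import random

def kumaraswamy_sample(a, b, rng):
    # Inverse-CDF sampling: F(x) = 1 - (1 - x**a)**b on (0, 1).
    u = rng.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def b_from_median(a, m):
    # Median of Kumaraswamy(a, b) is (1 - 2**(-1/b))**(1/a); solve for b.
    return -math.log(2.0) / math.log(1.0 - m ** a)

def simulate_karma(n=2000, alpha=0.0, phi=0.6, a=2.0, seed=7):
    """AR(1) dynamics on logit(median); observations stay in (0, 1)."""
    rng = random.Random(seed)
    eta, ys = alpha, []
    for _ in range(n):
        m = 1.0 / (1.0 + math.exp(-eta))             # logit link
        y = kumaraswamy_sample(a, b_from_median(a, m), rng)
        y = min(max(y, 1e-12), 1.0 - 1e-12)          # guard the logit below
        ys.append(y)
        eta = alpha + phi * math.log(y / (1.0 - y))  # AR(1) recursion
    return ys
```

    The resulting series is double bounded and positively autocorrelated, the qualitative behaviour the KARMA class is designed to model.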

  15. Time averaging procedure for calculating the mass and energy transfer rates in adiabatic two phase flow

    International Nuclear Information System (INIS)

    Boccaccini, L.V.

    1986-07-01

    To take advantage of semi-implicit computer models for solving the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases the correlations normally used for the source terms, which are not time averaged, fail when using the theoretical time step that arises from the linear stability analysis of the right-hand side. Such a time averaging procedure is developed here with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.) [de

  16. Improving consensus contact prediction via server correlation reduction.

    Science.gov (United States)

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.
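
    The key idea (down-weighting mutually correlated servers instead of equal-weight majority voting) can be illustrated with a much simpler stand-in than the paper's PCA-plus-ILP pipeline. In the hypothetical sketch below, each server's weight is the reciprocal of its summed absolute correlation with all servers, so a pair of near-duplicate servers shares the influence that a single independent server gets on its own.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def correlation_aware_weights(predictions):
    """One weight per server: 1 / sum of |correlation| with every server."""
    k = len(predictions)
    return [1.0 / sum(abs(pearson(predictions[i], predictions[j]))
                      for j in range(k))
            for i in range(k)]

def consensus_scores(predictions):
    """Weighted vote per candidate contact (columns of the prediction matrix)."""
    w = correlation_aware_weights(predictions)
    total = sum(w)
    n = len(predictions[0])
    return [sum(w[i] * predictions[i][c] for i in range(len(predictions))) / total
            for c in range(n)]
```

    With two identical servers plus one uncorrelated one, the duplicated pair ends up with weight 1/2 each while the independent server keeps weight 1, so the pair cannot dominate the vote.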

  17. Improving consensus contact prediction via server correlation reduction

    Directory of Open Access Journals (Sweden)

    Xu Jinbo

    2009-05-01

    Full Text Available Abstract Background Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. Results In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Conclusion Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.

  18. CORRELATION AMONG DAMAGES CAUSED BY YELLOW BEETLE, CLIMATOLOGICAL ELEMENTS AND PRODUCTION OF GUAVA ACCESSES GROWN IN ORGANIC SYSTEM

    Directory of Open Access Journals (Sweden)

    JULIANA ALTAFIN GALLI

    Full Text Available ABSTRACT The objective of this research was to evaluate the damage caused by the yellow beetle on 85 guava accessions, and the correlations of the damage with climatological elements and fruit production, in a guava orchard conducted under an organic system. Ten leaves per accession, bearing the injury from insect attack, were analyzed. Each leaf had its foliar area measured by a leaf area meter and, after obtaining the total area, the leaf was covered with duct tape and measured again. The averages were compared by the Scott-Knott test at 5% probability. For the 15 accessions with the highest average damage, the data were correlated with the minimum and maximum temperature, precipitation and relative humidity. Production was obtained as the number of fruits/plant. The damages are negatively correlated with the mean relative humidity at 7:00 h (local time) in the period of 14 days prior to the assessments, and negatively affect production. The accessions Saito, L4P16, Monte Alto Comum 1 and L5P19 are promising for organic agriculture, since they combine good production with less damage from insect attack compared to the others.

  19. Two-proton correlation functions in nuclear reactions

    International Nuclear Information System (INIS)

    Verde, G.

    2001-01-01

    Full text: Proton-proton correlation functions can be used to study the space-time characteristics of nuclear reactions. For very short-lived sources, the maximum value of the correlation at 20 MeV/c, due to the attractive nature of the S-wave phase shift, provides a unique measure of the size of the emitting source. For long-lived sources, the height of this maximum depends, in addition, on the life time of the source. In this talk, we investigate the common reaction scenario involving both fast dynamical as well as slower emissions from evaporation and/or secondary decays of heavy fragments. We show that the maximum at 20 MeV/c depends both on the source dimension and on the fraction of coincident proton pairs produced in the early stage of the reaction, dominated by fast dynamical preequilibrium emission. The width of the peak at 20 MeV/c, on the other hand, is uniquely correlated to the size of the source. Hence, the size of the emitting source must be extracted from the width or, even better, from the entire shape of the correlation peak, and not from the height. By numerically inverting the measured correlation function, we show that existing data determine only the shape of the fast dynamical source and that its size changes little with proton momenta, contrary to previous analyses with Gaussian sources of zero-lifetime. We further show that the well documented dramatic decrease in the correlation maximum with decreasing total proton momentum reflects directly a corresponding decrease in the fraction of contributing proton pairs from preequilibrium emissions. This provides a powerful method to decompose the proton spectrum into a fraction that originates from fast dynamical emission and a complimentary fraction that originates from slower evaporative emission or secondary decays. We discuss also the comparison of such correlations to transport theories and the generalizations of these techniques to correlations between composite particles. 
Such studies can

  20. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable to harvest the ambient thermal energy for power-supplying sensors, actuators, biomedical devices etc. in the μW up to several hundreds of Watts range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to the past-proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low-power consumption characteristics is enabled, without being required to employ specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.

  1. Linearity of electrical impedance tomography during maximum effort breathing and forced expiration maneuvers.

    Science.gov (United States)

    Ngo, Chuong; Leonhardt, Steffen; Zhang, Tony; Lüken, Markus; Misgeld, Berno; Vollmer, Thomas; Tenbrock, Klaus; Lehmann, Sylvia

    2017-01-01

    Electrical impedance tomography (EIT) provides global and regional information about ventilation by means of relative changes in electrical impedance measured with electrodes placed around the thorax. In combination with lung function tests, e.g. spirometry and body plethysmography, regional information about lung ventilation can be achieved. Impedance changes strictly correlate with lung volume during tidal breathing and mechanical ventilation. Initial studies presumed a correlation also during forced expiration maneuvers. To quantify the validity of this correlation in extreme lung volume changes during forced breathing, a measurement system was set up and applied on seven lung-healthy volunteers. Simultaneous measurements of changes in lung volume using EIT imaging and pneumotachography were obtained with different breathing patterns. Data was divided into a synchronizing phase (spontaneous breathing) and a test phase (maximum effort breathing and forced maneuvers). The EIT impedance changes correlate strictly with spirometric data during slow breathing with increasing and maximum effort ([Formula: see text]) and during forced expiration maneuvers ([Formula: see text]). Strong correlations in spirometric volume parameters [Formula: see text] ([Formula: see text]), [Formula: see text]/FVC ([Formula: see text]), and flow parameters PEF, [Formula: see text], [Formula: see text], [Formula: see text] ([Formula: see text]) were observed. According to the linearity during forced expiration maneuvers, EIT can be used during pulmonary function testing in combination with spirometry for visualisation of regional lung ventilation.

  2. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  3. Correlation of spatial climate/weather maps and the advantages of using the Mahalanobis metric in predictions

    Science.gov (United States)

    Stephenson, D. B.

    1997-10-01

    The skill in predicting spatially varying weather/climate maps depends on the definition of the measure of similarity between the maps. Under the justifiable approximation that the anomaly maps are distributed multinormally, it is shown analytically that the choice of weighting metric, used in defining the anomaly correlation between spatial maps, can change the resulting probability distribution of the correlation coefficient. The estimate of the number of degrees of freedom based on the variance of the correlation distribution can vary from unity up to the number of grid points depending on the choice of weighting metric. The (pseudo-)inverse of the sample covariance matrix acts as a special choice for the metric in that it gives a correlation distribution which has minimal kurtosis and maximum dimension. Minimal kurtosis suggests that the average predictive skill might be improved due to the rarer occurrence of troublesome outlier patterns far from the mean state. Maximum dimension has a disadvantage for analogue prediction schemes in that it gives the minimum number of analogue states. This metric also has an advantage in that it allows one to powerfully test the null hypothesis of multinormality by examining the second and third moments of the correlation coefficient, which were introduced by Mardia as invariant measures of multivariate kurtosis and skewness. For these reasons, it is suggested that this metric could be usefully employed in the prediction of weather/climate and in fingerprinting anthropogenic climate change. The ideas are illustrated using the bivariate example of the observed monthly mean sea-level pressures at Darwin and Tahiti from 1866 to 1995.
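
    For the bivariate case discussed above (a two-station anomaly map), the metric-weighted anomaly correlation is corr_W(x, y) = xᵀWy / sqrt(xᵀWx · yᵀWy). The sketch below uses made-up numbers rather than the Darwin/Tahiti pressure data, and contrasts the Euclidean metric (W = I) with the Mahalanobis choice (W = C⁻¹) for an assumed inter-station covariance of 0.8.

```python
import math

def weighted_corr(x, y, W):
    """Anomaly correlation under metric W (2x2 case for clarity)."""
    def quad(u, v):
        return sum(u[i] * W[i][j] * v[j] for i in range(2) for j in range(2))
    return quad(x, y) / math.sqrt(quad(x, x) * quad(y, y))

def inv2(C):
    """Closed-form inverse of a 2x2 matrix."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return [[ C[1][1] / det, -C[0][1] / det],
            [-C[1][0] / det,  C[0][0] / det]]

# Assumed covariance between the two stations (illustrative value 0.8).
C = [[1.0, 0.8], [0.8, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
x, y = [1.0, 0.0], [1.0, 1.0]

r_euclid = weighted_corr(x, y, I)        # plain anomaly correlation
r_mahal = weighted_corr(x, y, inv2(C))   # Mahalanobis-metric correlation
```

    Because the inverse-covariance metric discounts the direction in which the two stations vary together, r_mahal (about 0.32 here) is noticeably smaller than r_euclid (about 0.71), while self-correlation remains 1 under either metric.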

  4. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.
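
    The rescale-and-correlate averaging described here reduces, for two measurements, to the standard best linear unbiased estimate: the combined value is a weighted sum with weights proportional to the row sums of the inverse covariance matrix. The sketch below is a generic two-measurement illustration with invented numbers, not an HFAG computation.

```python
def blue_average(x1, x2, s1, s2, rho):
    """Combine two correlated measurements; returns (estimate, variance)."""
    # Covariance matrix V and its inverse (2x2, closed form).
    v11, v22, v12 = s1 * s1, s2 * s2, rho * s1 * s2
    det = v11 * v22 - v12 * v12
    # Weights proportional to the row sums of V^{-1}.
    w1 = (v22 - v12) / det
    w2 = (v11 - v12) / det
    norm = w1 + w2
    estimate = (w1 * x1 + w2 * x2) / norm
    variance = 1.0 / norm
    return estimate, variance
```

    With rho = 0 this is ordinary inverse-variance weighting; with strong positive correlation the less precise measurement can even receive a negative weight, which is why correlated averages must be handled explicitly rather than by naive weighting.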

  5. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
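
    A minimal 1D version of the kernel-smoothing construction: convolving white noise with a kernel yields a correlated Gaussian field whose covariance is determined by the kernel (a Gaussian kernel giving the Gaussian covariance mentioned above). The paper's power kernel and Lévy basis are not reproduced here; this sketch, with assumed kernel width and grid, only illustrates the construction idea.

```python
import math
import random

def gaussian_kernel(width, sd):
    ks = [math.exp(-0.5 * (i / sd) ** 2) for i in range(-width, width + 1)]
    norm = math.sqrt(sum(k * k for k in ks))   # unit-variance output
    return [k / norm for k in ks]

def moving_average_field(n, sd=3.0, seed=42):
    """Kernel-smooth i.i.d. Gaussian noise into a correlated random field."""
    rng = random.Random(seed)
    width = int(4 * sd)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n + 2 * width)]
    kern = gaussian_kernel(width, sd)
    field = [sum(kern[j] * noise[i + j] for j in range(len(kern)))
             for i in range(n)]
    return field, noise[width:width + n]

def lag1_corr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

field, noise = moving_average_field(2000)
```

    Neighbouring field values are strongly correlated (the theoretical lag-1 correlation is exp(-1/(4 sd²)), about 0.97 for sd = 3) while the underlying white noise is not.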

  6. Comparative evaluation of average glandular dose and breast cancer detection between single-view digital breast tomosynthesis (DBT) plus single-view digital mammography (DM) and two-view DM: correlation with breast thickness and density

    International Nuclear Information System (INIS)

    Shin, Sung Ui; Chang, Jung Min; Bae, Min Sun; Lee, Su Hyun; Cho, Nariya; Seo, Mirinae; Kim, Won Hwa; Moon, Woo Kyung

    2015-01-01

    To compare the average glandular dose (AGD) and diagnostic performance of mediolateral oblique (MLO) digital breast tomosynthesis (DBT) plus cranio-caudal (CC) digital mammography (DM) with two-view DM, and to evaluate the correlation of AGD with breast thickness and density. MLO and CC DM and DBT images of both breasts were obtained in 149 subjects. AGDs of DBT and DM per exposure were recorded, and their correlation with breast thickness and density were evaluated. Paired data of MLO DBT plus CC DM and two-view DM were reviewed for presence of malignancy in a jack-knife alternative free-response ROC (JAFROC) method. The AGDs of both DBT and DM, and differences in AGD between DBT and DM (ΔAGD), were correlated with breast thickness and density. The average JAFROC figure of merit (FOM) was significantly higher on the combined technique than two-view DM (P = 0.005). In dense breasts, the FOM and sensitivity of the combined technique was higher than that of two-view DM (P = 0.003) with small ΔAGD. MLO DBT plus CC DM provided higher diagnostic performance than two-view DM in dense breasts with a small increase in AGD. (orig.)

  7. Comparative evaluation of average glandular dose and breast cancer detection between single-view digital breast tomosynthesis (DBT) plus single-view digital mammography (DM) and two-view DM: correlation with breast thickness and density

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Sung Ui; Chang, Jung Min; Bae, Min Sun; Lee, Su Hyun; Cho, Nariya; Seo, Mirinae; Kim, Won Hwa; Moon, Woo Kyung [Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2015-01-15

    To compare the average glandular dose (AGD) and diagnostic performance of mediolateral oblique (MLO) digital breast tomosynthesis (DBT) plus cranio-caudal (CC) digital mammography (DM) with two-view DM, and to evaluate the correlation of AGD with breast thickness and density. MLO and CC DM and DBT images of both breasts were obtained in 149 subjects. AGDs of DBT and DM per exposure were recorded, and their correlation with breast thickness and density were evaluated. Paired data of MLO DBT plus CC DM and two-view DM were reviewed for presence of malignancy in a jack-knife alternative free-response ROC (JAFROC) method. The AGDs of both DBT and DM, and differences in AGD between DBT and DM (ΔAGD), were correlated with breast thickness and density. The average JAFROC figure of merit (FOM) was significantly higher on the combined technique than two-view DM (P = 0.005). In dense breasts, the FOM and sensitivity of the combined technique was higher than that of two-view DM (P = 0.003) with small ΔAGD. MLO DBT plus CC DM provided higher diagnostic performance than two-view DM in dense breasts with a small increase in AGD. (orig.)

  8. 25 ns software correlator for photon and fluorescence correlation spectroscopy

    Science.gov (United States)

    Magatti, Davide; Ferri, Fabio

    2003-02-01

    A 25 ns time resolution, multi-tau software correlator developed in LABVIEW based on the use of a standard photon counting unit, a fast timer/counter board (6602-PCI National Instrument) and a personal computer (PC) (1.5 GHz Pentium 4) is presented and quantitatively discussed. The correlator works by processing the stream of incoming data in parallel according to two different algorithms: For large lag times (τ⩾100 μs), a classical time-mode (TM) scheme, based on the measure of the number of pulses per time interval, is used; differently, for τ⩽100 μs a photon-mode (PM) scheme is adopted and the time sequence of the arrival times of the photon pulses is measured. By combining the two methods, we developed a system capable of working out correlation functions on line, in full real time for the TM correlator and partially in batch processing for the PM correlator. For the latter one, the duty cycle depends on the count rate of the incoming pulses, being ˜100% for count rates ⩽3×104 Hz, ˜15% at 105 Hz, and ˜1% at 106 Hz. For limitations imposed by the fairly small first-in, first-out (FIFO) buffer available on the counter board, the maximum count rate permissible for a proper functioning of the PM correlator is limited to ˜105 Hz. However, this limit can be removed by using a board with a deeper FIFO. Similarly, the 25 ns time resolution is only limited by maximum clock frequency available on the 6602-PCI and can be easily improved by using a faster clock. When tested on dilute solutions of calibrated latex spheres, the overall performances of the correlator appear to be comparable with those of commercial hardware correlators, but with several nontrivial advantages related to its flexibility, low cost, and easy adaptability to future developments of PC and data acquisition technology.
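
    The multi-tau scheme can be sketched in a few lines: correlate at m linear lags, then halve the time resolution by binning pairs of samples and repeat, so the lag spacing grows geometrically while the per-level work stays constant. This is a generic software sketch of the algorithm with a hypothetical AR(1) test signal, not the LabVIEW TM/PM implementation described above.

```python
import random

def g2(s, lag):
    """Normalized intensity autocorrelation at one lag."""
    n = len(s) - lag
    num = sum(s[i] * s[i + lag] for i in range(n)) / n
    mean = sum(s) / len(s)
    return num / (mean * mean)

def multi_tau(signal, m=8, levels=4):
    """Return (lag, g2) pairs on a quasi-logarithmic lag grid."""
    out, s, scale = [], list(signal), 1
    for level in range(levels):
        lags = range(1, m + 1) if level == 0 else range(m // 2 + 1, m + 1)
        for k in lags:
            out.append((k * scale, g2(s, k)))
        # Bin adjacent samples: halves the rate, doubles the lag unit.
        s = [(s[2 * i] + s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]
        scale *= 2
    return out

# Hypothetical test signal: mean intensity 1 plus AR(1) fluctuations.
rng = random.Random(3)
x, signal = 0.0, []
for _ in range(20000):
    x = 0.9 * x + rng.gauss(0.0, 0.2)
    signal.append(1.0 + x)
```

    For this correlated test signal the computed g2 curve decays from its short-lag value towards 1 at long lags, as expected for an intensity correlation function.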

  9. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.) [de
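
    To indicate the flavour of the first example (this is a standard textbook-style derivation, not necessarily the exact form used in the paper), maximizing the entropy of the neutron energy distribution relative to the phase-space prior m(E) ∝ √E, subject only to a fixed mean energy, yields the familiar Maxwellian fission spectrum:

```latex
% Maximize S[p] = -\int p(E)\,\ln\!\frac{p(E)}{\sqrt{E}}\,dE
% subject to \int p(E)\,dE = 1 and \int E\,p(E)\,dE = \langle E\rangle.
% The Lagrange-multiplier solution is
p(E) \;\propto\; \sqrt{E}\,e^{-\lambda E}
\quad\Longrightarrow\quad
p(E) \;=\; \frac{2}{\sqrt{\pi}}\,\frac{\sqrt{E}}{T^{3/2}}\,e^{-E/T},
\qquad \langle E\rangle = \tfrac{3}{2}\,T .
```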

  10. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
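For reference, the subentropy discussed above has, in the standard Jozsa-Robb-Wootters form (recalled here, not quoted from the record), for a state \(\rho\) with nondegenerate eigenvalues \(\lambda_1,\dots,\lambda_d\):

```latex
Q(\rho) \;=\; -\sum_{k=1}^{d} \left( \prod_{j \neq k}
  \frac{\lambda_k}{\lambda_k - \lambda_j} \right) \lambda_k \ln \lambda_k ,
\qquad 0 \;\le\; Q(\rho) \;\le\; S(\rho),
```

with \(S(\rho)\) the von Neumann entropy; the record's statement is that the average of \(Q\) over random mixed states approaches the maximal value attained at the maximally mixed state as the dimension grows.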

  11. Maximum Power Point Tracking (MPPT) in a Wind Power Plant Using a Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    Full Text Available In this paper, an implementation of the Maximum Power Point Tracking (MPPT) technique is developed using a buck-boost converter. A perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging the battery. The model used in this study is a Variable Speed Wind Turbine (VSWT) with a Permanent Magnet Synchronous Generator (PMSG). Analysis, design, and modeling of the wind energy conversion system were done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency that the proposed system can achieve in transferring the maximum power into the battery is 90.56%.
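The perturb and observe logic named in this record reduces to a short update rule: if the last perturbation increased the extracted power, perturb again in the same direction, otherwise reverse. A minimal sketch assuming a hill-shaped power curve; the function name, the fixed step size, and the voltage-reference formulation are illustrative, not the paper's converter-level implementation.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next voltage reference.
    If the last perturbation raised power, keep going the same way;
    otherwise reverse direction."""
    dv = v - v_prev
    dp = p - p_prev
    if dp == 0:
        return v          # at (or very near) the MPP: hold
    if (dp > 0) == (dv > 0):
        return v + step   # power rose with the perturbation: continue
    return v - step       # power fell: reverse the perturbation
```

Iterated against any single-peaked power-voltage curve, the reference climbs to the maximum power point and then oscillates around it within one step size, which is the characteristic (and well-known) steady-state behavior of P&O.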

  12. Allometries of Maximum Growth Rate versus Body Mass at Maximum Growth Indicate That Non-Avian Dinosaurs Had Growth Rates Typical of Fast Growing Ectothermic Sauropsids

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case’s study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either
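The fixed-slope regressions used in this study amount to fitting only a group-specific intercept: with the slope pinned at 0.75, least squares reduces to averaging the residuals log10(G) − 0.75·log10(M), and the difference between two groups' intercepts is the log10 of the ratio of their growth rates at equal mass. A minimal sketch under that reading (the function name is illustrative):

```python
import math

def fixed_slope_intercept(masses, growth_rates, slope=0.75):
    """Fit log10(G) = slope * log10(M) + a with the slope held fixed;
    the least-squares intercept is then the mean residual."""
    resid = [math.log10(g) - slope * math.log10(m)
             for m, g in zip(masses, growth_rates)]
    return sum(resid) / len(resid)
```

If one group's growth rates are, say, uniformly twice another's at the same body mass, its intercept is larger by log10(2) ≈ 0.30.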

  13. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of

  14. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Directory of Open Access Journals (Sweden)

    Jan Werner

    Full Text Available We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule

  15. Relationship between 18-month mating mass and average lifetime reproduction

    African Journals Online (AJOL)

    1976; Elliott, Rae & Wickham, 1979; Napier, et al., 1980). Although being in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to be equal to the highest comparable estimates in the ...

  16. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

    Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = 270 and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  17. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  18. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  19. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  20. Spatial correlations in compressible granular flows

    OpenAIRE

    Van Noije, T. P. C.; Ernst, M. H.; Brito López, Ricardo

    1998-01-01

    The clustering instability in freely evolving granular fluids manifests itself in the density-density correlation function and structure factor. These functions are calculated from fluctuating hydrodynamics. As time increases, the structure factor of density fluctuations develops a maximum, which shifts to smaller wave numbers (growing correlation length). Furthermore, the inclusion of longitudinal velocity fluctuations changes long-range correlations in the flow field qualitatively and exten...

  1. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality

  2. Ethanol production and maximum cell growth are highly correlated with membrane lipid composition during fermentation as determined by lipidomic analysis of 22 Saccharomyces cerevisiae strains.

    Science.gov (United States)

    Henderson, Clark M; Lozada-Contreras, Michelle; Jiranek, Vladimir; Longo, Marjorie L; Block, David E

    2013-01-01

    Optimizing ethanol yield during fermentation is important for efficient production of fuel alcohol, as well as wine and other alcoholic beverages. However, increasing ethanol concentrations during fermentation can create problems that result in arrested or sluggish sugar-to-ethanol conversion. The fundamental cellular basis for these problem fermentations, however, is not well understood. Small-scale fermentations were performed in a synthetic grape must using 22 industrial Saccharomyces cerevisiae strains (primarily wine strains) with various degrees of ethanol tolerance to assess the correlation between lipid composition and fermentation kinetic parameters. Lipids were extracted at several fermentation time points representing different growth phases of the yeast to quantitatively analyze phospholipids and ergosterol utilizing atmospheric pressure ionization-mass spectrometry methods. Lipid profiling of individual fermentations indicated that yeast lipid class profiles do not shift dramatically in composition over the course of fermentation. Multivariate statistical analysis of the data was performed using partial least-squares linear regression modeling to correlate lipid composition data with fermentation kinetic data. The results indicate a strong correlation (R(2) = 0.91) between the overall lipid composition and the final ethanol concentration (wt/wt), an indicator of strain ethanol tolerance. One potential component of ethanol tolerance, the maximum yeast cell concentration, was also found to be a strong function of lipid composition (R(2) = 0.97). Specifically, strains unable to complete fermentation were associated with high phosphatidylinositol levels early in fermentation. Yeast strains that achieved the highest cell densities and ethanol concentrations were positively correlated with phosphatidylcholine species similar to those known to decrease the perturbing effects of ethanol in model membrane systems.
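The partial least-squares step used to correlate lipid composition with fermentation kinetics can be sketched with a bare-bones PLS1 (NIPALS) routine. This is a generic textbook formulation, not the authors' analysis pipeline; the function name, variable names, and component count are assumptions.

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Bare-bones PLS1 regression via NIPALS deflation. Returns
    (B, x_mean, y_mean) so predictions are (X_new - x_mean) @ B + y_mean."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                    # weight: covariance direction
        w = w / np.linalg.norm(w)
        t = Xr @ w                       # score
        tt = t @ t
        p = (Xr.T @ t) / tt              # X loading
        q = (yr @ t) / tt                # y loading
        Xr = Xr - np.outer(t, p)         # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.column_stack(W), np.column_stack(P), np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients in original X space
    return B, x_mean, y_mean
```

R² values like those reported (0.91 for final ethanol, 0.97 for maximum cell concentration) would then be computed on the predictions as 1 − SS_res/SS_tot.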

  3. Mental health care and average happiness: strong effect in developed nations.

    Science.gov (United States)

    Touburg, Giorgio; Veenhoven, Ruut

    2015-07-01

    Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.

  4. Maximum one-shot dissipated work from Rényi divergences

    Science.gov (United States)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
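The order-infinity Rényi divergence invoked above is the α→∞ limit of D_α(p‖q) = (α−1)⁻¹ ln Σᵢ pᵢ^α qᵢ^{1−α}, i.e. the log of the maximum likelihood ratio. A small numerical check of that limit and of the monotonicity of D_α in α (natural logarithms assumed; function names are illustrative):

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(p||q) = ln(sum_i p_i^alpha * q_i^(1-alpha)) / (alpha - 1),
    for alpha > 0, alpha != 1."""
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1.0)

def renyi_infinity(p, q):
    """Order-infinity limit: log of the maximum likelihood ratio."""
    return math.log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))
```

Increasing α weights the largest ratio pᵢ/qᵢ ever more heavily, which is why the α→∞ divergence bounds the finite-order ones from above, mirroring the record's statement that it is the maximum (rather than average) dissipated work that appears in the one-shot analogs.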

  5. Evaluation of Navigation System Accuracy Indexes for Deviation Reading from Average Range

    Directory of Open Access Journals (Sweden)

    Alexey Boykov

    2017-12-01

    Full Text Available A method for estimating the mean square error, kurtosis, and error correlation coefficient of the deviations from the average of three navigation parameter readings, taken from the outputs of three information sensors, is substantiated and developed.
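The deviation-from-average construction underlying this method can be sketched as follows. The statistics shown (RMS and pairwise correlation of the deviations) are a simplified subset of those in the record, and one caveat is built in: deviations from a common three-sensor average sum to zero at every sample, so for independent, identically distributed sensor errors their pairwise correlation is −1/2, which any estimator built on them must account for.

```python
def deviations_from_average(s1, s2, s3):
    """Per-sample deviations of each sensor from the three-sensor average."""
    avg = [(a + b + c) / 3.0 for a, b, c in zip(s1, s2, s3)]
    return [[x - m for x, m in zip(s, avg)] for s in (s1, s2, s3)]

def rms(d):
    """Root-mean-square of a deviation series."""
    return (sum(x * x for x in d) / len(d)) ** 0.5

def corr(a, b):
    """Pearson correlation of two series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ca = [x - ma for x in a]
    cb = [y - mb for y in b]
    denom = (sum(x * x for x in ca) * sum(y * y for y in cb)) ** 0.5
    return sum(x * y for x, y in zip(ca, cb)) / denom
```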

  6. [Correlation and concordance between the national test of medicine (ENAM) and the grade point average (GPA): analysis of the peruvian experience in the period 2007 - 2009].

    Science.gov (United States)

    Huamaní, Charles; Gutiérrez, César; Mezones-Holguín, Edward

    2011-03-01

    To evaluate the correlation and concordance between the Peruvian National Exam of Medicine (ENAM) and the Grade Point Average (GPA) in recently graduated medical students in the period 2007 to 2009. We carried out a secondary data analysis, using the records of the physicians applying to the Rural and Urban Marginal Service in Health of Peru (SERUMS) processes for the years 2008 to 2010. We extracted from these registers the grades obtained in the ENAM and the GPA. We performed a descriptive analysis using medians and 1st and 3rd quartiles (q1/q3); we calculated the correlation between both scores using the Spearman correlation coefficient; additionally, we conducted a linear regression analysis, and the concordance was measured using the Bland and Altman coefficient. A total of 6,117 physicians were included; the overall median for the GPA was 13.4 (12.7/14.2) and for the ENAM was 11.6 (10.2/13.0). Of the total assessed, 36.8% failed the exam. We observed an increase in the annual median of ENAM scores, with a consequent decrease in the difference between both grades. The correlation between the ENAM and the GPA is direct and moderate (0.582), independent of the year, type of university management (public or private), and location. However, the concordance between both ratings is only fair, with a global coefficient of 0.272 (CI 95%: 0.260 to 0.284). Independently of the year, location, or type of university management, there is a moderate correlation between the ENAM and the GPA; however, there is only fair concordance between both grades.
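The two agreement measures named in this record can be sketched in plain Python. The Bland-Altman part below computes the conventional bias and 95% limits of agreement rather than the single concordance coefficient the authors report; all function names are illustrative.

```python
def rankdata(x):
    """Average ranks, 1-based; tied values share the mean rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A high Spearman correlation with a wide Bland-Altman interval is exactly the pattern described here: the two grades rank students similarly while disagreeing on the absolute level.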

  7. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short-distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  8. Longitudinal Patterns of Employment and Postsecondary Education for Adults with Autism and Average-Range IQ

    Science.gov (United States)

    Taylor, Julie Lounds; Henninger, Natalie A.; Mailick, Marsha R.

    2015-01-01

    This study examined correlates of participation in postsecondary education and employment over 12 years for 73 adults with autism spectrum disorders and average-range IQ whose families were part of a larger, longitudinal study. Correlates included demographic (sex, maternal education, paternal education), behavioral (activities of daily living,…

  9. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    Science.gov (United States)

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  10. Crustal seismicity and the earthquake catalog maximum moment magnitude (Mcmax) in stable continental regions (SCRs): Correlation with the seismic velocity of the lithosphere

    Science.gov (United States)

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-12-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  11. Criticality evaluation of BWR MOX fuel transport packages using average Pu content

    International Nuclear Information System (INIS)

    Mattera, C.; Martinotti, B.

    2004-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium and Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by a homogeneous assembly enriched at the maximum Plutonium content. As this calculation hypothesis is extremely conservative, COGEMA LOGISTICS has studied a new calculation method based on the average Plutonium content in the criticality studies. The use of the average Plutonium content instead of the real Plutonium content profiles yields a higher reactivity value, which makes it globally conservative. This method can be applied to all Boiling Water Reactor Mixed Oxide complete fuel assemblies of type 8 x 8, 9 x 9, and 10 x 10 whose Plutonium content by mass does not exceed 15%; it provides advantages which are discussed in our approach. With this new method, for the same package reactivity, the Pu content allowed in the package design approval can be higher. The new COGEMA LOGISTICS method allows the basket, materials, or geometry to be optimised at the design stage for a higher payload, keeping the same reactivity

  12. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A previous method used for the determination of the average neutron flux within bulk samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared

  13. Forecasting Kp from solar wind data: input parameter study using 3-hour averages and 3-hour range values

    Science.gov (United States)

    Wintoft, Peter; Wik, Magnus; Matzka, Jürgen; Shprits, Yuri

    2017-11-01

    We have developed neural network models that predict Kp from upstream solar wind data. We study the importance of various input parameters, starting with the magnetic component Bz, particle density n, and velocity V and then adding total field B and the By component. As we also notice a seasonal and UT variation in average Kp we include functions of day-of-year and UT. Finally, as Kp is a global representation of the maximum range of geomagnetic variation over 3-hour UT intervals we conclude that sudden changes in the solar wind can have a big effect on Kp, even though it is a 3-hour value. Therefore, 3-hour solar wind averages will not always appropriately represent the solar wind condition, and we introduce 3-hour maxima and minima values to some degree address this problem. We find that introducing total field B and 3-hour maxima and minima, derived from 1-minute solar wind data, have a great influence on the performance. Due to the low number of samples for high Kp values there can be considerable variation in predicted Kp for different networks with similar validation errors. We address this issue by using an ensemble of networks from which we use the median predicted Kp. The models (ensemble of networks) provide prediction lead times in the range 20-90 min given by the time it takes a solar wind structure to travel from L1 to Earth. Two models are implemented that can be run with real time data: (1) IRF-Kp-2017-h3 uses the 3-hour averages of the solar wind data and (2) IRF-Kp-2017 uses in addition to the averages, also the minima and maxima values. The IRF-Kp-2017 model has RMS error of 0.55 and linear correlation of 0.92 based on an independent test set with final Kp covering 2 years using ACE Level 2 data. The IRF-Kp-2017-h3 model has RMSE = 0.63 and correlation = 0.89. We also explore the errors when tested on another two-year period with real-time ACE data which gives RMSE = 0.59 for IRF-Kp-2017 and RMSE = 0.73 for IRF-Kp-2017-h3. 
The errors as function
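The ensemble-median step and the two reported error metrics (RMSE and linear correlation) can be sketched in a few lines; the function names and the toy Kp values below are hypothetical illustrations, not taken from the IRF models themselves.

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed Kp."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson(x, y):
    """Linear (Pearson) correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ensemble_median(predictions):
    """Median across an ensemble of network outputs, per 3-hour interval."""
    out = []
    for interval in zip(*predictions):
        s = sorted(interval)
        m = len(s) // 2
        out.append(s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m]))
    return out

# Three hypothetical network outputs for the same four 3-hour intervals
ensemble = [[2.0, 3.3, 5.0, 1.7],
            [2.3, 3.0, 4.3, 2.0],
            [2.0, 3.7, 4.7, 1.7]]
kp_observed = [2.0, 3.3, 4.7, 1.7]
kp_median = ensemble_median(ensemble)
error = rmse(kp_median, kp_observed)
```

Taking the median across networks damps the outlier prediction of any single network, which is the motivation given in the abstract for the ensemble approach.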

  14. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    Science.gov (United States)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.
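The flow coefficient discussed above relates the measured differential pressure to the mean duct velocity. A minimal sketch, assuming the standard averaging-Pitot relation v = K·sqrt(2·Δp/ρ); the function name and the example numbers are hypothetical, not from the paper.

```python
import math

def volumetric_flow(delta_p, rho, duct_area, k_flow):
    """
    Volumetric flow inferred from an averaging Pitot tube reading.
    delta_p   -- differential pressure across the tube's taps [Pa]
    rho       -- fluid density [kg/m^3]
    duct_area -- duct cross-sectional area [m^2]
    k_flow    -- calibrated, dimensionless flow coefficient
    """
    mean_velocity = k_flow * math.sqrt(2.0 * delta_p / rho)
    return duct_area * mean_velocity

# Example: a 50 Pa reading in air (rho ~ 1.2 kg/m^3), 0.09 m^2 duct, K = 0.8
q = volumetric_flow(50.0, 1.2, 0.09, 0.8)
```

A stable flow coefficient over a wide Reynolds number range, as reported for the pentagonal cross-section, means a single K can be used across operating conditions.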

  15. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
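For context, Anderson's averaging model integrates attribute information as a weighted average that includes an initial state, which is why it can capture averaging (rather than adding) effects without extra parameters. A minimal sketch of the response rule; the function name is hypothetical, and w0 and s0 denote the weight and scale value of the initial impression.

```python
def averaging_response(weights, scale_values, w0=1.0, s0=0.0):
    """
    Anderson's averaging model: the overt response is the weighted average of
    the attribute scale values, including an initial state (w0, s0).
    """
    numerator = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    denominator = w0 + sum(weights)
    return numerator / denominator

# Two equally weighted attributes, no initial impression: a plain average
r = averaging_response([1.0, 1.0], [4.0, 8.0], w0=0.0)
```

Adding a mildly positive third attribute can lower the response under this rule, a signature of averaging integration that an adding model cannot reproduce.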

  16. Size ratio correlates with intracranial aneurysm rupture status: a prospective study.

    Science.gov (United States)

    Rahman, Maryam; Smietana, Janel; Hauck, Erik; Hoh, Brian; Hopkins, Nick; Siddiqui, Adnan; Levy, Elad I; Meng, Hui; Mocco, J

    2010-05-01

The prediction of intracranial aneurysm (IA) rupture risk has generated significant controversy. The findings of the International Study of Unruptured Intracranial Aneurysms (ISUIA), that small anterior circulation aneurysms carry a low risk of rupture, are difficult to reconcile with the observation that most ruptured IAs are small. These discrepancies have led to the search for better aneurysm parameters to predict rupture. We previously reported that size ratio (SR), IA size divided by parent vessel diameter, correlated strongly with IA rupture status (ruptured versus unruptured). These data were all collected retrospectively from 3-dimensional angiographic images. Therefore, we performed a blinded prospective collection and evaluation of SR data from 2-dimensional angiographic images for a consecutive series of patients with ruptured and unruptured IAs. We prospectively enrolled 40 consecutive patients presenting to a single institution with either a ruptured IA or for first-time evaluation of an incidental IA. Blinded technologists acquired all measurements from 2-dimensional angiographic images. Aneurysm rupture status, location, IA maximum size, and parent vessel diameter were documented. The SR was calculated by dividing the aneurysm size (mm) by the average parent vessel size (mm). A 2-tailed Mann-Whitney test was performed to assess statistical significance between ruptured and unruptured groups. Fisher exact test was used to compare medical comorbidities between the ruptured and unruptured groups. Significant differences between the 2 groups were subsequently tested with logistic regression. SE and probability values are reported. Forty consecutive patients with 24 unruptured and 16 ruptured aneurysms met the inclusion criteria. No significant differences were found in age, gender, smoking status, or medical comorbidities between ruptured and unruptured groups. The average maximum size of the unruptured IAs (6.18 ± 0.60 mm) was significantly smaller compared with the ruptured IAs (7.91 ± 0.47 mm; P=0.03), and the unruptured group had
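The size ratio itself is straightforward to compute. A sketch with hypothetical measurements; the study divided the maximum aneurysm size by the average parent vessel diameter, taken here as the mean of two readings.

```python
def size_ratio(aneurysm_size_mm, parent_vessel_diameters_mm):
    """SR = maximum aneurysm size divided by the average parent vessel diameter."""
    avg_vessel = sum(parent_vessel_diameters_mm) / len(parent_vessel_diameters_mm)
    return aneurysm_size_mm / avg_vessel

# Hypothetical case: a 7.9 mm aneurysm on a vessel measured at 2.4 and 2.6 mm
sr = size_ratio(7.9, [2.4, 2.6])
```

Normalizing by the parent vessel makes the same absolute aneurysm size yield a larger SR on a small distal vessel than on a large proximal one, which is the rationale for the metric.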

  17. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
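As rough intuition for why a larger bag constant B lowers the maximum mass: in the simplest MIT bag model (massless, non-interacting quarks, omitting the one-gluon-exchange corrections the paper includes), the pressure at a given energy density drops as B grows, softening the equation of state. A hedged sketch of that simplified relation:

```python
def quark_pressure(energy_density, bag_constant):
    """
    Pressure of massless, non-interacting quark matter in the MIT bag model,
    P = (epsilon - 4B) / 3 (one-gluon-exchange corrections omitted).
    Units are arbitrary but must match, e.g. MeV/fm^3.
    """
    return (energy_density - 4.0 * bag_constant) / 3.0

# At fixed energy density, a larger bag constant gives lower pressure,
# i.e. a softer equation of state and hence a lower maximum mass
p_soft = quark_pressure(400.0, 80.0)
p_stiff = quark_pressure(400.0, 60.0)
```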

  18. Image-based correlation between the meso-scale structure and deformation of closed-cell foam

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yongle, E-mail: yongle.sun@manchester.ac.uk [School of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Sackville Street, Manchester M13 9PL (United Kingdom); Zhang, Xun [Henry Moseley X-ray Imaging Facility, School of Materials, The University of Manchester, Upper Brook Street, Manchester M13 9PL (United Kingdom); Shao, Zhushan [School of Civil Engineering, Xi' an University of Architecture & Technology, Xi' an 710055 (China); Li, Q.M. [School of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Sackville Street, Manchester M13 9PL (United Kingdom); State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081 (China)

    2017-03-14

    In the correlation between structural parameters and compressive behaviour of cellular materials, previous studies have mostly focused on averaged structural parameters and bulk material properties for different samples. This study focuses on the meso-scale correlation between structure and deformation in a 2D foam sample generated from a computed tomography slice of Alporas™ foam, for which quasi-static compression was simulated using 2D image-based finite element modelling. First, a comprehensive meso-scale structural characterisation of the 2D foam was carried out to determine the size, aspect ratio, orientation and anisotropy of individual cells, as well as the length, straightness, inclination and thickness of individual cell walls. Measurements were then conducted to obtain the axial distributions of local structural parameters averaged laterally to compression axis. Second, the meso-scale deformation was characterised by cell-wall strain, cell area ratio, digital image correlation strain and local compressive engineering strain. According to the results, the through-width sub-regions over an axial length between the average (lower bound) and the maximum (upper bound) of cell size should be used to characterise the meso-scale heterogeneity of the cell structure and deformation. It was found that the first crush band forms in a sub-region where the ratio of cell-wall thickness to cell-wall length is a minimum, in which the collapse deformation is dominated by the plastic bending and buckling of cell walls. Other morphological parameters have secondary effect on the initiation of crush band in the 2D foam. The finding of this study suggests that the measurement of local structural properties is crucial for the identification of the “weakest” region which determines the initiation of collapse and hence the corresponding collapse load of a heterogeneous cellular material.

  19. Image-based correlation between the meso-scale structure and deformation of closed-cell foam

    International Nuclear Information System (INIS)

    Sun, Yongle; Zhang, Xun; Shao, Zhushan; Li, Q.M.

    2017-01-01

    In the correlation between structural parameters and compressive behaviour of cellular materials, previous studies have mostly focused on averaged structural parameters and bulk material properties for different samples. This study focuses on the meso-scale correlation between structure and deformation in a 2D foam sample generated from a computed tomography slice of Alporas™ foam, for which quasi-static compression was simulated using 2D image-based finite element modelling. First, a comprehensive meso-scale structural characterisation of the 2D foam was carried out to determine the size, aspect ratio, orientation and anisotropy of individual cells, as well as the length, straightness, inclination and thickness of individual cell walls. Measurements were then conducted to obtain the axial distributions of local structural parameters averaged laterally to compression axis. Second, the meso-scale deformation was characterised by cell-wall strain, cell area ratio, digital image correlation strain and local compressive engineering strain. According to the results, the through-width sub-regions over an axial length between the average (lower bound) and the maximum (upper bound) of cell size should be used to characterise the meso-scale heterogeneity of the cell structure and deformation. It was found that the first crush band forms in a sub-region where the ratio of cell-wall thickness to cell-wall length is a minimum, in which the collapse deformation is dominated by the plastic bending and buckling of cell walls. Other morphological parameters have secondary effect on the initiation of crush band in the 2D foam. The finding of this study suggests that the measurement of local structural properties is crucial for the identification of the “weakest” region which determines the initiation of collapse and hence the corresponding collapse load of a heterogeneous cellular material.

  20. A preliminary study to find out maximum occlusal bite force in Indian individuals

    DEFF Research Database (Denmark)

    Jain, Veena; Mathur, Vijay Prakash; Pillai, Rajath

    2014-01-01

    PURPOSE: This preliminary hospital based study was designed to measure the mean maximum bite force (MMBF) in healthy Indian individuals. An attempt was made to correlate MMBF with body mass index (BMI) and some of the anthropometric features. METHODOLOGY: A total of 358 healthy subjects in the ag...

  1. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
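The abstract does not name its growth model, but shark mark-recapture growth is commonly summarized with a von Bertalanffy curve; the sketch below is therefore an illustration only, with the growth coefficient k chosen so the curve passes near the reported 340 cm TL at age 5, and a length at birth of 80 cm TL assumed.

```python
import math

def von_bertalanffy_length(age_years, l_inf_cm, k_per_year, l0_cm=80.0):
    """Von Bertalanffy growth curve anchored at an assumed length at birth l0."""
    return l_inf_cm - (l_inf_cm - l0_cm) * math.exp(-k_per_year * age_years)

# Illustrative parameters: asymptotic size 403 cm TL (from the abstract),
# k chosen so the curve passes near 340 cm TL at age 5 (an assumption)
length_age5 = von_bertalanffy_length(5.0, l_inf_cm=403.0, k_per_year=0.327)
```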

  2. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NARCIS (Netherlands)

    Schellart, W. P.; Rawlinson, N.

    2013-01-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g.

  3. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M.sub.56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  4. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .
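The nonparametric bootstrap underlying both UFBoot and MPBoot resamples alignment columns with replacement before re-scoring trees. A minimal sketch of generating one bootstrap replicate; the function name and toy alignment are hypothetical, and the tree search itself is omitted.

```python
import random

def bootstrap_alignment(alignment, rng):
    """One nonparametric bootstrap replicate: resample columns with replacement."""
    n_sites = len(alignment[0])
    cols = [rng.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

rng = random.Random(42)
alignment = ["ACGT", "ACGA", "TCGT"]  # toy alignment, one string per taxon
replicate = bootstrap_alignment(alignment, rng)
# Each replicate would then be scored under parsimony to obtain branch supports
```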

  5. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities

    KAUST Repository

    Huete-Stauffer, Tamara M.

    2016-05-23

    Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated to cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in 0.8 μm pre-filtered treatment rather than in the whole community treatment, thus excluding the role of protistan grazers in our findings. Interestingly, a significant effect of temperature on reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. Cell size and nucleic acid decrease with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell and NAC compared with their LNA counterparts, especially during the spring phytoplankton bloom period associated to maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  6. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities

    KAUST Repository

    Huete-Stauffer, Tamara M.; Arandia-Gorostidi, Nestor; Alonso-Sáez, Laura; Moran, Xose Anxelu G.

    2016-01-01

    Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated to cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in 0.8 μm pre-filtered treatment rather than in the whole community treatment, thus excluding the role of protistan grazers in our findings. Interestingly, a significant effect of temperature on reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. Cell size and nucleic acid decrease with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell and NAC compared with their LNA counterparts, especially during the spring phytoplankton bloom period associated to maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  7. A new sentence generator providing material for maximum reading speed measurement.

    Science.gov (United States)

    Perrin, Jean-Luc; Paillé, Damien; Baccino, Thierry

    2015-12-01

A new method is proposed to generate text material for assessing the maximum reading speed of adult readers. The described procedure allows one to generate a vast number of equivalent short sentences. These sentences can be displayed for different durations in order to determine the reader's maximum speed using a psychophysical threshold algorithm. Each sentence is built so that it is either true or false according to common knowledge. The actual reading is verified by asking the reader to determine the truth value of each sentence. We based our design on the generator described by Crossland et al. and upgraded it. The new generator handles concepts distributed in an ontology, which allows an easy determination of the sentences' truth value and control of lexical and psycholinguistic parameters. In this way many equivalent sentences can be generated and displayed to perform the measurement. Maximum reading speed scores obtained with pseudo-randomly chosen sentences from the generator were strongly correlated with maximum reading speed scores obtained with traditional MNREAD sentences (r = .836). Furthermore, the large number of sentences that can be generated makes it possible to perform repeated measurements, since the possibility of a reader learning individual sentences is eliminated. Researchers interested in within-reader performance variability could use the proposed method for this purpose.

  8. Correlation of Descriptive Analysis and Instrumental Puncture Testing of Watermelon Cultivars.

    Science.gov (United States)

    Shiu, J W; Slaughter, D C; Boyden, L E; Barrett, D M

    2016-06-01

The textural properties of 5 seedless watermelon cultivars were assessed by descriptive analysis and the standard puncture test using a hollow probe with increased shearing properties. The use of descriptive analysis methodology was an effective means of quantifying watermelon sensory texture profiles for characterizing specific cultivars' characteristics. Of the 10 cultivars screened, 71% of the variation in the sensory attributes was captured by the first 2 principal components. Pairwise correlation of the hollow puncture probe and sensory parameters determined that initial slope, maximum force, and work after maximum force measurements all correlated well with the sensory attributes crisp and firm. These findings confirm that maximum force correlates well with not only firmness in watermelon, but crispness as well. The initial slope parameter also captures the sensory crispness of watermelon, but is not as practical to measure in the field as maximum force. The work after maximum force parameter is thought to reflect cellular arrangement and membrane integrity, which in turn impact sensory firmness and crispness. Watermelon cultivar types were correctly predicted by puncture test measurements in heart tissue 87% of the time, whereas descriptive analysis was correct 54% of the time. © 2016 Institute of Food Technologists®
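The three instrumental parameters named above can be read off a force-displacement puncture curve. A hedged sketch with a synthetic curve; the helper name is hypothetical and the work term uses simple trapezoidal integration.

```python
def puncture_metrics(force_n, disp_mm):
    """
    Extract initial slope, maximum force, and work after maximum force
    from a force-displacement puncture curve (force in N, displacement in mm).
    """
    f_max = max(force_n)
    i_max = force_n.index(f_max)
    # Initial slope: secant over the first segment of the curve (N/mm)
    initial_slope = (force_n[1] - force_n[0]) / (disp_mm[1] - disp_mm[0])
    # Work after maximum force: trapezoidal area under the curve past the peak (N*mm)
    work_after_max = sum(
        0.5 * (force_n[i] + force_n[i + 1]) * (disp_mm[i + 1] - disp_mm[i])
        for i in range(i_max, len(force_n) - 1)
    )
    return initial_slope, f_max, work_after_max

# Synthetic curve: rise to a 12 N peak, then a post-rupture drop
slope, fmax, work = puncture_metrics([0.0, 6.0, 12.0, 8.0, 7.0],
                                     [0.0, 1.0, 2.0, 3.0, 4.0])
```

A steep initial slope and high peak force would map to the crisp and firm sensory attributes, while the post-peak area reflects how the tissue fails after rupture.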

  9. Fluxes by eddy correlation over heterogeneous landscape: How shall we apply the Reynolds average?

    Science.gov (United States)

    Dobosy, R.

    2007-12-01

Top-down estimates of carbon exchange across the earth's surface are implicitly an integral scheme, deriving bulk exchanges over large areas. Bottom-up estimates explicitly integrate the individual components of exchange to derive a bulk value. If these approaches are to be properly compared, their estimates should represent the same quantity. Over heterogeneous landscape, eddy-covariance flux computations from towers or aircraft intended for comparison with the top-down approach face a question of the proper definition of the mean or base state, the departures from which yield the fluxes by Reynolds averaging. (1) Use a global base state derived over a representative sample of the surface, insensitive to land use. The departure quantities then fail to sum to zero over any subsample representing an individual surface type, violating the Reynolds criteria; yet fluxes derived from such subsamples can be directly composed into a bulk flux, globally satisfying the Reynolds criteria. (2) Use a different base state for each surface type, satisfying the Reynolds criteria individually. Then some of the flux may get missed if a surface's characteristics significantly bias its base state. Base state (2) is natural for tower samples. Base state (1) is natural for airborne samples over heterogeneous landscape, especially in patches smaller than an appropriate averaging length. It appears (1) incorporates a more realistic sample of the flux, though desirably there would be no practical difference between the two schemes. The schemes are related by the expression

    ⟨w*a*⟩_C − ⟨w′a′⟩_C = ⟨w′ã⟩_C + ⟨w̃a′⟩_C + ⟨w̃ã⟩_C

where ⟨·⟩_C denotes an average over samples drawn from surface class C (determined by a footprint model), w is vertical motion, and a is some scalar, such as CO2. The star denotes departure from the global base state (1), the prime denotes departure from the base state (2) defined only over surface class C, and the tilde denotes the difference between the two base states. Thus ⟨a′⟩_C = 0 but ⟨a*⟩_C ≠ 0 in general. The
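The relation between the two base-state schemes can be verified numerically: the class-C flux computed from global departures differs from the flux computed from class departures only by terms built from the class-mean offsets. A minimal sketch with hypothetical sample values:

```python
# Numerical check that the two Reynolds-averaging base states differ only by
# terms built from the class-mean offsets (the tilde quantities).
w = [0.5, -0.3, 0.8, -0.2, 0.1]   # vertical velocity samples over class C
a = [2.0, 1.5, 2.6, 1.4, 1.8]     # scalar samples (e.g. CO2) over class C
w_g, a_g = 0.05, 1.6              # global base state, scheme (1)

n = len(w)
w_c, a_c = sum(w) / n, sum(a) / n  # class-C base state, scheme (2)
w_t, a_t = w_c - w_g, a_c - a_g    # tilde terms: class mean minus global mean

mean = lambda xs: sum(xs) / len(xs)
flux_star = mean([(wi - w_g) * (ai - a_g) for wi, ai in zip(w, a)])   # scheme (1)
flux_prime = mean([(wi - w_c) * (ai - a_c) for wi, ai in zip(w, a)])  # scheme (2)

# The class means of the primed departures vanish, so the two cross terms
# collapse and the difference reduces to the product of the tilde offsets
assert abs(flux_star - flux_prime - w_t * a_t) < 1e-12
```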

  10. The correlation between dengue incidence and diurnal ranges of temperature of Colombo district, Sri Lanka 2005–2014

    Directory of Open Access Journals (Sweden)

    N. D. B. Ehelepola

    2016-08-01

Full Text Available Background: Meteorological factors affect dengue transmission. Mechanisms of the way in which different diurnal temperature ranges, around different mean temperatures, influence dengue transmission were published after 2011. Objective: We endeavored to determine the correlation between dengue incidence and diurnal temperature ranges (DTRs) in Colombo district, Sri Lanka, and to explore the possibilities of using our findings to improve control of dengue. Design: We calculated the weekly dengue incidence in Colombo during 2005–2014, after data on all of the reported dengue patients and estimated mid-year populations were collected. We obtained daily maximum and minimum temperatures from two Colombo weather stations, averaged them, and converted them into weekly data. Weekly averages of DTR versus dengue incidence graphs were plotted and correlations observed, as was the count of days per week with a DTR above and below 7.5°C. Results: We found a negative correlation between dengue incidence and a DTR >7.5°C with an 8-week lag period, and a positive correlation between dengue incidence and a DTR <7.5°C, also with an 8-week lag. Conclusions: Large DTRs were negatively correlated with dengue transmission in Colombo district. We propose to take advantage of that in local dengue control efforts. Our results agree with previous studies on the topic and with a mathematical model of the relative vectorial capacity of Aedes aegypti. Global warming and declining DTR are likely to favor a rise of dengue, and we suggest a simple method to mitigate this.
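The 8-week lagged correlations reported here amount to shifting the weekly DTR series forward before computing a Pearson coefficient. A minimal sketch; the function name and toy weekly series are hypothetical.

```python
def lagged_correlation(incidence, dtr_weekly, lag_weeks):
    """
    Pearson correlation between weekly dengue incidence and a weekly DTR
    series shifted earlier by lag_weeks (DTR in week t vs incidence in t + lag).
    """
    x = dtr_weekly[: len(dtr_weekly) - lag_weeks]
    y = incidence[lag_weeks:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy series in which incidence mirrors the DTR signal two weeks later
dtr = [5.0, 7.0, 6.0, 8.0, 5.5, 7.5]
inc = [10.0, 10.0, 50.0, 70.0, 60.0, 80.0]
r = lagged_correlation(inc, dtr, lag_weeks=2)
```

The lag reflects the time for temperature effects on the vector to propagate through to reported cases; sweeping the lag and picking the strongest correlation is a common variant of this analysis.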

  11. Personal measures of power-frequency magnetic field exposure among men from an infertility clinic: distribution, temporal variability and correlation with their female partners' exposure

    International Nuclear Information System (INIS)

    Lewis, Ryan C.; Hauser, Russ; Maynard, Andrew D.; Neitzel, Richard L.; Meeker, John D.; Wang, Lu; Kavet, Robert; Morey, Patricia; Ford, Jennifer B.

    2016-01-01

Power-frequency magnetic field exposure science as it relates to men and couples has not been explored, despite the advantage of this information in the design and interpretation of reproductive health epidemiology studies. This analysis examined the distribution and temporal variability of exposures in men, and the correlation of exposures within couples, using data from a longitudinal study of 25 men and their female partners recruited from an infertility clinic. The average and 90th percentile demonstrated fair to good reproducibility, whereas the maximum showed poor reproducibility over repeated sampling days, each separated by a median of 4.6 weeks. Average magnetic field exposures were also strongly correlated within couples, suggesting that one partner's data could be used as a surrogate in the absence of data from the other for this metric. Environment was also an important effect modifier in these relationships. These issues should be considered in future relevant epidemiology studies. (authors)

  12. The Last Glacial Maximum in the Northern European loess belt: Correlations between loess-paleosol sequences and the Dehner Maar core (Eifel Mountains)

    Science.gov (United States)

    Zens, Joerg; Krauß, Lydia; Römer, Wolfgang; Klasen, Nicole; Pirson, Stéphane; Schulte, Philipp; Zeeden, Christian; Sirocko, Frank; Lehmkuhl, Frank

    2016-04-01

The D1 project of the CRC 806 "Our way to Europe" focuses on Central Europe as a destination of modern human dispersal out of Africa. The paleo-environmental conditions along the migration areas are reconstructed by loess-paleosol sequences and lacustrine sediments. Stratigraphy and luminescence dating provide the chronological framework for the correlation of grain size and geochemical data to large-scale climate proxies like isotope ratios and dust content of Greenland ice cores. The reliability of correlations is improved by the development of precise age models of specific marker beds. In this study, we focus on the (terrestrial) Last Glacial Maximum of the Weichselian Upper Pleniglacial, which is supposed to be dominated by high wind speeds and increasing aridity. Especially in the Lower Rhine Embayment (LRE), this period is linked to an extensive erosion event. The disconformity is followed by an intensive cryosol formation. In order to support the stratigraphical observations from the field, luminescence dating and grain size analysis were applied to three loess-paleosol sequences along the northern European loess belt to develop a more reliable chronology and to reconstruct paleo-environmental dynamics. The loess sections were compared to the newest results from heavy mineral and grain size analysis from the Dehner Maar core (Eifel Mountains) and correlated to NGRIP records. Volcanic minerals can be found in the Dehner Maar core from a visible tephra layer at 27.8 ka up to ~25 ka. They can be correlated to the Eltville Tephra found in loess sections. New quartz luminescence ages from Romont (Belgium) surrounding the tephra dated the deposition between 25.0 ± 2.3 ka and 25.8 ± 2.4 ka. In the following, heavy minerals show an increasing importance of strong easterly winds during the second Greenland dust peak (~24 ka b2k), correlating with an extensive erosion event in the LRE.
Luminescence dating on quartz bracketing the following soil formation yielded ages of

  13. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  14. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
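The noise-bias mechanism discussed above can be illustrated with a toy model (an assumption-laden sketch, not the authors' estimator): an "ellipticity" formed as the ratio of two unbiased but noisy measurements is itself a nonlinear point estimate, so its expectation picks up a bias that shrinks roughly as the inverse square of the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: "ellipticity" e = q / f estimated from noisy measurements of a
# quadrupole-like quantity q and a flux-like quantity f.  Both measurements
# are unbiased, but the ratio is a nonlinear estimator and is biased.
e_true, f_true = 0.3, 1.0
q_true = e_true * f_true

def mean_estimate(snr, n=200_000):
    """Average of the ratio estimator over n noisy realizations."""
    sigma = 1.0 / snr
    q_hat = q_true + sigma * rng.standard_normal(n)
    f_hat = f_true + sigma * rng.standard_normal(n)
    return np.mean(q_hat / f_hat)

# Leading-order theory: E[q/f] - e ~ e * sigma^2 / f^2, i.e. bias ~ 1/SNR^2.
bias_low = mean_estimate(snr=5) - e_true    # low S/N: noticeable bias
bias_high = mean_estimate(snr=50) - e_true  # high S/N: bias nearly gone
```

The low-S/N bias is two orders of magnitude larger than the high-S/N one here, which is the behaviour the abstract's second-order bias corrections are designed to remove.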

  15. Influence of epoxy resin as encapsulation material of silicon photovoltaic cells on maximum current

    Directory of Open Access Journals (Sweden)

    Acevedo-Gómez David

    2017-01-01

    Full Text Available This work presents an analysis about how the performance of silicon photovoltaic cells is influenced by the use of epoxy resin as encapsulation material with flat roughness. The effect of encapsulation on current at maximum power of mono-crystalline cell was tested indoor in a solar simulator bench at 1000 w/m² and AM1.5G. The results show that implementation of flat roughness layer onto cell surface reduces the maximum current inducing on average 2.7% less power with respect to a cell before any encapsulation. The losses of power and, in consequence, the less production of energy are explained by resin light absorption, reflection and partially neutralization of non-reflective coating.

  16. Multifractal Detrended Cross-Correlation Analysis of agricultural futures markets

    International Nuclear Information System (INIS)

    He Lingyun; Chen Shupeng

    2011-01-01

    Highlights: → We investigated cross-correlations between China's and US agricultural futures markets. → Power-law cross-correlations are found between the geographically far but correlated markets. → Multifractal features are significant in all the markets. → Cross-correlation exponent is less than averaged GHE when q 0. - Abstract: We investigated geographically far but temporally correlated China's and US agricultural futures markets. We found that there exists a power-law cross-correlation between them, and that multifractal features are significant in all the markets. It is very interesting that the geographically far markets show strong cross-correlations and share much of their multifractal structure. Furthermore, we found that for all the agricultural futures markets in our studies, the cross-correlation exponent is less than the averaged generalized Hurst exponents (GHE) when q 0.
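The detrended cross-correlation machinery behind such studies can be sketched at a single scale (a minimal DCCA cross-correlation coefficient in the spirit of Podobnik and Stanley's method, not the full multifractal q-spectrum used in the paper; the data and box size below are illustrative):

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """Detrended cross-correlation coefficient at box size s (simplified)."""
    X = np.cumsum(x - x.mean())      # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_boxes = len(X) // s
    t = np.arange(s)
    f2x = f2y = f2xy = 0.0
    for i in range(n_boxes):
        xs, ys = X[i*s:(i+1)*s], Y[i*s:(i+1)*s]
        # local linear detrending within each box
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2x += np.mean(rx**2)
        f2y += np.mean(ry**2)
        f2xy += np.mean(rx * ry)
    return f2xy / np.sqrt(f2x * f2y)

# Two synthetic series sharing a common component (true correlation 0.8)
rng = np.random.default_rng(1)
common = rng.standard_normal(4096)
x = common + 0.5 * rng.standard_normal(4096)
y = common + 0.5 * rng.standard_normal(4096)
rho = dcca_coefficient(x, y, s=64)
```

Repeating this over a range of box sizes s and fitting log F(s) against log s gives the power-law cross-correlation exponents compared with the generalized Hurst exponents in the abstract.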

  17. Extractable Work from Correlations

    Directory of Open Access Journals (Sweden)

    Martí Perarnau-Llobet

    2015-10-01

    Full Text Available Work and quantum correlations are two fundamental resources in thermodynamics and quantum information theory. In this work, we study how to use correlations among quantum systems to optimally store work. We analyze this question for isolated quantum ensembles, where the work can be naturally divided into two contributions: a local contribution from each system and a global contribution originating from correlations among systems. We focus on the latter and consider quantum systems that are locally thermal, thus from which any extractable work can only come from correlations. We compute the maximum extractable work for general entangled states, separable states, and states with fixed entropy. Our results show that while entanglement gives an advantage for small quantum ensembles, this gain vanishes for a large number of systems.

  18. Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II

    International Nuclear Information System (INIS)

    White, O.R.; Athay, R.G.

    1979-01-01

    Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of phase differences between intensity and velocity and between these two lines, formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays of 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6-7.5 km s⁻¹. In this same frequency band, near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180°, with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with properties expected for aliases of the wheel rotation rate of the spacecraft wheel section.
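The delay-from-phase logic can be checked on synthetic data: when one series is a delayed copy of another, the cross-spectrum phase at the oscillation frequency equals 2πfτ. The cadence and signal below are illustrative assumptions, not the OSO 8 values.

```python
import numpy as np

fs = 0.5          # samples per second (2 s cadence; assumed for illustration)
delay = 40.0      # seconds, like the 35-40 s delays in the abstract
f0 = 0.003        # 3 mHz oscillation
t = np.arange(4000) / fs          # 8000 s record; f0 falls exactly on a bin
a = np.sin(2 * np.pi * f0 * t)
b = np.sin(2 * np.pi * f0 * (t - delay))   # delayed copy of a

# Cross-spectrum phase at f0: angle(A * conj(B)) = 2*pi*f0*delay
A, B = np.fft.rfft(a), np.fft.rfft(b)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
k = np.argmin(np.abs(freqs - f0))
dphi = np.angle(A[k] * np.conj(B[k]))
recovered_delay = dphi / (2 * np.pi * freqs[k])   # ~40 s
```

A 35-40 s delay over the quoted 300 km line-formation separation is exactly the 8.6-7.5 km/s propagation speed cited in the abstract (300 km / 40 s = 7.5 km/s).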

  19. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

    In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (aluminium, copper, stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-pulse-repetition-frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find the optimal architecture for ultrafast and precise laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is as high as 27.8 mm³/min for aluminium, 21.4 mm³/min for copper, 15.3 mm³/min for stainless steel and 129.1 mm³/min for Al2O3 when the full available laser power is irradiated at the optimum pulse repetition frequency.

  20. Nonvolcanic tremors and their correlation with slow slip events in Mexico

    Science.gov (United States)

    Kostoglodov, V.; Shapiro, N. M.; Larson, K.; Payero, J.; Husker, A.; Santiago, L. A.; Clayton, R.; Peyrat, S.

    2009-04-01

    Significant activity of nonvolcanic tremor (NVT) has been observed in the central Mexico (Guerrero) subduction zone since 2001, when continuous seismic records became available. Although the quality of these records is poor, it is possible to estimate a temporal variation of energy in the range of 1-2 Hz (best signal/noise ratio for the NVT), which clearly indicates the maxima of NVT energy release (En) during the 2001-2002 and 2006 large aseismic slow slip events (SSE) registered by a GPS network. In particular, En is higher for the 2001-2002 SSE, which had larger surface displacements and extension than the 2006 SSE. A more detailed and accurate study of NVT activity was carried out using the data collected during the MASE experiment in Mexico. MASE consisted of 100 broadband seismometers in operation for ~2.5 years (2005-2007) along a profile oriented SSW-NNE from Acapulco, crossing over the subduction zone for a distance of ~500 km. Epicenters and depths of individual tremor events determined using the envelope cross-correlation technique have rather large uncertainties, partly originating from the essentially 2D geometry of the network. The "energy" approach is more efficient in this case because it provides an average NVT activity evolution in time and space. The data processing consists of a band-pass (1-2 Hz) filter applied to the raw 100 Hz sampled N-S component records, application of a 10-min-width median filter to eliminate the effect of local seismic events and noise, integration of the energy, and normalization of daily En using an average coda amplitude from several regional earthquakes of M~5. A time-space distribution of En reveals a strong correlation between NVT energy release and the 2006 SSE, which also replicates the two-phase character of this slow event and a migration of the slow slip maximum from north to south.
There are also a few clear episodes of relatively high NVT energy release that do not correspond to any significant geodetic signal in GPS

  1. Nonvolcanic Tremor Activity is Highly Correlated With Slow Slip Events, Mexico

    Science.gov (United States)

    Kostoglodov, V.; Shapiro, N.; Larson, K. M.; Payero, J. S.; Husker, A.; Santiago, L. A.; Clayton, R. W.

    2008-12-01

    Significant activity of nonvolcanic tremor (NVT) has been observed in the central Mexico (Guerrero) subduction zone since 2001, when continuous seismic records became available. Although the quality of these records is poor, it is possible to estimate a temporal variation of energy in the range of 1-2 Hz (best signal/noise ratio for the NVT). These clearly indicate a maximum of NVT energy release (En) during the 2001-2002 and 2006 large aseismic slow slip events (SSE) registered by the Guerrero GPS network. In particular, En is higher for the 2001-2002 SSE, which had larger surface displacements and extension than the 2006 SSE. A more detailed and accurate study of NVT activity was carried out using the data collected during the MASE experiment in Mexico. MASE consisted of 100 broadband seismometers in operation for ~2.5 years (2005-2007) along a profile oriented SSW-NNE from Acapulco, crossing over the subduction zone for a distance of ~500 km. Epicenters and depths of individual tremor events determined using the envelope cross-correlation technique have rather large uncertainties, partly originating from the essentially 2D geometry of the network. The 'energy' approach is more efficient in this case because it provides an average NVT activity evolution in time and space. The data processing consists of a band-pass (1-2 Hz) filter applied to the raw 100 Hz sampled N-S component records, application of a 10-min-width median filter to eliminate the effect of local seismic events and noise, and integration of the energy and normalization of daily En using an average coda amplitude from several regional earthquakes of M~5. A time-space distribution of En reveals a strong correlation between NVT energy release and the 2006 SSE, which also replicates the two-phase character of this slow event and a migration of the slow slip maximum from North to South. There are also a few clear episodes of relatively high NVT energy release that do not correspond to any significant geodetic

  2. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data, where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
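Steps (i)-(iii) of the rank-correlation recipe can be sketched for the plain i.i.d. case (the paper's actual contribution, handling clustering through mixed effects models, is not reproduced here; the data are simulated for illustration):

```python
import numpy as np
from scipy.stats import norm, spearmanr

rng = np.random.default_rng(3)
rho, n = 0.6, 20_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x, y = z[:, 0], z[:, 1]

def probit_scores(v):
    """(i) convert values to ranks, then ranks to the probit scale."""
    ranks = np.argsort(np.argsort(v)) + 1
    return norm.ppf((ranks - 0.5) / len(v))

px, py = probit_scores(x), probit_scores(y)
r_probit = np.corrcoef(px, py)[0, 1]              # (ii) Pearson on probits
# (iii) Pearson -> Spearman for bivariate normal: r_s = (6/pi) asin(rho/2)
r_rank = (6 / np.pi) * np.arcsin(r_probit / 2)

r_direct, _ = spearmanr(x, y)                     # sanity check
```

For ρ = 0.6 the bivariate-normal relation predicts a rank correlation of about 0.58, and the two estimates agree closely on a sample this size.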

  3. Formal comment on: Myhrvold (2016) Dinosaur metabolism and the allometry of maximum growth rate. PLoS ONE; 11(11): e0163205.

    Science.gov (United States)

    Griebeler, Eva Maria; Werner, Jan

    2018-01-01

    In his 2016 paper, Myhrvold criticized our 2014 paper on maximum growth rates (Gmax, the maximum gain in body mass observed within a time unit throughout an individual's ontogeny) and thermoregulation strategies (ectothermy, endothermy) of 17 dinosaurs. In our paper, we showed that Gmax values of similar-sized extant ectothermic and endothermic vertebrates overlap. This strongly questions a correct assignment of a thermoregulation strategy to a dinosaur based only on its Gmax and (adult) body mass (M). In contrast, Gmax separated similar-sized extant reptiles and birds (Sauropsida), and Gmax values of the dinosaurs we studied were similar to those seen in extant, similar-sized (if necessary scaled-up), fast-growing ectothermic reptiles. Myhrvold examined two hypotheses (H1 and H2) regarding our study. However, we neither inferred dinosaurian thermoregulation strategies from group-wide averages (H1) nor based our results on the assumption that Gmax and metabolic rate (MR) are related (H2). In order to assess whether single dinosaurian Gmax values fit those of extant endotherms (birds) or of ectotherms (reptiles), we had already used a method suggested by Myhrvold to avoid H1, and we only discussed the pros and cons of a relation between Gmax and MR and did not apply it (H2). We appreciate Myhrvold's efforts in eliminating the correlation between Gmax and M in order to statistically improve vertebrate scaling regressions on maximum gain in body mass. However, we show here that his mass-specific maximum growth rate (kC) replacing Gmax (Gmax = M·kC) does not model the expected higher mass gain in larger than in smaller species for any set of species. We also comment on why we considered extant reptiles and birds as reference models for extinct dinosaurs and why we used phylogenetically-informed regression analysis throughout our study. Finally, we question several arguments given by Myhrvold in support of his results.

  4. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  5. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and, in particular, significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  6. Quantum games with correlated noise

    International Nuclear Information System (INIS)

    Nawaz, Ahmad; Toor, A H

    2006-01-01

    We analyse quantum games with correlated noise through a generalized quantization scheme. Four different combinations on the basis of entanglement of initial quantum state and the measurement basis are analysed. It is shown that the quantum player only enjoys an advantage over the classical player when both the initial quantum state and the measurement basis are in entangled form. Furthermore, it is shown that for maximum correlation the effects of decoherence diminish and it behaves as a noiseless game

  7. A mesic maximum in biological water use demarcates biome sensitivity to aridity shifts.

    Science.gov (United States)

    Good, Stephen P; Moore, Georgianne W; Miralles, Diego G

    2017-12-01

    Biome function is largely governed by how efficiently available resources can be used and yet for water, the ratio of direct biological resource use (transpiration, E_T) to total supply (annual precipitation, P) at ecosystem scales remains poorly characterized. Here, we synthesize field, remote sensing and ecohydrological modelling estimates to show that the biological water use fraction (E_T/P) reaches a maximum under mesic conditions; that is, when evaporative demand (potential evapotranspiration, E_P) slightly exceeds supplied precipitation. We estimate that this mesic maximum in E_T/P occurs at an aridity index (defined as E_P/P) between 1.3 and 1.9. The observed global average aridity of 1.8 falls within this range, suggesting that the biosphere is, on average, configured to transpire the largest possible fraction of global precipitation for the current climate. A unimodal E_T/P distribution indicates that both dry regions subjected to increasing aridity and humid regions subjected to decreasing aridity will suffer declines in the fraction of precipitation that plants transpire for growth and metabolism. Given the uncertainties in the prediction of future biogeography, this framework provides a clear and concise determination of ecosystems' sensitivity to climatic shifts, as well as expected patterns in the amount of precipitation that ecosystems can effectively use.

  8. Maximum Aerobic Capacity of Underground Coal Miners in India

    Directory of Open Access Journals (Sweden)

    Ratnadeep Saha

    2011-01-01

    Full Text Available Miners' fitness was assessed in terms of maximum aerobic capacity, determined by an indirect method following a standard step-test protocol before going down to the mine, taking into consideration the heart rates (telemetric recording) and oxygen consumption of the subjects (Oxylog-II) during exercise at different working rates. Maximal heart rate was derived as 220−age. Coal miners showed a maximum aerobic capacity within a range of 35-38.3 mL/kg/min. The oldest miners (50-59 yrs) had the lowest maximal oxygen uptake (34.2±3.38 mL/kg/min) compared to the youngest group (20-29 yrs; 42.4±2.03 mL/kg/min). Maximum aerobic capacity was found to be negatively correlated with age (r=−0.55 and −0.33 for the younger and older groups, respectively) and directly associated with the body weight of the subjects (r=0.57-0.68, P≤0.001). Carriers showed the highest cardiorespiratory capacity compared to other miners. Indian miners' VO2max was found to be lower than that of both their mining counterparts abroad and various non-mining occupational groups in India.
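The indirect submaximal method described (heart rate and oxygen consumption at several working rates, extrapolated to the age-predicted maximal heart rate of 220−age) can be sketched as a linear extrapolation. The workloads and readings below are illustrative numbers, not the study's data:

```python
import numpy as np

# VO2 rises roughly linearly with heart rate across submaximal workloads,
# so fit the submaximal points and extrapolate to the predicted HRmax.
age = 35
hr_max = 220 - age                           # age-predicted maximal heart rate

hr_submax = np.array([110.0, 130.0, 150.0])   # bpm at three step rates
vo2_submax = np.array([15.0, 21.0, 27.0])     # mL/kg/min (illustrative)

slope, intercept = np.polyfit(hr_submax, vo2_submax, 1)
vo2_max = slope * hr_max + intercept          # extrapolated aerobic capacity
```

With these illustrative readings the extrapolated VO2max is 37.5 mL/kg/min, inside the 35-38.3 mL/kg/min range the abstract reports for the miners.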

  9. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    International Nuclear Information System (INIS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-01-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics. (paper)

  10. Correlation of Periodontal Disease With Inflammatory Arthritis in the Time Before Modern Medical Intervention.

    Science.gov (United States)

    Rothschild, Bruce

    2017-03-01

    Controversy exists regarding possible correlation of periodontal disease with rheumatoid arthritis (RA) and ankylosing spondylitis (AS). Confounding factors may relate to stringency of inflammatory disease diagnosis and the effect of therapeutic intervention for RA on periodontal disease. These factors are investigated in this study. Forty-five individuals with documented RA (n = 15), spondyloarthropathy (n = 15), and calcium pyrophosphate deposition disease (CPPD) (n = 15), from the Hamann-Todd collection of human skeletons compiled from 1912 to 1938, and 15 individuals contemporarily incorporated in the collection were examined for tooth loss, cavity occurrence, average and maximum lingual and buccal depth of space between tooth and bone, periosteal reaction, serpentine bone resorption, abscess formation, and root penetration of the bone surface and analyzed by analysis of variance. Tooth loss was common, but actual number of teeth lost, cavity occurrence, average and maximum lingual and buccal depth of space between tooth and bone, periosteal reaction, serpentine grooving surrounding teeth (considered a sign of inflammation), abscess formation, and root exposure (penetration of bone surface) were indistinguishable among controls and individuals with RA, spondyloarthropathy, and CPPD. Although many factors can affect periodontal disease, presence of inflammatory arthritis does not appear to be one of them. The implication is that dental disease was common in the general population and not necessarily associated with arthritis, at least before the advent of modern rheumatologic medications. As specific diagnosis did not affect prevalence, perhaps current prevalence controversy may relate to current intervention, a subject for further study.

  11. Spin chain model for correlated quantum channels

    Energy Technology Data Exchange (ETDEWEB)

    Rossini, Davide [International School for Advanced Studies SISSA/ISAS, via Beirut 2-4, I-34014 Trieste (Italy); Giovannetti, Vittorio; Montangero, Simone [NEST-CNR-INFM and Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)], E-mail: monta@sns.it

    2008-11-15

    We analyze the quality of the quantum information transmission along a correlated quantum channel by studying the average fidelity between input and output states and the average output purity, giving bounds for the entropy of the channel. Noise correlations in the channel are modeled by the coupling of each channel use with an element of a one-dimensional interacting quantum spin chain. Criticality of the environment chain is seen to emerge in the changes of the fidelity and of the purity.

  12. Factors determining the average body size of geographically separated Arctodiaptomus salinus (Daday, 1885) populations

    OpenAIRE

    Anufriieva, Elena V.; Shadrin, Nickolai V.

    2014-01-01

    Arctodiaptomus salinus inhabits water bodies across Eurasia and North Africa. Based on our own data and data from the literature, we analyzed the influences of several factors on the intra- and inter-population variability of this species. A strong negative linear correlation between temperature and average body size in the Crimean and African populations was found, in which the parameters might be influenced by salinity. Meanwhile, a significant negative correlation between female body size a...

  13. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and avoids playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  14. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a Loss-of-Coolant Accident (LOCA). The expression proposed is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are of the order of ±12%. (author) [pt

  15. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile patterns of ¹⁴C-photosynthate along the transporting pathway in crops were monitored by pulse-labelling a mature leaf with ¹⁴CO₂. The progressive spreading of the translocation profile pattern along the sheath or stem indicates that the translocation of photosynthate proceeds with a range of velocities rather than with just a single velocity. A method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops. The weighted average velocity and the maximum velocity of photosynthate translocation along the sheath in rice and maize were actually measured. (4 figs., 3 tabs.)
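One way to picture a weighted average translocation velocity (a hedged sketch of the general idea, not the authors' established method) is to track the activity-weighted centroid of a ¹⁴C profile between two snapshots; its displacement rate is a weighted mean velocity of the labelled photosynthate:

```python
import numpy as np

positions = np.linspace(0.0, 20.0, 41)      # cm along the transport path

def centroid(activity):
    """Activity-weighted mean position of the tracer profile."""
    return np.sum(positions * activity) / np.sum(activity)

# Illustrative Gaussian-like activity profiles 10 min apart: the pulse both
# spreads (range of velocities) and moves downstream (mean velocity).
a_t1 = np.exp(-(((positions - 4.0) / 2.0) ** 2))
a_t2 = np.exp(-(((positions - 7.0) / 3.0) ** 2))

dt_min = 10.0
v_weighted = (centroid(a_t2) - centroid(a_t1)) / dt_min   # cm/min
```

The spreading of the profile is exactly why a single velocity is not meaningful and a weighted average is needed.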

  16. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  17. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage at maximum power, the current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
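The maximize-P = V·I idea can be sketched numerically with a textbook single-diode cell model (all parameters below are illustrative assumptions, and the maximum is located by a dense scan standing in for setting dP/dV = 0 symbolically):

```python
import numpy as np

# Illustrative single-diode panel: 36 cells in series
I_sc = 3.0              # short-circuit current, A
I_0 = 1e-9              # diode saturation current, A
V_t = 0.0257 * 36       # thermal voltage times cell count, V

def current(v):
    """Panel current from the single-diode equation I = Isc - I0*(e^{V/Vt}-1)."""
    return I_sc - I_0 * (np.exp(v / V_t) - 1.0)

v = np.linspace(0.0, 21.5, 100_000)   # sweep past open-circuit voltage
p = v * current(v)                    # power curve P(V)
k = np.argmax(p)                      # where dP/dV crosses zero
v_mp, i_mp, p_max = v[k], current(v[k]), p[k]
```

The maximum sits below the open-circuit voltage, where the falling current and rising voltage trade off; repeating the scan for each time of day's irradiance gives the plotted curves the abstract describes.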

  18. An investigation of rugby scrimmaging posture and individual maximum pushing force.

    Science.gov (United States)

    Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen

    2007-02-01

    Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in the field of sports medicine, little research has examined the factors associated with the performance of game-specific skills directly by using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights × 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles in different body postures and explore the relationship between these angle values and the individual maximum pushing force. Ten national rugby players were invited to participate in the examination. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that the foot positions (parallel and nonparallel) do not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than that in posture II (38% body height). In addition, it was determined that the hip, knee, and ankle angles under parallel foot positioning are closely negatively related to the maximum pushing force in scrimmaging. In cross-feet postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we can conclude that if the player stands in an appropriate starting position at the early stage of scrimmaging, it will benefit forward force production.

  19. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  20. The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity

    Science.gov (United States)

    Shaltout, M.

    It is proved from the last 22 years of observations of the total solar irradiance (TSI) from space by artificial satellites that TSI shows negative correlation with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but shows positive correlation with the same activity from year to year (on the basis of the annual average of each). Also, the solar constant estimated from ground stations for beam solar radiation observations during the 20th century coincides with the phases of the 11-year cycles. It is known from sunspot observations (250 years), and from C14 analysis, that there are other long-term cycles of solar activity longer than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding, where there are periodicities in the Nile flooding similar to those of solar activity, from the analysis of about 1300 years of Nile level observations at Cairo. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on the climatic changes in the East African tropics. The civilization of the ancient Egyptians was highly correlated with the Nile flooding, where the river Nile was, and still is, the source of life in the Valley and Delta inside a highly arid desert area. The study depends on long-term historical data for Carbon-14 (more than five thousand years) and a chronological scan of all the elements of the ancient Egyptian civilization from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, one of the brilliant periods, corresponds to maximum solar activity, whereas the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity.
The decline

  1. High accurate volume holographic correlator with 4000 parallel correlation channels

    Science.gov (United States)

    Ni, Kai; Qu, Zongyao; Cao, Liangcai; Su, Ping; He, Qingsheng; Jin, Guofan

    2008-03-01

A volume holographic correlator (VHC) simultaneously calculates the two-dimensional inner product between the input image and each stored image. We have recently experimentally implemented 4000 parallel correlation channels with better than 98% output accuracy at a single location in a crystal. Speckle modulation is used to suppress the sidelobes of the correlation patterns, allowing more correlation spots to be contained in the output plane. A modified exposure schedule is designed to ensure that the hologram in each channel has unity diffraction efficiency. In this schedule, a restricted coefficient was introduced into the original exposure schedule to address the fact that the sensitivity and time constant of the crystal change as a function of time in high-capacity storage. An interleaving method is proposed to improve the output accuracy. By unifying the distribution of the input and stored image patterns without changing the inner products between them, this method eliminates the impact of correlation-pattern variety on the calculated inner product values. Moreover, by using this method, the maximum correlation spot size is reduced, which decreases the required minimum safe clearance between neighboring spots in the output plane, allowing more spots to be detected in parallel without crosstalk. The experimental results are given and analyzed.

  2. Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years

    Science.gov (United States)

    Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.

    2013-05-01

Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the last 31-year trends is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step, in order to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for the mentioned period, the maximum and minimum temperature trends calculated by least-squares linear regression and their statistical significance are obtained and discussed.
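    The trend step described above (a least-squares linear fit with a significance test) can be sketched as follows. The series here is synthetic, made up for illustration, not the SMN/NARR data.

```python
import numpy as np
from scipy import stats

def temperature_trend(years, temps):
    """Least-squares linear trend (deg C per year) and its two-sided p-value."""
    result = stats.linregress(years, temps)
    return result.slope, result.pvalue

# Synthetic monthly-mean series with a 0.03 deg C/year warming trend plus
# small deterministic "noise"
years = np.arange(1979, 2010)
temps = 15.0 + 0.03 * (years - years[0]) + 0.1 * np.sin(years)
slope, pvalue = temperature_trend(years, temps)
```

    In practice the fit is repeated for every grid cell and month, and cells with p-values below a chosen threshold are reported as statistically significant trends.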

  3. Maximum temperature accounts for annual soil CO2 efflux in temperate forests of Northern China

    Science.gov (United States)

    Zhou, Zhiyong; Xu, Meili; Kang, Fengfeng; Jianxin Sun, Osbert

    2015-01-01

Exploring the correlations of soil respiration with different properties of soil temperature helps clarify how representative each temperature measure is. Soil temperature at 10 cm depth was logged hourly over twelve months. Based on the measured soil temperature, soil respiration at different temporal scales was calculated using empirical functions for temperate forests. On the monthly scale, soil respiration correlated significantly with maximum, minimum, mean, and accumulated effective soil temperatures. Annual soil respiration varied from 409 g C m−2 in coniferous forest to 570 g C m−2 in mixed forest and 692 g C m−2 in broadleaved forest, and was markedly explained by mean soil temperatures of the warmest day, July, and summer, respectively. These three soil temperatures reflect the maximum values on the diurnal, monthly, and annual scales. In accordance with their higher temperatures, summer soil respiration accounted for 51% of annual soil respiration across forest types; broadleaved forest also had higher soil organic carbon content (SOC) and soil microbial biomass carbon content (SMBC), but a lower contribution of SMBC to SOC. This adds proof to the findings that maximum soil temperature may accelerate the transformation of SOC to CO2-C by stimulating the activities of soil microorganisms. PMID:26179467

  4. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  5. Correlation dimension of financial market

    Science.gov (United States)

    Nie, Chun-Xiao

    2017-05-01

    In this paper, correlation dimension is applied to financial data analysis. We calculate the correlation dimensions of some real market data and find that the dimensions are significantly smaller than those of the simulation data based on geometric Brownian motion. Based on the analysis of the Chinese and US stock market data, the main results are as follows. First, by calculating three data sets for the Chinese and US market, we find that large market volatility leads to a significant decrease in the dimensions. Second, based on 5-min stock price data, we find that the Chinese market dimension is significantly larger than the US market; this shows a significant difference between the two markets for high frequency data. Third, we randomly extract stocks from a stock set and calculate the correlation dimensions, and find that the average value of these dimensions is close to the dimension of the original set. In addition, we analyse the intuitional meaning of the relevant dimensions used in this paper, which are directly related to the average degree of the financial threshold network. The dimension measures the speed of the average degree that varies with the threshold value. A smaller dimension means that the rate of change is slower.
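    A minimal Grassberger-Procaccia-style sketch of a correlation-dimension estimate (the slope of log C(r) against log r, where C(r) is the correlation integral). The embedding parameters and radius range below are illustrative choices, not those of the paper.

```python
import numpy as np

def correlation_dimension(x, m=2, tau=1, n_radii=10):
    """Estimate the correlation dimension of a scalar series via delay
    embedding: C(r) is the fraction of point pairs closer than r, and the
    dimension is the slope of log C(r) versus log r at small radii."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    diffs = emb[:, None, :] - emb[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))[np.triu_indices(n, k=1)]
    # Fit over small radii (2nd to 25th percentile of pair distances)
    radii = np.logspace(np.log10(np.percentile(dists, 2)),
                        np.log10(np.percentile(dists, 25)), n_radii)
    C = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# A monotone ramp embeds onto a line, so its estimated dimension is near 1
dim = correlation_dimension(np.linspace(0.0, 1.0, 800))
```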

  6. Spatial correlations in compressible granular flows

    NARCIS (Netherlands)

    van Noije, T.P.C.; Ernst, M.H.; Brito, R.

    The clustering instability in freely evolving granular fluids manifests itself in the density-density correlation function and structure factor. These functions are calculated from fluctuating hydrodynamics. As time increases, the structure factor of density fluctuations develops a maximum, which

  7. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
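    As a toy illustration of fitting a time-series model to one long series (T large, N = 1), here is a conditional maximum-likelihood AR(1) fit in plain NumPy. The paper's raw-data ML SEM machinery handles general ARMA(p, q) models, which this sketch does not.

```python
import numpy as np

def ar1_conditional_mle(y):
    """Conditional ML estimate of an AR(1) model y_t = phi * y_{t-1} + e_t.
    For Gaussian errors this coincides with least squares on lagged pairs."""
    y0, y1 = y[:-1], y[1:]
    phi = (y0 * y1).sum() / (y0 * y0).sum()
    sigma2 = np.mean((y1 - phi * y0) ** 2)
    return phi, sigma2

# One simulated "case" with many time points (T = 500, true phi = 0.6)
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
y = np.empty(500)
y[0] = e[0]
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e[t]
phi_hat, sigma2_hat = ar1_conditional_mle(y)
```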

  8. Graph-theoretic approach to quantum correlations.

    Science.gov (United States)

    Cabello, Adán; Severini, Simone; Winter, Andreas

    2014-01-31

    Correlations in Bell and noncontextuality inequalities can be expressed as a positive linear combination of probabilities of events. Exclusive events can be represented as adjacent vertices of a graph, so correlations can be associated to a subgraph. We show that the maximum value of the correlations for classical, quantum, and more general theories is the independence number, the Lovász number, and the fractional packing number of this subgraph, respectively. We also show that, for any graph, there is always a correlation experiment such that the set of quantum probabilities is exactly the Grötschel-Lovász-Schrijver theta body. This identifies these combinatorial notions as fundamental physical objects and provides a method for singling out experiments with quantum correlations on demand.
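    For small exclusivity graphs the classical bound (the independence number) can be found by brute force. The pentagon below is the exclusivity graph associated with the KCBS noncontextuality inequality; its independence number is 2, while the quantum (Lovász) value is √5 ≈ 2.236.

```python
from itertools import combinations

def independence_number(n, edges):
    """Size of a maximum independent set of a graph on vertices 0..n-1
    (exponential-time brute force; fine only for toy exclusivity graphs)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return k  # largest k with an edge-free vertex subset
    return 0

# Pentagon (5-cycle): classical bound of the KCBS correlation expression
pentagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = independence_number(5, pentagon)
```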

  9. Relationship between maximum dynamic force of inferior members and body balance in strength training apprentices

    Directory of Open Access Journals (Sweden)

    Ariane Martins

    2010-08-01

Full Text Available The relationship between force and balance shows controversial results and has direct implications for exercise prescription practice. The objective was to investigate the relationship between the maximum dynamic force (MDF) of the lower limbs and static and dynamic balance. Sixty individuals aged 18 to 24 years, strength training apprentices, participated in the study. The MDF was assessed by the one-repetition maximum (1RM) test in the leg press and knee extension exercises, and motor tests assessed static and dynamic balance. Correlation tests and multiple linear regression were applied. The force and balance variables showed correlation in females (p=0.038). Body mass and static balance showed correlation for males (p=0.045). The explanatory capacity of MDF and practice time was small: 13% for static balance in males, and 18% and 17%, respectively, for static and dynamic balance in females. In conclusion, the MDF of the lower limbs showed low predictive capacity for performance in static and dynamic balance, especially for males.

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  11. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps which is found to successfully pass stringent statistical tests of randomness.
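    A crude way to see finite-precision periodicity is to round each logistic-map iterate to a fixed number of decimal digits and measure the eventual cycle length. This is a sketch of the phenomenon only, not of the paper's switching scheme, and decimal rounding is a stand-in for machine precision.

```python
def cycle_length(x0, digits, r=4.0, max_iter=1_100_000):
    """Length of the eventual cycle of x -> r*x*(1-x) when every iterate
    is rounded to `digits` decimal places (a toy finite-precision model).
    The state space is finite, so a cycle is always reached."""
    seen = {}
    x = round(x0, digits)
    for i in range(max_iter):
        if x in seen:
            return i - seen[x]  # steps since the first visit = cycle length
        seen[x] = i
        x = round(r * x * (1.0 - x), digits)
    return None

# Coarser precision has fewer reachable states, hence (on average over
# initial conditions) shorter periods, consistent with T ~ eps^(-d/2)
short = cycle_length(0.123, digits=3)
longer = cycle_length(0.123, digits=6)
```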

  12. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
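    The gap between the two averaging methods is easy to reproduce on a synthetic, asymmetric daily temperature curve (the values below are made up for illustration):

```python
import numpy as np

hours = np.arange(24)
# Asymmetric daily cycle: a narrow warm afternoon peak over a cool baseline
temps = 20.0 + 8.0 * np.exp(-((hours - 15.0) / 5.0) ** 2)

twice_daily = (temps.max() + temps.min()) / 2.0   # (Tmax + Tmin) / 2
hourly = temps.mean()                             # mean of 24 hourly values
difference = twice_daily - hourly                 # nonzero when curve is asymmetric
```

    For a perfectly symmetric sinusoidal day the two methods agree; the more peaked or skewed the curve, the larger the difference, which is why the study can regress this difference on surface and atmosphere variables.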

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  14. Measurement of the average B hadron lifetime in Z0 decays using reconstructed vertices

    International Nuclear Information System (INIS)

    Abe, K.; Abt, I.; Ahn, C.J.; Akagi, T.; Allen, N.J.; Ash, W.W.; Aston, D.; Baird, K.G.; Baltay, C.; Band, H.R.; Barakat, M.B.; Baranko, G.; Bardon, O.; Barklow, T.; Bazarko, A.O.; Ben-David, R.; Benvenuti, A.C.; Bilei, G.M.; Bisello, D.; Blaylock, G.; Bogart, J.R.; Bolton, T.; Bower, G.R.; Brau, J.E.; Breidenbach, M.; Bugg, W.M.; Burke, D.; Burnett, T.H.; Burrows, P.N.; Busza, W.; Calcaterra, A.; Caldwell, D.O.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Church, E.; Cohn, H.O.; Coller, J.A.; Cook, V.; Cotton, R.; Cowan, R.F.; Coyne, D.G.; D'Oliveira, A.; Damerell, C.J.S.; Daoudi, M.; De Sangro, R.; De Simone, P.; Dell'Orso, R.; Dima, M.; Du, P.Y.C.; Dubois, R.; Eisenstein, B.I.; Elia, R.; Falciai, D.; Fan, C.; Fero, M.J.; Frey, R.; Furuno, K.; Gillman, T.; Gladding, G.; Gonzalez, S.; Hallewell, G.D.; Hart, E.L.; Hasegawa, Y.; Hedges, S.; Hertzbach, S.S.; Hildreth, M.D.; Huber, J.; Huffer, M.E.; Hughes, E.W.; Hwang, H.; Iwasaki, Y.; Jackson, D.J.; Jacques, P.; Jaros, J.; Johnson, A.S.; Johnson, J.R.; Johnson, R.A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Kang, H.J.; Karliner, I.; Kawahara, H.; Kendall, H.W.; Kim, Y.; King, M.E.; King, R.; Kofler, R.R.; Krishna, N.M.; Kroeger, R.S.; Labs, J.F.; Langston, M.; Lath, A.; Lauber, J.A.; Leith, D.W.G.S.; Liu, M.X.; Liu, X.; Loreti, M.; Lu, A.; Lynch, H.L.; Ma, J.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T.W.; Maruyama, T.; Massetti, R.; Masuda, H.; Mazzucato, E.; McKemey, A.K.; Meadows, B.T.; Messner, R.; Mockett, P.M.; Moffeit, K.C.; Mours, B.; Mueller, G.; Muller, D.; Nagamine, T.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Ohnishi, Y.; Osborne, L.S.; Panvini, R.S.; Park, H.; Pavel, T.J.; Peruzzi, I.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K.T.; Plano, R.J.; Prepost, R.; Prescott, C.Y.; Punkar, G.D.; Quigley, J.; Ratcliff, B.N.; Reeves, T.W.; Reidy, J.; Rensing, P.E.; Rochester, L.S.; Rothberg, J.E.; Rowson, P.C.; Russell, J.J.

    1995-01-01

We report a measurement of the average B hadron lifetime using data collected with the SLD detector at the SLAC Linear Collider in 1993. An inclusive analysis selected three-dimensional vertices with B hadron lifetime information in a sample of 50×10^3 Z^0 decays. A lifetime of 1.564±0.030(stat)±0.036(syst) ps was extracted from the decay length distribution of these vertices using a binned maximum likelihood method. copyright 1995 The American Physical Society
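    A bare-bones sketch of a binned maximum-likelihood lifetime fit on an exponential proper-time distribution. The real analysis works on decay lengths with resolution and acceptance effects, all of which are omitted here; the numbers are simulated.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def binned_lifetime_fit(counts, edges):
    """Binned ML fit of an exponential mean lifetime tau: multinomial
    likelihood over proper-time bins, no detector effects."""
    def nll(tau):
        # Expected fraction per bin for an exponential with mean tau
        p = np.exp(-edges[:-1] / tau) - np.exp(-edges[1:] / tau)
        p = p / p.sum()
        return -(counts * np.log(p)).sum()
    return minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x

# Simulated proper times with true tau = 1.56 ps, binned over [0, 8] ps
rng = np.random.default_rng(1)
times = rng.exponential(1.56, size=50_000)
edges = np.linspace(0.0, 8.0, 41)
counts, _ = np.histogram(times, bins=edges)
tau_hat = binned_lifetime_fit(counts, edges)
```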

  15. A hybrid correlation analysis with application to imaging genetics

    Science.gov (United States)

    Hu, Wenxing; Fang, Jian; Calhoun, Vince D.; Wang, Yu-Ping

    2018-03-01

    Investigating the association between brain regions and genes continues to be a challenging topic in imaging genetics. Current brain region of interest (ROI)-gene association studies normally reduce data dimension by averaging the value of voxels in each ROI. This averaging may lead to a loss of information due to the existence of functional sub-regions. Pearson correlation is widely used for association analysis. However, it only detects linear correlation whereas nonlinear correlation may exist among ROIs. In this work, we introduced distance correlation to ROI-gene association analysis, which can detect both linear and nonlinear correlations and overcome the limitation of averaging operations by taking advantage of the information at each voxel. Nevertheless, distance correlation usually has a much lower value than Pearson correlation. To address this problem, we proposed a hybrid correlation analysis approach, by applying canonical correlation analysis (CCA) to the distance covariance matrix instead of directly computing distance correlation. Incorporating CCA into distance correlation approach may be more suitable for complex disease study because it can detect highly associated pairs of ROI and gene groups, and may improve the distance correlation level and statistical power. In addition, we developed a novel nonlinear CCA, called distance kernel CCA, which seeks the optimal combination of features with the most significant dependence. This approach was applied to imaging genetic data from the Philadelphia Neurodevelopmental Cohort (PNC). Experiments showed that our hybrid approach produced more consistent results than conventional CCA across resampling and both the correlation and statistical significance were increased compared to distance correlation analysis. Further gene enrichment analysis and region of interest (ROI) analysis confirmed the associations of the identified genes with brain ROIs. 
Therefore, our approach provides a powerful tool for finding
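    A compact NumPy sketch of sample distance correlation for one-dimensional variables, following the double-centering recipe of Székely et al.; it picks up the nonlinear dependence that Pearson correlation misses, which is the motivation given in the abstract.

```python
import numpy as np

def _double_centered(a):
    """Double-centered pairwise distance matrix of a 1-D sample."""
    d = np.abs(a[:, None] - a[None, :])
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation: zero (empirically) iff independent."""
    A, B = _double_centered(x), _double_centered(y)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

x = np.linspace(-1.0, 1.0, 201)
y = x ** 2                          # purely nonlinear relation
pearson = np.corrcoef(x, y)[0, 1]   # ~0: Pearson sees nothing
dcor = distance_correlation(x, y)   # clearly positive
```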

  16. Spatially resolved vertical vorticity in solar supergranulation using helioseismology and local correlation tracking

    Science.gov (United States)

    Langfellner, J.; Gizon, L.; Birch, A. C.

    2015-09-01

    Flow vorticity is a fundamental property of turbulent convection in rotating systems. Solar supergranules exhibit a preferred sense of rotation, which depends on the hemisphere. This is due to the Coriolis force acting on the diverging horizontal flows. We aim to spatially resolve the vertical flow vorticity of the average supergranule at different latitudes, both for outflow and inflow regions. To measure the vertical vorticity, we use two independent techniques: time-distance helioseismology (TD) and local correlation tracking of granules in intensity images (LCT) using data from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). Both maps are corrected for center-to-limb systematic errors. We find that 8 h TD and LCT maps of vertical vorticity are highly correlated at large spatial scales. Associated with the average supergranule outflow, we find tangential (vortical) flows that reach about 10 m s-1 in the clockwise direction at 40° latitude. In average inflow regions, the tangential flow reaches the same magnitude, but in the anticlockwise direction. These tangential velocities are much smaller than the radial (diverging) flow component (300 m s-1 for the average outflow and 200 m s-1 for the average inflow). The results for TD and LCT as measured from HMI are in excellent agreement for latitudes between -60° and 60°. From HMI LCT, we measure the vorticity peak of the average supergranule to have a full width at half maximum of about 13 Mm for outflows and 8 Mm for inflows. This is larger than the spatial resolution of the LCT measurements (about 3 Mm). On the other hand, the vorticity peak in outflows is about half the value measured at inflows (e.g., 4 × 10-6 s-1 clockwise compared to 8 × 10-6 s-1 anticlockwise at 40° latitude). Results from the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO) obtained in 2010 are biased compared to the HMI/SDO results for the same period

  17. Psychological Correlates of University Students' Academic Performance: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Richardson, Michelle; Abraham, Charles; Bond, Rod

    2012-01-01

    A review of 13 years of research into antecedents of university students' grade point average (GPA) scores generated the following: a comprehensive, conceptual map of known correlates of tertiary GPA; assessment of the magnitude of average, weighted correlations with GPA; and tests of multivariate models of GPA correlates within and across…
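    Pooling correlations across studies is commonly done on Fisher's z scale with sample-size weights. The sketch below is a generic meta-analytic pooling step, not necessarily the review's exact weighting scheme, and the study values are hypothetical.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Sample-size-weighted mean correlation: transform to Fisher's z,
    weight by n - 3 (the inverse variance of z), then back-transform."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)
    w = ns - 3.0
    return np.tanh((w * z).sum() / w.sum())

# Three hypothetical studies correlating a predictor with GPA
r_bar = pooled_correlation([0.30, 0.45, 0.20], [120, 200, 80])
```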

  18. Eigenstructures of MIMO Fading Channel Correlation Matrices and Optimum Linear Precoding Designs for Maximum Ergodic Capacity

    Directory of Open Access Journals (Sweden)

    Hamid Reza Bahrami

    2007-01-01

    Full Text Available The ergodic capacity of MIMO frequency-flat and -selective channels depends greatly on the eigenvalue distribution of spatial correlation matrices. Knowing the eigenstructure of correlation matrices at the transmitter is very important to enhance the capacity of the system. This fact becomes of great importance in MIMO wireless systems where because of the fast changing nature of the underlying channel, full channel knowledge is difficult to obtain at the transmitter. In this paper, we first investigate the effect of eigenvalues distribution of spatial correlation matrices on the capacity of frequency-flat and -selective channels. Next, we introduce a practical scheme known as linear precoding that can enhance the ergodic capacity of the channel by changing the eigenstructure of the channel by applying a linear transformation. We derive the structures of precoders using eigenvalue decomposition and linear algebra techniques in both cases and show their similarities from an algebraic point of view. Simulations show the ability of this technique to change the eigenstructure of the channel, and hence enhance the ergodic capacity considerably.
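    One standard construction matching this description is to transmit along the eigenvectors of the transmit correlation matrix with waterfilled powers. This is a textbook-style sketch under that assumption; the paper's exact precoder derivations may differ.

```python
import numpy as np

def eigenbeamforming_precoder(R_t, total_power):
    """Precoder F = U * diag(sqrt(p)): transmit along the eigenvectors of
    the transmit correlation matrix R_t, with waterfilled powers p."""
    lam, U = np.linalg.eigh(R_t)
    lam, U = lam[::-1], U[:, ::-1]          # strongest eigenmodes first

    def alloc(mu):                          # p_i = max(mu - 1/lam_i, 0)
        return np.maximum(mu - 1.0 / lam, 0.0)

    lo, hi = 0.0, total_power + 1.0 / lam.min()
    for _ in range(60):                     # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        lo, hi = (lo, mu) if alloc(mu).sum() > total_power else (mu, hi)
    p = alloc(0.5 * (lo + hi))
    return U @ np.diag(np.sqrt(p))

R_t = np.array([[1.0, 0.7], [0.7, 1.0]])    # toy 2x2 transmit correlation
F = eigenbeamforming_precoder(R_t, total_power=1.0)
```

    With strong correlation and a tight power budget, waterfilling pours all power into the dominant eigenmode, which is exactly the "changing the eigenstructure" effect the abstract describes.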

  19. Soil nematodes show a mid-elevation diversity maximum and elevational zonation on Mt. Norikura, Japan.

    Science.gov (United States)

    Dong, Ke; Moroenyane, Itumeleng; Tripathi, Binu; Kerfahi, Dorsaf; Takahashi, Koichi; Yamamoto, Naomichi; An, Choa; Cho, Hyunjun; Adams, Jonathan

    2017-06-08

Little is known about how nematode ecology differs across elevational gradients. We investigated the soil nematode community along a ~2,200 m elevational range on Mt. Norikura, Japan, by sequencing the 18S rRNA gene. As with many other groups of organisms, nematode diversity showed a high correlation with elevation, with a maximum at mid-elevations. While elevation itself, in the context of the mid-domain effect, could predict the observed unimodal pattern of soil nematode communities along the elevational gradient, mean annual temperature and soil total nitrogen concentration were the best predictors of diversity. We also found that nematode community composition showed strong elevational zonation, indicating a high degree of ecological specialization in relation to elevation-related environmental gradients. Certain nematode OTUs had ranges extending across all elevations, and these generalized OTUs made up a greater proportion of the community at high elevations, such that high-elevation nematode OTUs had broader elevational ranges on average, an example consistent with Rapoport's elevational hypothesis. This study reveals the potential for using sequencing methods to investigate elevational gradients of small soil organisms, providing a method for rapid investigation of patterns without specialized knowledge of taxonomic identification.

  20. Flooding correlations in narrow channel

    International Nuclear Information System (INIS)

    Kim, S. H.; Baek, W. P.; Chang, S. H.

    1999-01-01

Heat transfer in narrow gaps is considered an important phenomenon in severe accidents in nuclear power plants, and also in heat removal from electronic chips. Critical heat flux (CHF) in a narrow gap limits the maximum heat transfer rate in a narrow channel. In the case of a closed-bottom channel, flooding-limited CHF occurrence is observed. Flooding correlations are helpful to predict the CHF in closed-bottom channels. In the present study, flooding data for narrow-channel geometries were collected and the effects of the span, w, and gap size, s, were examined. New flooding correlations were suggested for high-aspect-ratio geometry, and the flooding correlation was applied to flooding-limited CHF data

  1. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  2. Reformers, Batting Averages, and Malpractice: The Case for Caution in Value-Added Use

    Science.gov (United States)

    Gleason, Daniel

    2014-01-01

    The essay considers two analogies that help to reveal the limitations of value-added modeling: the first, a comparison with batting averages, shows that the model's reliability is quite limited even though year-to-year correlation figures may seem impressive; the second, a comparison between medical malpractice and so-called educational…

  3. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  4. Robustness analysis of geodetic networks in the case of correlated observations

    Directory of Open Access Journals (Sweden)

    Mevlut Yetkin

    Full Text Available GPS (or GNSS) networks are invaluable tools for monitoring natural hazards such as earthquakes. However, blunders in GPS observations may be mistakenly interpreted as deformation. Therefore, robust networks are needed when GPS networks are used for deformation monitoring. Robustness analysis is a natural merger of reliability and strain, and is defined as the ability to resist deformations caused by the maximum undetectable errors as determined from internal reliability analysis. However, to obtain rigorously correct results, the correlations among the observations must be considered when computing the maximum undetectable errors. We therefore propose to use normalized reliability numbers instead of redundancy numbers (Baarda's approach) in the robustness analysis of a GPS network. A simple mathematical relation giving the ratio between the uncorrelated and correlated cases for the maximum undetectable error is derived; the same ratio is also valid for the displacements. Numerical results show that if correlations among observations are ignored, dramatically different displacements can be obtained, depending on the size of the multiple correlation coefficients. Furthermore, when the normalized reliability numbers are small, the displacements become large, i.e., observations with low reliability numbers cause bigger displacements than observations with high reliability numbers.
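
The internal reliability quantities underlying this analysis can be sketched for the uncorrelated case. A minimal sketch of Baarda-style redundancy numbers and minimal detectable biases (MDBs) for a Gauss-Markov model; the function name and the non-centrality value sqrt(lambda0) = 4.13 (alpha0 = 0.1%, power 80%) are assumptions, and the correlated-observation refinement proposed in the record is not implemented here.

```python
import numpy as np

def internal_reliability(A, P, sigma0=1.0, sqrt_lambda0=4.13):
    """Redundancy numbers and MDBs for the model l = A x + e,
    Cov(e) = sigma0^2 * inv(P), assuming a diagonal weight matrix P."""
    N = A.T @ P @ A
    # Residual cofactor matrix Qv = inv(P) - A inv(N) A^T
    Qv = np.linalg.inv(P) - A @ np.linalg.solve(N, A.T)
    R = Qv @ P                       # reliability matrix; diag = redundancy numbers
    r = np.diag(R)
    sigma_l = sigma0 / np.sqrt(np.diag(P))
    mdb = sqrt_lambda0 * sigma_l / np.sqrt(r)   # Baarda's MDB per observation
    return r, mdb
```

For three repeated measurements of one unknown, each redundancy number is 2/3 and the numbers sum to n - m = 2, as theory requires.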

  5. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  6. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  7. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
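
Jaynes' principle can be illustrated with his classic dice example, which is not from this article but shows the mechanics: among all distributions on {1..6} with a prescribed mean, the maximum-entropy one is exponential in the face value, and the Lagrange multiplier can be found with a one-dimensional root solve.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_dice(target_mean, faces=np.arange(1, 7)):
    """Maximum-entropy distribution on the faces with a fixed mean:
    p_k proportional to exp(lam * k); solve for the multiplier lam."""
    def mean_of(lam):
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum()
    lam = brentq(lambda t: mean_of(t) - target_mean, -10.0, 10.0)
    p = np.exp(lam * faces)
    return p / p.sum()

p = maxent_dice(4.5)   # a die reported to average 4.5 instead of 3.5
```

A target mean of 3.5 recovers the unbiased uniform die; a higher mean tilts probability smoothly toward the high faces, with no other structure imposed.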

  8. Wood density variations of Norway spruce (Picea abies (L.) Karst.) under contrasting climate conditions in southwestern Germany

    Directory of Open Access Journals (Sweden)

    Marieke van der Maaten-Theunissen

    2013-07-01

    Full Text Available We analyzed inter-annual variations in ring width and maximum wood density of Norway spruce (Picea abies (L.) Karst.) at different altitudes in Baden-Württemberg, southwestern Germany, to determine the climate response of these parameters under contrasting climate conditions. In addition, we compared maximum, average and minimum wood density between sites. Bootstrapped correlation coefficients of ring width and maximum wood density with monthly temperature and precipitation revealed a different climate sensitivity of the two parameters. Ring width showed strong correlations with climate variables in the previous year and in the first half of the growing season. Further, a negative relationship with summer temperature was observed at the low-altitude sites. Maximum wood density correlated best with temperature during the growing season, with the strongest correlations found between September temperature and maximum wood density at the high-altitude sites. The observed differences in maximum, average and minimum wood density are suggested to relate to the local climate, with lower temperatures and higher water availability having a negative effect on wood density.

  9. Fourier analysis of spherically averaged momentum densities for some gaseous molecules

    International Nuclear Information System (INIS)

    Tossel, J.A.; Moore, J.H.

    1981-01-01

    The spherically averaged autocorrelation function, B(r), of the position-space wavefunction, ψ(r), is calculated by numerical Fourier transformation from spherically averaged momentum densities, ρ(p), obtained from either theoretical wavefunctions or (e,2e) electron-impact ionization experiments. Inspection of B(r) for the π molecular orbitals of C₄H₆ established that autocorrelation-function differences, ΔB(r), can be qualitatively related to bond lengths and numbers of bonding interactions. Differences between B(r) functions obtained from different approximate wavefunctions for a given orbital can be qualitatively understood in terms of wavefunction-difference maps, Δψ(r), for these orbitals. Comparison of the B(r) function for the 1a_u orbital of C₄H₆ obtained from (e,2e) momentum densities with that obtained from an ab initio SCF MO wavefunction shows differences consistent with expected correlation effects. Thus, B(r) appears to be a useful quantity for relating spherically averaged momentum distributions to position-space wavefunction differences. (orig.)
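
The transform described above, B(r) = 4π ∫ ρ(p) [sin(pr)/(pr)] p² dp, can be checked numerically. A minimal sketch using the hydrogen 1s momentum density as a test case (an assumption for illustration, not one of the C₄H₆ orbitals studied in the record), for which the exact result B(r) = e⁻ʳ(1 + r + r²/3) is known:

```python
import numpy as np

# Momentum grid and hydrogen 1s momentum density in atomic units:
# rho(p) = 8 / (pi^2 * (1 + p^2)^4), normalized so that B(0) = 1.
p, dp = np.linspace(1e-6, 60.0, 200001, retstep=True)
rho = 8.0 / (np.pi**2 * (1.0 + p**2) ** 4)

def B_of_r(r):
    """B(r) = 4*pi * Integral rho(p) * sin(pr)/(pr) * p^2 dp, trapezoid rule.
    np.sinc(x) = sin(pi x)/(pi x), so sinc(p*r/pi) is sin(pr)/(pr)."""
    f = 4.0 * np.pi * rho * np.sinc(p * r / np.pi) * p**2
    return dp * (f.sum() - 0.5 * (f[0] + f[-1]))
```

B(0) recovers the normalization of the momentum density, and B(1) matches the analytic 1s overlap value.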

  10. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  11. Dinosaur Metabolism and the Allometry of Maximum Growth Rate.

    Science.gov (United States)

    Myhrvold, Nathan P

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings drive a conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has been previously argued.

  13. Factors determining the average body size of geographically separated Arctodiaptomus salinus (Daday, 1885) populations.

    Science.gov (United States)

    Anufriieva, Elena V; Shadrin, Nickolai V

    2014-03-01

    Arctodiaptomus salinus inhabits water bodies across Eurasia and North Africa. Based on our own data and data from the literature, we analyzed the influences of several factors on the intra- and inter-population variability of this species. A strong negative linear correlation between temperature and average body size was found in the Crimean and African populations, whose parameters might be influenced by salinity. Meanwhile, a significant negative correlation between female body size and the altitude of the habitat was found by comparing body size in populations from different regions. Individuals from environments with highly varying abiotic parameters, e.g. temporary reservoirs, had a larger body size than individuals from permanent water bodies. The variation in average body mass between populations was up to 11.4-fold, whereas that in individual metabolic activities was up to 6.2-fold. Moreover, two size groups of A. salinus were observed in the Crimean and Siberian lakes. The ratio of female length to male length fluctuated between 1.02 and 1.30. The average size of A. salinus in populations, and its variations, were determined by both genetic and environmental factors; however, the contributions of these factors were unequal on both spatial and temporal scales.

  14. Radial behavior of the average local ionization energies of atoms

    International Nuclear Information System (INIS)

    Politzer, P.; Murray, J.S.; Grice, M.E.; Brinck, T.; Ranganathan, S.

    1991-01-01

    The radial behavior of the average local ionization energy Ī(r) has been investigated for the atoms He–Kr, using ab initio Hartree–Fock atomic wave functions. Ī(r) is found to decrease in a stepwise manner, with the inflection points serving effectively to define boundaries between electronic shells. There is a good inverse correlation between polarizability and the ionization energy in the outermost region of the atom, suggesting that Ī(r) may be a meaningful measure of local polarizabilities in atoms and molecules.

  15. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases: first, when the disturbance terms follow the general covariance structure Cov(w_i, w_j) = Σ with σ_ij ≠ 0 for all i ≠ j; second, when the diagonal elements of Σ are not all identical but σ_ij = 0 for all i ≠ j, i.e., Σ = diag(σ_11, σ_22, …).
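
For orientation, the familiar homoskedastic special case (Σ = σ²I, which the record generalizes) has a closed-form ACF. A minimal sketch; the function name is an assumption:

```python
import numpy as np

def ma_acf(theta, nlags):
    """Theoretical ACF of MA(q): x_t = w_t + theta_1 w_{t-1} + ... + theta_q w_{t-q}
    with iid disturbances: rho(k) = sum_j psi_j psi_{j+k} / sum_j psi_j^2,
    where psi = (1, theta_1, ..., theta_q); rho(k) = 0 for k > q."""
    psi = np.r_[1.0, np.asarray(theta, float)]
    denom = psi @ psi
    acf = [(psi[:-k] @ psi[k:]) / denom if 0 < k < psi.size
           else (1.0 if k == 0 else 0.0)
           for k in range(nlags + 1)]
    return np.array(acf)
```

For an MA(1) with θ = 0.5, this gives ρ(1) = 0.5/1.25 = 0.4 and zero beyond lag 1, the characteristic cut-off of moving-average processes.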

  16. Nonidentical particle correlations in STAR

    CERN Document Server

    Erazmus, B; Renault, G; Retière, F; Szarwas, P; 10.1556/APH.21.2004.2-4.33

    2004-01-01

    The correlation function of nonidentical particles is sensitive to the relative space-time asymmetries in particle emission. Analysing pion-kaon, pion-proton and kaon-proton correlation functions, measured in the Au+Au collisions by the STAR experiment at RHIC, we show that pions, kaons and protons are not emitted at the same average space-time coordinates. The shifts between pion, kaon and proton sources are consistent with the picture of a transverse collective flow. Results of the first measurement of proton-lambda correlations at STAR are in agreement with recent CERN and AGS data.

  17. CORRELATION BETWEEN PATHOLOGY AND EXCESS OF MAXIMUM CONCENTRATION LIMIT OF POLLUTANTS IN THE ENVIRONMENT OF THE REPUBLIC OF DAGESTAN

    Directory of Open Access Journals (Sweden)

    G. M. Abdurakhmanov

    2013-01-01

    Full Text Available Statistical data from "Indicators of the Health Status of the Republic of Dagestan" for the years 1999–2010 are presented in this work. The aim of the work was to identify cause-effect correlations between non-communicable diseases (ischemic heart disease, neuropsychiatric disease, endemic goiter, diabetes, congenital anomalies) and environmental factors in the Republic of Dagestan. Statistical data processing was carried out using the software packages Statistica and Microsoft Excel. The Spearman rank correlation coefficient (ρ) was used to identify correlations between indicators of environmental quality and the health of the population. A moderate positive correlation is observed between the development of pathology and excess concentrations of contaminants in drinking water sources. Direct correlations are found between the development of the studied pathologies and excess concentrations of heavy metals and their mobile forms in the soils of the region. A direct correlation is also found between excess concentrations of heavy metals in the pasture vegetation (the factorial characteristic) and the morbidity of the population (the effective characteristic).
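
The Spearman rank correlation used in the study is readily computed with SciPy. A minimal sketch on made-up district-level numbers (illustrative only, not the paper's data):

```python
from scipy.stats import spearmanr

# Hypothetical data: an excess heavy-metal concentration index per district
# vs. morbidity per 10,000 inhabitants (both invented for illustration).
pollution = [1.2, 0.8, 2.5, 3.1, 1.9, 0.5, 2.8, 3.6]
morbidity = [ 14,  11,  22,  24,  18,   9,  26,  33]

rho, pval = spearmanr(pollution, morbidity)
```

Because Spearman's ρ compares ranks rather than raw values, it captures the monotone dose-response relationship without assuming linearity, which suits ordinal "excess over the maximum concentration limit" indicators.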

  18. Ascertaining the uncertainty relations via quantum correlations

    International Nuclear Information System (INIS)

    Li, Jun-Li; Du, Kun; Qiao, Cong-Feng

    2014-01-01

    We propose a new scheme to express the uncertainty principle in the form of an inequality of the bipartite correlation functions for a given multipartite state, which provides an experimentally feasible and model-independent way to verify various uncertainty and measurement-disturbance relations. By virtue of this scheme, the implementation of experimental measurements of the measurement-disturbance relation in a variety of physical systems becomes practical. The inequality, in turn, also imposes a constraint on the strength of correlation, i.e. it determines the maximum value of the correlation function for a two-body system and a monogamy relation of the bipartite correlation functions for multipartite systems. (paper)

  19. Study on wavelength of maximum absorbance for phenyl- thiourea derivatives: A topological and non-conventional physicochemical approach

    International Nuclear Information System (INIS)

    Thakur, Suprajnya; Mishra, Ashutosh; Thakur, Mamta; Thakur, Abhilash

    2014-01-01

    In the present study, efforts have been made to analyze the role of different structural/topological and non-conventional physicochemical features in the X-ray absorption property, the wavelength of maximum absorption λ_m. Efforts are also made to compare the magnitudes of various parameters in order to optimize the features mainly responsible for characterizing the wavelength of maximum absorbance λ_m in X-ray absorption. For this purpose, the multiple linear regression method is used, and on the basis of the regression and correlation values, suitable models have been developed.

  20. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  1. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
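
The stated result, that the difference between two weighted averages equals the covariance between the variable and the ratio of the weighting functions divided by the average of that ratio (all taken under the second weighting), can be verified numerically. A minimal sketch with synthetic data; the arrays below are illustrative stand-ins, not the paper's fertility or mortality series:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # the variable (e.g., a rate)
f = rng.uniform(1.0, 2.0, size=1000) # weighting function 1 (e.g., one age structure)
g = rng.uniform(1.0, 2.0, size=1000) # weighting function 2 (an alternative structure)

avg_f = np.average(x, weights=f)
avg_g = np.average(x, weights=g)

r = f / g                            # ratio of the weighting functions
mean_r = np.average(r, weights=g)
cov_g = np.average(x * r, weights=g) - avg_g * mean_r

identity_rhs = cov_g / mean_r        # should equal avg_f - avg_g
```

The identity follows from E_g(x·r)/E_g(r) being exactly the f-weighted average of x, so the covariance term isolates the compositional effect.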

  2. Distância interincisiva máxima em crianças respiradoras bucais Maximum interincisal distance in mouth breathing children

    Directory of Open Access Journals (Sweden)

    Débora Martins Cattoni

    2009-12-01

    Full Text Available INTRODUCTION: The maximum interincisal distance is an important aspect of the orofacial myofunctional evaluation, because orofacial myofunctional disorders can limit mouth opening. AIM: To measure the maximum interincisal distance of mouth-breathing children, relating it to age, and to compare the averages of these measurements with those of children with no history of speech-language pathology disorders. METHODS: The participants were 99 mouth-breathing children of both genders, aged between 7 years and 11 years 11 months, Caucasian, in the mixed dentition stage. The control group was composed of 253 children, aged between 7 years and 11 years 11 months, Caucasian, in the mixed dentition stage, with no speech-language complaints. RESULTS: The findings show that the mean maximum interincisal distance of the mouth-breathing children was 43.55 mm for the whole sample, with no statistically significant difference between the means according to age. There was also no statistically significant difference between the mean maximum interincisal distance of the mouth breathers and that of the children in the control group. CONCLUSIONS: The maximum interincisal distance did not vary with age among mouth breathers during the mixed dentition stage, and does not appear to be altered in children with this type of dysfunction. The findings also point to the importance of using a caliper in the objective assessment of the maximum interincisal distance.

  3. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charge for a service is the supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  4. Consistency and asymptotic normality of maximum likelihood estimators of a multiplicative time-varying smooth transition correlation GARCH model

    DEFF Research Database (Denmark)

    Silvennoinen, Annestiina; Terasvirta, Timo

    A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations...

  5. Electron correlation energy in confined two-electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, C.L. [Chemistry Program, Centre College, 600 West Walnut Street, Danville, KY 40422 (United States); Montgomery, H.E., E-mail: ed.montgomery@centre.ed [Chemistry Program, Centre College, 600 West Walnut Street, Danville, KY 40422 (United States); Sen, K.D. [School of Chemistry, University of Hyderabad, Hyderabad 500 046 (India); Thompson, D.C. [Chemistry Systems and High Performance Computing, Boehringer Ingelheim Pharamaceuticals Inc., 900 Ridgebury Road, Ridgefield, CT 06877 (United States)

    2010-09-27

    Radial, angular and total correlation energies are calculated for four two-electron systems with atomic numbers Z = 0–3 confined within an impenetrable sphere of radius R. We report accurate results for the non-relativistic, restricted Hartree-Fock and radial-limit energies over a range of confinement radii from 0.05 to 10 a₀. At small R, the correlation energies approach limiting values that are independent of Z, while at intermediate R, systems with Z ≥ 1 exhibit a characteristic maximum in the correlation energy resulting from an increase in the angular correlation energy which is offset by a decrease in the radial correlation energy.

  6. Dopant density from maximum-minimum capacitance ratio of implanted MOS structures

    International Nuclear Information System (INIS)

    Brews, J.R.

    1982-01-01

    For uniformly doped structures, the ratio of the maximum to the minimum high-frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
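
The uniformly doped baseline that the record contrasts with implanted profiles can be sketched from standard MOS theory: Cmax equals the oxide capacitance, the minimum high-frequency capacitance fixes the maximum depletion width, and the dopant density follows by fixed-point iteration. The silicon parameters, starting guess, and iteration scheme below are textbook assumptions, not the paper's method for implanted structures.

```python
import math

def dopant_from_max_min(cox, cmin, eps_s=1.04e-10, kT=0.0259 * 1.602e-19,
                        q=1.602e-19, ni=1.0e16, n0=1.0e21):
    """'Max-min' dopant density estimate for a uniformly doped MOS capacitor
    (SI units; capacitances per unit area, F/m^2). Uses
      W_max = eps_s * (1/Cmin - 1/Cox)         (since Cmax = Cox)
      N     = 4 * eps_s * kT * ln(N/ni) / (q^2 * W_max^2)
    solved by fixed-point iteration; silicon room-temperature constants assumed."""
    w_max = eps_s * (1.0 / cmin - 1.0 / cox)
    N = n0
    for _ in range(100):
        N_new = 4.0 * eps_s * kT * math.log(N / ni) / (q * q * w_max * w_max)
        if abs(N_new - N) < 1e-9 * N:
            return N_new
        N = N_new
    return N
```

A round trip (choose N, compute the implied Cmin, recover N) checks the implementation.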

  7. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed

  8. Relativistic quantum correlations in bipartite fermionic states

    Indian Academy of Sciences (India)

    The influences of relative motion, the size of the wave packet and the average momentum of the particles on different types of correlations present in bipartite quantum states are investigated. In particular, the dynamics of the quantum mutual information, the classical correlation and the quantum discord on the ...

  9. Maximum Recommended Dosage of Lithium for Pregnant Women Based on a PBPK Model for Lithium Absorption

    Directory of Open Access Journals (Sweden)

    Scott Horton

    2012-01-01

    Full Text Available Treatment of bipolar disorder with lithium therapy during pregnancy is a medical challenge. Bipolar disorder is more prevalent in women, and its onset is often concurrent with peak reproductive age. Treatment typically involves administration of the element lithium, which has been classified as a class D drug (legal to use during pregnancy, but may cause birth defects) and is one of only thirty known teratogenic drugs. There is no clear recommendation in the literature on the maximum acceptable dosage regimen for pregnant bipolar women. We recommend a maximum dosage regimen based on a physiologically based pharmacokinetic (PBPK) model. The model simulates the concentration of lithium in the organs and tissues of a pregnant woman and her fetus. First, we modeled time-dependent lithium concentration profiles resulting from lithium therapy known to have caused birth defects. Next, we identified maximum and average fetal lithium concentrations during treatment. Then, we developed a lithium therapy regimen to maximize the concentration of lithium in the mother's brain, while keeping the fetal concentration low enough to reduce the risk of birth defects. The maximum dosage regimen suggested by the model was 400 mg lithium three times per day.
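
The multiple-dose kinetics underlying such a model can be illustrated with a toy one-compartment simulation. This is a drastic simplification of the paper's PBPK model (which tracks many organs and the fetus); the absorption and elimination rate constants, volume of distribution, and dosing numbers below are hypothetical round values for illustration only.

```python
import numpy as np

def simulate_lithium(dose_mg=400.0, doses_per_day=3, days=5,
                     ka=1.0, ke=0.043, vd_l=45.0, dt=0.01):
    """Toy one-compartment model: first-order absorption from the gut and
    first-order elimination from plasma, dosed at regular intervals.
    ka, ke are per hour and vd_l is in liters -- all hypothetical."""
    interval = 24.0 / doses_per_day
    t = np.arange(0.0, days * 24.0, dt)
    gut = 0.0
    plasma = 0.0
    next_dose = 0.0
    conc = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti >= next_dose:          # instantaneous oral dose into the gut
            gut += dose_mg
            next_dose += interval
        absorbed = ka * gut * dt     # explicit Euler step
        gut -= absorbed
        plasma += absorbed - ke * plasma * dt
        conc[i] = plasma / vd_l      # plasma concentration, mg/L
    return t, conc
```

A useful sanity check: at steady state the average concentration equals the dosing rate divided by the clearance ke·Vd, independent of the absorption rate.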

  10. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks that result from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.
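
Averaging on the manifold of positive definite matrices can be sketched with the log-Euclidean weighted mean, a simpler stand-in (an assumption, not the paper's divergence median) that still respects positive definiteness:

```python
import numpy as np
from scipy.linalg import logm, expm

def weighted_spd_mean(mats, weights):
    """Weighted average of symmetric/Hermitian positive definite matrices in
    the log-Euclidean metric: exp(sum_i w_i * log(P_i)). Unlike the entrywise
    arithmetic mean, this averages on the matrix manifold, so e.g. the mean of
    2I and 8I is 4I (geometric), not 5I."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    acc = sum(w * logm(m) for w, m in zip(weights, mats))
    return expm(acc)
```

In a detector of this kind the weights would come from similarity between reference cells, mimicking the bilateral filter's idea of down-weighting dissimilar neighbors.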

  11. Measurement of the Depth of Maximum of Extensive Air Showers above 10^18 eV

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, J.; /Buenos Aires, CONICET; Abreu, P.; /Lisbon, IST; Aglietta, M.; /Turin U. /INFN, Turin; Ahn, E.J.; /Fermilab; Allard, D.; /APC, Paris; Allekotte, I.; /Centro Atomico Bariloche /Buenos Aires, CONICET; Allen, J.; /New York U.; Alvarez-Muniz, J.; /Santiago de Compostela U.; Ambrosio, M.; /Naples U.; Anchordoqui, L.; /Wisconsin U., Milwaukee; Andringa, S.; /Lisbon, IST /Boskovic Inst., Zagreb

    2010-02-01

    We describe the measurement of the depth of maximum, X_max, of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of (106 +35/−21) g/cm²/decade below 10^(18.24 ± 0.05) eV, and (24 ± 3) g/cm²/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm². The interpretation of these results in terms of the cosmic-ray mass composition is briefly discussed.

  12. High Grazing Angle and High Resolution Sea Clutter: Correlation and Polarisation Analyses

    Science.gov (United States)

    2007-03-01

    the azimuthal correlation. The correlation between the HH and VV sea clutter data is low. A CA-CFAR (cell-averaging constant false-alarm rate...to calculate the power spectra of correlation profiles. The frequency interval of the traditional Discrete Fourier Transform is 1/(NT) Hz, where N and...sea spikes, the Entropy-Alpha decomposition of sea spikes is shown in Figure 30. The process first locates spikes using a cell-averaging constant false...
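
A minimal one-dimensional CA-CFAR detector of the kind named in this record can be sketched as follows; the window sizes and false-alarm probability are illustrative assumptions, and the threshold multiplier uses the standard result for exponentially distributed (square-law detected) noise.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """1-D cell-averaging CFAR: for each cell under test, estimate the noise
    level from `train` reference cells on each side (skipping `guard` cells
    next to the test cell) and compare against alpha times that estimate,
    with alpha = N * (Pfa^(-1/N) - 1) for N reference cells."""
    detections = np.zeros(power.size, dtype=bool)
    for i in range(power.size):
        lo = power[max(0, i - guard - train): max(0, i - guard)]
        hi = power[i + guard + 1: i + guard + 1 + train]
        ref = np.concatenate([lo, hi])
        if ref.size == 0:
            continue
        alpha = ref.size * (pfa ** (-1.0 / ref.size) - 1.0)
        detections[i] = power[i] > alpha * ref.mean()
    return detections
```

Because the threshold adapts to the local clutter estimate, the false-alarm rate stays roughly constant even when the clutter power varies along range; sea spikes, however, inflate the estimate and can mask nearby targets, which motivates the more elaborate detectors discussed in the report.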

  13. Correlation length estimation in a polycrystalline material model

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2005-01-01

    This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated from a finite-element model, which explicitly accounts for the selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the used set of crystallographic orientations. We determined that the different sets of crystallographic orientations affect the general level of the correlation length, however, as the external load is increased the behaviour of correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength the correlation lengths are constant, and are slightly higher than the average grain size. The correlation length can therefore be considered as an indicator of first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the values of the correlation lengths calculated from the strain fields. With a further load increase the correlation lengths decrease slightly but stay above the average grain size. (author)

  14. Correlation methods in cutting arcs

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)

    2011-05-01

    The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex-stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operating parameters.

  15. Correlation methods in cutting arcs

    International Nuclear Information System (INIS)

    Prevosto, L; Kelly, H

    2011-01-01

    The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex stabilized plasma cutting torches to analyze the correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.

  16. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  17. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    49 CFR 230.24, Maximum allowable stress: (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
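
    The quarter-of-ultimate rule quoted above is a one-line calculation; a minimal sketch (the function name and psi units are illustrative, not from the regulation):

```python
def max_allowable_stress(ultimate_strength_psi: float) -> float:
    """49 CFR 230.24(a): the maximum allowable stress value on any
    component of a steam locomotive boiler shall not exceed 1/4 of
    the ultimate strength of the material."""
    return ultimate_strength_psi / 4.0

# A plate with a 60,000 psi ultimate strength may therefore be
# stressed to at most 15,000 psi.
```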

  18. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has recently been introduced. We investigate the controllability of ME networks and Barabási–Albert (BA) preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into the control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver nodes sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver nodes sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  19. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds in the water are responsible for both. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with a dark region at the surroundings. In some cases the imaging system itself creates a dark region in the image by casting a shadow on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses the maximum-likelihood estimate of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
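
    The maximum-likelihood estimate of the Rayleigh scale parameter that such a mapping relies on has a simple closed form, sigma^2 = sum(x_i^2) / (2N); a minimal sketch (function and variable names are ours, not the paper's):

```python
import math

def rayleigh_scale_mle(pixels):
    """Closed-form maximum-likelihood estimate of the Rayleigh scale
    parameter: sigma_hat = sqrt(sum(x_i^2) / (2N))."""
    n = len(pixels)
    return math.sqrt(sum(x * x for x in pixels) / (2.0 * n))

# Toy usage on a handful of pixel intensities
sigma = rayleigh_scale_mle([40.0, 55.0, 48.0, 62.0, 51.0])
```

The estimate would then drive the distribution-mapping step of the correction; that part of the method is not reproduced here.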

  20. Pion correlation from Skyrmion--anti-Skyrmion annihilation

    International Nuclear Information System (INIS)

    Lu, Y.; Amado, R.D.

    1995-01-01

    We study two pion correlations from Skyrmion and anti-Skyrmion collision, using the product ansatz and an approximate random grooming method for nucleon projection. The spatial-isospin coupling inherent in the Skyrme model, along with empirical averages, leads to correlations not only among pions of like charges but also among unlike charge types

  1. A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube

    International Nuclear Information System (INIS)

    Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.

    2004-01-01

    An average bi-directional flow tube was suggested for application in air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube, but it eliminates the cooling system normally needed to prevent flashing in the pressure impulse line of a Pitot tube used under depressurization conditions. The suggested flow tube was tested in an air-water vertical test section with an 80 mm inner diameter and a 10 m length; the flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure, and the average void fraction were measured on the measuring plane. The fluid temperature and the injected mass flow rates of the air and water phases were also measured, by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used, and a new correlation for the momentum exchange factor was suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within a 10% error of the measured data.
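
    The working principle can be illustrated with a Pitot-type inversion of the measured differential pressure. The homogeneous-mixture form and the momentum-exchange factor k below are our simplifying assumptions for illustration; the paper's actual correlation and the Chexal drift-flux split into phasic flow rates are not reproduced:

```python
def mixture_density(alpha, rho_gas, rho_liquid):
    """Homogeneous mixture density from the measured void fraction."""
    return alpha * rho_gas + (1.0 - alpha) * rho_liquid

def total_mass_flux(dp, alpha, rho_gas, rho_liquid, k=1.0):
    """Invert the assumed Pitot-type relation dp = k * G**2 / (2 * rho_m)
    for the total mass flux G (kg/m^2/s); k stands in for the paper's
    empirical momentum-exchange factor."""
    rho_m = mixture_density(alpha, rho_gas, rho_liquid)
    return (2.0 * rho_m * dp / k) ** 0.5
```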

  2. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

    Full Text Available The average runoff is the main parameter with which one can best evaluate an area's water resources, and it is an important characteristic in all river runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curves' functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial-analysis functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It also shows that with the created runoff map we can compute correct runoff values relatively quickly for areas without hydrologic control.

  3. Wood density variations of Norway spruce (Picea abies (L.) Karst.) under contrasting climate conditions in southwestern Germany

    Directory of Open Access Journals (Sweden)

    Marieke van der Maaten-Theunissen

    2013-05-01

    Full Text Available We analyzed inter-annual variations in ring width and maximum wood density of Norway spruce (Picea abies (L.) Karst.) at different altitudes in Baden-Württemberg, southwestern Germany, to determine the climate response of these parameters under contrasting climate conditions. In addition, we compared maximum, average and minimum wood density between sites. Bootstrapped correlation coefficients of ring width and maximum wood density with monthly temperature and precipitation revealed a different climate sensitivity of the two parameters. Ring width showed strong correlations with climate variables in the previous year and in the first half of the growing season. Further, a negative relationship with summer temperature was observed at the low-altitude sites. Maximum wood density correlated best with temperature during the growing season, with the strongest correlations found between September temperature and maximum wood density at the high-altitude sites. Observed differences in maximum, average and minimum wood density are suggested to relate to the local climate, with lower temperature and higher water availability having a negative effect on wood density.

  4. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on maximum current searching, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. The method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
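
    The maximum-current search can be illustrated with a simple hill-climbing loop on the converter duty cycle. This is a stand-in for the paper's PI-controlled PWM loop, and `read_current`/`set_duty` are hypothetical hardware hooks, not interfaces from the paper:

```python
def mppt_max_current(read_current, set_duty, d0=0.5, step=0.01, iters=50):
    """Hill-climbing search for the duty cycle that maximizes the
    DC-DC converter output current: perturb the duty cycle in one
    direction, and reverse direction whenever the current drops."""
    d = d0
    set_duty(d)
    prev = read_current()
    direction = 1.0
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)
        set_duty(d)
        cur = read_current()
        if cur < prev:        # overshot the current peak: reverse
            direction = -direction
        prev = cur
    return d
```

At convergence the duty cycle oscillates around the maximum-current point, as perturb-and-observe trackers generally do.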

  5. Signal averaging technique for noninvasive recording of late potentials in patients with coronary artery disease

    Science.gov (United States)

    Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.

    1987-01-01

    An advanced noninvasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT) and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. Averaged filtered waveforms were analyzed by a computer program for 3 parameters: (1) filtered QRS (fQRS) duration; (2) the interval between the peak of the R wave and the end of the fQRS (R-LP); (3) the RMS value of the last 40 msec of the fQRS (RMS). Significant differences were found between Groups A and B in fQRS (101 ± 13 msec vs 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
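
    The cross-correlation alignment step can be sketched with an FFT-based circular correlator; the 60-200 Hz non-recursive band-pass stage and the clinical parameter extraction are omitted, and the names are ours:

```python
import numpy as np

def align_and_average(beats):
    """Align each beat to the first one at the lag that maximizes
    their FFT-based circular cross-correlation, then average."""
    beats = [np.asarray(b, dtype=float) for b in beats]
    ref = beats[0]
    fref = np.conj(np.fft.fft(ref))
    aligned = []
    for b in beats:
        xc = np.fft.ifft(np.fft.fft(b) * fref).real  # circular xcorr
        lag = int(np.argmax(xc))                     # shift of b vs. ref
        aligned.append(np.roll(b, -lag))
    return np.mean(aligned, axis=0)
```

In practice the reference would be a clean template beat and the averaging would run over many cardiac cycles to suppress uncorrelated noise.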

  6. Correlative study of dynamic MRI and tumor angiogenesis in gastric carcinoma

    International Nuclear Information System (INIS)

    Tang Qunfeng; Shen Junkang; Feng Yizhong; Qian Minghui; Chai Yuhai

    2004-01-01

    Objective: To investigate the correlation between dynamic MRI enhancement characteristics and tumor angiogenesis in gastric carcinoma. Methods: Histopathological slides of 30 patients underwent CD34 and vascular endothelial growth factor (VEGF) immunohistochemical staining. Microvessel density (MVD) and VEGF protein expression were analyzed for their relationship to pathological features. The dynamic MRI characteristics, including the maximum contrast enhancement ratio (CERmax), were correlated with MVD and VEGF expression. Results: In the 30 cases, MVD was 13.00 to 68.25 per field of view, with an average of 42.95 ± 14.79. The low expression rate of VEGF was 30% (9/30), while the high expression rate was 70% (21/30). MVD and VEGF expression correlated with lymph node metastasis (P<0.05), but their relationships to the degree of differentiation and depth of invasion were not significant (P>0.05). MVD was related to the TNM staging of gastric carcinoma (P<0.05), and the expression of VEGF differed significantly between stages I and IV (P<0.05). MVD was higher with VEGF-high expression than with VEGF-low expression [(47.30 ± 14.16) versus (32.81 ± 11.25) per field of view]. CERmax was significantly correlated with MVD (r=0.556, P=0.0014), but its correlation with the expression of VEGF was not significant (t=-0.847, P=0.404). The distribution features and shape of microvessels within gastric carcinoma were related to enhancement characteristics such as irregular enhancement and delaminated enhancement. Conclusion: The manifestations on dynamic MR images can reflect the distribution features and shape of microvessels within gastric carcinoma. Dynamic MR imaging may prove to be a valuable means of estimating the MVD of gastric carcinoma noninvasively, and thus of predicting the biological behavior of gastric carcinoma and judging the prognosis. (authors)

  7. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    Full Text Available LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates this probability by a probit regression; it also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate it. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.

  8. Force-induced unzipping of DNA with long-range correlated sequence

    OpenAIRE

    Allahverdyan, A. E.; Gevorkian, Zh. S.

    2002-01-01

    We consider force-induced unzipping transition for a heterogeneous DNA model with a long-range correlated base-sequence. It is shown that as compared to the uncorrelated situation, long-range correlations smear the unzipping phase-transition, change its universality class and lead to non-self-averaging: the averaged behavior strongly differs from the typical ones. Several basic scenarios for this typical behavior are revealed and explained. The results can be relevant for explaining the biolo...

  9. Correlations of Vegetative and Reproductive Characters with Root ...

    African Journals Online (AJOL)

    jummy

    the location had shown that rice plants require an average of 1.6 litres per plant per week at the maximum tillering, 2.4 .... ensuring simultaneous increase in tolerance to limiting soil water condition and improvement of grain yield. .... chambers and exposed to contrasting water deficit regimes II. Mapping quantitative trait loci.

  10. A study on correlation between 2D and 3D gamma evaluation metrics in patient-specific quality assurance for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Rajasekaran, Dhanabalan, E-mail: dhanabalanraj@gmail.com; Jeevanandam, Prakash; Sukumar, Prabakar; Ranganathan, Arulpandiyan; Johnjothi, Samdevakumar; Nagarajan, Vivekanandan

    2014-01-01

    In this study, we investigated the correlation between 2-dimensional (2D) and 3D gamma analysis using the new PTW OCTAVIUS 4D system for various parameters. We selected 150 clinically approved volumetric-modulated arc therapy (VMAT) plans of head and neck (50), thoracic (esophagus) (50), and pelvic (cervix) (50) sites. Individual verification plans were created and delivered to the OCTAVIUS 4D phantom. Measured and calculated dose distributions were compared using 2D and 3D gamma analysis with global (maximum), local, and selected (isocenter) dose methods. The average gamma passing rate for 2D global gamma analysis in the coronal and sagittal planes was 94.81% ± 2.12% and 95.19% ± 1.76%, respectively, for the commonly used 3-mm/3% criteria with a 10% low-dose threshold. Correspondingly, for the same criteria, the average gamma passing rate for 3D planar global gamma analysis was 95.90% ± 1.57% and 95.61% ± 1.65%. The volumetric 3D gamma passing rate for 3-mm/3% (10% low-dose threshold) global gamma was 96.49% ± 1.49%. Applying more stringent gamma criteria resulted in larger differences between 2D planar and 3D planar gamma analysis across the global, local, and selected dose gamma evaluation methods. The average gamma passing rate for volumetric 3D gamma analysis was 1.49%, 1.36%, and 2.16% higher than for the 2D planar analyses (coronal and sagittal combined average) for 3-mm/3% global, local, and selected dose gamma analysis, respectively. On the basis of this wide-ranging analysis and correlation study, we conclude that there is no assured correlation or notable pattern relating planar 2D and volumetric 3D gamma analysis. Owing to the higher passing rates, higher action limits can be set when performing 3D quality assurance. Site-wise action limits may be considered for patient-specific QA in VMAT.
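
    The gamma metric behind these passing rates combines a dose-difference criterion with a distance-to-agreement criterion; a 1D, globally normalized toy version (the OCTAVIUS software's interpolation and search details are not reproduced):

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm=1.0,
                    dd_percent=3.0, dta_mm=3.0, threshold=0.10):
    """Global 1D gamma analysis: dose difference is normalized to the
    maximum of the calculated (reference) distribution, and measured
    points below `threshold` of that maximum are skipped."""
    dmax = max(calculated)
    dd_abs = dd_percent / 100.0 * dmax          # 3% of global maximum
    passed = total = 0
    for i, dm in enumerate(measured):
        if dm < threshold * dmax:               # 10% low-dose threshold
            continue
        total += 1
        gmin = float("inf")
        for j, dc in enumerate(calculated):
            dr = (i - j) * spacing_mm           # spatial separation
            dd = dm - dc                        # dose difference
            g = math.sqrt((dr / dta_mm) ** 2 + (dd / dd_abs) ** 2)
            gmin = min(gmin, g)
        if gmin <= 1.0:
            passed += 1
    return 100.0 * passed / total if total else 100.0
```

A point passes when some reference point lies within the combined 3-mm/3% ellipse; the passing rate is the fraction of evaluated points with gamma at most 1.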

  11. Correlation function distributions in rapidity for pairs of π mesons in K-p interactions at 32 GeV/c

    International Nuclear Information System (INIS)

    Bumazhnov, V.A.; Babintsev, V.V.; Bogolyubskij, M.Yu.

    1983-01-01

    The inclusive and semi-inclusive distributions of correlation functions in K⁻p interactions at 32 GeV/c are presented as functions of rapidity. The positive short-range correlations among the rapidities of two charged pions reach a maximum in the fragmentation regions of the incoming hadrons. The correlations become central and increase with rising transverse momentum. Maximum values of the correlations in the π⁺π⁻ and π⁻π⁻ systems occur in the regions of negative and positive values of rapidity

  12. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those of the MODIS AOD (correlation coefficient 0.82, root-mean-square error 0.19, and mean bias 0.059). In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of the MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is close to the accuracy of the regions where both MODIS and SeaWiFS have valid observations.

  13. Predicting long-term average concentrations of traffic-related air pollutants using GIS-based information

    Science.gov (United States)

    Hochadel, Matthias; Heinrich, Joachim; Gehring, Ulrike; Morgenstern, Verena; Kuhlbusch, Thomas; Link, Elke; Wichmann, H.-Erich; Krämer, Ursula

    Global regression models were developed to estimate individual levels of long-term exposure to traffic-related air pollutants. The models are based on data from a one-year measurement programme together with geographic data on traffic and population densities. This investigation is part of a cohort study on the impact of traffic-related air pollution on respiratory health, conducted at the westerly end of the Ruhr area in North-Rhine Westphalia, Germany. Concentrations of NO2, fine particle mass (PM2.5) and the filter absorbance of PM2.5 as a marker for soot were measured at 40 sites spread throughout the study region. Fourteen-day samples were taken between March 2002 and March 2003 for each season and site. Annual average concentrations for the sites were determined after adjustment for temporal variation. Information on traffic counts on major roads, building densities and community population figures was collected in a geographical information system (GIS). This information was used to calculate different potential traffic-based predictors: (a) daily traffic flow and maximum traffic intensity in buffers with radii from 50 to 10 000 m and (b) distances to main roads and highways. NO2 concentration and PM2.5 absorbance were strongly correlated with the traffic-based variables. Linear regression prediction models, which involved predictors with radii of 50 to 1000 m, were developed for the Wesel region where most of the cohort members lived. They reached a model fit (R2) of 0.81 and 0.65 for NO2 and PM2.5 absorbance, respectively. Regression models for the whole area required larger spatial scales and reached R2=0.90 and 0.82. Comparison of predicted values with NO2 measurements at independent public monitoring stations showed a satisfactory association (r=0.66). PM2.5 concentration, however, was only slightly correlated with the traffic-based variables and thus poorly predictable. GIS-based regression models offer a promising approach to assessing individual levels of long-term exposure to traffic-related air pollutants.

  14. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for the samples, the whole length of the CCs used in the design of an SFCL can be determined.

  15. Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer

    Science.gov (United States)

    Lee, Jae Nyung

    2008-10-01

    Statistically significant climate responses to solar variability are found in the Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with the ModelE version of the chemistry-coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and on the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low-frequency, large-scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and their dynamical significance in the stratosphere and troposphere in the GISS ModelE are shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from the NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of the geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and the observations suggests that the summer stratosphere is more "summer-like" when solar activity is near a maximum: the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented for the model and the observations, both of which show a lower/higher NAM index under solar maximum/minimum conditions. The summer NAM in the troposphere obtained from the NCEP/NCAR reanalysis has a dipolar zonal structure with maximum

  16. Attosecond-correlated dynamics of two electrons in argon

    Indian Academy of Sciences (India)

    2014-01-11

    Jan 11, 2014 ... 2Max-Planck-Institut für Kernphysik, 69117 Heidelberg, Germany ... involving a highly correlated electronic transition state. ... laser is low, the recolliding electron can have a maximum energy of about 15 eV which.

  17. Bose-Einstein correlations between kaons

    International Nuclear Information System (INIS)

    Akesson, T.; Batley, R.; Breuker, H.; Dam, P.; Eidelman, S.; Fabian, C.W.; Frandsen, P.; Goerlach, U.; Heck, B.; Hilke, H.J.; Jeffreys, P.; Kalinovsky, A.; Kesseler, G.; Lans, J. van der; Lindsay, J.; Markou, A.; Mjoernmark, U.; Nielsen, B.S.; Olsen, L.H.; Rosselet, L.; Rosso, E.; Rudge, A.; Schindler, R.; Willis, W.J.; Witzeling, W.; Albrow, M.G.; Cockerill, D.; Evans, W.M.; Gibson, M.; Hiddleston, J.; MacCubbin, N.A.; Williamson, J.; Benary, O.; Dagan, S.; Lissauer, D.; Oren, Y.; Boeggild, H.; Botner, O.; Dahl-Jensen, E.; Dahl-Jensen, I.; Damgaard, G.; Hansen, K.H.; Hooper, J.; Moeller, R.; Brody, H.; Frankel, S.; Frati, W.; Molzon, W.; Vella, E.; Zajc, W.A.; Burkert, V.; Carter, J.R.; Cecil, P.; Chung, S.U.; Gordon, H.; Ludlam, T.; Winik, M.; Woody, C.; Cleland, W.E.; Kroeger, R.; Sullivan, M.; Thompson, J.A.

    1985-01-01

    Bose-Einstein correlations between identical charged kaons are observed in αα, pp, and pp̄ collisions at the CERN Intersecting Storage Rings. The average radial extension of the K-emitting region is found to be (2.4 ± 0.9) fm. (orig.)

  18. Non-identical particle correlations in STAR

    CERN Document Server

    Erazmus, B; Renault, G; Retière, F; Szarwas, P

    2004-01-01

    The correlation function of non-identical particles is sensitive to the relative space-time asymmetries in particle emission. Analysing pion-kaon, pion-proton and kaon-proton correlation functions measured in Au+Au collisions by the STAR experiment at RHIC, we show that pions, kaons and protons are not emitted at the same average space-time coordinates. The shifts between the pion, kaon and proton sources are consistent with the picture of a transverse collective flow. Results of the first measurement of proton-lambda correlations at STAR are in agreement with recent CERN and AGS data.

  19. Maximum total organic carbon limits at different DWPF melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1996-01-01

    The maximum total organic carbon (TOC) limits that are allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system were determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes

  20. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the lithological composition of the subsurface. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. We apply the limited hard data from the cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  1. The generalized correlation method for estimation of time delay in power plants

    International Nuclear Information System (INIS)

    Kostic, Lj.

    1981-01-01

    The generalized correlation estimation is developed for determining time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise in a power plant. This estimator can be realized as a pair of receiver prefilters followed by a cross correlator. The time argument at which the correlator achieves a maximum is the delay estimate. (author)
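    The core of the estimator described above (argmax of a cross-correlation, here without the receiver prefilters) can be sketched as follows; the signals, sampling rate, and delay are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def delay_estimate(x, y, fs):
    """Estimate the delay of y relative to x (in seconds) as the lag that
    maximizes the cross-correlation; this is the unweighted special case of
    the generalized correlation method (no prefiltering)."""
    n = len(x)
    corr = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lags = np.arange(-n + 1, n)          # lag in samples for each corr entry
    return lags[np.argmax(corr)] / fs

# Synthetic example: the same signal reaches two sensors 25 samples apart,
# each corrupted by independent (uncorrelated) noise.
rng = np.random.default_rng(0)
fs = 1000.0
s = rng.normal(size=4000)
x = s + 0.1 * rng.normal(size=s.size)
y = np.roll(s, 25) + 0.1 * rng.normal(size=s.size)
est = delay_estimate(x, y, fs)
print(est)  # 0.025 s
```

    A practical implementation would add the prefilter pair (e.g. PHAT or SCOT weighting) in the frequency domain before locating the correlation peak.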

  2. Reliability of Structural Systems with Correlated Elements

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard

    1982-01-01

    Calculation of the probability of failure of a system with correlated elements is usually a difficult and time-consuming numerical problem. However, for some types of systems with equally correlated elements this calculation can be performed in a simple way. This has suggested two new methods based on so-called average and equivalent correlation coefficients. By using these methods, approximate values for the probability of failure can easily be calculated. The accuracy of these methods is illustrated with examples.
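    For equally correlated elements the simplification comes from conditioning on a single shared factor, which reduces the failure probability to a one-dimensional integral. The sketch below shows one standard construction of this kind (U_i = sqrt(rho)·T + sqrt(1-rho)·Z_i) for a series system; it illustrates the idea but is not necessarily the paper's exact average/equivalent-coefficient formulation, and the reliability indices are invented.

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

vphi = np.vectorize(phi)

def series_failure_prob(betas, rho):
    """Failure probability of a series system whose element safety margins are
    standard normal with reliability indices `betas` and one common (equal)
    correlation coefficient `rho`, via conditioning on the shared factor T."""
    t = np.linspace(-8.0, 8.0, 4001)
    dt = t[1] - t[0]
    pdf = np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)   # phi(t)
    prod = np.ones_like(t)
    for b in betas:
        prod *= vphi((b + np.sqrt(rho) * t) / np.sqrt(1.0 - rho))
    return 1.0 - np.sum(pdf * prod) * dt               # 1 - P(all elements safe)

betas = [3.0] * 5
pf_indep = series_failure_prob(betas, 0.0)   # independent elements
pf_corr = series_failure_prob(betas, 0.8)    # strongly correlated elements
print(pf_indep, pf_corr)
```

    Positive correlation lowers the series-system failure probability toward that of a single element, which is why a well-chosen equivalent correlation coefficient can stand in for a full multivariate computation.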

  3. MASEX '83, a survey of the turbidity maximum in the Weser Estuary

    International Nuclear Information System (INIS)

    Fanger, H.U.; Neumann, L.; Ohm, K.; Riethmueller, R.

    1986-01-01

    A one-week survey of the turbidity maximum in the Weser Estuary was conducted in the Fall of 1983 using the survey ship RV 'Victor Hensen'. Supplemental measurements were taken using in-situ current-conductivity-temperature-turbidity meters. The thickness of the bottom mud was determined using a gamma-ray transmission probe and compared with core sample analysis. The location of no-net tidal averaged bottom flow was determined to be at km 57. The off-ship measurements were taken using a CTD probe combined with a light attenuation meter. A comparison between salinity and attenuation gives insight into the relative importance of erosion, sedimentation and advective transport. (orig.)

  4. Correlation of gravestone decay and air quality 1960-2010

    Science.gov (United States)

    Mooers, H. D.; Carlson, M. J.; Harrison, R. M.; Inkpen, R. J.; Loeffler, S.

    2017-03-01

    Evaluation of spatial and temporal variability in surface recession of lead-lettered Carrara marble gravestones provides a quantitative measure of acid flux to the stone surfaces and is closely related to local land use and air quality. Correlation of stone decay, land use, and air quality for the period after 1960, when reliable estimates of atmospheric pollution are available, is evaluated. Gravestone decay and SO2 measurements are interpolated spatially using deterministic and geostatistical techniques. A general lack of spatial correlation was identified, and therefore a land-use-based technique for correlating stone decay and air quality is employed. Decadally averaged stone decay is highly correlated with land use averaged spatially over an optimum radius of ≈7 km, even though air quality, determined by records from the UK monitoring network, is not highly correlated with gravestone decay. The relationships among stone decay, air quality, and land use are complicated by the relatively low spatial density of both gravestone decay and air quality data, and by the fact that air quality data are available only as annual averages, so seasonal dependence cannot be evaluated. However, acid deposition calculated from gravestone decay suggests that the deposition efficiency of SO2 has increased appreciably since 1980, indicating an increase in the SO2 oxidation process, possibly related to reactions with ammonia.

  5. Qualitative pattern classification of shear wave elastography for breast masses: how it correlates to quantitative measurements.

    Science.gov (United States)

    Yoon, Jung Hyun; Ko, Kyung Hee; Jung, Hae Kyoung; Lee, Jong Tae

    2013-12-01

    To determine the correlation of qualitative shear wave elastography (SWE) pattern classification to quantitative SWE measurements and whether it is representative of quantitative SWE values with similar performances. From October 2012 to January 2013, 267 breast masses of 236 women (mean age: 45.12 ± 10.54 years, range: 21-88 years) who had undergone ultrasonography (US), SWE, and subsequent biopsy were included. US BI-RADS final assessment and qualitative and quantitative SWE measurements were recorded. Correlations between pattern classification and mean elasticity, maximum elasticity, elasticity ratio and standard deviation were evaluated. Diagnostic performances of grayscale US, SWE parameters, and US combined with SWE values were calculated and compared. Of the 267 breast masses, 208 (77.9%) were benign and 59 (22.1%) were malignant. Pattern classifications correlated significantly with all quantitative SWE measurements, showing the highest correlation with maximum elasticity, r = 0.721 (P < 0.05). Pattern classification shows high correlation to maximum stiffness and may be representative of quantitative SWE values. When combined with grayscale US, SWE improves the specificity of US. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  7. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)

  8. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  9. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) The true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.
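    The "most frequent gene tree" step reduces to counting topologies across per-gene trees and taking the mode. A minimal sketch, with a toy three-taxon encoding (each tree represented by its set of clades so identical topologies compare equal; the data are invented):

```python
from collections import Counter

def mgs_tree(gene_trees):
    """Return the most frequent topology among per-gene trees (the
    'maximum gene-support' tree) together with its support count."""
    counts = Counter(gene_trees)
    topology, support = counts.most_common(1)[0]
    return topology, support

# Toy example with taxa A, B, C: two competing topologies across five genes.
t_ab = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})
t_bc = frozenset({frozenset({"B", "C"}), frozenset({"A", "B", "C"})})
trees = [t_ab, t_ab, t_bc, t_ab, t_bc]
best, n = mgs_tree(trees)
print(n)  # 3 genes support the (A,B) grouping
```

    A real pipeline would first canonicalize each inferred tree (e.g. its unrooted bipartition set) so that topologically identical trees hash to the same key.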

  10. Effective capacity of multiple antenna channels: Correlation and keyhole

    KAUST Repository

    Zhong, Caijun; Ratnarajah, Tharm; Wong, Kaikit; Alouini, Mohamed-Slim

    2012-01-01

    In this study, the authors derive the effective capacity limits for multiple antenna channels which quantify the maximum achievable rate with consideration of link-layer delay-bound violation probability. Both correlated multiple-input single

  11. Portal pressure correlated to visceral circulation times

    Energy Technology Data Exchange (ETDEWEB)

    Friman, L [Serafimerlasarettet, Stockholm (Sweden)

    1979-01-01

    Visceral angiography was performed in 7 patients with normal portal pressure and in 10 with portal hypertension. Circulation times, size of vessels and portal pressure were determined. At celiac angiography, a direct correlation was found between time for maximum filling of portal vein and portal pressure, provided no vascular abnormalities existed. At superior mesenteric angiography such a correlation was not found; loss of flow by shunts in portal hypertension being one explanation. Portocaval shunts are common in the celiac system, but uncommon in the superior mesenteric system.

  12. Do Self-Regulated Processes such as Study Strategies and Satisfaction Predict Grade Point Averages for First and Second Generation College Students?

    Science.gov (United States)

    DiBenedetto, Maria K.

    2010-01-01

    The current investigation sought to determine whether self-regulatory variables: "study strategies" and "self-satisfaction" correlate with first and second generation college students' grade point averages, and to determine if these two variables would improve the prediction of their averages if used along with high school grades and SAT scores.…

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations do not form a vector space: the barycentric approaches are approximations to the Riemannian metric, and the subsequent corrections are inherent in the least squares estimation.
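    The gap between a barycentric average and the Riemannian (Fréchet) mean can be illustrated in the simplest rotation group, SO(2), where rotations are plane angles; this is a simplified stand-in for the paper's quaternion/matrix setting, with made-up angles.

```python
import numpy as np

def chordal_mean(angles):
    """'Barycenter' average: embed each rotation as a unit vector in the
    plane, average the vectors, and project back onto the circle."""
    return np.arctan2(np.mean(np.sin(angles)), np.mean(np.cos(angles)))

def geodesic_mean(angles, iters=50):
    """Riemannian (Frechet) mean on the circle: iteratively move the estimate
    by the average of the wrapped angular residuals."""
    mu = angles[0]
    for _ in range(iters):
        resid = np.angle(np.exp(1j * (np.asarray(angles) - mu)))  # wrap to (-pi, pi]
        mu = mu + resid.mean()
    return mu

angles = [0.0, 0.1, 2.0]
chord = chordal_mean(angles)
geo = geodesic_mean(angles)
print(chord, geo)  # the two means differ noticeably
```

    The barycentric estimate is pulled toward the clustered rotations because averaging happens in the ambient Euclidean space, while the geodesic mean minimizes summed squared arc-length distances on the manifold itself.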

  14. The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve

    Directory of Open Access Journals (Sweden)

    Baiget Ernest

    2016-12-01

    Full Text Available The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested using maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05. Low to moderate correlations were also found between serve velocity and wrist, elbow and shoulder flexion – extension, leg and back extension and shoulder external rotation (r = 0.36 – 0.53; p = 0.377 – 0.054. Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001. The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion.

  15. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying variational calculus, i.e. the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  16. Temporal correlation functions of concentration fluctuations: an anomalous case.

    Science.gov (United States)

    Lubelski, Ariel; Klafter, Joseph

    2008-10-09

    We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, displaying therefore ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.
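    The subdiffusion underlying these correlation functions is easy to reproduce numerically: a CTRW with heavy-tailed waiting times gives an ensemble-averaged MSD growing as t^α with α < 1. A minimal simulation sketch, with all parameters (waiting-time law, step size, measurement times) chosen for illustration rather than taken from the paper:

```python
import numpy as np

def ctrw_msd(alpha, n_walkers, times, seed=0):
    """Ensemble-averaged MSD of a CTRW with heavy-tailed waiting times
    psi(t) ~ t^-(1+alpha), 0 < alpha < 1, and unit +/-1 jumps."""
    rng = np.random.default_rng(seed)
    msd = np.zeros(len(times))
    t_max = times[-1]
    for _ in range(n_walkers):
        t, x = 0.0, 0
        pos = np.zeros(len(times))
        i = 0
        while t < t_max:
            t += rng.pareto(alpha) + 1.0          # heavy-tailed waiting time
            while i < len(times) and times[i] < t:
                pos[i] = x                        # position at measurement times
                i += 1
            x += 1 if rng.random() < 0.5 else -1  # unbiased unit jump
        msd += pos**2
    return msd / n_walkers

times = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0])
msd = ctrw_msd(alpha=0.5, n_walkers=2000, times=times)
slope = np.polyfit(np.log(times), np.log(msd), 1)[0]
print(slope)  # close to alpha = 0.5: subdiffusive, MSD ~ t^alpha
```

    Repeating the measurement with a delayed start (aging) or along a single long trajectory (time averaging) would expose the nonstationarity and ergodicity breaking discussed in the abstract.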

  17. Development of quick-response area-averaged void fraction meter

    International Nuclear Information System (INIS)

    Watanabe, Hironori; Iguchi, Tadashi; Kimura, Mamoru; Anoda, Yoshinari

    2000-11-01

    The authors are performing experiments to investigate BWR thermal-hydraulic instability under the coupling of neutronics and thermal-hydraulics. For these experiments, it is necessary to measure instantaneously the area-averaged void fraction in a rod bundle under high-temperature/high-pressure gas-liquid two-phase flow conditions. Since no existing void fraction meters were suitable for these requirements, we developed a new practical void fraction meter. The principle of the meter is based on the electrical conductance changing with void fraction in gas-liquid two-phase flow. In this meter, the metal flow channel wall is used as one electrode and an L-shaped line electrode installed at the center of the flow channel is used as the other. This electrode arrangement makes possible instantaneous measurement of area-averaged void fraction even within the metal flow channel. We performed experiments with air/water two-phase flow to clarify the meter's performance. Experimental results indicated that the void fraction is approximated by α = 1 - I/I0, where α is the void fraction and I the current (I0 is the current at α = 0). This relation holds over the wide void fraction range of 0-70%. The difference between α and 1 - I/I0 was approximately 10% at maximum. The major reasons for the difference are the void distribution over the measurement area and electrical insulation of the center electrode by bubbles. The principle and structure of this void fraction meter are very basic and simple; therefore, the meter can be applied to various fields of gas-liquid two-phase flow study. (author)
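    The reported calibration relation is simple enough to apply directly: the meter reading is the measured current normalized by the all-liquid current. A small worked sketch (the current values are illustrative):

```python
def void_fraction(current, current_full_liquid):
    """Area-averaged void fraction from the conductance-type meter,
    using the reported relation alpha = 1 - I/I0 (valid up to ~70% void,
    with up to ~10% deviation per the abstract)."""
    return 1.0 - current / current_full_liquid

alpha = void_fraction(current=3.0, current_full_liquid=10.0)
print(alpha)  # 0.7, i.e. 70% void fraction
```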

  18. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  19. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduc- tion of effects of ... and broad central peak. The idea of.

  20. Is the poleward migration of tropical cyclone maximum intensity associated with a poleward migration of tropical cyclone genesis?

    Science.gov (United States)

    Daloz, Anne Sophie; Camargo, Suzana J.

    2018-01-01

    A recent study showed that the global average latitude where tropical cyclones achieve their lifetime-maximum intensity has been migrating poleward at a rate of about one-half degree of latitude per decade over the last 30 years in each hemisphere. However, it does not answer a critical question: is the poleward migration of tropical cyclone lifetime-maximum intensity associated with a poleward migration of tropical cyclone genesis? In this study we examine this question. First, we analyze changes in the environmental variables associated with tropical cyclone genesis, namely entropy deficit, potential intensity, vertical wind shear, vorticity, skin temperature and specific humidity at 500 hPa in reanalysis datasets between 1980 and 2013. Then, a selection of these variables is combined into two tropical cyclone genesis indices that empirically relate tropical cyclone genesis to large-scale variables. We find a shift toward a greater (smaller) average potential number of genesis events at higher (lower) latitudes over most regions of the Pacific Ocean, which is consistent with a migration of tropical cyclone genesis towards higher latitudes. We then examine the global best track archive and find coherent and significant poleward shifts in mean genesis position over the Pacific Ocean basins.

  1. Bose-Einstein correlation in Landau's model

    International Nuclear Information System (INIS)

    Hama, Y.; Padula, S.S.

    1986-01-01

    Bose-Einstein correlation is studied by taking an expanding fluid given by Landau's model as the source, where each space-time point is considered as an independent and chaotic emitting center with Planck's spectral distribution. As expected, the correlation depends on the relative angular positions as well as on the overall localization of the measuring system, and it turns out that the average dimension of the source increases with the multiplicity N_ch

  2. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  3. Influence of Maximum Inbreeding Avoidance under BLUP EBV Selection on Pinzgau Population Diversity

    Directory of Open Access Journals (Sweden)

    Radovan Kasarda

    2011-05-01

    Full Text Available Evaluated was the effect of mating (random vs. maximum avoidance of inbreeding) under a BLUP EBV selection strategy. The existing population structure was analyzed by Monte Carlo stochastic simulation with the aim of minimizing the increase of inbreeding. Maximum avoidance of inbreeding under BLUP selection resulted in an increase of inbreeding comparable to random mating over an average of 10 generations of development. After 10 generations of the simulated mating strategy, ΔF = 6.51% (2 sires), 5.20% (3 sires), 3.22% (4 sires) and 2.94% (5 sires) was observed. With an increased number of selected sires, a decrease of inbreeding was observed. With 4 or 5 sires, the increase of inbreeding was comparable to random mating with phenotypic selection. To preserve genetic diversity and prevent population loss it is important to minimize the increase of inbreeding in small populations. The classical approach was based on balancing the ratio of sires and dams in the mating program. In contrast, in most commercial populations a small number of sires is used with a high mating ratio.

  4. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
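    The Ricker curve referenced above has a closed-form peak, which is what "maximum recruitment" denotes: with R = a·S·exp(-b·S), recruitment is maximized at S* = 1/b, giving R_max = a/(b·e). A small sketch with invented parameter values (not the study's estimates):

```python
import math

def ricker(spawners, a, b):
    """Ricker stock-recruitment curve R = a * S * exp(-b * S); `a` is the
    recruits-per-spawner slope at low stock size."""
    return a * spawners * math.exp(-b * spawners)

def max_recruitment(a, b):
    """Peak of the Ricker curve: attained at S* = 1/b, R_max = a / (b * e)."""
    return a / (b * math.e)

a, b = 2.0, 0.001                     # hypothetical parameters
r_max = max_recruitment(a, b)
print(r_max)                          # ~735.8 recruits
print(ricker(1.0 / b, a, b))          # same value, evaluated directly
```

    The second-stage regressions in the abstract then model how this R_max varies with landscape attributes across stocks.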

  5. correlation between sunshine hours and climatic parameters at four

    African Journals Online (AJOL)

    Mgina

    A multiple regression technique was used to assess the correlation between sunshine hours and maximum and ... solar radiation depends on the model and the climatic parameter used. ..... A stochastic Markov chain model for simulating wind ...

  6. The use of the average plutonium-content for criticality evaluation of boiling water reactor mixed oxide-fuel transport and storage packages

    International Nuclear Information System (INIS)

    Mattera, C.

    2003-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on a conservative hypothesis assuming that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium-Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by a homogeneous assembly enriched at the maximum Plutonium content. As this calculation hypothesis is extremely conservative, Cogema Logistics (formerly Transnucleaire) has studied a new calculation method based on the use of the average Plutonium content in criticality studies. The use of the average Plutonium content instead of the real Plutonium-content profiles yields a higher reactivity value, which makes it globally conservative. This method can be applied to all Boiling Water Reactor Mixed Oxide complete fuel assemblies of type 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content by mass does not exceed 15%; it provides advantages which are discussed in the paper. (author)

  7. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system utilizing the principle of maximum power point tracking can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control
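    The hill-climbing idea mentioned above is usually called perturb-and-observe: step the operating point, keep the direction if power rose, reverse it if power fell. A minimal sketch against a toy PV power curve; the curve shape, parameters, and step size are all illustrative, not from the paper's hardware.

```python
import math

def pv_power(v, i_sc=5.0, v_oc=20.0, v_t=2.0):
    """Toy single-diode-style PV power curve P(V) (illustrative parameters)."""
    return v * i_sc * (1.0 - math.exp((v - v_oc) / v_t))

def perturb_and_observe(power, v0=5.0, step=0.05, iters=500):
    """Hill-climbing MPPT: keep stepping the operating voltage in the
    direction that increased the measured power; reverse on a decrease."""
    v, p = v0, power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:
            direction = -direction      # overshot the peak: reverse
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(pv_power)
print(v_mpp, p_mpp)  # settles in a small oscillation around the true MPP
```

    In a real regulator the "power" sample would be the measured battery charging current (as in the abstract), and the step size trades tracking speed against steady-state oscillation.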

  8. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  9. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  10. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R = 0.87, p < 0.05) were found for chest CR images acquired without an antiscatter grid. A statistically significant correlation has thus been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
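    The correlation step itself is a plain Pearson computation over paired clinical and physical scores measured at the same tube voltages. A sketch with hypothetical values (illustrative only, not the study's data):

```python
import numpy as np

# Hypothetical paired scores at six tube-voltage settings: visual grading
# analysis score (clinical) and contrast-to-noise ratio (physical).
vgas = np.array([62.0, 58.5, 55.0, 52.0, 49.5, 47.0])
cnr = np.array([14.2, 13.1, 12.0, 11.2, 10.1, 9.4])

r = np.corrcoef(vgas, cnr)[0, 1]   # Pearson's correlation coefficient
print(r)  # close to +1: both metrics fall together as tube voltage rises
```

    A significance test on R (as quoted in the abstract) would additionally compare the observed coefficient against its sampling distribution for the given number of voltage settings.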

  11. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  12. Correlation of Space Shuttle Landing Performance with Post-Flight Cardiovascular Dysfunction

    Science.gov (United States)

    McCluskey, R.

    2004-01-01

    Introduction: Microgravity induces cardiovascular adaptations resulting in orthostatic intolerance on re-exposure to normal gravity. Orthostasis could interfere with performance of complex tasks during the re-entry phase of Shuttle landings. This study correlated measures of Shuttle landing performance with post-flight indicators of orthostatic intolerance. Methods: Relevant Shuttle landing performance parameters routinely recorded at touchdown by NASA included downrange and crossrange distances, airspeed, and vertical speed. Measures of cardiovascular changes were calculated from operational stand tests performed in the immediate post-flight period on mission commanders from STS-41 to STS-66. Stand test data analyzed included maximum standing heart rate, mean increase in maximum heart rate, minimum standing systolic blood pressure, and mean decrease in standing systolic blood pressure. Pearson correlation coefficients were calculated with the null hypothesis that there was no statistically significant linear correlation between stand test results and Shuttle landing performance. A correlation coefficient of at least 0.5 was taken as the threshold for significance. Results: There were no statistically significant correlations between landing performance and measures of post-flight cardiovascular dysfunction. Discussion: There was no evidence that post-flight cardiovascular stand test data correlated with Shuttle landing performance. This implies that variations in landing performance were not due to space flight-induced orthostatic intolerance.
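
    In the spirit of the screening described above, the Pearson coefficient and the |r| ≥ 0.5 cut-off can be sketched in a few lines of pure Python; the landing-error and heart-rate values below are invented for illustration, not NASA data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: downrange touchdown error (m) vs. maximum standing heart rate (bpm)
downrange = [120.0, -80.0, 300.0, 45.0, -150.0, 210.0, 10.0, -60.0]
max_hr    = [98.0, 104.0, 91.0, 110.0, 95.0, 102.0, 99.0, 107.0]

r = pearson_r(downrange, max_hr)
significant = abs(r) >= 0.5   # screening threshold used in the study
```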

  13. Portal pressure correlated to visceral circulation times

    International Nuclear Information System (INIS)

    Friman, L.

    1979-01-01

    Visceral angiography was performed in 7 patients with normal portal pressure and in 10 with portal hypertension. Circulation times, size of vessels and portal pressure were determined. At celiac angiography, a direct correlation was found between time for maximum filling of portal vein and portal pressure, provided no vascular abnormalities existed. At superior mesenteric angiography such a correlation was not found; loss of flow by shunts in portal hypertension being one explanation. Portocaval shunts are common in the celiac system, but uncommon in the superior mesenteric system. (Auth.)

  14. Eddy correlation measurements of oxygen uptake in deep ocean sediments

    DEFF Research Database (Denmark)

    Berg, P.; Glud, Ronnie Nøhr; Hume, A.

    2010-01-01

    We present and compare small sediment-water fluxes of O-2 determined with the eddy correlation technique, with in situ chambers, and from vertical sediment microprofiles at a 1450 m deep-ocean site in Sagami Bay, Japan. The average O-2 uptake for the three approaches, respectively, was 1.62 +/- 0.23 (SE, n = 7), 1.65 +/- 0.33 (n = 2), and 1.43 +/- 0.15 (n = 25) mmol m(-2) d(-1). The very good agreement between the eddy correlation flux and the chamber flux serves as a new, important validation of the eddy correlation technique. It demonstrates that the eddy correlation instrumentation available today is precise and can resolve accurately even very small benthic O-2 fluxes. The correlated fluctuations in vertical velocity and O-2 concentration that give the eddy flux had average values of 0.074 cm s(-1) and 0.049 mu M. The latter represents only 0.08% of the 59 mu M mean O-2 concentration.
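
    The core of the eddy correlation (eddy covariance) technique described above is the mean product of the fluctuations in vertical velocity and O-2 concentration; a minimal sketch with made-up numbers (not the Sagami Bay data):

```python
def eddy_flux(w, c):
    """Eddy covariance flux <w'c'>: the mean product of the fluctuations
    w' = w - <w> (vertical velocity) and c' = c - <c> (O2 concentration)."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n

# Toy series in which upward motion (w' > 0) carries low-O2 water:
w = [0.1, -0.1, 0.1, -0.1]      # vertical velocity, cm/s
c = [58.0, 60.0, 58.0, 60.0]    # O2 concentration, uM
flux = eddy_flux(w, c)          # negative: net O2 uptake by the sediment
```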

  15. Quantum correlations for bipartite continuous-variable systems

    Science.gov (United States)

    Ma, Ruifen; Hou, Jinchuan; Qi, Xiaofei; Wang, Yangyang

    2018-04-01

    Two quantum correlations Q and Q_P for (m+n)-mode continuous-variable systems are introduced in terms of the average distance between the reduced states under local Gaussian positive operator-valued measurements, and analytical formulas of these quantum correlations for bipartite Gaussian states are provided. It is shown that product states do not contain these quantum correlations and, conversely, all (m+n)-mode Gaussian states with zero quantum correlations are product states. Generally, Q ≥ Q_P, but for the symmetric two-mode squeezed thermal states, these quantum correlations are the same and a computable formula is given. In addition, Q is compared with the Gaussian geometric discord for symmetric squeezed thermal states.

  16. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (˜ 1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
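
    A regression of the kind described, AF fitted as a power law of catchment predictors in log space, can be sketched with ordinary least squares solved via the normal equations. The model form, coefficients and catchment values below are illustrative assumptions, not the paper's calibration.

```python
import math

def ols_fit(X, y):
    """Ordinary least squares: solve (X^T X) beta = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical model: log(AF) = b0 + b1*log(area) + b2*log(precip)
areas   = [10.0, 100.0, 1000.0, 50.0, 500.0, 5000.0]    # km^2 (invented)
precips = [800.0, 1200.0, 600.0, 1500.0, 900.0, 1100.0] # mm/yr (invented)
true_b  = [0.5, 1.0, 0.8]
X = [[1.0, math.log(a), math.log(p)] for a, p in zip(areas, precips)]
y = [true_b[0] + true_b[1] * x1 + true_b[2] * x2 for _, x1, x2 in X]
beta = ols_fit(X, y)   # recovers true_b on this noise-free example
```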

  17. INTERDEPENDENCE BETWEEN DRY DAYS AND TEMPERATURE OF SYLHET REGION: CORRELATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    Syed Mustakim Ali Shah

    2016-01-01

    Full Text Available Climate change can have a profound impact on weather conditions around the world, such as heavy rainfall, drought, global warming and so on. Understanding and predicting these natural variations is now a key research challenge for a disaster-prone country like Bangladesh. This study focuses on the north-eastern part of Bangladesh, a hilly region that plays an important role in the ecological balance of the country along with its socio-economic development. The present study analyses the behavior of maximum temperature and dry days using different statistical tools. Pearson’s correlation matrix and Mann-Kendall’s tau are used to correlate monthly dry days with monthly maximum temperature, and also their annual trend. A moderate correlation was found mostly in dry summer months. In addition, a positive trend was observed in the Mann-Kendall trend test of yearly temperature, which might be an indication of global warming in this region.
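
    The Mann-Kendall statistic used above counts concordant minus discordant pairs; a minimal sketch (without the tie and variance corrections needed for a formal significance test), with invented temperature values:

```python
def mann_kendall(series):
    """Mann-Kendall trend statistic S and Kendall's tau.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    tau = s / (n * (n - 1) / 2)
    return s, tau

# Illustrative yearly maximum temperatures with a warming tendency:
temps = [33.1, 33.4, 33.0, 33.8, 34.0, 33.9, 34.3]
s_stat, tau = mann_kendall(temps)   # s_stat > 0: upward trend
```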

  18. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

    Full Text Available Abstract Background Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. The motion sickness induced by using a rehabilitation system not only inhibits effective training but also may harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness by means of physiological data. Methods An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they suffered by a subjective score and by the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. Results The results showed adaptation to visually induced motion sickness by the repetitive presentation of the same image, both in the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. Thus, it was possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax with time. Conclusion The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
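
    ρmax as defined above is the largest Pearson coefficient obtained when one signal is slid against the other over a range of lags; a toy sketch in which the two series are synthetic sinusoids, not physiological recordings:

```python
import math

def rho_max(x, y, max_lag):
    """Maximum cross-correlation coefficient between x and y
    over integer lags in [-max_lag, max_lag]."""
    def r_at(lag):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag] if lag else y
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        da = math.sqrt(sum((u - ma) ** 2 for u in a))
        db = math.sqrt(sum((v - mb) ** 2 for v in b))
        return num / (da * db)
    return max(r_at(lag) for lag in range(-max_lag, max_lag + 1))

# Synthetic "heart rate" and a delayed "pulse transmission time" stand-in:
hr  = [math.sin(0.3 * t) for t in range(200)]
ptt = [math.sin(0.3 * (t - 5)) for t in range(200)]   # 5-sample delay
rho = rho_max(hr, ptt, 10)   # close to 1.0 despite the delay
```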

  19. ACCUMULATION OF SELECTED METALS IN UMBILICAL CORD BLOOD OF NULLIPAROUS AND MULTIPAROUS WOMEN AND CORRELATION WITH THE NEWBORN´S PARAMETERS

    Directory of Open Access Journals (Sweden)

    Iwona Kozikowska

    2012-08-01

    Full Text Available The aim of this study was to determine the content of magnesium, copper, cadmium and iron in umbilical cord blood of newborns depending on the number of pregnancies. Correlations were established between the average concentrations of these metals in cord blood and the newborns' parameters. The study material was collected immediately after delivery at the Department of Obstetrics and Gynecology in Bytom. Cord blood was taken from 99 women between 29 and 40 years old. The women were divided into two groups: nulliparous and multiparous. The concentration of metals in the cord blood was determined by flame atomic absorption spectrometry (FAAS). The conducted study demonstrates that magnesium, copper, cadmium and iron were present in all samples, of both nulliparous women and multiparous mothers. The maximum concentration of cadmium in umbilical cord blood was observed among multiparous mothers (2.229 mg.kg-1 d.m.). Higher concentrations of Fe, Mg and Cu were observed in the umbilical cord blood of nulliparous women than of multiparous mothers. Some statistically significant correlations were noted between iron, copper and the newborns' parameters. Parity influences the concentration of cadmium in umbilical cord blood, with higher levels found in multiparous women. The average content of iron in cord blood did not decrease with parity, indicating that this element is preferentially taken up by the child.

  20. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
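
    The composite average described above is simply the mean of whatever observations fall inside the averaging window; a minimal sketch with invented sampling times and values (the paper's optimal and suboptimal estimators additionally weight observations using signal and error covariances):

```python
def composite_average(times, values, t0, t1):
    """Composite average: simple mean of all observations with time in [t0, t1).
    Returns None when the window contains no observations (a data gap)."""
    window = [v for t, v in zip(times, values) if t0 <= t < t1]
    return sum(window) / len(window) if window else None

# Irregularly spaced chlorophyll-like observations (hypothetical):
times  = [0.5, 2.0, 3.1, 9.4, 11.0, 12.7]   # days
values = [1.2, 1.5, 1.1, 0.9, 1.4, 1.3]     # mg/m^3
avg_first_week = composite_average(times, values, 0.0, 7.0)  # mean of first 3 samples
```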

  1. Disentangling the effects of alternation rate and maximum run length on judgments of randomness

    Directory of Open Access Journals (Sweden)

    Sabine G. Scholl

    2011-08-01

    Full Text Available Binary sequences are characterized by various features. Two of these characteristics---alternation rate and run length---have repeatedly been shown to influence judgments of randomness. The two characteristics, however, have usually been investigated separately, without controlling for the other feature. Because the two features are correlated but not identical, it seems critical to analyze their unique impact, as well as their interaction, so as to understand more clearly what influences judgments of randomness. To this end, two experiments on the perception of binary sequences orthogonally manipulated alternation rate and maximum run length (i.e., the length of the longest run within the sequence). Results show that alternation rate consistently exerts a unique effect on judgments of randomness, but that the effect of alternation rate is contingent on the length of the longest run within the sequence. The effect of maximum run length was found to be small and less consistent. Together, these findings extend prior randomness research by integrating literature from the realms of perception, categorization, and prediction, as well as by showing the unique and joint effects of alternation rate and maximum run length on judgments of randomness.
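
    The two sequence features manipulated in the experiments are straightforward to compute; a minimal sketch on coin-flip-style strings:

```python
def alternation_rate(seq):
    """Fraction of adjacent pairs that differ (0 = constant, 1 = strict alternation)."""
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def max_run_length(seq):
    """Length of the longest run of identical symbols."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

seq = "HTHHHTHT"
features = (alternation_rate(seq), max_run_length(seq))
```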

  2. RELIABILITY OF THE ONE REPETITION-MAXIMUM POWER CLEAN TEST IN ADOLESCENT ATHLETES

    Science.gov (United States)

    Faigenbaum, Avery D.; McFarland, James E.; Herman, Robert; Naclerio, Fernando; Ratamess, Nicholas A.; Kang, Jie; Myer, Gregory D.

    2013-01-01

    Although the power clean test is routinely used to assess strength and power performance in adult athletes, the reliability of this measure in younger populations has not been examined. Therefore, the purpose of this study was to determine the reliability of the one repetition maximum (1 RM) power clean in adolescent athletes. Thirty-six male athletes (age 15.9 ± 1.1 yrs, body mass 79.1 ± 20.3 kg, height 175.1 ± 7.4 cm) who had more than 1 year of training experience with weightlifting exercises performed a 1 RM power clean on two nonconsecutive days in the afternoon following standardized procedures. All test procedures were supervised by a senior level weightlifting coach and consisted of a systematic progression in test load until the maximum resistance that could be lifted for one repetition using proper exercise technique was determined. Data were analyzed using an intraclass correlation coefficient (ICC [2,k]), Pearson correlation coefficient (r), repeated measures ANOVA, Bland-Altman plot, and typical error analyses. Analysis of the data revealed that the test measures were highly reliable, demonstrating a test-retest ICC of 0.98 (95% CI = 0.96–0.99). Testing also demonstrated a strong relationship between 1 RM measures on trial 1 and trial 2 (r=0.98). No injuries occurred during the study period and the testing protocol was well-tolerated by all subjects. These findings indicate that 1 RM power clean testing has a high degree of reproducibility in trained male adolescent athletes when standardized testing procedures are followed and qualified instruction is present. PMID:22233786

  3. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables

  4. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  5. Angular correlations and high energy evolution

    International Nuclear Information System (INIS)

    Kovner, Alex; Lublinsky, Michael

    2011-01-01

    We address the question of to what extent JIMWLK evolution is capable of taking into account angular correlations in a high energy hadronic wave function. Our conclusion is that angular (and indeed other) correlations in the wave function cannot be reliably calculated without taking into account Pomeron loops in the evolution. As an example we study numerically the energy evolution of angular correlations between dipole scattering amplitudes in the framework of the large N_c approximation to JIMWLK evolution (the 'projectile dipole model'). Target correlations are introduced via averaging over an (isotropic) ensemble of anisotropic initial conditions. We find that correlations disappear very quickly with rapidity even inside the saturation radius. This is in accordance with our physical picture of JIMWLK evolution. The actual correlations inside the saturation radius in the target QCD wave function, on the other hand, should remain sizable at any rapidity.

  6. Correlation of Hydronephrosis Index to Society of Fetal Urology Hydronephrosis Scale

    Directory of Open Access Journals (Sweden)

    Krishnan Venkatesan

    2009-01-01

    Full Text Available Purpose. We seek to correlate conventional hydronephrosis (HN) grade and the hydronephrosis index (HI). Methods. We examined 1207 hydronephrotic kidneys by ultrasound. HN was classified by Society of Fetal Urology guidelines. HN was then gauged using HI, a reproducible, standardized, and dimensionless measurement of renal area. We then calculated the average HI for each HN grade. Results. Comparing HI to standard SFU HN grade, average HI is 89.3 for grade I; average HI is 83.9 for grade II; average HI is 73.0 for grade III; average HI is 54.6 for grade IV. Conclusions. HI correlates well with SFU HN grade. The HI serves as a quantitative measure of HN. HI can be used to track HN over time. Versus conventional grading, HI may be more sensitive in defining severe (grades III and IV) HN, and in indicating resolving, stable, or worsening HN, thus providing more information for clinical decision-making and HN management.

  7. A novel fast phase correlation algorithm for peak wavelength detection of Fiber Bragg Grating sensors.

    Science.gov (United States)

    Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F

    2014-03-24

    Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm are more precise and accurate but require a considerable higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC showed to be about 50 times faster than the cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
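
    The cross-correlation demodulation that the FPC algorithm is benchmarked against can be reduced to finding the lag that maximizes the correlation between a reference and a measured spectrum. A coarse integer-sample sketch follows (real interrogators interpolate to sub-sample resolution; the Gaussian-shaped "FBG peak" is an invented stand-in):

```python
import math

def xcorr_shift(ref, meas):
    """Integer-sample shift of `meas` relative to `ref` that maximizes
    their (unnormalized) cross-correlation."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        lo, hi = max(0, -lag), min(n, n - lag)
        val = sum(ref[i] * meas[i + lag] for i in range(lo, hi))
        if val > best_val:
            best_val, best_lag = val, lag
    return best_lag

# Gaussian-like FBG reflection peak, then the same peak shifted by 3 samples:
ref  = [math.exp(-((i - 15) / 3.0) ** 2) for i in range(40)]
meas = [math.exp(-((i - 18) / 3.0) ** 2) for i in range(40)]
shift = xcorr_shift(ref, meas)   # in samples; multiply by the wavelength step
```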

  8. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, which can provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1 % of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
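
    Once the time delay τ between the two sensor signals is estimated (e.g. from the cross-correlation peak), the leak position follows from simple geometry; a sketch with illustrative pipe length and wave speed:

```python
def leak_position(tau, sensor_distance, wave_speed):
    """Distance of the leak from sensor 1, given tau = t1 - t2 (arrival-time
    difference between sensors) and the elastic wave speed in the pipe.
    From d1 + d2 = L and d1 - d2 = v * tau:  d1 = (L + v*tau) / 2."""
    return (sensor_distance + wave_speed * tau) / 2.0

# 300 m pipeline section, ~1000 m/s wave speed (both assumed values):
d1 = leak_position(-0.1, 300.0, 1000.0)   # leak 100 m from sensor 1
```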

  9. Measurement of in-bore side loads and comparison to first maximum yaw

    Directory of Open Access Journals (Sweden)

    Donald E. Carlucci

    2016-04-01

    Full Text Available In-bore yaw of a projectile in a gun tube has been shown to result in range loss if the yaw is significant. An attempt was made to determine if relationships between in-bore yaw and projectile First Maximum Yaw (FMY were observable. Experiments were conducted in which pressure transducers were mounted near the muzzle of a 155 mm cannon in three sets of four. Each set formed a cruciform pattern to obtain a differential pressure across the projectile. These data were then integrated to form a picture of what the overall pressure distribution was along the side of the projectile. The pressure distribution was used to determine a magnitude and direction of the overturning moment acting on the projectile. This moment and its resulting angular acceleration were then compared to the actual first maximum yaw observed in the test. The degree of correlation was examined using various statistical techniques. Overall uncertainty in the projectile dynamics was between 20% and 40% of the mean values of FMY.

  10. A study of Solar-Enso correlation with southern Brazil tree ring index (1955- 1991)

    Science.gov (United States)

    Rigozo, N.; Nordemann, D.; Vieira, L.; Echer, E.

    The effects of solar activity and El Niño-Southern Oscillation on tree growth in Southern Brazil were studied by correlation analysis. Trees for this study were native Araucaria (Araucaria angustifolia) from four locations in Rio Grande do Sul State, in Southern Brazil: Canela (29o18`S, 50o51`W, 790 m asl), Nova Petropolis (29o2`S, 51o10`W, 579 m asl), Sao Francisco de Paula (29o25`S, 50o24`W, 930 m asl) and Sao Martinho da Serra (29o30`S, 53o53`W, 484 m asl). From these four sites, an average tree ring index for this region was derived for the period 1955-1991. Linear correlations were computed on annual and 10-year running averages of this tree ring index, of the sunspot number Rz and of the SOI. For annual averages, the correlation coefficients were low, and the multiple regression between tree rings and SOI and Rz indicates that 20% of the variance in tree rings was explained by solar activity and ENSO variability. However, when the correlations were computed on 10-year running averages, the correlation coefficients were much higher. A clear anticorrelation is observed between SOI and the index (r=-0.81), whereas Rz and the index show a positive correlation (r=0.67). The multiple regression of 10-year running averages indicates that 76% of the variance in the tree ring index was explained by solar activity and ENSO. These results indicate that the effects of solar activity and ENSO on tree rings are better seen on long timescales.
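
    The smoothing-then-correlating procedure used above can be sketched as a moving mean followed by a Pearson coefficient. The two series below are synthetic stand-ins that share a slow common trend plus independent fast oscillations, chosen so that smoothing raises the correlation, as in the study:

```python
import math

def running_mean(x, window):
    """Mean of each consecutive `window`-sample slice (trailing window)."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = math.sqrt(sum((a - mx) ** 2 for a in x))
    dy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (dx * dy)

# Synthetic "sunspot" and "tree ring" series, 37 yearly values (1955-1991):
years = range(37)
rz    = [50.0 + 1.5 * t + 8.0 * math.sin(2.9 * t) for t in years]
index = [1.0 + 0.02 * t + 0.5 * math.sin(1.7 * t + 1.0) for t in years]

r_annual   = pearson_r(rz, index)
r_smoothed = pearson_r(running_mean(rz, 10), running_mean(index, 10))
# r_smoothed exceeds r_annual: smoothing suppresses the fast components
```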

  11. Quantifying meta-correlations in financial markets

    Science.gov (United States)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications to risk management, portfolio optimization, and to the increased stability of financial markets.

  12. Streamwise evolution of statistical events and the triple correlation in a model wind turbine array

    Science.gov (United States)

    Viestenz, Kyle; Cal, Raúl Bayoán

    2013-11-01

    Hot-wire anemometry data, obtained from a wind tunnel experiment containing a 3 × 3 wind turbine array, are used to conditionally average the Reynolds stresses. Nine profiles at the centerline behind the array are analyzed to characterize the turbulent velocity statistics of the wake flow. Quadrant analysis yields statistical events occurring in the wake of the wind farm, where quadrants 2 and 4 produce ejections and sweeps, respectively. A balance between these quadrants is expressed via the ΔSo parameter, which attains a maximum value at the bottom tip and changes sign near the top tip of the rotor. These are then associated with the triple correlation term present in the turbulent kinetic energy equation of the fluctuations. The development of these various quantities is assessed in light of wake remediation and energy transport, and possesses significance for closure models. National Science Foundation: ECCS-1032647.
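
    Quadrant analysis splits the instantaneous Reynolds-stress contributions u'v' by the signs of the fluctuations (Q2 = ejections, Q4 = sweeps); a minimal sketch with invented velocity samples, including a simple Q2-Q4 balance in the spirit of the abstract's ΔSo parameter:

```python
def quadrant_stress(u, v):
    """Per-quadrant contributions to the mean Reynolds stress <u'v'>.
    Q1: u'>=0, v'>=0   Q2: u'<0, v'>=0 (ejection)
    Q3: u'<0,  v'<0    Q4: u'>=0, v'<0 (sweep)"""
    n = len(u)
    um, vm = sum(u) / n, sum(v) / n
    q = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
    for ui, vi in zip(u, v):
        up, vp = ui - um, vi - vm
        if up >= 0:
            k = 1 if vp >= 0 else 4
        else:
            k = 2 if vp >= 0 else 3
        q[k] += up * vp / n
    return q

# Toy fluctuation samples (zero-mean by construction):
u = [1.0, -1.0, -1.0, 1.0]
v = [1.0, 1.0, -1.0, -1.0]
q = quadrant_stress(u, v)
delta_s = q[2] - q[4]   # balance between ejections and sweeps
```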

  13. Measurement of the Maximum Frequency of Electroglottographic Fluctuations in the Expiration Phase of Volitional Cough as a Functional Test for Cough Efficiency.

    Science.gov (United States)

    Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-10-01

    The hypotheses of the present study were that the maximum frequency of fluctuation of electroglottographic (EGG) signals in the expiration phase of volitional cough (VC) reflects cough efficiency, and that this EGG parameter is affected by impaired laryngeal closure, expiratory effort strength, and gender. Twenty normal healthy adults and 20 patients diagnosed with unilateral vocal fold paralysis (UVFP) participated; each was fitted with EGG electrodes on the neck, had a transnasal laryngo-fiberscope inserted, and was asked to perform weak/strong VC tasks while EGG signals and high-speed digital images (HSDIs) of the larynx were recorded. The maximum frequency was calculated in the EGG fluctuation region coinciding with vigorous vocal fold vibration in the laryngeal HSDIs. In addition, each participant underwent spirometry for measurement of three aerodynamic parameters, including peak expiratory air flow (PEAF), during weak/strong VC tasks. Significant differences were found for both maximum EGG frequency and PEAF between the healthy and UVFP groups and between the weak and strong VC tasks. Among the three cough aerodynamic parameters, PEAF showed the highest positive correlation with the maximum EGG frequency. The correlation coefficients between the maximum EGG frequency and PEAF recorded simultaneously were 0.574 for the whole group, and 0.782/0.717/0.823/0.688 for the male/female/male-healthy/male-UVFP subgroups, respectively. Consequently, the maximum EGG frequency measured in the expiration phase of VC was shown to reflect the velocity of expiratory airflow to some extent and is suggested to be affected by vocal fold physical properties, glottal closure condition, and expiratory function.

  14. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices and with a good performance in all cases. (paper)

  15. Correlation and multifractality in climatological time series

    International Nuclear Information System (INIS)

    Pedron, I T

    2010-01-01

    Climate can be described by statistical analysis of mean values of atmospheric variables over a period. It is possible to detect correlations in climatological time series and to classify their behavior. In this work the Hurst exponent, which can characterize correlation and persistence in time series, is obtained by using the Detrended Fluctuation Analysis (DFA) method. Data series of temperature, precipitation, humidity, solar radiation, wind speed, maximum squall, atmospheric pressure and random series are studied. Furthermore, the multifractality of such series is analyzed by applying the Multifractal Detrended Fluctuation Analysis (MF-DFA) method. The results indicate the presence of correlation (persistent character) in all climatological series, as well as multifractality. A larger and longer set of data could provide better results, indicating the universality of the exponents.
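
    A first-order DFA sketch: integrate the mean-removed series, detrend it linearly in windows of size s, and read a Hurst-like exponent α off the slope of log F(s) versus log s. The scales and synthetic test signals below are illustrative choices; for white noise α ≈ 0.5, for a random walk α ≈ 1.5.

```python
import math
import random

def dfa_alpha(series, scales):
    """First-order Detrended Fluctuation Analysis: alpha is the slope of
    log F(s) vs log s, where F(s) is the RMS deviation of the integrated,
    mean-removed series from a per-window linear fit."""
    n = len(series)
    mean = sum(series) / n
    profile, acc = [], 0.0
    for x in series:                      # integrate the mean-removed series
        acc += x - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        t_mean = (s - 1) / 2.0
        den = sum((t - t_mean) ** 2 for t in range(s))
        sq, cnt = 0.0, 0
        for w in range(n // s):           # non-overlapping windows of size s
            seg = profile[w * s:(w + 1) * s]
            y_mean = sum(seg) / s
            slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg)) / den
            icept = y_mean - slope * t_mean
            sq += sum((y - (icept + slope * t)) ** 2 for t, y in enumerate(seg))
            cnt += s
        log_s.append(math.log(s))
        log_f.append(math.log(math.sqrt(sq / cnt)))
    lsm = sum(log_s) / len(log_s)         # least-squares slope in log-log space
    lfm = sum(log_f) / len(log_f)
    return (sum((a - lsm) * (b - lfm) for a, b in zip(log_s, log_f))
            / sum((a - lsm) ** 2 for a in log_s))

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]   # white noise
walk, acc = [], 0.0
for x in noise:                                          # its running sum: a random walk
    acc += x
    walk.append(acc)
scales = [8, 16, 32, 64, 128]
alpha = dfa_alpha(noise, scales)        # near 0.5
alpha_walk = dfa_alpha(walk, scales)    # much larger, near 1.5
```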

  16. Average [O II] nebular emission associated with Mg II absorbers: dependence on Fe II absorption

    Science.gov (United States)

    Joshi, Ravi; Srianand, Raghunathan; Petitjean, Patrick; Noterdaeme, Pasquier

    2018-05-01

    We investigate the effect of Fe II equivalent width (W2600) and fibre size on the average luminosity of [O II] λλ3727, 3729 nebular emission associated with Mg II absorbers (at 0.55 ≤ z ≤ 1.3) in the composite spectra of quasars obtained with 3 and 2 arcsec fibres in the Sloan Digital Sky Survey. We confirm the presence of strong correlations between [O II] luminosity (L_{[O II]}) and the equivalent width (W2796) and redshift of Mg II absorbers. However, we show that L_{[O II]} and the average luminosity surface density suffer from fibre-size effects. More importantly, for a given fibre size, the average L_{[O II]} strongly depends on the equivalent width of the Fe II absorption lines and is found to be higher for Mg II absorbers with R ≡ W2600/W2796 ≥ 0.5. In fact, we show that the observed strong correlations of L_{[O II]} with W2796 and z of Mg II absorbers are mainly driven by such systems. Direct [O II] detections also confirm the link between L_{[O II]} and R. Therefore, one has to pay attention to fibre losses and to the dependence of the redshift evolution of Mg II absorbers on W2600 before using them as a luminosity-unbiased probe of the global star formation rate density. We show that the [O II] nebular emission detected in the stacked spectrum is not dominated by a few direct detections (i.e. detections at the ≥3σ significance level). On average, the systems with R ≥ 0.5 and W2796 ≥ 2 Å are more reddened, showing a colour excess E(B - V) ˜ 0.02, with respect to the systems with R < 0.5, and most likely trace the high H I column density systems.

  17. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  18. Search for few-nucleon correlations in doubly inclusive processes

    International Nuclear Information System (INIS)

    Strikman, M.I.; Frankfurt, L.L.

    1981-01-01

    Earlier work showed that the few-nucleon correlation model is useful in calculations of the inclusive production of cumulative particles at high energies. Certain integrated characteristics of doubly inclusive spectra in high-energy processes are investigated and permit direct information to be obtained on the structure of the correlations. Scattering of a high-energy lepton by a light nucleus with production of a cumulative nucleon is studied, with particular attention to the average transverse momentum of the hadrons recorded, and to the doubly inclusive cross section averaged over the transverse momenta of the particles emitted in the forward hemisphere. Expressions are obtained for the integrated cross sections.

  19. The development of a tensile-shear punch correlation for yield properties of model austenitic alloys

    Energy Technology Data Exchange (ETDEWEB)

    Hankin, G.L.; Faulkner, R.G. [Loughborough Univ. (United Kingdom); Hamilton, M.L.; Garner, F.A. [Pacific Northwest National Lab., Richland, WA (United States)

    1997-08-01

    The effective shear yield and maximum strengths of a set of neutron-irradiated, isotopically tailored austenitic alloys were evaluated using the shear punch test. The dependence on composition and neutron dose showed the same trends as were observed in the corresponding miniature tensile specimen study conducted earlier. A single tensile-shear punch correlation was developed for the three alloys, in which the maximum shear stress (Tresca) criterion was successfully applied to predict the slope. The correlation will predict the tensile yield strength of the three different austenitic alloys tested to within ±53 MPa. The accuracy of the correlation improves with increasing material strength, to within ± MPa for predicting tensile yield strengths in the range of 400-800 MPa.

  20. Cross-correlation of motor activity signals from dc-magnetoencephalography, near-infrared spectroscopy, and electromyography.

    Science.gov (United States)

    Sander, Tilmann H; Leistner, Stefanie; Wabnitz, Heidrun; Mackert, Bruno-Marcel; Macdonald, Rainer; Trahms, Lutz

    2010-01-01

    Neuronal and vascular responses due to finger movements were synchronously measured using dc-magnetoencephalography (dcMEG) and time-resolved near-infrared spectroscopy (trNIRS). The finger movements were monitored with electromyography (EMG). Cortical responses related to the finger movement sequence were extracted by independent component analysis from both the dcMEG and the trNIRS data. The temporal relations between the EMG rate, dcMEG, and trNIRS responses were assessed pairwise using the cross-correlation function (CCF), which does not require epoch averaging. A positive lag on a scale of seconds was found for the maximum of the CCF between dcMEG and trNIRS. A zero lag is observed for the CCF between dcMEG and EMG. Additionally, this CCF exhibits oscillations at the frequency of individual finger movements. These findings show that the dcMEG, with a bandwidth up to 8 Hz, records both slow and faster neuronal responses, whereas the vascular response is confirmed to change on a scale of seconds.
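    The lag estimation via the cross-correlation function can be sketched as follows; the two signals are synthetic stand-ins for the neuronal and vascular time courses, with a known 5-sample delay built in:

```python
import numpy as np

def ccf_max_lag(a, b, max_lag):
    """Lag (in samples) at which the normalized cross-correlation of a with b
    peaks; a positive result means b trails a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    # correlate a[t] with b[t + k], clipping both slices to valid indices
    ccf = [np.mean(a[max(0, -k):n - max(0, k)] * b[max(0, k):n - max(0, -k)])
           for k in lags]
    return int(lags[np.argmax(ccf)])

rng = np.random.default_rng(1)
neural = rng.standard_normal(2000)                        # stand-in dcMEG trace
vascular = np.roll(neural, 5) + 0.1 * rng.standard_normal(2000)  # delayed copy
lag = ccf_max_lag(neural, vascular, max_lag=20)           # recovers the delay of 5
```

    As in the study, no epoch averaging is needed: the CCF is computed directly on the continuous records.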

  1. Climate-simulated raceway pond culturing: quantifying the maximum achievable annual biomass productivity of Chlorella sorokiniana in the contiguous USA

    Energy Technology Data Exchange (ETDEWEB)

    Huesemann, M.; Chavis, A.; Edmundson, S.; Rye, D.; Hobbs, S.; Sun, N.; Wigmosta, M.

    2017-09-13

    Chlorella sorokiniana (DOE 1412) emerged as one of the most promising microalgae strains from the NAABB consortium project, with a remarkable doubling time under optimal conditions of 2.57 h. However, its maximum achievable annual biomass productivity in outdoor ponds in the contiguous United States remained unknown. In order to address this knowledge gap, this alga was cultured in indoor LED-lighted and temperature-controlled raceways in nutrient-replete freshwater (BG-11) medium at pH 7 under conditions simulating the daily sunlight intensity and water temperature fluctuations during three seasons in Southern Florida, an optimal outdoor pond culture location for this organism identified by biomass growth modeling. Prior strain characterization indicated that the average maximum specific growth rate (µmax) at 36 ºC declined continuously with increasing pH, with µmax corresponding to 5.92, 5.83, 4.89, and 4.21 day-1 at pH 6, 7, 8, and 9, respectively. In addition, the maximum specific growth rate declined nearly linearly with increasing salinity until no growth was observed above 35 g/L NaCl. In the climate-simulated culturing studies, the volumetric ash-free dry-weight-based biomass productivities during the linear growth phase were 57, 69, and 97 mg/L-day for 30-year average light and temperature simulations for January (winter), March (spring), and July (summer), respectively, which corresponds to average areal productivities of 11.6, 14.1, and 19.9 g/m2-day at a constant pond depth of 20.5 cm. The photosynthetic efficiencies (PAR) in the three climate-simulated pond culturing experiments ranged from 4.1 to 5.1%. The annual biomass productivity was estimated as ca. 15 g/m2-day, nearly double the U.S. Department of Energy (DOE) 2015 State of Technology annual cultivation productivity of 8.5 g/m2-day, but still significantly below the projected 2022 target of ca. 25 g/m2-day (U.S. DOE, 2016) for economic microalgal biofuel production, indicating the need for
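    Two of the quoted figures can be cross-checked with short unit conversions (pond depth and volumetric productivities are taken from the abstract; the doubling-time relation t_d = ln 2 / µ is standard):

```python
import math

# doubling time implied by the pH 6 maximum specific growth rate (day^-1);
# ~2.8 h, close to (though under different conditions than) the quoted 2.57 h
mu_max = 5.92
td_hours = 24.0 * math.log(2) / mu_max

# volumetric (mg/L-day == g/m^3-day) to areal productivity at 20.5 cm depth
depth_m = 0.205
areal = {month: vol * depth_m for month, vol in
         {"Jan": 57, "Mar": 69, "Jul": 97}.items()}   # g/m^2-day
```

    These conversions reproduce (to rounding) the 11.6, 14.1, and 19.9 g/m2-day areal values quoted above.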

  2. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.

  3. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    International Nuclear Information System (INIS)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu

    2008-01-01

    The average maximum and mean γ indices were very low (well below 1), indicating good agreement between dose distributions. Increasing the cine duration generally increased the dose agreement. In the follow-up study, 49 of 50 patients had 100% of points within the PTV pass the γ criteria. The average maximum and mean γ indices were again well below 1, indicating good agreement. Dose calculation on RACT from cine CT is negligibly different from dose calculation on RACT from 4D-CT. Differences can be decreased further by increasing the cine duration of the cine CT scan.

  4. Feynman-α correlation analysis by prompt-photon detection

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Yamada, Sumasu; Hasegawa, Yasuhiro; Horiguchi, Tetsuo

    1998-01-01

    Two-detector Feynman-α measurements were carried out at the UTR-KINKI reactor, a light-water-moderated and graphite-reflected reactor, by detecting high-energy prompt gamma rays. For comparison, conventional measurements detecting neutrons were also performed. These measurements covered the subcriticality range from 0 to $1.8. The gate-time dependence of the variance- and covariance-to-mean ratios measured by gamma-ray detection was nearly identical with that obtained using standard neutron-detection techniques. Consequently, the prompt-neutron decay constants inferred from the gamma-ray correlation data agreed with those from the neutron data. Furthermore, the correlated-to-uncorrelated amplitude ratios obtained by gamma-ray detection depended significantly on the low-energy discriminator level of the single-channel analyzer. The discriminator level was chosen to maximize the amplitude ratio, and the maximum amplitude ratio was much larger than that obtained by neutron detection. The subcriticality dependence of the decay constant obtained by gamma-ray detection was consistent with that obtained by neutron detection and followed the linear relation based on the one-point kinetic model in the vicinity of delayed critical. These experimental results suggest that the gamma-ray correlation technique can be applied to measure reactor kinetic parameters more efficiently.
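    The variance-to-mean (Feynman-Y) statistic underlying these measurements can be illustrated with synthetic gate counts; the "correlated source" below is an idealized pair emitter, not a reactor model:

```python
import numpy as np

def feynman_y(counts):
    """Feynman variance-to-mean ratio minus one for an array of gate counts.
    Y = 0 for a Poisson (uncorrelated) source; Y > 0 signals correlated counts."""
    return np.var(counts) / np.mean(counts) - 1.0

rng = np.random.default_rng(2)
poisson = rng.poisson(10.0, 50_000)       # uncorrelated source, 10 counts/gate
bursts = rng.poisson(5.0, 50_000)         # primaries, each yielding 2 counts
correlated = 2 * bursts                   # same mean of 10 counts/gate
y_unc = feynman_y(poisson)                # ~0: no excess variance
y_cor = feynman_y(correlated)             # ~1: excess variance from pairs
```

    The excess variance from correlated emission is exactly what the gate-time-dependent variance- and covariance-to-mean ratios in the experiment exploit.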

  5. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contaminations on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. - Highlights: ► Ecotoxicological assays show significant benefits for detecting on-site contaminations. ► MaxEnt rebuilds the qualitative link between concentrations and ecotoxicological assays. ► MaxEnt shows a similar pattern when compared with the groundwater concentration map. ► MaxEnt is a valuable method, especially when a quantitative relation is not at hand. - A Maximum Entropy method to rebuild qualitative relationships between benzene groundwater concentrations and their ecotoxicological effect.
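    A minimal sketch of the MaxEnt principle under a single mean constraint (a generic illustration, not the paper's bivariate probability-table reconstruction; the bin values and target mean are assumptions):

```python
import numpy as np

def maxent_pmf(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy pmf on `values` with a fixed mean: p_i proportional to
    exp(lam * x_i), with the Lagrange multiplier lam found by bisection."""
    x = np.asarray(values, float)

    def mean_for(lam):
        e = lam * x
        w = np.exp(e - e.max())            # shift exponent for stability
        p = w / w.sum()
        return (p * x).sum()

    for _ in range(iters):                 # mean_for is monotone in lam
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    e = lam * x
    w = np.exp(e - e.max())
    return w / w.sum()

# e.g. five concentration bins with a target mean shifted above the uniform one
p = maxent_pmf(np.arange(5), target_mean=2.5)
p_uniform = maxent_pmf(np.arange(5), target_mean=2.0)   # recovers uniform
```

    With no constraint beyond the mean, the solution is the least-committal (maximum-entropy) distribution matching it; when the target equals the uniform mean, the uniform pmf is recovered.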

  6. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    Science.gov (United States)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
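    The 3 × 3 scan with second-order (parabolic) interpolation described above can be sketched as follows; this is a simplified reading of the Blakely-Simpson procedure with its original condition only (not the paper's improved algorithm), and the synthetic density-contrast edge is an assumption:

```python
import numpy as np

def max_horizontal_gradient_points(g, dx=1.0):
    """Flag grid points where the horizontal gradient magnitude exceeds both
    neighbors in one of four directions of the 3x3 neighborhood, and
    interpolate the peak magnitude with a second-order polynomial."""
    gy, gx = np.gradient(g, dx)
    h = np.hypot(gx, gy)                       # horizontal gradient magnitude
    peaks = []
    for i in range(1, h.shape[0] - 1):
        for j in range(1, h.shape[1] - 1):
            c = h[i, j]
            for a, b in (((i, j - 1), (i, j + 1)), ((i - 1, j), (i + 1, j)),
                         ((i - 1, j - 1), (i + 1, j + 1)),
                         ((i - 1, j + 1), (i + 1, j - 1))):
                if c > h[a] and c > h[b]:
                    denom = h[a] + h[b] - 2.0 * c          # parabola curvature
                    mag = c - (h[b] - h[a]) ** 2 / (8.0 * denom) if denom else c
                    peaks.append((i, j, mag))
                    break
    return peaks

# synthetic density-contrast edge: a tanh step whose gradient peaks at column 20
x = np.arange(41.0)
g = np.tile(np.tanh((x - 20.0) / 3.0), (41, 1))
peaks = max_horizontal_gradient_points(g)      # all interior rows, column 20
```

    On this single-edge model the original condition suffices; the additional condition proposed in the paper aims at the multi-source cases where maxima like these fail to connect into continuous boundaries.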

  7. Correlation between international prostate symptom score and uroflowmetry in patients with benign prostatic hyperplasia.

    Science.gov (United States)

    Oranusi, C K; Nwofor, A E; Mbonu, O

    2017-04-01

    To determine the correlation between severity of symptoms using the International Prostate Symptom Score (IPSS) and uroflowmetry in patients with lower urinary tract symptoms-benign prostatic hyperplasia (LUTS-BPH). We prospectively collected data from 51 consecutive men who presented with LUTS-BPH at the Nnamdi Azikiwe University Teaching Hospital, Nnewi, Nigeria, from January 2012 through December 2014. Symptom severity was assessed using the self-administered IPSS questionnaire. We also performed uroflowmetry using the Urodyn 1000 (Dantec, serial no. 5534). The mean age of the patients was 67.2 ± 9.7 years (range 40-89 years). The most common presenting IPSS-LUTS was nocturia (100%), followed by urinary frequency (98%), straining (92.0%), weak stream (84.3%), urgency (41.2%), incomplete voiding (39.2%), and intermittency (35.3%). Most of the patients had moderate symptoms (58.8%) on IPSS, with a mean value of 13.5 ± 3.0. The mean Qmax was 15.6 ± 18.7 mL/s and the mean voided volume was 193.0 ± 79.2 mL. About one-third of the patients (39.2%) had an unobstructed flow pattern based on Qmax. Correlation analysis showed a weak correlation between IPSS and voiding time (r = 0.220, P > 0.05), flow time (r = 0.128, P > 0.05), and time to maximum flow (r = 0.246, P > 0.05). These correlations were not significant (P > 0.05). IPSS showed a negative correlation with maximum flow rate (r = -0.368, P < 0.05) and voided volume (r = -0.164, P > 0.05). This negative correlation was significant for maximum flow rate. Correlation between IPSS and Qmax was negative and statistically significant. This implies that an inverse relationship exists between IPSS and Qmax, which remains the only important parameter in uroflowmetry. There was no statistically significant correlation between IPSS and the other variables of uroflowmetry.
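    The statistics reported here are plain Pearson coefficients; a sketch with synthetic data (not the study's measurements) shows how r and its t-statistic for significance testing are computed:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 51                                          # same cohort size as the study
qmax = rng.uniform(5, 30, n)                    # hypothetical Qmax values (mL/s)
ipss = 25 - 0.6 * qmax + rng.normal(0, 3, n)    # symptoms worsen as flow drops

r = np.corrcoef(ipss, qmax)[0, 1]               # Pearson correlation (negative)
t = r * np.sqrt((n - 2) / (1 - r ** 2))         # t statistic for H0: rho = 0
```

    A t value far outside roughly ±2 (with n - 2 = 49 degrees of freedom) corresponds to a significant correlation, the situation the study reports for IPSS versus Qmax.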

  8. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  9. RELATIONS BETWEEN ANTHROPOMETRIC CHARACTERISTICS AND COORDINATION IN PEOPLE WITH ABOVE-AVERAGE MOTOR ABILITIES

    Directory of Open Access Journals (Sweden)

    Milan Cvetković

    2011-09-01

    The sample of 149 male persons, all students at the Faculty of Sport and Physical Education with an average age of 20.15 decimal years (±0.83), underwent a battery of tests consisting of 17 anthropometric measures taken from the measures index of the International Biological Program and 4 tests designed to assess coordination: Coordination with stick, Hand and leg drumming, Nonrhythmic drumming and Slalom with three balls. One statistically significant canonical correlation was determined by means of canonical correlation analysis. The canonical factor isolated from the space of coordination variables was the one used for assessment of whole-body coordination – Coordination with stick. On the other hand, among the variables of the right array, those covering longitudinality – Body height and Arm length, circular dimensionality – Circumference of stretched upper arm, Circumference of bent upper arm and Circumference of upper leg, as well as subcutaneous fat tissue – Skin fold of the back, were singled out.

  10. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  11. Monodimensional estimation of maximum Reynolds shear stress in the downstream flow field of bileaflet valves.

    Science.gov (United States)

    Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo

    2002-05-01

    Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step to evaluate the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general, a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three commercially available prosthetic heart valves (PHVs) of wide diffusion, starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the relationship, the Reynolds stresses ratio (RSR) between maximum RSS and Reynolds normal stress in the main flow direction. The RSR was found to be dependent upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.
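    The link between measured velocity fluctuations and the maximum Reynolds shear stress can be sketched with synthetic samples; the two-component case below uses the Mohr's-circle (maximum-shear, Tresca-like) construction the abstract refers to, and all flow parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# hypothetical correlated velocity fluctuations u', v' (m/s)
u = rng.normal(0.0, 0.8, n)
v = 0.5 * u + rng.normal(0.0, 0.4, n)     # built-in u'v' cross-correlation

Ruu = np.mean(u * u)                      # Reynolds normal stress / rho
Rvv = np.mean(v * v)
Ruv = np.mean(u * v)                      # Reynolds shear stress / rho
# maximum shear stress of the 2-D Reynolds stress tensor (Mohr's circle)
tau_max = np.sqrt(((Ruu - Rvv) / 2.0) ** 2 + Ruv ** 2)
rsr = tau_max / Ruu                       # Reynolds stresses ratio of the paper
```

    The maximum shear always bounds the measured off-axis component Ruv from above, which is why reporting Ruv alone can understate the mechanical load on blood elements.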

  12. DNA pattern recognition using canonical correlation algorithm.

    Science.gov (United States)

    Sarkar, B K; Chakraborty, Chiranjib

    2015-10-01

    We performed canonical correlation analysis as an unsupervised statistical tool to describe related views of the same semantic object for identifying patterns. A pattern recognition technique based on canonical correlation analysis (CCA) was proposed for finding a required genetic code in a DNA sequence. Two related but different objects were considered: one was a particular pattern, and the other was the test DNA sequence. CCA found correlations between two observations of the same semantic pattern and the test sequence. It is concluded that the relationship possesses its maximum value at the position where the pattern exists. As a case study, the potential of CCA was demonstrated on sequences found from HIV-1 preferred integration sites. The subsequences flanking the integration site on the left and right were considered as the two views, and statistically significant relationships were established between these two views to elucidate the viral preference as an important factor for the correlation.
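    The core CCA computation can be sketched with the QR-based (Björck-Golub) algorithm; the two "views" below are synthetic numeric sequences sharing a latent component, standing in for the encoded flanking subsequences (the encoding itself is not specified in the abstract):

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between data matrices X (n x p), Y (n x q):
    the singular values of Qx.T @ Qy, with Qx, Qy orthonormal bases of the
    centered views, are the canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return min(1.0, np.linalg.svd(qx.T @ qy, compute_uv=False)[0])

rng = np.random.default_rng(5)
z = rng.standard_normal(500)                        # shared latent "pattern"
X = np.column_stack([z + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
Y = np.column_stack([z + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
rho_related = first_canonical_correlation(X, Y)     # close to 1
rho_unrelated = first_canonical_correlation(rng.standard_normal((500, 2)),
                                            rng.standard_normal((500, 2)))
```

    A high first canonical correlation flags positions where the two views share structure, which is the maximum-relationship criterion the abstract describes.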

  13. Correlation between average tissue depth data and quantitative accuracy of forensic craniofacial reconstructions measured by geometric surface comparison method.

    Science.gov (United States)

    Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi

    2015-05-01

    Accuracy is the most important factor supporting the reliability of a forensic facial reconstruction (FFR) compared to the corresponding actual face. A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by geometric surface comparison. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head-scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results propose that appropriate average tissue depth data are important to increase the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.

  14. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  15. Vortex-averaged Arctic ozone depletion in the winter 2002/2003

    Directory of Open Access Journals (Sweden)

    T. Christensen

    2005-01-01

    A total ozone depletion of 68±7 Dobson units between 380 and 525 K from 10 December 2002 to 10 March 2003 is derived from ozone sonde data by the vortex-average method, taking into account both diabatic descent of the air masses and transport of air into the vortex. When the vortex is divided into three equal-area regions, the results are 85±9 DU for the collar region (closest to the edge), 52±5 DU for the vortex centre and 68±7 DU for the middle region between centre and collar. Our results compare well with other studies: we find good agreement with ozone loss deduced from SAOZ data, with results inferred from POAM III observations and with results from tracer-tracer correlations using HF as the long-lived tracer. We find a higher ozone loss than that deduced by tracer-tracer correlations using CH4. We have made a careful comparison with Match results: the results were recalculated using a common time period, vortex edge definition and height interval. The two methods generally compare very well, except at the 475 K level, which exhibits an unexplained discrepancy.

  16. Average electronegativity, electronic polarizability and optical basicity of lanthanide oxides for different coordination numbers

    International Nuclear Information System (INIS)

    Zhao Xinyu; Wang Xiaoli; Lin Hai; Wang Zhiqiang

    2008-01-01

    On the basis of new electronegativity values, the electronic polarizability and optical basicity of lanthanide oxides are calculated from the concept of average electronegativity given by Asokamani and Manjula. The estimated values are in close agreement with our previous conclusions. In particular, we obtain new data on the electronic polarizability and optical basicity of lanthanide sesquioxides for different coordination numbers (6-12). The present investigation suggests that both electronic polarizability and optical basicity increase gradually with increasing coordination number. We also observe another double-peak effect: the electronic polarizability and optical basicity of trivalent lanthanide oxides show a gradual decrease and then an abrupt increase at Europia and Ytterbia. Furthermore, close correlations among average electronegativity, optical basicity, electronic polarizability and coordination number are investigated in this paper.

  17. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Directory of Open Access Journals (Sweden)

    Radames J B Cordero

    Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as the implications of these increasing trends in the mechanisms and economics of scientific communication.

  18. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Science.gov (United States)

    Cordero, Radames J B; de León-Rodriguez, Carlos M; Alvarado-Torres, John K; Rodriguez, Ana R; Casadevall, Arturo

    2016-01-01

    Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as, the implications of these increasing trends in the mechanisms and economics of scientific communication.

  19. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
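    The cost saving of ERF is easy to see in a toy sketch: average the driving fields first and run once, instead of running once per GCM and averaging the outputs. The stand-in "RCM" below is a hypothetical linear response, for which the two orderings agree exactly; real RCMs are nonlinear, which is where the two approaches can differ:

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical boundary-condition fields from four driving GCMs (lat x lon)
gcm_ibcs = rng.normal(290.0, 2.0, size=(4, 10, 10))   # e.g. temperatures in K

def toy_rcm(ibc):
    """Stand-in for an RCM run: a linear response to its boundary conditions."""
    return 0.8 * ibc + 1.5

# conventional MME: run the model once per GCM, then average the outputs
mme = np.mean([toy_rcm(f) for f in gcm_ibcs], axis=0)
# ERF: average the IBCs first, then run the model a single time
erf = toy_rcm(gcm_ibcs.mean(axis=0))
```

    For the linear stand-in, mme and erf are identical fields; ERF's appeal is that the single run stays an unaltered, physically consistent model solution even when the response is nonlinear.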

  20. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  1. Volatility-constrained multifractal detrended cross-correlation analysis: Cross-correlation among Mainland China, US, and Hong Kong stock markets

    Science.gov (United States)

    Cao, Guangxi; Zhang, Minjia; Li, Qingchen

    2017-04-01

    This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.

  2. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  3. The development of a tensile-shear punch correlation for yield properties of model austenitic alloys

    International Nuclear Information System (INIS)

    Hankin, G.L.; Faulkner, R.G.; Hamilton, M.L.; Garner, F.A.

    1997-01-01

    The effective shear yield and maximum strengths of a set of neutron-irradiated, isotopically tailored austenitic alloys were evaluated using the shear punch test. The dependence on composition and neutron dose showed the same trends as were observed in the corresponding miniature tensile specimen study conducted earlier. A single tensile-shear punch correlation was developed for the three alloys, in which the maximum shear stress (Tresca) criterion was successfully applied to predict the slope. The correlation predicts the tensile yield strength of the three different austenitic alloys tested to within ±53 MPa. The accuracy of the correlation improves with increasing material strength, to within ± MPa for predicting tensile yield strengths in the range of 400-800 MPa

  4. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  5. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges in large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues: first, quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems has so far been in the range of several tens to a few hundred watts. Sublimating the carbon fibers requires a large volume-specific enthalpy of 85 J/mm³. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/sec requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time to process CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/sec with a thermal damage below 10 μm was demonstrated.

  6. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  7. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    Science.gov (United States)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions; a lowland, highland and mountainous catchment. The circular measures of location and dispersion were used in the long-term samples of dates of maxima. The mixture of von Mises distributions was assumed as the theoretical distribution function of the date of winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and a version of the Kuiper's and Watson's test. Results show that the number of components varied between catchments and it was different for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include circular grouping catchments based on similarity between distribution functions and the linkage between dates of maximum precipitation and maximum flow.
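    The circular measures of location and dispersion used in the record above can be sketched as follows: dates of annual maxima are mapped onto a circle, and the mean resultant vector gives both the mean date and a concentration measure. This is a minimal illustration, not the study's von Mises mixture fitting; the function name and the 365.25-day period are assumptions.

```python
import numpy as np

def circular_stats(days_of_year, period=365.25):
    """Circular mean date and mean resultant length for dates of maxima."""
    theta = 2 * np.pi * np.asarray(days_of_year, dtype=float) / period  # dates -> angles
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mean_angle = np.arctan2(S, C) % (2 * np.pi)  # circular mean direction
    r = np.hypot(C, S)                           # r near 1 = dates tightly clustered
    mean_day = mean_angle * period / (2 * np.pi)
    return mean_day, r
```

    Note that dates straddling the year boundary (e.g. day 360 and day 5) average correctly to early January, which an arithmetic mean would not do.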

  8. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  9. Positronium Yields in Liquids Determined by Lifetime and Angular Correlation Measurements

    DEFF Research Database (Denmark)

    Mogensen, O. E.; Jacobsen, F. M.

    1982-01-01

    Positron lifetime and angular correlation spectra were measured for 36 pure liquids, CCl4 mixtures with hexane and diethylether, and C6F6 mixtures with hexane. Apparent ortho-Ps yields, I'3, were determined as the intensity of the long-lived component in the lifetime spectra, while the apparent para-Ps yields, I'1, were obtained as the intensity of the narrowest gaussian in a three-gaussian fit to the angular correlation spectra. The ratio I'3/I'1, expected to be 3, was found to be instead 2.3 (average value for 3 ethers), 2.5 (average value for 10 linear, branched, and cyclic aliphatic hydrocarbons), 3.2 (average value for 8 aromatic hydrocarbons), 2.6 (average value for 5 alcohols). Values of this ratio for various other liquids are also given. The results for the mixtures show how I'3 and I'1 vary as the Ps formation is inhibited (CCl4 mixtures) or enhanced (C6F6 mixtures). The most...

  10. Identification of "ever-cropped" land (1984-2010) using Landsat annual maximum NDVI image composites: Southwestern Kansas case study.

    Science.gov (United States)

    Maxwell, Susan K; Sylvester, Kenneth M

    2012-06-01

    A time series of 230 intra- and inter-annual Landsat Thematic Mapper images was used to identify land that was ever cropped during the years 1984 through 2010 for a five-county region in southwestern Kansas. Annual maximum Normalized Difference Vegetation Index (NDVI) image composites (NDVI(ann-max)) were used to evaluate the inter-annual dynamics of cropped and non-cropped land. Three feature images were derived from the 27-year NDVI(ann-max) image time series and used in the classification: 1) maximum NDVI value that occurred over the entire 27-year time span (NDVI(max)), 2) standard deviation of the annual maximum NDVI values for all years (NDVI(sd)), and 3) standard deviation of the annual maximum NDVI values for years 1984-1986 (NDVI(sd84-86)) to improve Conservation Reserve Program land discrimination. Results of the classification were compared to three reference data sets: county-level USDA Census records (1982-2007) and two digital land cover maps (Kansas 2005 and USGS Trends Program maps (1986-2000)). Area of ever-cropped land for the five counties was on average 11.8% higher than the area estimated from Census records. Overall agreement between the ever-cropped land map and the 2005 Kansas map was 91.9%, and 97.2% for the Trends maps. Converting the intra-annual Landsat data set to a single annual maximum NDVI image composite considerably reduced the data set size, eliminated cloud and cloud-shadow effects, yet maintained information important for discriminating cropped land. Our results suggest that Landsat annual maximum NDVI image composites will be useful for characterizing land use and land cover change for many applications.
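    The compositing and feature-image construction described above can be sketched with NumPy, assuming cloud-masked pixels are stored as NaN; array layouts and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def annual_max_composite(stack):
    """stack: (n_scenes, rows, cols) NDVI scenes for one year, NaN where cloud-masked.
    Returns the per-pixel annual maximum NDVI composite."""
    return np.nanmax(stack, axis=0)

def feature_images(ndvi_ann_max):
    """ndvi_ann_max: (n_years, rows, cols) stack of annual max composites.
    Returns the three classification features from the record above."""
    ndvi_max = ndvi_ann_max.max(axis=0)            # NDVI(max): max over all years
    ndvi_sd = ndvi_ann_max.std(axis=0)             # NDVI(sd): inter-annual variability
    ndvi_sd_84_86 = ndvi_ann_max[:3].std(axis=0)   # NDVI(sd84-86): first three years
    return ndvi_max, ndvi_sd, ndvi_sd_84_86
```

    Taking the NaN-aware maximum over each year's scenes is what removes cloud and cloud-shadow pixels while keeping the peak-greenness signal used to separate cropped from non-cropped land.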

  11. The correlation function for density perturbations in an expanding universe. I - Linear theory

    Science.gov (United States)

    Mcclelland, J.; Silk, J.

    1977-01-01

    The evolution of the two-point correlation function for adiabatic density perturbations in the early universe is studied. Analytical solutions are obtained for the evolution of linearized spherically symmetric adiabatic density perturbations and the two-point correlation function for these perturbations in the radiation-dominated portion of the early universe. The results are then extended to the regime after decoupling. It is found that: (1) adiabatic spherically symmetric perturbations comparable in scale with the maximum Jeans length would survive the radiation-dominated regime; (2) irregular fluctuations are smoothed out up to the scale of the maximum Jeans length in the radiation era, but regular fluctuations might survive on smaller scales; (3) in general, the only surviving structures for irregularly shaped adiabatic density perturbations of arbitrary but finite scale in the radiation regime are the size of or larger than the maximum Jeans length in that regime; (4) infinite plane waves with a wavelength smaller than the maximum Jeans length but larger than the critical dissipative damping scale could survive the radiation regime; and (5) black holes would also survive the radiation regime and might accrete sufficient mass after decoupling to nucleate the formation of galaxies.

  12. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Waligorska, Marta [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland); Molski, Andrzej, E-mail: amolski@amu.edu.pl [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland)

    2012-07-25

    Highlights: • We study model selection and parameter recovery from single-molecule FRET experiments. • We examine the maximum likelihood-based analysis of two-color photon trajectories. • The number of observed photons determines the performance of the method. • For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Foerster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer-generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.
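    The AIC/BIC model-selection step mentioned above can be illustrated with a minimal sketch; in the actual analysis the log-likelihoods come from fitting candidate kinetic models to the photon arrival trajectories, while here they are simply supplied as inputs. Function names and the example numbers are illustrative.

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion for a fit with k parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion; n is the number of observed photons."""
    return k * np.log(n) - 2 * loglik

def select_model(candidates, n):
    """candidates: dict name -> (max log-likelihood, n_params); lowest BIC wins."""
    return min(candidates, key=lambda m: bic(candidates[m][0], candidates[m][1], n))
```

    BIC's log(n) penalty grows with the number of photons, which is consistent with the record's finding that photon count is the key parameter governing model selection.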

  13. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. First, the disturbance terms follow the general covariance structure Cov(w_i, w_j) = S with s_ij ≠ 0 for all i ≠ j. Second, the diagonal elements of S are not all identical but s_ij = 0 for all i ≠ j, i.e., S = diag(s_11, s_22, …, s_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.

  14. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2 at.% doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse durations in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp-pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were revealed to be nearly equivalent or superior to those of a high-quality single crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve high efficiency and good beam quality, with a beam parameter product of 16 mm mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  15. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  16. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
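    A defect-robust averaging of the kind described above can be sketched as follows, assuming voids and unwrapping failures are marked as NaN. The rejection thresholds are illustrative placeholders; the published algorithm's criteria are run-time adjustable and considerably more elaborate (e.g. alignment-drift removal is omitted here).

```python
import numpy as np

def robust_phase_average(phase_maps, reject_frac=0.05, k=3.0):
    """phase_maps: (n_maps, rows, cols) with NaN marking voids/artifacts.
    Rejects maps with large-area defects, then prunes per-pixel outliers."""
    maps = np.asarray(phase_maps, dtype=float)
    bad = np.isnan(maps).mean(axis=(1, 2))
    maps = maps[bad <= reject_frac]          # drop maps dominated by defects
    med = np.nanmedian(maps, axis=0)
    mad = np.nanmedian(np.abs(maps - med), axis=0) + 1e-12
    outlier = np.abs(maps - med) > k * 1.4826 * mad  # robust z-score threshold
    maps = np.where(outlier, np.nan, maps)   # prune unreliable pixels only
    return np.nanmean(maps, axis=0), np.nanstd(maps, axis=0)
```

    Pruning per pixel rather than discarding whole maps preserves the good regions of otherwise useful phase maps, which is the key idea behind avoiding the pessimistic variance bias.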

  17. Power converter with maximum power point tracking MPPT for small wind-electric pumping systems

    International Nuclear Information System (INIS)

    Lara, David; Merino, Gabriel; Salazar, Lautaro

    2015-01-01

    Highlights: • We implement a small-power wind electric pumping system. • The power converter allowed changing the operating point of the electro-pump. • Two control techniques were implemented in the power converter. • Variable V/f control increased the power generated by the permanent magnet generator. - Abstract: In this work, an AC–DC–AC direct-drive power converter was implemented for a wind electric pumping system consisting of a permanent magnet generator (PMG) of 1.3 kW and a peripheral single-phase pump of 0.74 kW. In addition, the inverter linear V/f control scheme and the maximum power point tracking (MPPT) algorithm with variable V/f were developed. The MPPT algorithm seeks to extract water over a wide range of input power by using the maximum amount of wind power available. Experimental trials at different pump pressures were conducted. With MPPT tracking under variable V/f, a power value of 1.3 kW was obtained at a speed of 350 rpm and a maximum operating hydraulic head of 50 m. At lower operating heads (between 10 and 40 m), variable V/f control increases the power generated by the PMG compared to the linear V/f control. This increase ranged between 4% and 23% depending on the operating pressure, with an average of 13%, getting close to the maximum electrical power curve of the PMG. The pump was driven at variable frequency, reaching a minimum speed of 0.5 times the rated speed. Efficiency of the power converter ranges between 70% and 95%, with a power factor between 0.4 and 0.85, depending on the operating pressure
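    The MPPT principle can be illustrated with a generic perturb-and-observe hill climb over the inverter frequency; this is a sketch of the general technique, not the authors' variable-V/f controller, and all names, limits, and step sizes are assumptions.

```python
def mppt_perturb_observe(read_power, f0, step=0.5, iters=50, f_min=5.0, f_max=60.0):
    """Perturb-and-observe MPPT: keep nudging the frequency in the same
    direction while measured power increases; reverse when it drops."""
    f, direction = f0, +1
    p_prev = read_power(f)
    for _ in range(iters):
        f = min(max(f + direction * step, f_min), f_max)
        p = read_power(f)
        if p < p_prev:
            direction = -direction  # stepped past the maximum power point
        p_prev = p
    return f
```

    In steady state the controller oscillates within one step of the maximum power point, which is why small steps trade tracking speed for extraction efficiency.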

  18. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (ie, when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  19. Maximum Efficiency per Torque Control of Permanent-Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Qingbo Guo

    2016-12-01

    Full Text Available High-efficiency permanent-magnet synchronous machine (PMSM) drive systems need not only optimally designed motors but also efficiency-oriented control strategies. However, the existing control strategies only focus on partial loss optimization. This paper proposes a novel analytic loss model of the PMSM under either sine-wave pulse-width modulation (SPWM) or space vector pulse width modulation (SVPWM) which can take into account both the fundamental loss and the harmonic loss. The fundamental loss is divided into fundamental copper loss and fundamental iron loss, which is estimated from the average flux density in the stator tooth and yoke. In addition, the harmonic loss is obtained from the Bertotti iron loss formula using the harmonic voltages of the three-phase inverter in either SPWM or SVPWM, which are calculated by double Fourier integral analysis. Based on the analytic loss model, this paper proposes a maximum efficiency per torque (MEPT) control strategy which can minimize the electromagnetic loss of the PMSM over the whole operation range. As the loss model of the PMSM is too complicated to obtain an analytical solution for the optimal loss, a golden section method is applied to locate the optimal operation point accurately, which makes the PMSM work at maximum efficiency. The optimized results between SPWM and SVPWM show that MEPT under SVPWM has a better optimization performance. Both the theory analysis and experiment results show that MEPT control can significantly improve the efficiency performance of the PMSM in each operation condition with satisfactory dynamic performance.
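    The golden section method used above to locate the minimum-loss operating point can be sketched generically; the PMSM loss model itself is far more elaborate, so a simple quadratic stands in for it here, and the function names are illustrative.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal loss on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

    The method needs only loss evaluations, no derivatives, which is what makes it suitable when the loss is an analytic but unwieldy function of the operating point.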

  20. A multimodal stress monitoring system with canonical correlation analysis.

    Science.gov (United States)

    Unsoo Ha; Changhyeon Kim; Yongsu Lee; Hyunki Kim; Taehwan Roh; Hoi-Jun Yoo

    2015-08-01

    The multimodal stress monitoring headband is proposed for a mobile stress management system. It is composed of a headband and earplugs. Electroencephalography (EEG), hemoencephalography (HEG), and heart-rate variability (HRV) can be acquired simultaneously in the proposed system for user status estimation. With the canonical correlation analysis (CCA) and temporal-kernel CCA (tkCCA) algorithms, these different signals can be combined for maximum correlation. Thanks to the proposed combination algorithm, the accuracy of the proposed system is up to 19 percentage points higher than that of a unimodal monitoring system in an n-back task.

  1. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  2. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the time taken by the number of swallows used to drink 100 ml of water. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously related swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  3. Cumulant approach to dynamical correlation functions at finite temperatures

    International Nuclear Information System (INIS)

    Tran Minhtien.

    1993-11-01

    A new theoretical approach, based on the introduction of cumulants, to calculate thermodynamic averages and dynamical correlation functions at finite temperatures is developed. The method is formulated in Liouville instead of Hilbert space and can be applied to operators which do not require to satisfy fermion or boson commutation relations. The application of the partitioning and projection methods for the dynamical correlation functions is discussed. The present method can be applied to weakly as well as to strongly correlated systems. (author). 9 refs

  4. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  5. L2 Reading Comprehension and Its Correlates: A Meta-Analysis

    Science.gov (United States)

    Jeon, Eun Hee; Yamashita, Junko

    2014-01-01

    The present meta-analysis examined the overall average correlation (weighted for sample size and corrected for measurement error) between passage-level second language (L2) reading comprehension and 10 key reading component variables investigated in the research domain. Four high-evidence correlates (with 18 or more accumulated effect sizes: L2…
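    A common way to compute a sample-size-weighted average correlation like the one above is Fisher's z transformation; this sketch assumes the standard n − 3 weighting and omits the measurement-error correction the meta-analysis also applies.

```python
import math

def weighted_mean_correlation(results):
    """results: list of (r, n) pairs from individual studies.
    Averages in Fisher z space with weights n - 3, then back-transforms."""
    zs = [(math.atanh(r), n - 3) for r, n in results if n > 3]
    z_bar = sum(z * w for z, w in zs) / sum(w for _, w in zs)
    return math.tanh(z_bar)  # back-transform to the correlation scale
```

    Averaging in z space avoids the downward bias of averaging r values directly, since the sampling distribution of z is approximately normal with variance 1/(n − 3).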

  6. Transcranial Doppler ultrasonography in children with sickle cell anemia: Clinical and laboratory correlates for elevated blood flow velocities.

    Science.gov (United States)

    Lagunju, IkeOluwa; Sodeinde, Olugbemiro; Brown, Biobele; Akinbami, Felix; Adedokun, Babatunde

    2014-02-01

    Transcranial Doppler (TCD) sonography of major cerebral arteries is now recommended for routine screening for stroke risk in children with sickle cell disease (SCD). We performed TCD studies on children with sickle cell anemia (SCA) seen at the pediatric hematology clinic over a period of 2 years. TCD scans were repeated yearly in children with normal flow velocities and every 3 months in children with elevated velocities. Findings were correlated with clinical variables, hematologic indices, and arterial oxygen saturation. Predictors of elevated velocities were identified by multiple linear regressions. We enrolled 237 children and performed a total of 526 TCD examinations. Highest time-averaged maximum flow velocities were ≥170 cm/s in 72 (30.3%) cases and ≥200 cm/s in 20 (8.4%). Young age, low hematocrit, low hemoglobin, and arterial oxygen desaturation <95% showed significant correlations with the presence of increased cerebral flow velocities. Low hematocrit, low hemoglobin concentration, young age, and arterial oxygen desaturation predicted elevated cerebral blood flow velocities and, invariably, increased stroke risk in children with SCA. Children who exhibit these features should be given high priority for TCD examination in the setting of limited resources. Copyright © 2013 Wiley Periodicals, Inc.

  7. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    Science.gov (United States)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

    Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in detecting auto-correlation at different sample lengths and to simulate artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also propose several reasonable recommendations.

  8. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ_2, ζ_3, and ζ_4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  9. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2].
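    The bond-shrinkage artifact described above can be reproduced in a few lines (not from the paper; a minimal numpy illustration with names of our own choosing): naively averaging the coordinates of two rotated conformations of a unit-length bond shortens the bond.

```python
import numpy as np

# Two conformations of the same two-atom fragment: a unit-length bond
# rotated by +60 and -60 degrees about the first atom.
theta = np.radians(60.0)
conf_a = np.array([[0.0, 0.0], [np.cos(theta), np.sin(theta)]])
conf_b = np.array([[0.0, 0.0], [np.cos(theta), -np.sin(theta)]])

# Naive coordinate averaging of the two-member "ensemble".
consensus = 0.5 * (conf_a + conf_b)

# The averaged bond is unphysically short: 0.5 instead of 1.0.
bond = np.linalg.norm(consensus[1] - consensus[0])
print(f"bond length after averaging: {bond:.2f}")
```

    A refinement step, such as the Monte Carlo procedure the abstract describes, is what restores realistic local geometry after such averaging.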

  10. Cross-Correlation Asymmetries and Causal Relationships between Stock and Market Risk

    Science.gov (United States)

    Borysov, Stanislav S.; Balatsky, Alexander V.

    2014-01-01

    We study historical correlations and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over all stocks, using 71 stock prices from the Standard & Poor's 500 index for 1994–2013. We focus on the behavior of the cross-correlations at the times of financial crises with significant jumps of market volatility. The observed historical dynamics showed that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility and vice versa. PMID:25162697
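    The lead-lag analysis described above can be sketched as follows (a simplified illustration, not the authors' code; the function name and the synthetic two-day lag are assumptions): the sample cross-correlation function is evaluated at positive and negative lags, and an asymmetric peak reveals which series leads.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Sample cross-correlation C(tau) = corr(x_t, y_{t+tau}) for |tau| <= max_lag.

    An asymmetry between positive and negative lags hints at a lead-lag
    relationship between the two series.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    out = {}
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            out[tau] = np.mean(x[: n - tau] * y[tau:])
        else:
            out[tau] = np.mean(x[-tau:] * y[: n + tau])
    return out

rng = np.random.default_rng(0)
market = np.abs(rng.normal(size=1000))                    # stand-in for market volatility
stock = np.roll(market, 2) + 0.1 * rng.normal(size=1000)  # stock follows the market by 2 days
ccf = cross_correlation(market, stock, max_lag=5)
print(max(ccf, key=ccf.get))  # 2
```

    Here the peak at a positive lag indicates that the stock series follows the market series, the regime the abstract reports near financial crashes.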

  11. Cross-correlation asymmetries and causal relationships between stock and market risk.

    Science.gov (United States)

    Borysov, Stanislav S; Balatsky, Alexander V

    2014-01-01

    We study historical correlations and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over all stocks, using 71 stock prices from the Standard & Poor's 500 index for 1994-2013. We focus on the behavior of the cross-correlations at the times of financial crises with significant jumps of market volatility. The observed historical dynamics showed that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility and vice versa.

  12. Strongly coupled fluid-particle flows in vertical channels. I. Reynolds-averaged two-phase turbulence statistics

    International Nuclear Information System (INIS)

    Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.

    2016-01-01

    Simulations of strongly coupled (i.e., high-mass-loading) fluid-particle flows in vertical channels are performed with the purpose of understanding the fundamental physics of wall-bounded multiphase turbulence. The exact Reynolds-averaged (RA) equations for high-mass-loading suspensions are presented, and the unclosed terms that are retained in the context of fully developed channel flow are evaluated in an Eulerian–Lagrangian (EL) framework for the first time. A key distinction between the RA formulation presented in the current work and previous derivations of multiphase turbulence models is the partitioning of the particle velocity fluctuations into spatially correlated and uncorrelated components, used to define the components of the particle-phase turbulent kinetic energy (TKE) and granular temperature, respectively. The adaptive spatial filtering technique developed in our previous work for homogeneous flows [J. Capecelatro, O. Desjardins, and R. O. Fox, “Numerical study of collisional particle dynamics in cluster-induced turbulence,” J. Fluid Mech. 747, R2 (2014)] is shown to accurately partition the particle velocity fluctuations at all distances from the wall. Strong segregation in the components of granular energy is observed, with the largest values of particle-phase TKE associated with clusters falling near the channel wall, while maximum granular temperature is observed at the center of the channel. The anisotropy of the Reynolds stresses both near the wall and far away is found to be a crucial component for understanding the distribution of the particle-phase volume fraction. In Part II of this paper, results from the EL simulations are used to validate a multiphase Reynolds-stress turbulence model that correctly predicts the wall-normal distribution of the two-phase turbulence statistics.

  13. Strongly coupled fluid-particle flows in vertical channels. I. Reynolds-averaged two-phase turbulence statistics

    Science.gov (United States)

    Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.

    2016-03-01

    Simulations of strongly coupled (i.e., high-mass-loading) fluid-particle flows in vertical channels are performed with the purpose of understanding the fundamental physics of wall-bounded multiphase turbulence. The exact Reynolds-averaged (RA) equations for high-mass-loading suspensions are presented, and the unclosed terms that are retained in the context of fully developed channel flow are evaluated in an Eulerian-Lagrangian (EL) framework for the first time. A key distinction between the RA formulation presented in the current work and previous derivations of multiphase turbulence models is the partitioning of the particle velocity fluctuations into spatially correlated and uncorrelated components, used to define the components of the particle-phase turbulent kinetic energy (TKE) and granular temperature, respectively. The adaptive spatial filtering technique developed in our previous work for homogeneous flows [J. Capecelatro, O. Desjardins, and R. O. Fox, "Numerical study of collisional particle dynamics in cluster-induced turbulence," J. Fluid Mech. 747, R2 (2014)] is shown to accurately partition the particle velocity fluctuations at all distances from the wall. Strong segregation in the components of granular energy is observed, with the largest values of particle-phase TKE associated with clusters falling near the channel wall, while maximum granular temperature is observed at the center of the channel. The anisotropy of the Reynolds stresses both near the wall and far away is found to be a crucial component for understanding the distribution of the particle-phase volume fraction. In Part II of this paper, results from the EL simulations are used to validate a multiphase Reynolds-stress turbulence model that correctly predicts the wall-normal distribution of the two-phase turbulence statistics.

  14. Hepatic computed tomography perfusion. Comparison of maximum slope and dual-input single-compartment methods

    International Nuclear Information System (INIS)

    Kanda, Tomonori; Yoshikawa, Takeshi; Ohno, Yoshiharu; Kanata, Naoki; Koyama, Hisanobu; Nogami, Munenobu; Takenaka, Daisuke; Sugimura, Kazuro

    2010-01-01

    The aim of the study was to compare two analytical methods, maximum slope (MS) and the dual-input single-compartment model (CM), in computed tomography (CT) measurements of hepatic perfusion and to assess the effects of extrahepatic systemic factors. A total of 109 patients underwent hepatic CT perfusion. The scans were conducted at the hepatic hilum 7-77 s after administration of contrast material. Hepatic arterial perfusion (HAP) and portal perfusion (HPP) (ml/min/100 ml) and the arterial perfusion fraction (APF, %) were calculated with the two methods, followed by correlation assessment. Partial correlation analysis was used to assess the effects on hepatic perfusion values by various factors, including age, sex, risk of cardiovascular disease, compensation for respiratory misregistration, arrival time of contrast material at the abdominal aorta, transit time from abdominal aorta to hepatic parenchyma, and liver dysfunction. The mean HAPs, HPPs, and APFs were, respectively, 31.4, 104.2, and 23.9 for MS and 27.1, 141.3, and 22.1 for CM. HAP and APF showed significant (P<0.0001) and moderate correlation (γ=0.417 and 0.548) and HPP showed poor correlation (γ=0.172) between the two methods. While MS showed weak correlations (γ=-0.39 to 0.34; P<0.001 to <0.02) between multiple extrahepatic factors and perfusion values, CM showed weak correlation only between the patients' sex and HAP (γ=0.31, P=0.001). Hepatic perfusion values estimated by the two methods are not interchangeable. CM is less susceptible to extrahepatic systemic factors. (author)

  15. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  16. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    Title 49 Transportation (2010-10-01 edition), Other Regulations Relating to Transportation (Continued), Pipeline and Hazardous... Hazardous Liquids by Pipeline, Operation and Maintenance, § 195.406 Maximum operating pressure. (a) Except for...

  17. GAS SURFACE DENSITY, STAR FORMATION RATE SURFACE DENSITY, AND THE MAXIMUM MASS OF YOUNG STAR CLUSTERS IN A DISK GALAXY. II. THE GRAND-DESIGN GALAXY M51

    International Nuclear Information System (INIS)

    González-Lópezlira, Rosa A.; Pflamm-Altenburg, Jan; Kroupa, Pavel

    2013-01-01

    We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet traveled too far from their birth sites, the poor resolution of the radio data compared to the physical sizes of the clusters results in measured Σ that are likely quite diluted compared to the actual densities relevant for the formation of the clusters.

  18. Changes in Average Annual Precipitation in Argentina’s Pampa Region and Their Possible Causes

    Directory of Open Access Journals (Sweden)

    Silvia Pérez

    2015-01-01

    Full Text Available Changes in annual rainfall in five sub-regions of the Argentine Pampa Region (Rolling, Central, Mesopotamian, Flooding and Southern were examined for the period 1941 to 2010 using data from representative locations in each sub-region. Dubious series were adjusted by means of a homogeneity test and changes in mean value were evaluated using a hydrometeorological time series segmentation method. In addition, an association was sought between shifts in mean annual rainfall and changes in large-scale atmospheric pressure systems, as measured by the Atlantic Multidecadal Oscillation (AMO, the Pacific Decadal Oscillation (PDO and the Southern Oscillation Index (SOI. The results indicate that the Western Pampas (Central and Southern are more vulnerable to abrupt changes in average annual rainfall than the Eastern Pampas (Mesopotamian, Rolling and Flooding. Their vulnerability is further increased by their having the lowest average rainfall. The AMO showed significant negative correlations with all sub-regions, while the PDO and SOI showed significant positive and negative correlations respectively with the Central, Flooding and Southern Pampa. The fact that the PDO and AMO are going through the phases of their cycles that tend to reduce rainfall in much of the Pampas helps explain the lower rainfall recorded in the Western Pampas sub-regions in recent years. This has had a significant impact on agriculture and the environment.

  19. Correlation of arbuscular mycorrhizal colonization with plant growth, nodulation, and shoot NPK in legumes

    International Nuclear Information System (INIS)

    Javaid, A.; Anjum, T.; Shah, M.H.M.

    2007-01-01

    Correlation of arbuscular mycorrhizal colonization with different root and shoot growth, nodulation and shoot NPK parameters was studied in three legumes viz. Trifolium alexandrianum, Medicago polymorpha and Melilotus parviflora. The three test legume species showed different patterns of root and shoot growth, nodulation, mycorrhizal colonization and shoot N, P and K content. Different mycorrhizal structures viz. mycelium, arbuscules and vesicles showed different patterns of correlation with the studied parameters. Mycelial infection showed an insignificantly positive correlation with root and shoot dry biomass and total root length. Maximum root length was, however, negatively associated with mycelial infection. Both arbuscular and vesicular infections were negatively correlated with shoot dry biomass and different parameters of root growth. The association between arbuscular infection and maximum root length was significant. All the three mycorrhizal structures showed a positive correlation with number and biomass of nodules. The association between arbuscular infection and nodule number was significant. Mycelial infection was positively correlated with percentage and total shoot N and P. Similarly, percentage N was also positively correlated with arbuscular and vesicular infections. By contrast, total shoot N showed a negative association with arbuscular as well as vesicular infections. Similarly, both percentage and total shoot P were negatively correlated with arbuscular and vesicular infections. All the associations between mycorrhizal parameters and shoot K were negative except between vesicular infection and shoot %K. (author)

  20. Final Scientific/Technical Report: Breakthrough Design and Implementation of Many-Body Theories for Electron Correlation

    Energy Technology Data Exchange (ETDEWEB)

    So Hirata

    2012-01-03

    This report discusses the following highlights of the project: (1) a grid-based Hartree-Fock equation solver; (2) explicitly correlated coupled-cluster and perturbation methods; (3) anharmonic vibrational frequencies and vibrationally averaged NMR and structural parameters of FHF; (4) anharmonic vibrational frequencies and vibrationally averaged structures of hydrocarbon combustion species; (5) anharmonic vibrational analysis of the guanine-cytosine base pair; (6) the nature of the Born-Oppenheimer approximation; (7) polymers and solids: Brillouin-zone downsampling (the modulo MP2 method); (8) explicitly correlated MP2 for extended systems; (9) a fast correlated method for molecular crystals: solid formic acid; and (10) a fast correlated method for molecular crystals: solid hydrogen fluoride.

  1. The association between estimated average glucose levels and fasting plasma glucose levels in a rural tertiary care centre

    Directory of Open Access Journals (Sweden)

    Raja Reddy P

    2013-01-01

    Full Text Available The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 60-90 days, average blood glucose levels can be estimated using HbA1c levels. The aim of the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. Methods: Type 2 diabetes patients attending the medicine outpatient department of RL Jalappa hospital, Kolar between March 2010 and July 2012 were included. The estimated average glucose levels (mg/dl) were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose levels were determined using the hexokinase method. HbA1c levels were determined using an HPLC method. Correlation and the independent t-test were used as tests of significance for quantitative data. Results: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.54, p=0.0001) was observed. The difference was statistically significant. Conclusion: Reporting the estimated average glucose level together with the HbA1c level is believed to help patients and doctors determine the effectiveness of blood glucose control measures.
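    The conversion quoted in the record can be wrapped in a one-line helper (a minimal sketch; the function name is ours, the coefficients are those reported in the study):

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose (mg/dl) from HbA1c (%): eAG = 28.7 * HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

print(round(estimated_average_glucose(7.0), 1))  # 154.2
```

    For example, an HbA1c of 7.0% corresponds to an estimated average glucose of about 154 mg/dl.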

  2. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
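    The distinction between maximum-direction and rotated-geometric-mean demand can be illustrated with a toy calculation (a hedged sketch, not the NGA procedure: it uses peaks of the acceleration time series in place of response-spectrum ordinates, and a simple median over rotation angles in place of the period-independent GMRotI rotation):

```python
import numpy as np

def max_and_gmrot(a1, a2, angles_deg=range(180)):
    """Toy maximum-direction vs. rotated-geometric-mean demand from a pair of
    horizontal components, using time-series peaks instead of spectral ordinates."""
    peaks = []
    for th in np.radians(list(angles_deg)):
        r1 = a1 * np.cos(th) + a2 * np.sin(th)    # component rotated to angle th
        r2 = -a1 * np.sin(th) + a2 * np.cos(th)   # orthogonal component
        peaks.append((np.max(np.abs(r1)), np.max(np.abs(r2))))
    peaks = np.array(peaks)
    maximum = peaks.max()                                    # maximum-direction demand
    gmrot50 = np.median(np.sqrt(peaks[:, 0] * peaks[:, 1]))  # GMRot-style median
    return maximum, gmrot50

t = np.linspace(0.0, 10.0, 2001)
a1, a2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)  # circularly polarized motion
mx, gm = max_and_gmrot(a1, a2)
print(round(mx / gm, 2))  # ratio ~1 for this rotationally symmetric motion
```

    For polarized near-fault motions the rotated peaks vary with angle, and this ratio exceeds one, which is the effect the study quantifies.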

  3. Measurement of the average polarization of b baryons in hadronic $Z^0$ decays

    CERN Document Server

    Abbiendi, G.; Alexander, G.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Bartoldus, R.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Bird, S.D.; Blobel, V.; Bloodworth, I.J.; Bobinski, M.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Burgard, C.; Burgin, R.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; del Pozo, L.A.; De Roeck, A.; Desch, K.; Dienes, B.; Dixit, M.S.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Eatough, D.; Estabrooks, P.G.; Etzion, E.; Evans, H.G.; Fabbri, F.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fleck, I.; Folman, R.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Hargrove, C.K.; Hartmann, C.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hillier, S.J.; Hobson, P.R.; Hocker, James Andrew; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, 
B.W.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Koetke, D.S.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lazic, D.; Lee, A.M.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nellen, B.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharf, F.; Scharff-Hansen, P.; Schieck, J.; Schmitt, B.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Tanaka, S.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomson, M.A.; von 
Torne, E.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1998-01-01

    In the Standard Model, b quarks produced in e^+e^- annihilation at the Z^0 peak have a large average longitudinal polarization of -0.94. Some fraction of this polarization is expected to be transferred to b-flavored baryons during hadronization. The average longitudinal polarization of weakly decaying b baryons, ⟨P_L⟩, is measured in approximately 4.3 million hadronic Z^0 decays collected with the OPAL detector between 1990 and 1995 at LEP. Those b baryons that decay semileptonically and produce a \Lambda baryon are identified through the correlation of the baryon number of the \Lambda and the electric charge of the lepton. In this semileptonic decay, the ratio of the neutrino energy to the lepton energy is a sensitive polarization observable. The neutrino energy is estimated using missing energy measurements. From a fit to the distribution of this ratio, the value ⟨P_L⟩ = -0.56^{+0.20}_{-0.13} +/- 0.09 is obtained, where the first error is statistical and the second systematic.

  4. Asymmetric correlation matrices: an analysis of financial data

    Science.gov (United States)

    Livan, G.; Rebecchi, L.

    2012-06-01

    We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between the two markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
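    A non-symmetric correlation matrix of the kind analysed above can be built directly (a minimal sketch; function and variable names are ours): each entry is the Pearson correlation between a series from one system and a series from the other, and the eigenvalues of the resulting square matrix are in general complex.

```python
import numpy as np

def cross_system_correlation(x, y):
    """C[i, j] = Pearson correlation of series x_i with series y_j.

    For two distinct systems, C is square but non-symmetric, so its
    eigenvalues may be complex; for x == y it reduces to the usual
    symmetric Pearson matrix, whose eigenvalues are real.
    """
    xs = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    ys = (y - y.mean(axis=1, keepdims=True)) / y.std(axis=1, keepdims=True)
    return xs @ ys.T / x.shape[1]

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 500))   # e.g., daily returns of 5 stocks in one market
y = rng.normal(size=(5, 500))   # e.g., daily returns of 5 stocks in another market

eigs = np.linalg.eigvals(cross_system_correlation(x, y))      # generally complex
eigs_sym = np.linalg.eigvals(cross_system_correlation(x, x))  # symmetric case: real
print(np.max(np.abs(np.atleast_1d(eigs_sym).imag)))
```

    Comparing the empirical eigenvalue cloud of such a matrix against the random-matrix prediction for pure noise is what lets the authors isolate genuine inter-market correlations.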

  5. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads that can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolving the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  6. HEDL empirical correlation of fuel pin top failure thresholds, status 1976

    International Nuclear Information System (INIS)

    Baars, R.E.

    1976-01-01

    The Damage Parameter (DP) empirical correlation of fuel pin cladding failure thresholds for TOP events has been revised and recorrelated to the results of twelve TREAT tests. The revised correlation, called the Failure Potential (FP) correlation, predicts failure times for the tests in the data base with an average error of 35 ms for $3/s tests and of 150 ms for 50 cents/s tests

  7. Complementary Set Matrices Satisfying a Column Correlation Constraint

    OpenAIRE

    Wu, Di; Spasojevic, Predrag

    2006-01-01

    Motivated by the problem of reducing the peak to average power ratio (PAPR) of transmitted signals, we consider a design of complementary set matrices whose column sequences satisfy a correlation constraint. The design algorithm recursively builds a collection of $2^{t+1}$ mutually orthogonal (MO) complementary set matrices starting from a companion pair of sequences. We relate correlation properties of column sequences to that of the companion pair and illustrate how to select an appropriate...

  8. The Bass diffusion model on networks with correlations and inhomogeneous advertising

    Science.gov (United States)

    Bertotti, M. L.; Brunner, J.; Modanese, G.

    2016-09-01

    The Bass model, which is an effective forecasting tool for innovation diffusion based on large collections of empirical data, assumes a homogeneous diffusion process. We introduce a network structure into this model and we investigate numerically the dynamics in the case of networks with link density $P(k)=c/k^\gamma$, where $k=1, \ldots , N$. The resulting curve of the total adoptions in time is qualitatively similar to the homogeneous Bass curve corresponding to a case with the same average number of connections. The peak of the adoptions, however, tends to occur earlier, particularly when $\gamma$ and $N$ are large (i.e., when there are few hubs with a large maximum number of connections). Most interestingly, the adoption curve of the hubs anticipates the total adoption curve in a predictable way, with peak times which can be, for instance when $N=100$, between 10% and 60% of the total adoptions peak. This may make it possible to monitor the hubs for forecasting purposes. We also consider the case of networks with assortative and disassortative correlations and a case of inhomogeneous advertising where the publicity terms are "targeted" on the hubs while maintaining their total cost constant.
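    The underlying homogeneous Bass dynamics that the network model generalizes can be sketched numerically; the coefficients p (innovation) and q (imitation) below are illustrative values, not fitted parameters from the paper:

```python
import math

# Homogeneous Bass model: dF/dt = (p + q*F) * (1 - F),
# where F is the cumulative fraction of adopters.
p, q, dt = 0.03, 0.38, 0.01   # illustrative coefficients

F, t = 0.0, 0.0
peak_rate, peak_time = 0.0, 0.0
while F < 0.99:               # forward-Euler integration up to 99% adoption
    rate = (p + q * F) * (1.0 - F)   # adoptions per unit time
    if rate > peak_rate:
        peak_rate, peak_time = rate, t
    F += rate * dt
    t += dt

# In the homogeneous model the adoption peak occurs at t* = ln(q/p) / (p + q),
# the benchmark against which the earlier network/hub peaks are compared.
analytic_peak = math.log(q / p) / (p + q)
```

    The hub curves described in the abstract would peak before `analytic_peak`; this sketch only reproduces the homogeneous baseline.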

  9. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  10. Acute Oxidative Effect and Muscle Damage after a Maximum 4 Min Test in High Performance Athletes.

    Directory of Open Access Journals (Sweden)

    Heros Ribeiro Ferreira

    Full Text Available The purpose of this investigation was to determine lipid peroxidation markers, physiological stress and muscle damage in elite kayakers in response to a maximum 4-min kayak ergometer test (KE test), and possible correlations with individual 1000 m kayaking performances. The sample consisted of twenty-three adult male and nine adult female elite kayakers, with more than three years' experience in international events, who voluntarily took part in this study. The subjects performed a 10-min warm-up, followed by a 2-min passive interval, before starting the test itself, which consisted of 4 min of maximal paddling on an ergometer; right after the end of the test, an 8 ml blood sample was collected for analysis. 72 hours after the test, all athletes took part in an official race, where it was possible to check their performance in the on-site K1 1000 m test (P1000m). The results showed that all lipoproteins and hematological parameters tested presented a significant difference (p≤0.05) after exercise for both genders. In addition, parameters related to muscle damage such as lactate dehydrogenase (LDH) and creatine kinase (CK) presented significant differences after stress. Uric acid presented an inverse correlation with performance (r = -0.76), while CK presented a positive correlation (r = 0.46) with it. Based on these results, it was possible to verify muscle damage and the level of oxidative stress caused by indoor training with specific ergometers for speed kayaking, highlighting the importance of analyzing and getting to know the physiological responses to this type of training, in order to provide information to coaches and optimize athletic performance.

  11. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  12. Correlation between the Open-Circuit Voltage and Charge Transfer State Energy in Organic Photovoltaic Cells.

    Science.gov (United States)

    Zou, Yunlong; Holmes, Russell J

    2015-08-26

    In order to further improve the performance of organic photovoltaic cells (OPVs), it is essential to better understand the factors that limit the open-circuit voltage (VOC). Previous work has sought to correlate the value of VOC in donor-acceptor (D-A) OPVs to the interface energy level offset (EDA). In this work, measurements of electroluminescence are used to extract the charge transfer (CT) state energy for multiple small molecule D-A pairings. The CT state as measured from electroluminescence is found to show better correlation to the maximum VOC than EDA. The difference between EDA and the CT state energy is attributed to the Coulombic binding energy of the CT state. This correlation is demonstrated explicitly by inserting an insulating spacer layer between the donor and acceptor materials, reducing the binding energy of the CT state and increasing the measured VOC. These results demonstrate a direct correlation between maximum VOC and CT state energy.

  13. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  14. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  15. Cognitive Capitalism: Economic Freedom Moderates the Effects of Intellectual and Average Classes on Economic Productivity.

    Science.gov (United States)

    Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale

    2016-10-01

    Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. © The Author(s) 2016.

  16. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  17. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  18. Sleep pattern in patients with Chronic Obstructive Pulmonary Disease and correlation among gasometric, spirometric, and polysomnographic variables

    Directory of Open Access Journals (Sweden)

    Santos Carlos Eduardo Ventura Gaio dos

    2003-01-01

    Full Text Available OBJECTIVE: There are few studies on chronic obstructive pulmonary disease (COPD) establishing differences between the functional parameters of the disease and sleep variables. The aim of the study was to describe the sleep pattern of these patients and to correlate spirometric, gasometric and polysomnographic variables. METHODS: Cross-sectional study using COPD patients submitted to spirometry, arterial gasometry, and polysomnography. RESULTS: 21 male patients were studied, with average age = 67 ± 9 years; 7 ± 4 average points on the Epworth sleepiness scale, average Tiffeneau's index (FEV1/FVC) = 54 ± 13.0%, average PaO2 = 68 ± 11 mmHg, average PaCO2 = 37 ± 6 mmHg. Sleep efficiency was decreased (65 ± 16%), with reduction of slow wave sleep (8 ± 9%) and rapid eye movement (REM) sleep (15 ± 8%). Average T90 was 43 ± 41%. Average apnea-hypopnea index (AHI) = 3 ± 5/h, with two patients (9.5%) presenting obstructive sleep apnea. A significant correlation was observed between PaO2 and T90 (p < 0.01), PaCO2 and T90 (p < 0.05), and AHI and the cardiac rate during REM (p < 0.01). A higher number of arousals and stage changes was observed. There was no linear correlation between spirometric and polysomnographic variables. CONCLUSION: The poor sleep quality of these patients was characterized by low sleep efficiency, a high number of awakenings and stage shifts. There were no correlations between the spirometric and polysomnographic variables.

  19. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
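    A minimal sketch of the Mean Energy Model mentioned above: maximizing entropy subject to a prescribed mean "energy" yields a Gibbs-form distribution p_i ∝ exp(-β E_i), whose multiplier β can be found by bisection. The energy levels and the target mean below are toy values chosen only for illustration:

```python
import math

energies = [0.0, 1.0, 2.0, 3.0]   # toy "energy" levels
target = 1.2                       # desired mean energy (between min and max level)

def mean_energy(beta):
    # Mean energy under the Gibbs distribution with multiplier beta.
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return sum(e * wi for e, wi in zip(energies, w)) / z

# mean_energy is strictly decreasing in beta, so bisection finds the multiplier.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target:
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2

p = [math.exp(-beta * e) for e in energies]
z = sum(p)
p = [pi / z for pi in p]
entropy = -sum(pi * math.log(pi) for pi in p)
# Any other distribution with the same mean energy (e.g. [0.4, 0.2, 0.2, 0.2],
# whose entropy is about 1.332) has entropy no larger than this Gibbs solution.
```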

  20. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
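    The APL computation itself is standard: average the shortest-path lengths over all node pairs. A minimal sketch on a toy graph (not the dual-dendrimer topology of the paper, whose APL is obtained analytically there):

```python
from collections import deque

# Toy undirected graph as an adjacency list: a path on 4 nodes.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

def bfs_dist(src):
    # Breadth-first search gives shortest-path lengths in an unweighted graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

nodes = list(graph)
total = sum(bfs_dist(u)[v] for i, u in enumerate(nodes) for v in nodes[i + 1:])
pairs = len(nodes) * (len(nodes) - 1) // 2
apl = total / pairs   # average path length over all unordered node pairs
```

    For the 4-node path the pairwise distances are 1, 2, 3, 1, 2, 1, so the APL is 10/6.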

  1. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  2. Correlation of patient maximum skin doses in cardiac procedures with various dose indicators

    International Nuclear Information System (INIS)

    Domienik, J.; Papierz, S.; Jankowski, J.; Peruga, J.Z.; Werduch, A.; Religa, W.

    2008-01-01

    In most countries of the European Union, legislation requires the determination of the total skin dose received by patients during interventional procedures in order to prevent deterministic damages. Various dose indicators like dose-area product (DAP), cumulative dose (CD) and entrance dose at the patient plane (EFD) are used for patient dosimetry purposes in clinical practice. This study aimed at relating those dose indicators with doses ascribed to the most irradiated areas of the patient skin, usually expressed in terms of local maximal skin dose (MSD). The study was performed in two different facilities for the two most common cardiac procedures, coronary angiography (CA) and percutaneous coronary interventions (PCI). For CA procedures, the registered values of fluoroscopy time, total DAP and MSD were in the range (0.7-27.3) min, (16-317) Gy cm² and (43-1507) mGy, respectively, and for interventions, accordingly (2.1-43.6) min, (17-425) Gy cm², (71-1555) mGy. Moreover, for CA procedures, CD and EFD were in the ranges (295-4689) mGy and (121-1768) mGy, and for PCI (267-6524) mGy and (68-2279) mGy, respectively. No general and satisfactory correlation was found for safe estimation of MSD. However, results show that the best dose indicator, which might serve for a rough, preliminary estimation, is the DAP value. In the study, appropriate trigger levels were proposed for both facilities. (authors)

  3. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  4. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    International Nuclear Information System (INIS)

    Bashahu, M.

    2003-01-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. A 16-year period of data on the global (H) and diffuse (Hd) radiation, together with data on the bright sunshine hours (N), the fraction of the sky covered by clouds (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), has been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between Kd and N/Nd, Ne/8 or Kt. Even though they present adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)
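    A simple linear correlation of the kind recommended above (e.g. the diffuse fraction Kd against the clearness index Kt) reduces to an ordinary least-squares fit plus the MBE/RMSE checks. The monthly values below are invented for illustration and are not the Dakar data:

```python
# Toy monthly clearness indices (Kt) and diffuse fractions (Kd).
Kt = [0.45, 0.50, 0.55, 0.60, 0.65, 0.70]
Kd = [0.38, 0.35, 0.31, 0.28, 0.25, 0.21]

# Ordinary least squares for the linear model Kd = a + b*Kt.
n = len(Kt)
mx = sum(Kt) / n
my = sum(Kd) / n
b = (sum((x - mx) * (y - my) for x, y in zip(Kt, Kd))
     / sum((x - mx) ** 2 for x in Kt))
a = my - b * mx

# The statistical checks used in the paper's model inter-comparison.
pred = [a + b * x for x in Kt]
mbe = sum(p - y for p, y in zip(pred, Kd)) / n                    # mean bias error
rmse = (sum((p - y) ** 2 for p, y in zip(pred, Kd)) / n) ** 0.5   # root mean square error
```

    A negative slope b is expected: clearer skies (higher Kt) mean a smaller diffuse fraction.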

  5. How structure determines correlations in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2011-05-01

    Full Text Available Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been subject of intense research in various fields are correlations between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
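    The power-series idea can be illustrated directly: in a linear network, the effective influence matrix (I - W)^(-1) = I + W + W^2 + ... collects direct (W) and increasingly indirect (W^2, W^3, ...) interactions. A toy three-node chain with illustrative weights, not a model from the paper:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 3
# W[i][j] = coupling from node j to node i; a chain 1 -> 2 -> 3 with no
# direct 1 -> 3 link (illustrative weights).
W = [[0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0]]

I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
term, total = I, I
for _ in range(10):            # truncated series I + W + W^2 + ...; converges here
    term = matmul(W, term)
    total = matadd(total, term)

# total[2][0] is nonzero (0.25, from the W^2 term) even though W[2][0] == 0:
# node 1 influences node 3 purely through the indirect path via node 2.
```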

  6. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4D-CT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4D-CT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte-Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4D-CT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.

  7. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  8. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. In order to assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
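    Two of the baseline normalizations the CCSNM is compared against, minimum-maximum and z score, are easy to state; the CCSNM itself is not reproduced here. The sample values are illustrative, not EMG data:

```python
# Toy feature values to normalize.
data = [12.0, 15.0, 9.0, 21.0, 18.0]

# Minimum-maximum normalization: rescale linearly onto [0, 1].
lo, hi = min(data), max(data)
minmax = [(x - lo) / (hi - lo) for x in data]

# z-score normalization: zero mean, unit (population) standard deviation.
n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
zscore = [(x - mean) / std for x in data]
```

    Min-max preserves the shape of the distribution within a fixed range, while the z score removes location and scale; the CCSNM additionally exploits correlation/covariance structure between features.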

  9. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can provide significant help in data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using, for example, the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
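    The spirit of such limiting-depth rules can be illustrated on a toy 2D profile: the ratio fmax/(∂f/∂x)max grows linearly with source depth, so it bounds the depth up to a geometry-dependent factor (the exact factor, as in the Bott-Smith rules, depends on the source class). This sketch uses an idealized line-mass field, not the paper's data or formulas:

```python
def field(x, z):
    # 2D line-mass gravity profile (up to physical constants): f(x) = z / (x^2 + z^2).
    return z / (x * x + z * z)

def ratio(z, dx=1e-3, span=10.0):
    # Numerically evaluate f_max / |df/dx|_max along the profile.
    xs = [i * dx for i in range(int(-span / dx), int(span / dx))]
    f = [field(x, z) for x in xs]
    fmax = max(f)
    gmax = max(abs(f[i + 1] - f[i]) / dx for i in range(len(f) - 1))
    return fmax / gmax

# For this geometry the ratio is (8*sqrt(3)/9) * z, i.e. proportional to depth z,
# so the measurable ratio bounds the depth up to the geometry factor.
r1, r2 = ratio(1.0), ratio(2.0)
```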

  10. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.

  11. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    Full Text Available To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated-measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with an increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5%, respectively, from single-frame to 10-frame averaged images. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating with visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
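    The benefit of frame averaging rests on uncorrelated noise shrinking roughly as 1/sqrt(N) across N registered frames. A toy pixel-level simulation of that effect (synthetic values, not the OCTA data):

```python
import random

random.seed(42)
signal, sigma = 1.0, 0.5   # constant "vessel" signal plus Gaussian noise
n_frames, n_pix = 10, 2000

# Ten perfectly registered frames of the same pixels, each with fresh noise.
frames = [[signal + random.gauss(0.0, sigma) for _ in range(n_pix)]
          for _ in range(n_frames)]

def noise_std(pixels):
    m = sum(pixels) / len(pixels)
    return (sum((p - m) ** 2 for p in pixels) / len(pixels)) ** 0.5

single = noise_std(frames[0])                                   # ~sigma
averaged = [sum(f[i] for f in frames) / n_frames for i in range(n_pix)]
avg_noise = noise_std(averaged)                                 # ~sigma / sqrt(10)
```

    In practice the gain is smaller than 1/sqrt(N) because registration is imperfect and some noise is correlated between frames, which is why the measured SNR improvement is assessed empirically.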

  12. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
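    The spot-versus-boxcar distinction is easy to demonstrate numerically. A minimal sketch, using synthetic 1-min data (the amplitudes and periods are illustrative, not observatory values): a fast oscillation averages out of the 1-h boxcar series but passes straight into the spot samples as aliased error.

```python
import numpy as np

# One day of synthetic 1-min data standing in for continuous variation:
# a slow 24-h wave plus a fast 30-min oscillation.
t = np.arange(24 * 60)                                # minutes
slow = 50.0 * np.sin(2 * np.pi * t / (24 * 60))
fast = 10.0 * np.sin(2 * np.pi * t / 30 + 0.7)
x = slow + fast

spot = x[::60]                             # instantaneous "spot" hourly values
boxcar = x.reshape(24, 60).mean(axis=1)    # simple 1-h "boxcar" averages

# The boxcar removes the 30-min wave entirely (two full periods per
# hour), while the spot values retain its full amplitude as aliasing.
slow_hourly = slow.reshape(24, 60).mean(axis=1)
boxcar_err = np.abs(boxcar - slow_hourly).max()   # ~0
spot_err = np.abs(spot - slow[::60]).max()        # ~6.4
```

    Real geomagnetic variation is broadband rather than a single fast tone, which is why the 1-h boxcar shows a mixture of amplitude distortion and residual aliasing rather than eliminating either entirely.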

  13. The drift flux correlation in RELAP-UK

    International Nuclear Information System (INIS)

    Holmes, J.A.

    1977-11-01

    A numerical technique for modelling the effects of drift flux in vertical channels is described, which has been included in the RELAP-UK code for the analysis of loss of coolant accidents. It is based on the assumption that the difference between the average velocities of the steam and water phases is a result of the linear superposition of the profile slip and the local slip. The profile slip may be obtained from a choice of profile slip correlations, which includes the flow-dependent Bryce correlation, modified at low void fractions to be consistent with the analysis of Zuber and Findlay. Comparisons are given between the drift flux correlation and certain sets of published experimental data. (author)
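    The superposition of profile slip and local slip described above follows the standard Zuber-Findlay drift-flux form, which can be sketched as below. The C0 (profile slip) and v_gj (local slip) values are illustrative placeholders, not the Bryce correlation used in RELAP-UK.

```python
def gas_velocity(j_g, j_l, C0=1.13, v_gj=0.23):
    """Average gas velocity from the Zuber-Findlay drift-flux relation
    <v_g> = C0 * <j> + <v_gj>, where C0 captures profile slip and
    v_gj (m/s) captures local slip.  Parameter values are illustrative."""
    j = j_g + j_l            # total volumetric flux (m/s)
    return C0 * j + v_gj

def void_fraction(j_g, j_l, C0=1.13, v_gj=0.23):
    """Void fraction alpha = j_g / <v_g>."""
    return j_g / gas_velocity(j_g, j_l, C0, v_gj)
```

    Because C0 > 1 and v_gj > 0, the predicted void fraction is lower than the homogeneous (no-slip) value j_g / (j_g + j_l), which is the qualitative effect the drift-flux model adds to a loss-of-coolant analysis.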

  14. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  15. IMPROVING CORRELATION FUNCTION FITTING WITH RIDGE REGRESSION: APPLICATION TO CROSS-CORRELATION RECONSTRUCTION

    International Nuclear Information System (INIS)

    Matthews, Daniel J.; Newman, Jeffrey A.

    2012-01-01

    Cross-correlation techniques provide a promising avenue for calibrating photometric redshifts and determining redshift distributions using spectroscopy which is systematically incomplete (e.g., current deep spectroscopic surveys fail to obtain secure redshifts for 30%-50% or more of the galaxies targeted). In this paper, we improve on the redshift distribution reconstruction methods from our previous work by incorporating full covariance information into our correlation function fits. Correlation function measurements are strongly covariant between angular or spatial bins, and accounting for this in fitting can yield substantial reduction in errors. However, frequently the covariance matrices used in these calculations are determined from a relatively small set (dozens rather than hundreds) of subsamples or mock catalogs, resulting in noisy covariance matrices whose inversion is ill-conditioned and numerically unstable. We present here a method of conditioning the covariance matrix known as ridge regression which results in a more well behaved inversion than other techniques common in large-scale structure studies. We demonstrate that ridge regression significantly improves the determination of correlation function parameters. We then apply these improved techniques to the problem of reconstructing redshift distributions. By incorporating full covariance information, applying ridge regression, and changing the weighting of fields in obtaining average correlation functions, we obtain reductions in the mean redshift distribution reconstruction error of as much as ∼40% compared to previous methods. We provide a description of POWERFIT, an IDL code for performing power-law fits to correlation functions with ridge regression conditioning that we are making publicly available.
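    The ridge-regression conditioning step amounts to adding a small multiple of the identity to the noisy covariance estimate before inversion. A minimal sketch (the bin count, subsample count, and ridge scaling are illustrative, not the POWERFIT defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "true" covariance for a 10-bin correlation function,
# estimated from only 20 subsamples: invertible but ill-conditioned.
nbins, nsub = 10, 20
lags = np.abs(np.subtract.outer(np.arange(nbins), np.arange(nbins)))
true_cov = np.exp(-lags / 2.0)
samples = rng.multivariate_normal(np.zeros(nbins), true_cov, size=nsub)
noisy_cov = np.cov(samples, rowvar=False)

# Ridge conditioning: add a small multiple of the identity before
# inversion (the 10% scaling here is illustrative, not a tuned value).
lam = 0.1 * np.trace(noisy_cov) / nbins
conditioned = noisy_cov + lam * np.eye(nbins)

cond_before = np.linalg.cond(noisy_cov)
cond_after = np.linalg.cond(conditioned)   # strictly smaller
inv_for_fit = np.linalg.inv(conditioned)   # stable inverse for chi^2 fits
```

    For a symmetric positive-definite matrix with eigenvalues mu_min and mu_max, the ridge term shrinks the condition number from mu_max/mu_min to (mu_max + lam)/(mu_min + lam), trading a small bias for a numerically stable inverse in the chi-square fit.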

  16. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross-validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018+/-0.002; validation r = 0.8899+/-0.005; testing r = 0.8940+/-0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
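    The network architecture described (one hidden layer of 10 sigmoid units, linear output) can be sketched as follows. The features and target are synthetic stand-ins for the TRACS/NOAA data, and plain full-batch gradient descent is used in place of Levenberg-Marquardt training, so this is an approximation of the paper's setup, not a reproduction.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for (day-of-week, daily high, precipitation)
# features and a daily trauma count with a roughly linear dependence.
X = rng.normal(size=(200, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# One hidden layer of 10 sigmoid units, linear output, trained by
# full-batch gradient descent on mean squared error.
W1 = rng.normal(scale=0.5, size=(3, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=10);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # linear output
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * h * (1 - h)  # backprop through sigmoid
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = sigmoid(X @ W1 + b1)
r = np.corrcoef(h @ W2 + b2, y)[0, 1]     # training correlation coefficient
```

    On this toy problem the training correlation coefficient ends up well above 0.8, mirroring the kind of r values the abstract reports; a real deployment would add k-fold cross-validation as the authors did.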

  17. Infinite range correlations of intensity in random media

    Indian Academy of Sciences (India)

    These correlations originate from scattering events which take place close to a ... tribution of the C0-term to the general four-point function, defined as .... product of two average intensities, which cancel the denominator in the definition of the.

  18. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  19. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    Science.gov (United States)

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practical use, however, the correlation coefficient assumes the data to be temporally independent, while brain time series can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method, and shown that it exhibits sufficient statistical power for detecting true correlation in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
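    The problem the paper addresses is easy to illustrate for AR(1) series. The Bartlett-style variance correction below is an illustrative stand-in for the paper's unified asymptotic distribution, and the AR coefficients and series length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """Generate an AR(1) series x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Two independent but strongly auto-correlated series: the naive 1/N
# variance for their sample correlation is far too optimistic.
n, phi = 500, 0.9
x = ar1(n, phi, rng)
y = ar1(n, phi, rng)
r = np.corrcoef(x, y)[0, 1]

# Bartlett-style correction for AR(1) processes: under independence,
# Var(r) ~ (1 + phi_x*phi_y) / (N * (1 - phi_x*phi_y)).
var_naive = 1.0 / n
var_corrected = (1 + phi * phi) / (n * (1 - phi * phi))
# Here var_corrected is ~9.5x var_naive, so a naive z-test on r would
# drastically overstate significance (inflated type I error).
```

    In effect the auto-correlation shrinks the number of independent observations, which is why methods that ignore it report spuriously "significant" functional connections.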

  20. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...