WorldWideScience

Sample records for maximum average correlation

  1. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia); and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle-cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.

  2. Correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  3. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, whereas the minimum temperature series does not, so the two series are modelled separately. The candidate SARIMA model is chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the log-transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for both the monthly average maximum and minimum temperature series on the basis of the minimum Bayesian information criterion. The model parameters are estimated by the maximum-likelihood method together with the standard errors of the residuals. The adequacy of the selected model is assessed by correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and by normality diagnostic checking through kernel and normal density curves over the histogram and a Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the selected model.
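
    As a rough illustration of the modelling step described in this record, the sketch below fits the same SARIMA (1, 0, 0) × (0, 1, 1)₁₂ specification by maximum likelihood and produces a 3-year forecast; the synthetic monthly series and its parameters are placeholder assumptions, not the study's data.

```python
# Hedged sketch: fit a SARIMA(1,0,0)x(0,1,1)[12] model to a monthly series
# with statsmodels and forecast 3 years ahead. `temps` is a synthetic
# placeholder series, not the observed Indian temperature data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("1981-01", periods=420, freq="MS")   # 1981-2015, monthly
temps = pd.Series(30 + 5 * np.sin(2 * np.pi * idx.month / 12)
                  + rng.normal(0, 1, len(idx)), index=idx)

# Log-transform as in the record, then estimate the selected model by ML.
model = SARIMAX(np.log(temps), order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)

print(result.bic)                    # criterion used for model selection
print(np.exp(result.forecast(36)))   # 36 monthly steps = 3-year forecast
```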

  4. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
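
    A minimal sketch of the underlying idea, under the assumption that an "azimuthally averaged reference projection" is the rotationally symmetric image built from a reference's radial mean profile; the reference array and patch below are synthetic placeholders, not the paper's micrographs.

```python
# Hedged sketch: correlate an image patch against a rotationally symmetric
# reference obtained by azimuthally averaging a reference projection.
# The 64x64 arrays are synthetic placeholders.
import numpy as np

def azimuthal_average(img):
    """Rotationally symmetric image built from the radial mean profile."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial_mean = np.bincount(r.ravel(), img.ravel()) / np.maximum(counts, 1)
    return radial_mean[r]          # broadcast the 1-D profile back onto the grid

def correlation_coefficient(a, b):
    """Plain correlation coefficient between two equal-size patches."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(1)
reference = rng.random((64, 64))                    # placeholder reference projection
patch = azimuthal_average(reference) + 0.1 * rng.random((64, 64))
print(correlation_coefficient(patch, azimuthal_average(reference)))
```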

  5. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all the latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables would avoid the need for repetitive and approximate calculations and serve as a useful ready reference for providing accurate values to solar energy scientists and engineers.
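
    For readers who prefer to recompute rather than interpolate the tables, the following sketch evaluates Spencer's series for the declination and eccentricity correction factor together with the standard expressions for the daily extraterrestrial irradiation and the maximum possible sunshine duration; the solar-constant value and the example latitude and day number are illustrative assumptions.

```python
# Hedged sketch: Spencer's series for declination and eccentricity correction,
# daily extraterrestrial irradiation on a horizontal surface, and maximum
# possible sunshine duration. The solar constant and the example latitude and
# day number are illustrative assumptions.
import numpy as np

GSC = 1367.0  # solar constant, W/m^2 (assumed value)

def spencer(day_of_year):
    """Solar declination (rad) and eccentricity correction factor."""
    g = 2.0 * np.pi * (day_of_year - 1) / 365.0
    e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
          + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
    decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
            - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
            - 0.002697 * np.cos(3 * g) + 0.001480 * np.sin(3 * g))
    return decl, e0

def daily_h0_and_daylength(lat_deg, day_of_year):
    """Extraterrestrial irradiation (Wh/m^2/day) and max sunshine duration (h)."""
    phi = np.radians(lat_deg)
    decl, e0 = spencer(day_of_year)
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0))  # sunset hour angle
    h0 = (24.0 / np.pi) * GSC * e0 * (np.cos(phi) * np.cos(decl) * np.sin(ws)
                                      + ws * np.sin(phi) * np.sin(decl))
    return h0, 24.0 * ws / np.pi

print(daily_h0_and_daylength(45.0, 172))   # e.g. 45 deg N around the June solstice
```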

  6. Correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  7. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H0) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, often by using approximate short-cut methods. Computations for these values have been made once and for all for latitude values of 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)

  8. Minimum disturbance rewards with maximum possible classical correlations

    Energy Technology Data Exchange (ETDEWEB)

    Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)

    2017-07-12

    Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system as well as the disturbance to the overall state. We investigate the behaviour of the cost function for families of two qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.

  9. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
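
    The model-averaging step can be illustrated generically with Akaike weights; the sketch below uses placeholder AIC values and per-site profiles and is not the authors' implementation.

```python
# Hedged sketch of model averaging with Akaike weights; the AIC values and the
# per-site clustering profiles below are illustrative placeholders, not output
# of the authors' hierarchical clustering method.
import numpy as np

def akaike_weights(aic_values):
    """Normalized model weights from AIC (or AICc/BIC) scores."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three candidate clustering models, each yielding a per-site profile.
profiles = np.array([[0.1, 0.8, 0.7, 0.2],
                     [0.2, 0.6, 0.6, 0.3],
                     [0.0, 0.9, 0.8, 0.1]])
weights = akaike_weights([102.3, 104.9, 101.7])
print(weights, weights @ profiles)   # model-averaged profile across sites
```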

  10. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  11. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
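
    A hedged sketch of the conventional calculation mentioned in the first sentence (an extreme-value fit to annual maxima and the 50-year return value); the annual-maximum sample is synthetic and the Gumbel distributional choice is an assumption, not necessarily the authors' procedure.

```python
# Hedged sketch: fit a Gumbel distribution to annual maxima of 10-min mean
# wind speeds and read off the 50-year return value. The annual maxima are
# synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
annual_maxima = rng.gumbel(loc=22.0, scale=2.5, size=30)   # m/s, placeholder

loc, scale = stats.gumbel_r.fit(annual_maxima)
u50 = stats.gumbel_r.ppf(1.0 - 1.0 / 50.0, loc=loc, scale=scale)
print(f"estimated 50-year wind: {u50:.1f} m/s")
```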

  12. Correlation between maximum isometric strength variables and specific performance of Brazilian military judokas

    Directory of Open Access Journals (Sweden)

    Michel Moraes Gonçalves

    2017-06-01

    Full Text Available It was our objective to correlate specific performance in the Special Judo Fitness Test (SJFT) with the maximum isometric handgrip (HGSMax), scapular traction (STSMax) and lumbar traction (LTSMax) strength tests in military judo athletes. Twenty-two military athletes from the judo team of the Brazilian Navy Almirante Alexandrino Instruction Centre, with an average age of 26.14 ± 3.31 years and an average body mass of 83.23 ± 14.14 kg, participated in the study. Electronic dynamometry tests for HGSMax, STSMax and LTSMax were conducted. Then, after an interval of approximately 1 hour, the SJFT protocol was applied. All variables were adjusted to the body mass of the athletes. Pearson's correlation coefficient was used for statistical analysis. The results showed a moderate negative correlation between the SJFT index and STSMax (r = -0.550, p = 0.008) and strong negative correlations between the SJFT index and HGSMax (r = -0.706, p < 0.001), between the SJFT index and LTSMax (r = -0.721; p = 0.001), and between the sum of the three maximum isometric strength tests and the SJFT index (r = -0.786, p < 0.001). This study concludes that negative correlations occur between the SJFT index and maximum isometric handgrip, shoulder and lumbar traction strength and the sum of the three maximum isometric strength tests in military judokas.

  13. Scale dependence of the average potential around the maximum in Φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  14. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
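
    A minimal sketch of the inversion idea, assuming the relative azimuth is found by maximizing the correlation between a back-rotated horizontal component and a reference trace; the synthetic signals, the 40-degree offset, and the coarse-grid bracketing step are illustrative assumptions rather than the published algorithm.

```python
# Hedged sketch: estimate a relative sensor azimuth as the rotation that
# maximizes the correlation of a back-rotated horizontal component with a
# reference trace. Synthetic signals and the 40-degree offset are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
t = np.linspace(0.0, 60.0, 6000)
ref_n = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)

true_offset = np.radians(40.0)                         # unknown in practice
test_n = np.cos(true_offset) * ref_n + 0.1 * rng.standard_normal(t.size)
test_e = np.sin(true_offset) * ref_n + 0.1 * rng.standard_normal(t.size)

def neg_corr(theta):
    """Negative correlation of the back-rotated north component with the reference."""
    rotated_n = np.cos(theta) * test_n + np.sin(theta) * test_e
    return -np.corrcoef(rotated_n, ref_n)[0, 1]

# Coarse grid to bracket the answer, then a bounded non-linear refinement.
grid = np.radians(np.arange(0.0, 360.0, 10.0))
coarse = grid[np.argmin([neg_corr(a) for a in grid])]
best = minimize_scalar(neg_corr, bounds=(coarse - 0.2, coarse + 0.2), method="bounded")
print(f"estimated azimuth offset: {np.degrees(best.x):.1f} deg")
```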

  15. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  16. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  17. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  18. Disentangling multi-level systems: averaging, correlations and memory

    International Nuclear Information System (INIS)

    Wouters, Jeroen; Lucarini, Valerio

    2012-01-01

    We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems

  19. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
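
    Under a multivariate normal model with a common mean, the maximum likelihood estimate reduces to a minimum-variance weighted average of the correlated estimates; the sketch below shows that combination with placeholder eigenvalue estimates and an assumed covariance matrix.

```python
# Hedged sketch: minimum-variance (maximum likelihood under a multivariate
# normal model) combination of correlated estimates of a single eigenvalue.
# The estimates and covariance matrix are placeholders.
import numpy as np

def combine_correlated(estimates, cov):
    """Weighted mean x = (1' C^-1 x) / (1' C^-1 1) and its variance."""
    estimates = np.asarray(estimates, dtype=float)
    w = np.linalg.solve(cov, np.ones_like(estimates))   # C^-1 * 1
    combined = (w @ estimates) / w.sum()
    return combined, 1.0 / w.sum()

k_estimates = [1.1802, 1.1784, 1.1791]                  # placeholder estimates
cov = 1e-6 * np.array([[4.0, 1.5, 1.0],
                       [1.5, 3.0, 0.8],
                       [1.0, 0.8, 5.0]])                # placeholder covariance
print(combine_correlated(k_estimates, cov))
```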

  20. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients and to establish correlations between different factors influencing the coverage. Methods: For six lung cancer patients, the 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. The MIP and AIP datasets had three GTVs delineated (GTVaip, delineated on the AIP; GTVmip, delineated on the MIP; and GTVfus, delineated on each of the 10 phases and summed up). From each GTV, planning target volumes (PTVs) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in the AIP cases were significantly smaller than in the MIP cases (p < 0.001). A Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830. When the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is done on.

  1. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  2. An inequality between the weighted average and the rowwise correlation coefficient for proximity matrices

    NARCIS (Netherlands)

    Krijnen, WP

    De Vries (1993) discusses Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank-correlation coefficient for assessing the association between the rows of two proximity matrices. For each of these he introduces a weighted average variant and a rowwise variant. In this

  3. AN INEQUALITY BETWEEN THE WEIGHTED AVERAGE AND THE ROWWISE CORRELATION-COEFFICIENT FOR PROXIMITY MATRICES

    NARCIS (Netherlands)

    KRIJNEN, WP

    De Vries (1993) discusses Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank-correlation coefficient for assessing the association between the rows of two proximity matrices. For each of these he introduces a weighted average variant and a rowwise variant. In this

  4. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  5. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Full Text Available Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, such as the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.

  6. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  7. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    International Nuclear Information System (INIS)

    Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error

  8. The classical correlation limits the ability of the measurement-induced average coherence

    Science.gov (United States)

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

    Coherence is the most fundamental quantum feature in quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, based on the measurement outcomes, will collapse to a corresponding state with some probability and hence gain the average coherence. It is shown that the average coherence is not less than the coherence of its reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence with all the possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the sufficient and necessary condition for the null maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for the nonzero extra average coherence within a given measurement. In addition, similar conclusions are drawn for both the basis-dependent and the basis-free coherence measures.

  9. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  10. Correlation between the Physical Activity Level and Grade Point Averages of Faculty of Education Students

    Science.gov (United States)

    Imdat, Yarim

    2014-01-01

    The aim of the study is to find the correlation that exists between physical activity level and grade point averages of faculty of education students. The subjects consist of 359 (172 females and 187 males) undergraduate students. To determine the physical activity levels of the students in this research, International Physical Activity…

  11. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria.

    Science.gov (United States)

    Brown, Gina; Imel, Brittany; Nelson, Alyssa; Hale, LaDonna S; Jansen, Nick

    2013-01-01

    The purpose of this study was to examine correlations between first-time Physician Assistant National Certifying Exam (PANCE) scores and pass/fail status, physician assistant (PA) program didactic grade point average (GPA), and specific selection criteria. This retrospective study evaluated graduating classes from 2007, 2008, and 2009 at a single program (N = 119). There was no correlation between PANCE performance and undergraduate grade point average (GPA), science prerequisite GPA, or health care experience. There was a moderate correlation between PANCE pass/fail and where students took science prerequisites (r = 0.27, P = .003) but not with the PANCE score. PANCE scores were correlated with overall PA program GPA (r = 0.67), PA pharmacology grade (r = 0.68), and PA anatomy grade (r = 0.41) but not with PANCE pass/fail. Correlations between selection criteria and PANCE performance were limited, but further research regarding the influence of prerequisite institution type may be warranted and may improve admission decisions. PANCE scores and PA program GPA correlations may guide academic advising and remediation decisions for current students.

  12. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    International Nuclear Information System (INIS)

    Ciorsac, Alecu; Craciun, Dana; Ostafe, Vasile; Isvoran, Adriana

    2011-01-01

    Research highlights: → We focus our study on the glycolytic enzymes. → We reveal correlation of hydrophobicity and flexibility along their chains. → We also reveal fractal aspects of the glycolytic enzymes structures and surfaces. → The glycolytic enzyme sequences are not random. → Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were considered as spatial series and were analyzed by spectral analysis, detrended fluctuations analysis and Hurst coefficient calculation. The results agreed that there are both short range and long range correlations of hydrophobicity and average flexibility within investigated sequences, the short range correlations being stronger and indicating that local interactions are the most important for the protein folding. This correlation is also reflected by the fractal nature of the structures of investigated proteins.
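
    One of the named methods, detrended fluctuation analysis, can be sketched as follows; the input sequence here is a random placeholder rather than an actual hydrophobicity or flexibility profile, and the window sizes are illustrative.

```python
# Hedged sketch of detrended fluctuation analysis (DFA) applied to a sequence
# treated as a spatial series; the input here is a random placeholder rather
# than a real hydrophobicity or flexibility profile.
import numpy as np

def dfa_exponent(series, window_sizes):
    """Slope of log F(n) versus log n, F(n) being the detrended RMS fluctuation."""
    profile = np.cumsum(series - np.mean(series))
    fluctuations = []
    for n in window_sizes:
        f2 = []
        for i in range(len(profile) // n):
            segment = profile[i * n:(i + 1) * n]
            x = np.arange(n)
            trend = np.polyval(np.polyfit(x, segment, 1), x)   # local linear detrend
            f2.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(3)
sequence = rng.standard_normal(400)                  # placeholder sequence
print(dfa_exponent(sequence, window_sizes=[4, 8, 16, 32, 64]))  # ~0.5 for noise
```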

  13. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    Energy Technology Data Exchange (ETDEWEB)

    Ciorsac, Alecu, E-mail: aleciorsac@yahoo.co [Politehnica University of Timisoara, Department of Physical Education and Sport, 2 P-ta Victoriei, 300006, Timisoara (Romania); Craciun, Dana, E-mail: craciundana@gmail.co [Teacher Training Department, West University of Timisoara, 4 Boulevard V. Pirvan, Timisoara, 300223 (Romania); Ostafe, Vasile, E-mail: vostafe@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania); Isvoran, Adriana, E-mail: aisvoran@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania)

    2011-04-15

    Research highlights: We focus our study on the glycolytic enzymes. We reveal correlation of hydrophobicity and flexibility along their chains. We also reveal fractal aspects of the glycolytic enzymes structures and surfaces. The glycolytic enzyme sequences are not random. Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were considered as spatial series and were analyzed by spectral analysis, detrended fluctuations analysis and Hurst coefficient calculation. The results agreed that there are both short range and long range correlations of hydrophobicity and average flexibility within investigated sequences, the short range correlations being stronger and indicating that local interactions are the most important for the protein folding. This correlation is also reflected by the fractal nature of the structures of investigated proteins.

  14. Choosing the best index for the average score intraclass correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient ICC(2) index from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
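
    For reference, a minimal sketch of the conventional ICC(2) (average-score) index computed from one-way random effects ANOVA mean squares; the small rating matrix is an illustrative placeholder.

```python
# Hedged sketch of the conventional ICC(2) (average-score) index from a
# one-way random effects ANOVA; rows are targets, columns are ratings, and
# the matrix below is an illustrative placeholder.
import numpy as np

def icc2_average_score(ratings):
    """ICC(2) = (MSB - MSW) / MSB from one-way random effects mean squares."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                              # n targets, k ratings each
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - ratings.mean()) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / msb

ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]])
print(icc2_average_score(ratings))
```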

  15. Regional correlations of VS30 averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, David M.; Thompson, Eric M.; Cadet, Héloïse

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in logVS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
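
    A minimal sketch of the time-averaged velocity itself, computed as the averaging depth divided by the shear-wave travel time through the layers above it; the layered profile below is an illustrative placeholder.

```python
# Hedged sketch: the time-averaged shear-wave velocity VSz is the averaging
# depth divided by the vertical shear-wave travel time to that depth. The
# layered profile is an illustrative placeholder.
def vsz(thicknesses_m, velocities_ms, z):
    """Time-averaged shear-wave velocity (m/s) from the surface to depth z."""
    travel_time, depth = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        layer = min(h, max(z - depth, 0.0))     # part of this layer shallower than z
        travel_time += layer / v
        depth += h
    return z / travel_time

thicknesses = [3.0, 7.0, 15.0, 30.0]        # m, placeholder profile
velocities = [180.0, 320.0, 450.0, 760.0]   # m/s
print(vsz(thicknesses, velocities, 10.0), vsz(thicknesses, velocities, 30.0))
```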

  16. Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech

    2015-01-01

    to overcome the above mentioned issues. The high speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sine tone of known frequency and phase. The proposed algorithm utilizes vibration signals...

  17. Three dimensional winds: A maximum cross-correlation application to elastic lidar data

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, William Tillman [Univ. of Texas, Austin, TX (United States)

    1996-05-01

    Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser doppler velocimeter. The wind fields showed persistent large scale eddies as well as general terrain-following winds in the Rio Grande valley.
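
    A hedged sketch of the basic maximum cross-correlation step, under the assumption that the displacement of a template between two consecutive fields is taken as the integer shift maximizing the normalized correlation; the synthetic frames, pixel size, and time step are placeholders.

```python
# Hedged sketch of the maximum cross-correlation (MCC) step: the displacement
# of a template between two consecutive fields is the integer shift that
# maximizes the normalized correlation. Frames, pixel size, and time step are
# synthetic placeholders.
import numpy as np

def norm_corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def mcc_shift(frame0, frame1, top, left, size, search):
    """Best (dy, dx) shift of a size x size template within +/- search pixels."""
    template = frame0[top:top + size, left:left + size]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = frame1[top + dy:top + dy + size, left + dx:left + dx + size]
            c = norm_corr(template, window)
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift

rng = np.random.default_rng(4)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))        # known displacement
dy, dx = mcc_shift(frame0, frame1, top=40, left=40, size=32, search=6)
pixel_size_m, dt_s = 30.0, 600.0                             # placeholder scales
print(dy, dx, "speed (m/s):", np.hypot(dy, dx) * pixel_size_m / dt_s)
```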

  18. Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument

    Science.gov (United States)

    Kishoni, Doron; Pietsch, Benjamin E.

    1989-01-01

    Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves to a degree that the noise is too high to enable meaningful interpretation of the data. In order to overcome the low Signal to Noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variable delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator, and compares it to the method of enhancing S/N ratio by averaging the signals. The similarities and differences between the two are highlighted and the potential advantage of the correlator system is explained.
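
    A minimal sketch of the correlator principle, assuming an ideal pseudo-random transmit pattern whose autocorrelation approximates a delta function; the sequence length, echo positions, and noise level are placeholders.

```python
# Hedged sketch of the correlator principle: the received trace is the
# transmitted pseudo-random pattern convolved with the medium's impulse
# response plus noise, and correlating against delayed copies of the pattern
# recovers that impulse response. Sequence length, echoes, and noise level
# are placeholders.
import numpy as np

rng = np.random.default_rng(5)
prbs = rng.choice([-1.0, 1.0], size=4096)            # pseudo-random transmit pattern

impulse_response = np.zeros(200)
impulse_response[[50, 120]] = [1.0, 0.4]              # two echoes (placeholder)

received = np.convolve(prbs, impulse_response)[:prbs.size]
received += 2.0 * rng.standard_normal(received.size)  # noise stronger than the echoes

lags = np.arange(impulse_response.size)
estimate = np.array([np.dot(received[lag:], prbs[:prbs.size - lag]) for lag in lags])
estimate /= prbs.size
print(np.argsort(estimate)[-2:])   # strongest lags, expected near 50 and 120
```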

  19. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach

  20. The Maximum Cross-Correlation approach to detecting translational motions from sequential remote-sensing images

    Science.gov (United States)

    Gao, J.; Lythe, M. B.

    1996-06-01

    This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.

  1. Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum

    Science.gov (United States)

    Kostiuk, T.; Livengood, T. A.; Hewagama, T.

    2009-01-01

    Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.

  2. General theory for calculating disorder-averaged Green's function correlators within the coherent potential approximation

    Science.gov (United States)

    Zhou, Chenyi; Guo, Hong

    2017-01-01

    We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.

  3. Stone comminution correlates with the average peak pressure incident on a stone during shock wave lithotripsy.

    Science.gov (United States)

    Smith, N; Zhong, P

    2012-10-11

    To investigate the roles of lithotripter shock wave (LSW) parameters and cavitation in stone comminution, a series of in vitro fragmentation experiments have been conducted in water and 1,3-butanediol (a cavitation-suppressive fluid) at a variety of acoustic field positions of an electromagnetic shock wave lithotripter. Using field mapping data and integrated parameters averaged over a circular stone holder area (Rh = 7 mm), close logarithmic correlations between the average peak pressure (P+(avg)) incident on the stone (D = 10 mm BegoStone) and comminution efficiency after 500 and 1000 shocks have been identified. Moreover, the correlations have demonstrated distinctive thresholds in P+(avg) (5.3 MPa and 7.6 MPa for soft and hard stones, respectively) that are required to initiate stone fragmentation independent of surrounding fluid medium and LSW dose. These observations, should they be confirmed using other shock wave lithotripters, may provide an important field parameter (i.e., P+(avg)) to guide appropriate application of SWL in clinics, and facilitate device comparison and design improvements in future lithotripters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Characterization of the spatial structure of local functional connectivity using multi-distance average correlation measures.

    Science.gov (United States)

    Macia, Didac; Pujol, Jesus; Blanco-Hinojo, Laura; Martínez-Vilavella, Gerard; Martín-Santos, Rocío; Deus, Joan

    2018-04-24

    There is ample evidence from basic research in neuroscience of the importance of local cortico-cortical networks. Millimetric resolution is achievable with current functional MRI (fMRI) scanners and sequences, and consequently a number of "local" activity similarity measures have been defined to describe patterns of segregation and integration at this spatial scale. We have introduced the use of Iso-Distant local Average Correlation (IDAC), easily defined as the average fMRI temporal correlation of a given voxel with other voxels placed at increasingly separated iso-distant intervals, to characterize the curve of local fMRI signal similarities. IDAC curves can be statistically compared using parametric multivariate statistics. Furthermore, by using RGB color-coding to display jointly IDAC values belonging to three different distance lags, IDAC curves can also be displayed as multi-distance IDAC maps. We applied IDAC analysis to a sample of 41 subjects scanned under two different conditions, a resting state and an auditory-visual continuous stimulation. Multi-distance IDAC mapping was able to discriminate between gross anatomo-functional cortical areas and, moreover, was sensitive to modulation between the two brain conditions in areas known to activate and de-activate during audio-visual tasks. Unlike previous fMRI local similarity measures already in use, our approach draws special attention to the continuous smooth pattern of local functional connectivity.
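
    A minimal sketch of the IDAC idea as described here (the mean temporal correlation of a seed voxel with voxels in successive iso-distant shells); the synthetic 4D array, voxel size, and distance bins are placeholder assumptions.

```python
# Hedged sketch of the IDAC measure: mean temporal correlation of a seed voxel
# with all voxels in successive iso-distant shells. The 4D array, voxel size,
# and distance bins are placeholders.
import numpy as np

def idac_curve(data, voxel, bin_edges_mm, voxel_size_mm=3.0):
    """Mean correlation of `voxel` with voxels in each distance shell."""
    nx, ny, nz, _ = data.shape
    ts = data[voxel]                                  # seed voxel time series
    grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                                indexing="ij"), axis=-1)
    dist = np.linalg.norm((grid - np.array(voxel)) * voxel_size_mm, axis=-1)
    curve = []
    for lo, hi in zip(bin_edges_mm[:-1], bin_edges_mm[1:]):
        idx = np.where((dist >= lo) & (dist < hi))
        corrs = [np.corrcoef(ts, data[i, j, k])[0, 1] for i, j, k in zip(*idx)]
        curve.append(np.mean(corrs))
    return np.array(curve)

rng = np.random.default_rng(6)
data = rng.standard_normal((10, 10, 10, 120))         # placeholder 4D "fMRI" data
print(idac_curve(data, voxel=(5, 5, 5), bin_edges_mm=[3.0, 6.0, 9.0, 12.0]))
```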

  5. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE a cross-correlation analysis between consecutive RF-lines in complex form is carried out for a range...... of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  6. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  7. Correlation between maximum voluntary contraction and endurance measured by digital palpation and manometry: An observational study

    Directory of Open Access Journals (Sweden)

    Fátima Faní Fitz

    Full Text Available Summary Introduction: Digital palpation and manometry are methods that can provide information regarding maximum voluntary contraction (MVC) and endurance of the pelvic floor muscles (PFM), and a strong correlation between these variables can be expected. Objective: To investigate the correlation between MVC and endurance, measured by digital palpation and manometry. Method: Forty-two women, with a mean age of 58.1 years (±10.2) and predominant symptoms of stress urinary incontinence (SUI), were included. Examination was first conducted by digital palpation and subsequently using a Peritron manometer. MVC was measured using a 0-5 score, based on the Oxford Grading Scale. Endurance was assessed based on the PERFECT scheme. Results: We found a significant positive correlation between the MVC measured by digital palpation and the peak manometric pressure (r=0.579, p<0.001), and between the measurements of endurance by the Peritron manometer and the PERFECT assessment scheme (r=0.559, p<0.001). Conclusion: Our results revealed a positive and significant correlation between the capacity and maintenance of PFM contraction using digital and manometric evaluations in women with predominant symptoms of SUI.

  8. The correlation between physical activity and grade point average for health science graduate students.

    Science.gov (United States)

    Gonzalez, Eugenia C; Hernandez, Erika C; Coltrane, Ambrosia K; Mancera, Jayme M

    2014-01-01

    Researchers have reported positive associations between physical activity and academic achievement. However, a common belief is that improving academic performance comes at the cost of reducing time for and resources spent on extracurricular activities that encourage physical activity. The purpose of this study was to examine the relationship between self-reported physical activity and grade point average (GPA) for health science graduate students. Graduate students in health science programs completed the International Physical Activity Questionnaire and reported their academic progress. Most participants (76%) reported moderate to vigorous physical activity levels that met or exceeded the recommended levels for adults. However, there was no significant correlation between GPA and level of physical activity. Negative findings for this study may be associated with the limited range of GPA scores for graduate students. Future studies need to consider more sensitive measures of cognitive function, as well as the impact of physical activity on occupational balance and health for graduate students in the health fields. Copyright 2014, SLACK Incorporated.

  9. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  10. Correlation between maximum phonetically balanced word recognition score and pure-tone auditory threshold in elder presbycusis patients over 80 years old.

    Science.gov (United States)

    Deng, Xin-Sheng; Ji, Fei; Yang, Shi-Ming

    2014-02-01

    The maximum phonetically balanced word recognition score (PBmax) showed poor correlation with pure-tone thresholds in presbycusis patients older than 80 years. To study the characteristics of monosyllable recognition in presbycusis patients older than 80 years of age. Thirty presbycusis patients older than 80 years were included as the test group (group 80+). Another 30 patients aged 60-80 years were selected as the control group (group 80-). PBmax was tested with Mandarin monosyllable recognition test materials, with the signal level at 30 dB above the averaged thresholds of 0.5, 1, 2, and 4 kHz (4FA) or at the maximum comfortable level. The PBmax values of the test group and control group were compared with each other, and the correlation between PBmax and the predicted maximum speech recognition score based on 4FA (PBmax-predict) was statistically analyzed. Under the optimal test conditions, the averaged PBmax was (77.3 ± 16.7) % for group 80- and (52.0 ± 25.4) % for group 80+ (p < 0.001). The PBmax of group 80- was significantly correlated with PBmax-predict (Spearman correlation = 0.715, p < 0.001). The score for group 80+ showed a weaker correlation with PBmax-predict (Spearman correlation = 0.572, p = 0.001).

  11. Determination of maximum physiologic thyroid uptake and correlation with 24-hour RAI uptake value

    International Nuclear Information System (INIS)

    Duldulao, M.; Obaldo, J.

    2007-01-01

    Full text: In hyperthyroid patients, thyroid uptake values are overestimated, sometimes approaching or exceeding 100%. This is physiologically and mathematically impossible. This study was undertaken to determine the maximum physiologic thyroid uptake value through a proposed simple method using a gamma camera. Methodology: Twenty-two patients (17 females and 5 males), with ages ranging from 19-61 y/o (mean age ± SD; 41 ± 12), with 24-hour uptake value of >50%, clinically hyperthyroid and referred for subsequent radioactive iodine therapy were studied. The computed maximum physiologic thyroid uptake was compared with the 24-hour uptake using the paired Student t-test and evaluated using linear regression analysis. Results: The computed physiologic uptake correlated poorly with the 24-hour uptake value. However, in the male subgroup, there was no statistically significant difference between the two (p=0.77). Linear regression analysis gives the following relationship: physiologic uptake (%) = 77.76 - 0.284 (24-hour RAI uptake value). Conclusion: Provided that proper regions of interest are applied with correct attenuation and background subtraction, determination of physiologic thyroid uptake may be obtained using the proposed method. This simple method may be useful prior to I-131 therapy for hyperthyroidism especially when a single uptake determination is performed. (author)

  12. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite-range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity, and we compute it.

  13. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
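
    The period-estimation step described above (autocorrelation of the envelope signal) can be sketched as follows; the test signal, sampling rate and minimum-period guard are made up, and this is a simplified stand-in rather than the authors' IMCKD code.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_fault_period(x, fs, min_period=1e-3):
    """Estimate the dominant impulse period from the autocorrelation of the
    signal envelope, as used for the iterative period in IMCKD-style methods."""
    env = np.abs(hilbert(x))                                 # envelope via Hilbert transform
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # one-sided autocorrelation
    ac /= ac[0]                                              # normalise
    start = int(min_period * fs)                             # skip the zero-lag peak
    lag = start + np.argmax(ac[start:])
    return lag / fs                                          # period in seconds

# toy example: periodic impulses (period 0.01 s) buried in noise
fs = 20000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 0.01) < 0.0005)  # bursts every 10 ms
x = impulses + 0.3 * rng.standard_normal(t.size)
print(estimate_fault_period(x, fs))                          # expected close to 0.01 s
```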

  14. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements under the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    М.А. Zemlyanova

    2015-03-01

    Full Text Available We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, performed on the basis of our own in-depth epidemiological studies of the children's population according to the principles of international risk assessment practice. It was established that children aged 4–7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigment metabolism, antioxidative activity, cytolysis, immune reactivity and cytogenetic balance, which contribute to increased morbidity from diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proven cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which provides safety for the health of the population (1 threshold level value for continuous exposure).

  15. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

    Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands or the Lesser Antilles, subduction zones pose a significant hazard to people. To understand the behavior of subduction zones, and especially to identify their capability to produce maximum magnitude earthquakes, various physical models have been developed, leading to a large number of datasets, e.g. from geodesy, geomagnetics, structural geology, etc. Several studies have used these data to compile subduction zone parameter databases, but they mostly concentrate on the major zones. Here, we compile the largest dataset of subduction zone parameters to date, both in parameter diversity and in the number of subduction zones considered. In total, more than 70 individual sources have been assessed; the aforementioned parametric data have been combined with seismological data and many more sources, leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since data completeness depends on the availability and quality of the data for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and additionally compared and verified against results from previous studies. With such a database, a statistical study has been undertaken to identify not only correlations between those parameters, in order to estimate the potential for maximum possible magnitudes in a parameter-driven way, but also similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones. Here, it could be expected that if two sources share enough common characteristics, other characteristics of interest may be similar as well. This concept

  16. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
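
    A hedged sketch of the covariance-based log-likelihood that such methods maximise, for a white + power-law noise model; the power-law covariance is built here from a Hosking-type fractional-integration filter, which is one common construction and not necessarily the one used by Langbein or Bos et al.

```python
import numpy as np
from scipy.special import gammaln

def powerlaw_transform(n, alpha):
    """Lower-triangular Toeplitz filter whose impulse response turns white
    noise into 1/f^alpha noise (Hosking-type fractional integration)."""
    k = np.arange(n)
    # h_k = Gamma(k + alpha/2) / (k! * Gamma(alpha/2)), computed in log space
    h = np.exp(gammaln(k + alpha / 2.0) - gammaln(k + 1.0) - gammaln(alpha / 2.0))
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = h[i::-1]
    return T

def neg_log_likelihood(residuals, sigma_white, sigma_pl, alpha):
    """-log L of the residuals under a white + power-law noise model,
    using the full data covariance matrix and its Cholesky factor."""
    n = residuals.size
    T = powerlaw_transform(n, alpha)
    C = sigma_white**2 * np.eye(n) + sigma_pl**2 * (T @ T.T)
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, residuals)            # whitened residuals
    logdet = 2.0 * np.sum(np.log(np.diag(L)))    # log det C
    return 0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)

# the noise parameters would then be found by minimising this function,
# e.g. with scipy.optimize.minimize over (sigma_white, sigma_pl, alpha)
```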

  17. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  18. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  19. Detrending moving-average cross-correlation coefficient: Measuring cross-correlations between non-stationary series

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    Roč. 406 , č. 1 (2014), s. 169-175 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Grant - others:GA ČR(CZ) GAP402/11/0948 Program:GA Institutional support: RVO:67985556 Keywords : correlations * econophysics * non-stationarity Subject RIV: AH - Economics Impact factor: 1.732, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kristoufek-0433529.pdf

  20. Averages of ratios of the Riemann zeta-function and correlations of divisor sums

    Science.gov (United States)

    Conrey, Brian; Keating, Jonathan P.

    2017-10-01

    Nonlinearity has published articles containing a significant number-theoretic component since the journal was first established. We examine one thread, concerning the statistics of the zeros of the Riemann zeta function. We extend this by establishing a connection between the ratios conjecture for the Riemann zeta-function and a conjecture concerning correlations of convolutions of Möbius and divisor functions. Specifically, we prove that the ratios conjecture and an arithmetic correlations conjecture imply the same result. This provides new support for the ratios conjecture, which previously had been motivated by analogy with formulae in random matrix theory and by a heuristic recipe. Our main theorem generalises a recent calculation pertaining to the special case of two-over-two ratios.

  1. Fluxes by eddy correlation over heterogeneous landscape: How shall we apply the Reynolds average?

    Science.gov (United States)

    Dobosy, R.

    2007-12-01

    Top-down estimates of carbon exchange across the earth's surface are implicitly an integral scheme, deriving bulk exchanges over large areas. Bottom-up estimates explicitly integrate the individual components of exchange to derive a bulk value. If these approaches are to be properly compared, their estimates should represent the same quantity. Over heterogeneous landscape, eddy-covariance flux computations from towers or aircraft intended for comparison with the top-down approach face a question of the proper definition of the mean or base state, the departures from which yield the fluxes by Reynolds averaging. (1) Use a global base state derived over a representative sample of the surface, insensitive to land use. The departure quantities then fail to sum to zero over any subsample representing an individual surface type, violating the Reynolds criteria; yet fluxes derived from such subsamples can be directly composed into a bulk flux, globally satisfying the Reynolds criteria. (2) Use a different base state for each surface type, satisfying the Reynolds criteria individually. Then some of the flux may get missed if a surface's characteristics significantly bias its base state. Base state (2) is natural for tower samples. Base state (1) is natural for airborne samples over heterogeneous landscape, especially in patches smaller than an appropriate averaging length. It appears that (1) incorporates a more realistic sample of the flux, though desirably there would be no practical difference between the two schemes. The schemes are related by the expression $\overline{w^{*}a^{*}})_C - \overline{w'a'})_C = \overline{w'\tilde{a}})_C + \overline{\tilde{w}a'})_C + \overline{\tilde{w}\tilde{a}})_C$. Here w is vertical motion and a is some scalar, such as CO2. The star denotes departure from the global base state (1), the prime denotes departure from the base state (2) defined only over surface class C, and the tilde (which these definitions force) denotes the difference between the class-C base state and the global base state. The overbar with round bracket denotes an average over samples drawn from class C, determined by a footprint model. Thus $\overline{a'})_C = 0$ but $\overline{a^{*}})_C \neq 0$ in general.
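
    A toy numerical illustration of the two base-state choices, using synthetic samples with hypothetical class-dependent means for both w and the scalar; it only demonstrates that class fluxes computed with the global base state compose exactly into the bulk flux, while per-class base states remove the class-mean contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
cls = rng.integers(0, 2, n)                                 # surface class of each sample
w = 0.1 * cls + rng.standard_normal(n)                      # vertical velocity, class-biased mean
a = 400.0 + 5.0 * cls + 0.4 * w + rng.standard_normal(n)    # scalar (e.g. CO2), class-biased mean

# Scheme (1): departures from a single global base state
w_star, a_star = w - w.mean(), a - a.mean()

# Scheme (2): departures from a separate base state for each surface class
w_pr, a_pr = np.empty(n), np.empty(n)
for c in (0, 1):
    m = cls == c
    w_pr[m], a_pr[m] = w[m] - w[m].mean(), a[m] - a[m].mean()

for c in (0, 1):
    m = cls == c
    print(c,
          np.mean(w_star[m] * a_star[m]),   # class flux with the global base state
          np.mean(w_pr[m] * a_pr[m]))       # class flux with the per-class base state

# class fluxes from scheme (1) compose exactly into the bulk flux
bulk = np.mean(w_star * a_star)
composed = sum(np.mean(cls == c) * np.mean(w_star[cls == c] * a_star[cls == c])
               for c in (0, 1))
print(bulk, composed)
```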

  2. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
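
    For reference, a small sketch of the Fisher r → Z confidence interval mentioned above (the standard textbook formula; the sample values in the example are made up):

```python
import numpy as np
from scipy.stats import norm

def pearson_r_confidence_interval(r, n, conf=0.95):
    """Confidence interval for a Pearson correlation via the Fisher r -> Z transform."""
    z = np.arctanh(r)                       # Fisher transform
    se = 1.0 / np.sqrt(n - 3)               # standard error of Z
    zcrit = norm.ppf(0.5 + conf / 2.0)
    lo, hi = z - zcrit * se, z + zcrit * se
    return np.tanh(lo), np.tanh(hi)         # back-transform to the r scale

print(pearson_r_confidence_interval(r=0.80, n=25))   # roughly (0.59, 0.91)
```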

  3. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  4. Eigenstructures of MIMO Fading Channel Correlation Matrices and Optimum Linear Precoding Designs for Maximum Ergodic Capacity

    Directory of Open Access Journals (Sweden)

    Hamid Reza Bahrami

    2007-01-01

    Full Text Available The ergodic capacity of MIMO frequency-flat and -selective channels depends greatly on the eigenvalue distribution of spatial correlation matrices. Knowing the eigenstructure of correlation matrices at the transmitter is very important to enhance the capacity of the system. This fact becomes of great importance in MIMO wireless systems where because of the fast changing nature of the underlying channel, full channel knowledge is difficult to obtain at the transmitter. In this paper, we first investigate the effect of eigenvalues distribution of spatial correlation matrices on the capacity of frequency-flat and -selective channels. Next, we introduce a practical scheme known as linear precoding that can enhance the ergodic capacity of the channel by changing the eigenstructure of the channel by applying a linear transformation. We derive the structures of precoders using eigenvalue decomposition and linear algebra techniques in both cases and show their similarities from an algebraic point of view. Simulations show the ability of this technique to change the eigenstructure of the channel, and hence enhance the ergodic capacity considerably.
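
    As a generic illustration of the precoding idea discussed here (not the paper's derivation), the sketch below aligns the precoder with the eigenvectors of an assumed transmit correlation matrix and allocates power across the eigenmodes by water-filling:

```python
import numpy as np

def eigen_precoder(R_tx, snr, n_streams=None):
    """Precoder aligned with the eigenvectors of the transmit correlation
    matrix R_tx, with water-filling power allocation over its eigenvalues."""
    eigval, eigvec = np.linalg.eigh(R_tx)            # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # strongest modes first
    if n_streams is None:
        n_streams = len(eigval)
    gains = snr * eigval[:n_streams]

    # water-filling: p_i = max(mu - 1/g_i, 0), with sum(p_i) = 1
    for k in range(n_streams, 0, -1):
        mu = (1.0 + np.sum(1.0 / gains[:k])) / k
        p = mu - 1.0 / gains[:k]
        if p[-1] > 0:                                # weakest of the k modes still active
            power = np.concatenate([p, np.zeros(n_streams - k)])
            break
    return eigvec[:, :n_streams] @ np.diag(np.sqrt(power))

# assumed transmit correlation matrix (illustrative values only)
R_tx = np.array([[1.0, 0.7, 0.4],
                 [0.7, 1.0, 0.7],
                 [0.4, 0.7, 1.0]])
F = eigen_precoder(R_tx, snr=10.0)
print(np.round(F, 3), np.trace(F @ F.T))             # total transmit power ~1
```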

  5. An overview of the report: Correlation between carcinogenic potency and the maximum tolerated dose: Implications for risk assessment

    International Nuclear Information System (INIS)

    Krewski, D.; Gaylor, D.W.; Soms, A.P.; Szyszkowicz, M.

    1993-01-01

    Current practice in carcinogen bioassay calls for exposure of experimental animals at doses up to and including the maximum tolerated dose (MTD). Such studies have been used to compute measures of carcinogenic potency such as the TD50 as well as unit risk factors such as q1 for predicting low-dose risks. Recent studies have indicated that these measures of carcinogenic potency are highly correlated with the MTD. Carcinogenic potency has also been shown to be correlated with indicators of mutagenicity and toxicity. Correlation of the MTDs for rats and mice implies a corresponding correlation in TD50 values for these two species. The implications of these results for cancer risk assessment are examined in light of the large variation in potency among chemicals known to induce tumors in rodents. 119 refs., 2 figs., 4 tabs

  6. Regional correlations of V s30 and velocities averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, D.M.; Thompson, E.M.; Cadet, H.

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in log VS30 of 1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
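
    Relations of this kind are typically fitted as log-log regressions; a sketch with made-up profile data (not the published coefficients) of extrapolating VS30 from a velocity averaged only to 20 m:

```python
import numpy as np

# hypothetical training data: time-averaged velocities to 20 m and to 30 m (m/s)
vs20 = np.array([180.0, 250.0, 320.0, 400.0, 520.0, 640.0, 780.0])
vs30 = np.array([200.0, 270.0, 350.0, 430.0, 560.0, 700.0, 830.0])

# fit log10(VS30) = a + b * log10(VS20)
b, a = np.polyfit(np.log10(vs20), np.log10(vs30), 1)

def predict_vs30(vsz):
    """Estimate VS30 from a velocity averaged over a shallower depth."""
    return 10.0 ** (a + b * np.log10(vsz))

print(a, b, predict_vs30(300.0))   # estimate VS30 for a profile reaching only 20 m
```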

  7. Correlation of patient maximum skin doses in cardiac procedures with various dose indicators

    International Nuclear Information System (INIS)

    Domienik, J.; Papierz, S.; Jankowski, J.; Peruga, J.Z.; Werduch, A.; Religa, W.

    2008-01-01

    In most countries of the European Union, legislation requires the determination of the total skin dose received by patients during interventional procedures in order to prevent deterministic damage. Various dose indicators like dose-area product (DAP), cumulative dose (CD) and entrance dose at the patient plane (EFD) are used for patient dosimetry purposes in clinical practice. This study aimed at relating those dose indicators with the doses ascribed to the most irradiated areas of the patient skin, usually expressed in terms of local maximal skin dose (MSD). The study was performed in two different facilities for the two most common cardiac procedures, coronary angiography (CA) and percutaneous coronary interventions (PCI). For CA procedures, the registered values of fluoroscopy time, total DAP and MSD were in the range (0.7-27.3) min, (16-317) Gy cm² and (43-1507) mGy, respectively, and for interventions, accordingly (2.1-43.6) min, (17-425) Gy cm², (71-1555) mGy. Moreover, for CA procedures, CD and EFD were in the ranges (295-4689) mGy and (121-1768) mGy, and for PCI (267-6524) mGy and (68-2279) mGy, respectively. No general and satisfactory correlation was found for safe estimation of MSD. However, the results show that the best dose indicator, which might serve for a rough, preliminary estimation, is the DAP value. In the study, appropriate trigger levels were proposed for both facilities. (authors)

  8. A comparison of the Angstrom-type correlations and the estimation of monthly average daily global irradiation

    International Nuclear Information System (INIS)

    Jain, S.; Jain, P.C.

    1985-12-01

    Linear regression analysis of the monthly average daily global irradiation and the sunshine duration data of 8 Zambian locations has been performed using the least square technique. Good correlation (r>0.95) is obtained in all the cases showing that the Angstrom equation is valid for Zambian locations. The values of the correlation parameters thus obtained show substantial unsystematic scatter. The analysis was repeated after incorporating the effects of (i) multiple reflections of radiation between the ground and the atmosphere, and (ii) not burning of the sunshine recorder chart, into the Angstrom equation. The surface albedo measurements at Lusaka were used. The scatter in the correlation parameters was investigated by graphical representation, by regression analysis of the data of the individual stations as well as the combined data of the 8 stations. The results show that the incorporation of none of the two effects reduces the scatter significantly. A single linear equation obtained from the regression analysis of the combined data of the 8 stations is found to be appropriate for estimating the global irradiation over Zambian locations with reasonable accuracy from the sunshine duration data. (author)
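
    The Angstrom-type regression underlying this analysis, H/H0 = a + b(n/N), can be fitted by ordinary least squares; a short sketch with toy monthly values (not the Zambian data):

```python
import numpy as np

# hypothetical monthly averages: relative sunshine duration n/N and clearness index H/H0
s = np.array([0.45, 0.52, 0.60, 0.68, 0.74, 0.80, 0.83, 0.79, 0.71, 0.63, 0.55, 0.48])
k = np.array([0.42, 0.47, 0.52, 0.57, 0.61, 0.65, 0.67, 0.64, 0.59, 0.54, 0.49, 0.44])

b, a = np.polyfit(s, k, 1)                # fit k = a + b * s
r = np.corrcoef(s, k)[0, 1]               # correlation coefficient of the fit
print(f"a = {a:.3f}, b = {b:.3f}, r = {r:.3f}")

# monthly average daily global irradiation is then estimated as H = H0 * (a + b * n/N)
```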

  9. [In patients with Graves' disease signal-averaged P wave duration positively correlates with the degree of thyrotoxicosis].

    Science.gov (United States)

    Czarkowski, Marek; Oreziak, Artur; Radomski, Dariusz

    2006-04-01

    Coexistence of goitre, proptosis and palpitations was observed for the first time in the XIX century. Sinus tachyarrhythmias and atrial fibrillation are typical cardiac symptoms of hyperthyroidism. Atrial fibrillation occurs more often in patients with toxic goiter than in young patients with Graves' disease. These findings suggest that the causes of atrial fibrillation might be multifactorial in the elderly. The aims of our study were to evaluate correlations between the parameters of the atrial signal-averaged ECG (SAECG) and the serum concentrations of free thyroid hormones. 25 patients with untreated Graves' disease (G-B) (age 29.6 +/- 9.0 y.o.) and 26 control patients (age 29.3 +/- 6.9 y.o.) were enrolled in our study. None of them had a history of atrial fibrillation, which was confirmed by 24-hour ECG Holter monitoring. The serum fT3, fT4 and TSH were determined in venous blood by an immunoenzymatic method. Atrial SAECG recording with filtration by a zero-phase Butterworth filter (45-150 Hz) was done in all subjects. The duration of the atrial vector magnitude (hfP) and the root mean square of the terminal 20 ms of the atrial vector magnitude (RMS20) were analysed. There were no significant differences in the values of the SAECG parameters (hfP, RMS20) between the investigated groups. A positive correlation between hfP and serum fT3 concentration in group G-B was observed (Spearman's correlation coefficient R = 0.462, p < …). Thus, hfP duration in patients with Graves' disease depends not only on hyperthyroidism but also on the serum concentration of fT3.

  10. Analysis for average heat transfer empirical correlation of natural convection on the concentric vertical cylinder modelling of APWR

    International Nuclear Information System (INIS)

    Daddy Setyawan

    2011-01-01

    There are several passive safety systems in the APWR reactor design. One of them is the cooling system based on natural circulation of air over the surface of the concentric vertical cylinder containment wall. Since the performance of natural air circulation in the Passive Containment Cooling System (PCCS) is related to safety, the cooling characteristics of natural air circulation on the concentric vertical cylinder containment wall should be studied experimentally. This paper focuses on an experimental study of the heat transfer coefficient of natural air circulation, with the heat flux level varied, on a model of the APWR concentric vertical cylinder containment wall. The experimental study comprised four stages: design of the APWR containment model at 1:40 scale, assembly of the containment model and its instrumentation, calibration, and experimentation. The experiments were conducted in transient and steady state, with the heat flux varied from 119 W/m2 to 575 W/m2. From the experiments, the average heat transfer empirical correlation for natural convection Nu_L = 0.008 (Ra*_L)^0.68 was obtained for the concentric vertical cylinder geometry modelling of the APWR. (author)
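
    To show how such a correlation is used, the snippet below simply evaluates Nu_L = 0.008 (Ra*_L)^0.68 for a few assumed modified Rayleigh numbers; the average heat transfer coefficient would then follow from h = Nu_L k / L.

```python
# evaluate the reported correlation Nu_L = 0.008 * (Ra*_L)**0.68
def average_nusselt(ra_star):
    return 0.008 * ra_star ** 0.68

for ra_star in (1e9, 1e10, 1e11):          # assumed modified Rayleigh numbers
    nu = average_nusselt(ra_star)
    # the average heat transfer coefficient would follow as h = Nu_L * k_air / L
    print(f"Ra*_L = {ra_star:.0e}  ->  Nu_L = {nu:.0f}")
```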

  11. Remote Sensing of Three-dimensional Winds with Elastic Lidar: Explanation of Maximum Cross-correlation Method

    Science.gov (United States)

    Buttler, William T.; Soriano, Cecilia; Baldasano, Jose M.; Nickel, George H.

    Maximum cross-correlation provides a method to remotely determine highly resolved three-dimensional fields of horizontal winds with elastic lidar throughout large volumes of the planetary boundary layer (PBL). This paper details the technique and shows comparisons between elastic lidar winds, remotely sensed laser Doppler velocimeter (LDV) wind profiles, and radiosonde winds. Radiosonde wind data were acquired at Barcelona, Spain, during the Barcelona Air-Quality Initiative (1992), and the LDV wind data were acquired at Sunland Park, New Mexico, during the 1994 Border Area Air-Quality Study. Comparisons show good agreement between the different instruments, and demonstrate the method useful for air pollution management at the local/regional scale. Elastic lidar winds could thus offer insight into aerosol and pollution transport within the PBL. Lidar wind fields might also be used to nudge or improve initialization and evaluation of atmospheric meteorological models.

  12. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Science.gov (United States)

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela

    2017-04-01

    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or uncertain, owing to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location in a second image, separated from the first by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy-efficient and comfortable ship navigation.
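
    The core maximum cross correlation step can be sketched compactly: take a small template from the first field, slide it over a search window in the second field, and convert the best-matching offset into a velocity. The fields, grid spacing and time separation below are synthetic, and the parameters are only loosely inspired by the values quoted above.

```python
import numpy as np

def mcc_displacement(field1, field2, center, template=5, search=15):
    """Find the offset within +/-search pixels that maximises the normalised
    cross correlation of a (2*template+1)^2 patch taken around `center`."""
    i0, j0 = center
    patch = field1[i0 - template:i0 + template + 1, j0 - template:j0 + template + 1]
    patch = patch - patch.mean()
    best, best_off = -np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            cand = field2[i0 + di - template:i0 + di + template + 1,
                          j0 + dj - template:j0 + dj + template + 1]
            cand = cand - cand.mean()
            denom = np.sqrt((patch**2).sum() * (cand**2).sum())
            if denom == 0:
                continue
            corr = (patch * cand).sum() / denom
            if corr > best:
                best, best_off = corr, (di, dj)
    return best_off, best

# toy example: second field is the first shifted by (3, -2) pixels plus noise
rng = np.random.default_rng(2)
sst1 = rng.standard_normal((80, 80))
sst2 = np.roll(sst1, shift=(3, -2), axis=(0, 1)) + 0.1 * rng.standard_normal((80, 80))
(di, dj), corr = mcc_displacement(sst1, sst2, center=(40, 40))
dx_km, dt_hours = 1.0, 6.0                       # assumed pixel size and image separation
print(di, dj, corr)                              # expected offset near (3, -2)
print("speed [km/h]:", np.hypot(di, dj) * dx_km / dt_hours)
```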

  13. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p < …) were found for chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.

  14. A Correlation Between the Intrinsic Brightness and Average Decay Rate of Gamma-Ray Burst X-Ray Afterglow Light Curves

    Science.gov (United States)

    Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.

    2016-01-01

    We present a correlation between the average temporal decay (α_X,avg, >200 s) and the early-time luminosity (L_X,200 s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity-average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity, minimizing the role of selection effects. This is a complementary correlation to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.

  15. Correlation between average tissue depth data and quantitative accuracy of forensic craniofacial reconstructions measured by geometric surface comparison method.

    Science.gov (United States)

    Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi

    2015-05-01

    Accuracy, judged against the corresponding actual face, is the most important factor supporting the reliability of forensic facial reconstruction (FFR). A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by a geometric surface comparison method. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study that applied the same methodology except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important for increasing the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.

  16. Uncertainties and correlations for the 56Fe damage cross sections and spectra averaged quantities based on TENDL-TMC

    International Nuclear Information System (INIS)

    Simakov, S.P.; Konobeyev, A.Yu.; Koning, A.

    2016-01-01

    The goal of this work is the calculation of covariance matrices for the physical quantities used to characterize neutron-induced radiation damage in materials. Such quantities usually encompass the charged-particle kinetic energy deposition KERMA (locally deposited nuclear heating), the damage energy (used to calculate the number of displaced atoms) and the gas production cross sections [(n,xα), (n,xt), (n,xp), … used to calculate the transmutation of target nuclei to gases]. The uncertainties and the energy-energy or reaction-reaction correlations for such quantities have not been assessed so far, whereas covariances for many of the underlying cross sections are often given in the evaluated data libraries. Because the damage quantities depend on many reaction channels, on both total and differential cross sections, and in particular on the energy distribution of reaction recoils, the evaluation of their uncertainty is not straightforward. To reach this goal, we used a method based on the idea of Total Monte Carlo applied to the nuclear data. This report summarises the current results for the evaluation, validation and representation in ENDF-6 format of the radiation damage covariances for n + 56Fe from thermal energy up to 20 MeV. This study was motivated by the IAEA Coordinated Research Project "Primary Radiation Damage Cross Sections" and by the present dedicated Technical Meeting "Nuclear Reaction Data and Uncertainties for Radiation Damage".
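
    At its core, the Total Monte Carlo approach builds covariance and correlation matrices from many calculations repeated with randomly sampled nuclear-data files; a schematic sketch, with fake sampled curves standing in for results obtained from TENDL random files:

```python
import numpy as np

# Suppose each row of `samples` holds a damage quantity (e.g. a damage energy
# cross section) computed on the same energy grid from one random nuclear-data file.
rng = np.random.default_rng(3)
energy = np.logspace(-2, np.log10(20.0), 50)          # MeV, illustrative grid
base = 100.0 * np.sqrt(energy)                        # fake mean curve
samples = base * (1.0 + 0.05 * rng.standard_normal((300, energy.size)))

mean = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)                   # energy-energy covariance matrix
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)                       # correlation matrix

rel_unc = 100.0 * std / mean                          # relative uncertainty in %
print(rel_unc[:5])
print(corr[0, :5])                                    # correlations of the first energy group
```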

  17. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of He+ ions in metals: V

    International Nuclear Information System (INIS)

    Kaminsky, M.; Das, S.K.; Fenske, G.

    1976-01-01

    In these experiments a systematic study of the correlation of the skin thickness, measured directly by scanning electron microscopy, with both the calculated projected-range values and the maximum in the damage-energy distribution has been conducted for a broad helium-ion energy range (100 keV-1000 keV) in polycrystalline vanadium. (Auth.)

  18. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NARCIS (Netherlands)

    Schellart, W. P.; Rawlinson, N.

    2013-01-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g.

  19. Consistency and asymptotic normality of maximum likelihood estimators of a multiplicative time-varying smooth transition correlation GARCH model

    DEFF Research Database (Denmark)

    Silvennoinen, Annestiina; Terasvirta, Timo

    A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations...

  20. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  1. CORRELATION BETWEEN PATHOLOGY AND EXCESS OF MAXIMUM CONCENTRATION LIMIT OF POLLUTANTS IN THE ENVIRONMENT OF THE REPUBLIC OF DAGESTAN

    Directory of Open Access Journals (Sweden)

    G. M. Abdurakhmanov

    2013-01-01

    Full Text Available Abstract. Statistical data from "Indicators of health status of the Republic of Dagestan" for the years 1999-2010 are presented in this work. The aim of this work was to identify cause-effect correlations between non-communicable diseases (ischemic heart disease, neuropsychiatric disease, endemic goiter, diabetes, congenital anomalies) and environmental factors in the Republic of Dagestan. Statistical data processing was carried out using the software packages Statistica and Microsoft Excel. The Spearman rank correlation coefficient (ρ) was used to identify correlations between indicators of environmental quality and the health of the population. A moderate positive correlation is observed between the development of pathology and the excess of concentrations of contaminants in drinking water sources. Direct correlations are found between the development of the studied pathologies and excess concentrations of heavy metals and their mobile forms in the soils of the region. A direct correlation is found between excess concentrations of heavy metals in the pasture vegetation (factorial character) and the morbidity of the population (effective character).

  2. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail in certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than those produced by several other methods. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to
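
    A simplified sketch of the assignment rule ACCA is built on, reassigning each gene to the cluster with which it has the highest average correlation; this is a generic re-implementation of the idea, not the authors' C/Visual Basic software.

```python
import numpy as np

def average_correlation_clustering(expr, n_clusters, n_iter=50, seed=0):
    """expr: (n_genes, n_conditions) expression matrix.
    Each gene is repeatedly reassigned to the cluster with which it has the
    highest average Pearson correlation, following the ACCA idea."""
    n_genes = expr.shape[0]
    corr = np.corrcoef(expr)                        # gene-gene correlation matrix
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, n_genes)   # random initial partition

    for _ in range(n_iter):
        changed = False
        for g in range(n_genes):
            # average correlation of gene g with every cluster (excluding itself)
            avg = np.full(n_clusters, -np.inf)
            for c in range(n_clusters):
                members = np.flatnonzero((labels == c) & (np.arange(n_genes) != g))
                if members.size:
                    avg[c] = corr[g, members].mean()
            best = int(np.argmax(avg))
            if best != labels[g]:
                labels[g] = best
                changed = True
        if not changed:
            break
    return labels

# toy example: two groups of genes with different patterns of variation
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 12)
group1 = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((20, 12))
group2 = np.cos(2 * np.pi * t) + 0.2 * rng.standard_normal((20, 12))
expr = np.vstack([group1, group2])
print(average_correlation_clustering(expr, n_clusters=2))
```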

  3. Release of Volatiles During North Atlantic Flood Basalt Volcanism and Correlation to the Paleocene-Eocene Thermal Maximum

    Science.gov (United States)

    Pedersen, J. M.; Tegner, C.; Kent, A. J.; Ulrich, T.

    2017-12-01

    The opening of the North Atlantic Ocean between Greenland and Norway during the lower Tertiary led to intense flood basalt volcanism and the emplacement of the North Atlantic Igneous Province (NAIP). The volcanism overlaps temporally with the Paleocene-Eocene Thermal Maximum (PETM), but ash stratigraphy and geochronology suggest that the main flood basalt sequence in East Greenland postdates the PETM. Significant environmental changes during the PETM have been attributed to the release of CO2 or methane gas, due either to extensive melting of hydrates at the ocean floor or to the interaction of mantle-derived magmas with carbon-rich sediments. Estimates suggest that a minimum of 1.8×10⁶ km³ of basaltic lava erupted during North Atlantic flood basalt volcanism. Based on measurements of melt inclusions from the flood basalts, our preliminary calculations suggest that approximately 2300 Gt of SO2 and 600 Gt of HCl were released into the atmosphere. Calculated yearly fluxes approach 23 Mt/y SO2 and 6 Mt/y HCl. These estimates are regarded as conservative. The S released into the atmosphere during flood basalt volcanism can form acid aerosols that absorb and reflect solar radiation, causing an effective cooling effect. The climatic effects of the release of Cl into the atmosphere are not well constrained, but may be an important factor in extinction scenarios due to destruction of the ozone layer. The climatic changes due to the release of S and Cl in these amounts, and over periods extending for hundreds of thousands of years, although not yet fully constrained, are likely to be significant. One consequence of the North Atlantic flood basalt volcanism may have been the initiation of global cooling to end the PETM.

  4. [Correlation and concordance between the national test of medicine (ENAM) and the grade point average (GPA): analysis of the peruvian experience in the period 2007 - 2009].

    Science.gov (United States)

    Huamaní, Charles; Gutiérrez, César; Mezones-Holguín, Edward

    2011-03-01

    To evaluate the correlation and concordance between the 'Peruvian National Exam of Medicine' (ENAM) and the Mean Grade Point Average (GPA) in recently graduated medical students in the period 2007 to 2009. We carried out a secondary data analysis, using the records of the physicians applying to the Rural and Urban Marginal Service in Health of Peru (SERUMS) processes for the years 2008 to 2010. From these registers we extracted the grades obtained in the ENAM and the GPA. We performed a descriptive analysis using medians and 1st and 3rd quartiles (q1/q3); we calculated the correlation between both scores using the Spearman correlation coefficient and, additionally, conducted a linear regression analysis; the concordance was measured using the Bland and Altman coefficient. A total of 6,117 physicians were included; the overall median was 13.4 (12.7/14.2) for the GPA and 11.6 (10.2/13.0) for the ENAM. Of the total assessed, 36.8% failed the exam. We observed an increase in the annual median of ENAM scores, with a consequent decrease in the difference between both grades. The correlation between the ENAM and the GPA is direct and moderate (0.582), independent of the year, type of university management (public or private) and location. However, the concordance between both ratings is only fair, with a global coefficient of 0.272 (95% CI: 0.260 to 0.284). Independently of the year, location or type of university management, there is a moderate correlation between the ENAM and the GPA; however, there is only fair concordance between both grades.

  5. FDG-PET/CT and diffusion-weighted imaging for resected lung cancer: correlation of maximum standardized uptake value and apparent diffusion coefficient value with prognostic factors.

    Science.gov (United States)

    Usuda, Katsuo; Funasaki, Aika; Sekimura, Atsushi; Motono, Nozomu; Matoba, Munetaka; Doai, Mariko; Yamada, Sohsuke; Ueda, Yoshimichi; Uramoto, Hidetaka

    2018-04-09

    Diffusion-weighted magnetic resonance imaging (DWI) is useful for detecting malignant tumors and assessing lymph nodes, as FDG-PET/CT is. But it is not clear how DWI relates to the prognosis of lung cancer patients. The focus of this study is to evaluate the correlations of the maximum standardized uptake value (SUVmax) of FDG-PET/CT and the apparent diffusion coefficient (ADC) value of DWI with known prognostic factors in resected lung cancer. A total of 227 patients with resected lung cancers were enrolled in this study. FDG-PET/CT and DWI were performed in each patient before surgery. There were 168 patients with adenocarcinoma, 44 patients with squamous cell carcinoma, and 15 patients with other cell types. SUVmax was correlated with T factor, N factor and cell differentiation. The ADC of lung cancer was not correlated with T factor or N factor. There was a significant weak inverse relationship between SUVmax and ADC (correlation coefficient r = -0.227). In the analysis of survival, there were significant differences between the categories of sex, age, pT factor, pN factor, cell differentiation, cell type, and SUVmax. Univariate analysis revealed that SUVmax, pN factor, age, cell differentiation, cell type, sex, and pT factor were significant factors. Multivariate analysis revealed that SUVmax and pN factor were independent significant prognostic factors. SUVmax was a significant prognostic factor correlated with T factor, N factor and cell differentiation, but ADC was not. SUVmax may be more useful for predicting the prognosis of lung cancer than ADC values.

  6. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Ethanol production and maximum cell growth are highly correlated with membrane lipid composition during fermentation as determined by lipidomic analysis of 22 Saccharomyces cerevisiae strains.

    Science.gov (United States)

    Henderson, Clark M; Lozada-Contreras, Michelle; Jiranek, Vladimir; Longo, Marjorie L; Block, David E

    2013-01-01

    Optimizing ethanol yield during fermentation is important for efficient production of fuel alcohol, as well as wine and other alcoholic beverages. However, increasing ethanol concentrations during fermentation can create problems that result in arrested or sluggish sugar-to-ethanol conversion. The fundamental cellular basis for these problem fermentations, however, is not well understood. Small-scale fermentations were performed in a synthetic grape must using 22 industrial Saccharomyces cerevisiae strains (primarily wine strains) with various degrees of ethanol tolerance to assess the correlation between lipid composition and fermentation kinetic parameters. Lipids were extracted at several fermentation time points representing different growth phases of the yeast to quantitatively analyze phospholipids and ergosterol utilizing atmospheric pressure ionization-mass spectrometry methods. Lipid profiling of individual fermentations indicated that yeast lipid class profiles do not shift dramatically in composition over the course of fermentation. Multivariate statistical analysis of the data was performed using partial least-squares linear regression modeling to correlate lipid composition data with fermentation kinetic data. The results indicate a strong correlation (R² = 0.91) between the overall lipid composition and the final ethanol concentration (wt/wt), an indicator of strain ethanol tolerance. One potential component of ethanol tolerance, the maximum yeast cell concentration, was also found to be a strong function of lipid composition (R² = 0.97). Specifically, strains unable to complete fermentation were associated with high phosphatidylinositol levels early in fermentation. Yeast strains that achieved the highest cell densities and ethanol concentrations were positively correlated with phosphatidylcholine species similar to those known to decrease the perturbing effects of ethanol in model membrane systems.
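
    The partial least-squares modelling step described above can be sketched as follows; the matrices lipid_profiles and kinetics are random placeholders standing in for the measured lipid compositions and fermentation kinetic parameters, and the number of components is an assumption.

```python
# Sketch: PLS regression of fermentation outcomes on lipid composition.
# Random data stand in for the real lipidomic and kinetic measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
lipid_profiles = rng.normal(size=(22, 50))   # 22 strains x 50 lipid species (placeholder)
kinetics = rng.normal(size=(22, 2))          # e.g. final ethanol, max cell density (placeholder)

pls = PLSRegression(n_components=3)
pls.fit(lipid_profiles, kinetics)
r2 = pls.score(lipid_profiles, kinetics)     # coefficient of determination of the fit
print(f"R^2 of the PLS model: {r2:.2f}")
```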

  8. Interference Cancellation Technique Based on Discovery of Spreading Codes of Interference Signals and Maximum Correlation Detection for DS-CDMA System

    Science.gov (United States)

    Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki

    This paper presents a novel interference cancellation (IC) scheme for both synchronous and asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless channels. In the DS-CDMA system, multiple access interference (MAI) and the near-far problem (NFP) are the two factors that reduce system capacity. In this paper, we propose a new algorithm that is able to detect all interference signals as individual MAI signals by maximum correlation detection. It is based on the discovery of the unknown spreading codes of the interference signals. All possible MAI patterns, so-called replicas, are then generated as summations of interference signals, and the true MAI pattern is found by taking the correlation between the received signal and the replicas. Moreover, the receiver executes MAI cancellation in a successive manner, removing all interference signals in a single stage. Numerical results show that the proposed IC strategy, which alleviates the detrimental effects of MAI and the near-far problem, can significantly improve system performance. In particular, for the asynchronous system we obtain almost the same receiving characteristics as in the absence of interference when the received powers are equal, and for the synchronous system the same performance is seen under any received power state.

  9. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    Science.gov (United States)

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  10. Crustal seismicity and the earthquake catalog maximum moment magnitude (Mcmax) in stable continental regions (SCRs): Correlation with the seismic velocity of the lithosphere

    Science.gov (United States)

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-12-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  11. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  12. The Last Glacial Maximum in the Northern European loess belt: Correlations between loess-paleosol sequences and the Dehner Maar core (Eifel Mountains)

    Science.gov (United States)

    Zens, Joerg; Krauß, Lydia; Römer, Wolfgang; Klasen, Nicole; Pirson, Stéphane; Schulte, Philipp; Zeeden, Christian; Sirocko, Frank; Lehmkuhl, Frank

    2016-04-01

    The D1 project of the CRC 806 "Our way to Europe" focuses on Central Europe as a destination of modern human dispersal out of Africa. The paleo-environmental conditions along the migration areas are reconstructed from loess-paleosol sequences and lacustrine sediments. Stratigraphy and luminescence dating provide the chronological framework for correlating grain size and geochemical data to large-scale climate proxies such as isotope ratios and the dust content of Greenland ice cores. The reliability of the correlations is improved by developing precise age models of specific marker beds. In this study, we focus on the (terrestrial) Last Glacial Maximum of the Weichselian Upper Pleniglacial, which is thought to be dominated by high wind speeds and increasing aridity. Especially in the Lower Rhine Embayment (LRE), this period is linked to an extensive erosion event. The disconformity is followed by an intensive cryosol formation. To support the stratigraphical observations from the field, luminescence dating and grain size analysis were applied to three loess-paleosol sequences along the northern European loess belt to develop a more reliable chronology and to reconstruct paleo-environmental dynamics. The loess sections were compared to the latest results from heavy mineral and grain size analyses of the Dehner Maar core (Eifel Mountains) and correlated to NGRIP records. Volcanic minerals are found in the Dehner Maar core from a visible tephra layer at 27.8 ka up to ~25 ka; they can be correlated to the Eltville Tephra found in the loess sections. New quartz luminescence ages from Romont (Belgium) bracketing the tephra date its deposition between 25.0 ± 2.3 ka and 25.8 ± 2.4 ka. Heavy minerals subsequently show an increasing importance of strong easterly winds during the second Greenland dust peak (~24 ka b2k), correlating with an extensive erosion event in the LRE. Luminescence dating on quartz bracketing the following soil formation yielded ages of

  13. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

    Full Text Available Abstract Background Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repeated exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness with a subjective score and with the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results The results showed adaptation to visually induced motion sickness under repeated presentation of the same image in both the subjective and the objective indices. However, in some subjects the intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion The physiological index ρmax is a good index for assessing the adaptation process to visually induced motion sickness and may be useful for checking the safety of rehabilitation systems based on new image technologies.
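
    A hedged sketch of how a maximum cross-correlation coefficient such as ρmax can be computed from two physiological time series; the synthetic signals and the ±30-sample lag window are assumptions for illustration.

```python
# Sketch: maximum cross-correlation coefficient between heart rate and
# pulse-wave transmission time, scanned over a small range of lags.
import numpy as np

rng = np.random.default_rng(1)
heart_rate = rng.normal(size=300)                           # synthetic HR series
ptt = np.roll(heart_rate, 5) + 0.5 * rng.normal(size=300)   # synthetic PTT loosely tracking HR

def max_crosscorr(x, y, max_lag=30):
    """Largest absolute Pearson correlation over lags -max_lag..+max_lag."""
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.corrcoef(x[lag:], y[:len(y) - lag])[0, 1]
        else:
            r = np.corrcoef(x[:lag], y[-lag:])[0, 1]
        best = max(best, abs(r))
    return best

print(f"rho_max = {max_crosscorr(heart_rate, ptt):.2f}")
```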

  14. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    Science.gov (United States)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS), within the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01), is the set of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP) using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a monthly-scale correlation matrix for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation (1): log(r²ij) = b · dij (1), where log(r²ij) is the common variance between the target (i) and neighbouring series (j), dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, applied taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
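
    A minimal sketch of the distance-decay fit in equation (1), assuming illustrative station distances and common-variance values (not the MOTEDAS data); the r² = 0.5 threshold used to read off a CDD is also an assumption.

```python
# Sketch of the correlation decay distance (CDD) fit: log(r^2) = b * d.
# Distances (km) and common variances below are illustrative only.
import numpy as np

d = np.array([5.0, 12.0, 20.0, 31.0, 45.0])     # distances to neighbouring stations (km)
r2 = np.array([0.95, 0.90, 0.82, 0.74, 0.60])   # common variance with the target series

# Ordinary least squares through the origin, as in equation (1)
b = np.sum(d * np.log(r2)) / np.sum(d * d)

# Distance at which the common variance drops to a chosen threshold (here 0.5)
cdd = np.log(0.5) / b
print(f"slope b = {b:.4f} per km, CDD(r^2 = 0.5) = {cdd:.0f} km")
```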

  15. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the pitfalls related to averaging of mixing ratios obtained from logarithmic retrievals.
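
    A small sketch of the two averaging conventions discussed above, using synthetic lognormally distributed mixing ratios as placeholders for retrieved profiles; the distribution parameters are assumptions.

```python
# Sketch: linear averaging of abundances vs. averaging in log space.
import numpy as np

rng = np.random.default_rng(2)
vmr = rng.lognormal(mean=-13.0, sigma=0.8, size=1000)   # synthetic volume mixing ratios

linear_mean = vmr.mean()                  # average of the abundances
log_mean = np.exp(np.log(vmr).mean())     # exponential of the averaged logarithms

print(f"linear mean        : {linear_mean:.3e}")
print(f"log-space mean     : {log_mean:.3e}")
print(f"relative difference: {(linear_mean - log_mean) / linear_mean:.1%}")
```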

  16. Comparative evaluation of average glandular dose and breast cancer detection between single-view digital breast tomosynthesis (DBT) plus single-view digital mammography (DM) and two-view DM: correlation with breast thickness and density

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Sung Ui; Chang, Jung Min; Bae, Min Sun; Lee, Su Hyun; Cho, Nariya; Seo, Mirinae; Kim, Won Hwa; Moon, Woo Kyung [Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2015-01-15

    To compare the average glandular dose (AGD) and diagnostic performance of mediolateral oblique (MLO) digital breast tomosynthesis (DBT) plus cranio-caudal (CC) digital mammography (DM) with two-view DM, and to evaluate the correlation of AGD with breast thickness and density. MLO and CC DM and DBT images of both breasts were obtained in 149 subjects. AGDs of DBT and DM per exposure were recorded, and their correlation with breast thickness and density were evaluated. Paired data of MLO DBT plus CC DM and two-view DM were reviewed for presence of malignancy in a jack-knife alternative free-response ROC (JAFROC) method. The AGDs of both DBT and DM, and differences in AGD between DBT and DM (ΔAGD), were correlated with breast thickness and density. The average JAFROC figure of merit (FOM) was significantly higher on the combined technique than two-view DM (P = 0.005). In dense breasts, the FOM and sensitivity of the combined technique was higher than that of two-view DM (P = 0.003) with small ΔAGD. MLO DBT plus CC DM provided higher diagnostic performance than two-view DM in dense breasts with a small increase in AGD. (orig.)

  17. Comparative evaluation of average glandular dose and breast cancer detection between single-view digital breast tomosynthesis (DBT) plus single-view digital mammography (DM) and two-view DM: correlation with breast thickness and density

    International Nuclear Information System (INIS)

    Shin, Sung Ui; Chang, Jung Min; Bae, Min Sun; Lee, Su Hyun; Cho, Nariya; Seo, Mirinae; Kim, Won Hwa; Moon, Woo Kyung

    2015-01-01

    To compare the average glandular dose (AGD) and diagnostic performance of mediolateral oblique (MLO) digital breast tomosynthesis (DBT) plus cranio-caudal (CC) digital mammography (DM) with two-view DM, and to evaluate the correlation of AGD with breast thickness and density. MLO and CC DM and DBT images of both breasts were obtained in 149 subjects. AGDs of DBT and DM per exposure were recorded, and their correlation with breast thickness and density were evaluated. Paired data of MLO DBT plus CC DM and two-view DM were reviewed for presence of malignancy in a jack-knife alternative free-response ROC (JAFROC) method. The AGDs of both DBT and DM, and differences in AGD between DBT and DM (ΔAGD), were correlated with breast thickness and density. The average JAFROC figure of merit (FOM) was significantly higher on the combined technique than two-view DM (P = 0.005). In dense breasts, the FOM and sensitivity of the combined technique was higher than that of two-view DM (P = 0.003) with small ΔAGD. MLO DBT plus CC DM provided higher diagnostic performance than two-view DM in dense breasts with a small increase in AGD. (orig.)

  18. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  19. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and the projected ranges of He+ ions in metals

    International Nuclear Information System (INIS)

    Das, S.K.; Kaminsky, M.; Fenske, G.

    1976-01-01

    The skin thickness of blisters formed on aluminium by helium-ion irradiation at room temperature for energies from 100 to 1000 keV has been measured. The projected ranges of helium ions in Al for this energy range were calculated using either Brice's formalism (Brice, D.K., 1972, Phys. Rev., vol. A6, 1791) or the one given by Schioett (Schioett, H.E., 1966, K. Danske Vidensk. Selsk., Mat.-Fys. Meddr., vol. 35, No. 9). For the damage-energy distribution, Brice's formalism was used. The measured skin thickness values are smaller than the calculated values of the maxima in the projected range distributions over the entire energy range studied. These results on the ductile metal aluminium are contrasted with the results on the relatively brittle refractory metals V and Nb, where the measured skin thickness values correlate more closely with the maxima in the projected range probability distributions than with the maxima in the damage-energy distributions. Processes affecting the blister skin fracture and the skin thickness are discussed. (author)

  20. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  1. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  2. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  3. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  4. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
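
    The prediction-error-filter step can be sketched as below; the synthetic trace and the filter order are assumptions, and scipy's generic Toeplitz solver stands in for an explicit Levinson recursion.

```python
# Sketch: estimate a prediction-error filter from a signal's autocorrelation
# by solving the Toeplitz normal equations (a Levinson-type problem).
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(3)
x = np.cos(0.3 * np.arange(512)) + 0.2 * rng.normal(size=512)   # synthetic trace

order = 8
acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
acf /= acf[0]

a = solve_toeplitz(acf[:order], acf[1:order + 1])    # forward-prediction coefficients
pef = np.concatenate(([1.0], -a))                    # prediction-error filter
print("prediction-error filter:", np.round(pef, 3))
```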

  5. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  6. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  7. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  8. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  9. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices.

  10. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  12. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  13. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and does not require readout and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  14. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  15. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  16. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  17. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  18. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the transport of the beam through the matching section and its injection into Linac-1 are discussed.

  19. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  20. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  1. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  2. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-power systems. Other advantages include optimal sizing and system monitoring and control.
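
    A highly simplified sketch of the hill-climbing (perturb-and-observe) idea mentioned above; the quadratic panel model, step size and iteration count are invented for illustration and do not come from the paper.

```python
# Sketch of a perturb-and-observe (hill-climbing) maximum power point tracker.
def panel_power(v):
    """Toy PV power curve with a single maximum near 17 V (illustrative only)."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 120.0)

def track_mpp(v=12.0, step=0.2, iterations=200):
    p_prev = panel_power(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = track_mpp()
print(f"operating point ~ {v_mpp:.1f} V, {p_mpp:.1f} W")
```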

  3. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  4. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  5. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  6. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  7. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
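
    The superparent-selection step can be sketched, under stated assumptions, with a maximum weighted spanning tree over the attributes; the pairwise weights below are illustrative values (in practice they would be estimated, e.g. as mutual information), and networkx's routine stands in for the paper's own procedure.

```python
# Sketch: maximum weighted spanning tree over attributes as a stand-in for
# superparent selection. Edge weights are illustrative placeholders.
import networkx as nx

weights = {
    ("A1", "A2"): 0.42, ("A1", "A3"): 0.10, ("A1", "A4"): 0.33,
    ("A2", "A3"): 0.27, ("A2", "A4"): 0.05, ("A3", "A4"): 0.19,
}

g = nx.Graph()
for (u, v), w in weights.items():
    g.add_edge(u, v, weight=w)

mst = nx.maximum_spanning_tree(g, weight="weight")
print(sorted(mst.edges(data="weight")))
```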

  8. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over

  9. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  10. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  11. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  13. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Results. Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...

  14. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  15. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
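
    A minimal sketch of the correntropy measure that the MCC framework maximizes (not the authors' full learning algorithm); the Gaussian kernel width and the example label vectors are assumptions.

```python
# Sketch: correntropy between predicted and true labels under a Gaussian kernel.
import numpy as np

def correntropy(y_true, y_pred, sigma=1.0):
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.exp(-err ** 2 / (2.0 * sigma ** 2)))

y_true = np.array([1, -1, 1, 1, -1])
y_clean = np.array([0.9, -0.8, 1.1, 0.7, -1.2])
y_noisy = np.array([0.9, -0.8, 1.1, 0.7, 3.0])    # one outlying prediction

print(correntropy(y_true, y_clean))   # close to 1: good agreement
print(correntropy(y_true, y_noisy))   # the outlier lowers the score only boundedly
```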

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
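    The relationship stated above can be written compactly; the notation below is assumed here, since the record does not define symbols. For a variable v and two weighting functions w1 and w2,

```latex
\bar v_2-\bar v_1
=\frac{\operatorname{Cov}_1\!\left(v,\;w_2/w_1\right)}{\operatorname{E}_1\!\left[w_2/w_1\right]},
\qquad\text{where}\quad
\bar v_k=\frac{\int v\,w_k}{\int w_k},
\qquad
\operatorname{E}_1[\cdot]\ \text{and}\ \operatorname{Cov}_1(\cdot,\cdot)\ \text{are taken with weight } w_1.
```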

  17. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  18. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
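    A minimal sketch of how a seasonal ARIMA model with an exogenous weather regressor could be fitted and used for forecasting, in the spirit of the record, is shown below; the (1,0,1)×(0,1,1,12) order, the file and column names, and the use of statsmodels are illustrative assumptions rather than the authors' specification.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly data: ED revenue plus a weather covariate.
df = pd.read_csv("ed_monthly.csv", parse_dates=["month"], index_col="month")

model = SARIMAX(
    df["revenue"],
    exog=df[["mean_max_temp"]],
    order=(1, 0, 1),
    seasonal_order=(0, 1, 1, 12),
)
fit = model.fit(disp=False)

# 12-month-ahead forecast; future values of the covariate must be supplied
# (here the last observed year is reused purely as a placeholder).
future_temp = df[["mean_max_temp"]].iloc[-12:].values
forecast = fit.forecast(steps=12, exog=future_temp)

# Mean absolute percentage error on the in-sample fit, as in the record's evaluation.
mape = ((df["revenue"] - fit.fittedvalues).abs() / df["revenue"]).mean() * 100
```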

  19. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,

  20. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  1. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  2. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  3. Evaluating the maximum patient radiation dose in cardiac interventional procedures

    International Nuclear Information System (INIS)

    Kato, M.; Chida, K.; Sato, T.; Oosaka, H.; Tosa, T.; Kadowaki, K.

    2011-01-01

    Many of the X-ray systems that are used for cardiac interventional radiology provide no way to evaluate the patient maximum skin dose (MSD). The authors report a new method for evaluating the MSD by using the cumulative patient entrance skin dose (ESD), which includes a back-scatter factor and the number of cine-angiography frames during percutaneous coronary intervention (PCI). Four hundred consecutive PCI patients (315 men and 85 women) were studied. The correlation between the cumulative ESD and number of cine-angiography frames was investigated. The irradiation and overlapping fields were verified using dose-mapping software. A good correlation was found between the cumulative ESD and the number of cine-angiography frames. The MSD could be estimated using the proportion of cine-angiography frames used for the main angle of view relative to the total number of cine-angiography frames and multiplying this by the cumulative ESD. The average MSD (3.0±1.9 Gy) was lower than the average cumulative ESD (4.6±2.6 Gy). This method is an easy way to estimate the MSD during PCI. (authors)
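    In symbols, the estimate described above amounts to scaling the cumulative entrance skin dose by the fraction of cine frames acquired at the main angle of view (the notation below is assumed, not taken from the paper):

```latex
\mathrm{MSD} \;\approx\; \mathrm{ESD}_{\mathrm{cum}} \times
\frac{N_{\mathrm{frames,\ main\ view}}}{N_{\mathrm{frames,\ total}}}.
```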

  4. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  5. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of helium ions in Nb for the energy range 10 to 1500 keV

    International Nuclear Information System (INIS)

    St-Jacques, R.G.; Martel, J.G.; Terreault, B.; Veilleux, G.; Das, S.K.; Kaminsky, M.; Fenske, G.

    1976-01-01

    The skin thicknesses of blisters formed on polycrystalline niobium by ⁴He⁺ irradiation at room temperature for energies from 15 to 80 keV have been measured. Similar measurements were conducted for 10 keV ⁴He⁺ irradiation at 500 °C to increase blister exfoliation, and thereby allow examination of a larger number of blister skins. For energies smaller than 100 keV the skin thicknesses are compared with the projected range and the damage-energy distributions constructed from moments interpolated from Winterbon's tabulated values. For energies of 10 and 15 keV the projected ranges and damage-energy distributions have also been computed with a Monte Carlo program. For energies larger than 100 keV the projected ranges of ⁴He⁺ in Nb were calculated using either Brice's formalism or the one given by Schiott. The thicknesses for 60 and 80 keV, and those reported earlier for 100 to 1500 keV, correlate well with calculated projected ranges. For energies lower than 60 keV the measured thicknesses are larger than the calculated ranges

  6. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables

  7. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  8. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  9. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  10. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  11. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
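    For context, the truncated inverse transform in question has, in one common convention (number density ρ, cutoff q_max set by the experiment), the form

```latex
g(r) \;=\; 1 + \frac{1}{2\pi^{2}\rho\, r}\int_{0}^{q_{\max}} q\,\bigl[S(q)-1\bigr]\sin(qr)\,dq ,
```

    and the maximum entropy step is aimed at suppressing the ripples that the finite cutoff q_max introduces into g(r).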

  12. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  13. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...

  14. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  15. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  16. Maximum entropy prior uncertainty and correlation of statistical economic data

    NARCIS (Netherlands)

    Dias, Rodriques J.F.

    2016-01-01

    Empirical estimates of source statistical economic data such as trade flows, greenhouse gas emissions or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality) but information concerning that uncertainty is often missing. This paper uses concepts

  17. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
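    That single-constraint construction can be sketched as follows (a standard maximum entropy calculation, not a quotation from the paper): maximizing the Shannon entropy subject to a fixed average of ln x gives a power law,

```latex
\max_{p}\; -\sum_x p(x)\ln p(x)
\quad\text{s.t.}\quad \sum_x p(x)=1,\;\; \sum_x p(x)\ln x = \mu
\;\;\Longrightarrow\;\;
p(x) \;\propto\; e^{-\alpha\ln x} \;=\; x^{-\alpha},
```

    with the exponent α given by the Lagrange multiplier of the logarithmic constraint.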

  18. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
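    The quoted 36 dB figure is consistent with the usual √N amplitude gain of signal averaging at the maximum sweep count of 2¹², assuming noise that is uncorrelated between sweeps:

```latex
20\log_{10}\sqrt{N} \;=\; 20\log_{10}\sqrt{2^{12}} \;=\; 20\log_{10}64 \;\approx\; 36\ \mathrm{dB}.
```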

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  20. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  1. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  2. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter, so it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a different extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional ones. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
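    For context, the conventional time domain (synchronous) averaging that FTDA generalizes is just a per-period average; a minimal sketch is given below, assuming the averaging period is an integer number of samples (the fractional-period case is exactly what produces the period cutting error discussed above).

```python
import numpy as np

def time_domain_average(x, period):
    """Conventional time domain averaging: split the signal into segments of one
    (integer) period and average them, which suppresses components that are not
    harmonics of the averaging period (the comb-filter behaviour noted above)."""
    n_periods = len(x) // period
    segments = np.reshape(x[:n_periods * period], (n_periods, period))
    return segments.mean(axis=0)

# Example: a periodic component buried in noise.
rng = np.random.default_rng(1)
period = 100
t = np.arange(period * 50)
x = np.sin(2 * np.pi * t / period) + rng.standard_normal(t.size)
avg = time_domain_average(x, period)   # close to one clean cycle of the sine
```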

  3. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  4. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  5. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  6. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Vol. 60, No. 3, March 2003, pp. 415–422 (journal front matter). F W Giacobbe, Chicago Research Center/American Air Liquide. Only fragments of the abstract survive: "... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ..."; "... thermal equilibrium velocities will tend to be non-relativistic."

  7. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  8. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  9. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp going out. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]

  10. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always brings a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  11. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  12. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  13. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and Water Hammer occur in a pumping system when valves are closed or opened suddenly or in the case of sudden failure of pumps. Determination of maximum water hammer is considered one of the most important technical and economical items of which engineers and designers of pumping stations and conveyance pipelines should take care. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...

  14. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  15. Step Test: a method for evaluating maximum oxygen consumption to determine the ability kind of work among students of medical emergencies.

    Science.gov (United States)

    Heydari, Payam; Varmazyar, Sakineh; Nikpey, Ahmad; Variani, Ali Safari; Jafarvand, Mojtaba

    2017-03-01

    Maximum oxygen consumption reflects the maximum rate of muscle oxygenation and is widely accepted as a measure of the fitness between a person and the desired job. Given that medical emergency work is demanding and that emergency situations require people with high physical ability and readiness, the aim of this study was to evaluate maximum oxygen consumption in order to determine the type of work that medical emergency students in Qazvin were able to perform in 2016. This descriptive-analytical, cross-sectional study was conducted among 36 volunteer medical emergency students in Qazvin in 2016. After the necessary coordination for the implementation of the study, participants completed health and demographic questionnaires and were then evaluated with the American College of Sports Medicine (ACSM) step test. Data analysis was performed in SPSS version 18 using the Mann-Whitney U and Kruskal-Wallis tests and Pearson's correlation coefficient. The average maximum oxygen consumption of the participants was estimated at 3.15±0.50 liters per minute. 91.7% of the medical emergency students were classified as suitable in terms of maximum oxygen consumption and thus had the ability to do heavy and very heavy work. Average maximum oxygen consumption, evaluated with the Mann-Whitney U and Kruskal-Wallis tests, had a significant relationship with age (p<0.05) and weight group (p<0.001). There was a significant positive correlation between maximum oxygen consumption and both weight and body mass index (p<0.001). The results of this study showed that the demographic variables of weight and body mass index are the factors influencing maximum oxygen consumption, and that most of the students had the ability to do heavy and very heavy work. Therefore, people able to do only average work are not suitable for medical emergency tasks.

  16. Predicting the start and maximum amplitude of solar cycle 24 using similar phases and a cycle grouping

    International Nuclear Information System (INIS)

    Wang Jialong; Zong Weiguo; Le Guiming; Zhao Haijuan; Tang Yunqiu; Zhang Yang

    2009-01-01

    We find that the solar cycles 9, 11, and 20 are similar to cycle 23 in their respective descending phases. Using this similarity and the observed data of smoothed monthly mean sunspot numbers (SMSNs) available for the descending phase of cycle 23, we make a date calibration for the average time sequence made of the three descending phases of the three cycles, and predict the start of March or April 2008 for cycle 24. For the three cycles, we also find a linear correlation of the length of the descending phase of a cycle with the difference between the maximum epoch of this cycle and that of its next cycle. Using this relationship along with the known relationship between the rise-time and the maximum amplitude of a slowly rising solar cycle, we predict the maximum SMSN of cycle 24 of 100.2 ± 7.5 to appear during the period from May to October 2012. (letters)

  17. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
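    The Hargreaves model referred to is usually written in the Hargreaves–Samani form below (the study may use a locally calibrated variant); Ra is the exoatmospheric radiation expressed as equivalent evaporation, and the temperatures are in °C:

```latex
ET_{0} \;=\; 0.0023\, R_a \,\bigl(T_{\mathrm{mean}} + 17.8\bigr)\sqrt{T_{\max}-T_{\min}} .
```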

  18. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...

  19. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).

  20. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  1. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). ... We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.

  2. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  3. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  4. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  5. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  6. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  7. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  8. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building optimal prefix code [1] and a blood test study in assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  9. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues, inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.

  10. Unilateral vocal fold paralysis: association and correlation between maximum phonation time, position and displacement angle (original title in Portuguese: Paralisia unilateral de prega vocal: associação e correlação entre tempos máximos de fonação, posição e ângulo de afastamento)

    Directory of Open Access Journals (Sweden)

    Luciane M. Steffen

    2004-08-01

    clinical classification of the VFP as median, paramedian, intermedian, abduction or cadaveric is controversial. AIM: To check association and correlation between Maximum Phonation Time (MPT with position and with the displacement angle of the paralyzed vocal fold (PVF, to measure the distal angle of the PVF in different positions from median line, correlating it with the clinical classification. STUDY DESIGN: Chart review. MATERIAL AND METHOD: Records of 86 PVF individuals were reviewed, videoendoscopic exams were analyzed and a computer program measured the distal angle of the PVF. RESULTS: The MPTs for each position of paralyzed vocal fold have statistical significance only for /z/ in the median position. There is a relationship between the MPT of /i/, /u/ with PVF distal angle. Correlation and association of the displacement angle with clinical position demonstrate statistical significance when the PVF is in abduction. CONCLUSION: By the present study it was impossible to classify positions of the paralyzed vocal fold using either MPT or the displacement angle measurement.

  11. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
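    For reference, the stationary Child-Langmuir limit that the time-averaged currents are compared against is, for a planar gap of spacing d and applied voltage V (SI units, electron charge e and mass m_e),

```latex
J_{\mathrm{CL}} \;=\; \frac{4\varepsilon_{0}}{9}\sqrt{\frac{2e}{m_{e}}}\;\frac{V^{3/2}}{d^{2}} .
```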

  12. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
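    The pairwise maximum-entropy model under discussion has the standard Ising-like form, with s_i ∈ {0,1} the activity of neuron i and with fields h_i and couplings J_ij fitted to the measured means and pairwise correlations:

```latex
P(s_1,\dots,s_N) \;=\; \frac{1}{Z}\exp\!\Bigl(\sum_i h_i s_i \;+\; \sum_{i<j} J_{ij}\, s_i s_j\Bigr).
```

    The modification proposed in the record replaces the uniform reference measure implicit in this form with a non-uniform one, so that a basic effect of inhibition is reflected in the model.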

  13. MAXIMUM CORONAL MASS EJECTION SPEED AS AN INDICATOR OF SOLAR AND GEOMAGNETIC ACTIVITIES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Gopalswamy, N.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    We investigate the relationship between the monthly averaged maximal speeds of coronal mass ejections (CMEs), international sunspot number (ISSN), and the geomagnetic Dst and Ap indices covering the 1996-2008 time interval (solar cycle 23). Our new findings are as follows. (1) There is a noteworthy relationship between monthly averaged maximum CME speeds and sunspot numbers, Ap and Dst indices. Various peculiarities in the monthly Dst index are correlated better with the fine structures in the CME speed profile than that in the ISSN data. (2) Unlike the sunspot numbers, the CME speed index does not exhibit a double peak maximum. Instead, the CME speed profile peaks during the declining phase of solar cycle 23. Similar to the Ap index, both CME speed and the Dst indices lag behind the sunspot numbers by several months. (3) The CME number shows a double peak similar to that seen in the sunspot numbers. The CME occurrence rate remained very high even near the minimum of the solar cycle 23, when both the sunspot number and the CME average maximum speed were reaching their minimum values. (4) A well-defined peak of the Ap index between 2002 May and 2004 August was co-temporal with the excess of the mid-latitude coronal holes during solar cycle 23. The above findings suggest that the CME speed index may be a useful indicator of both solar and geomagnetic activities. It may have advantages over the sunspot numbers, because it better reflects the intensity of Earth-directed solar eruptions.

  14. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  15. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A previous method used for the determination of the average neutron flux within bulky samples has been applied for the measurements of hydrogen contents of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared

  16. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  17. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  18. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis on the time series data of the average annual values. The analysis results found that the latitude of the TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference of the average latitude between 1999 and 2013 and the average between 1977 and 1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  19. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4^2-) and cations (Na+, Mg^2+, Ca^2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl- and Mg^2+/Na+, and 0.4% for Ca^2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3^2-. Apparent partial molar densities in seawater were

  20. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
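
    As a point of reference for the tree case that the record extends, the following sketch implements the classical Fitch small-parsimony count on a rooted binary tree for a single character; the toy tree and leaf states are invented for illustration, and handling reticulate vertices of a network would require the additional rules described above.

```python
# Minimal sketch of the Fitch small-parsimony algorithm on a rooted binary
# tree for a single character; tree and leaf states are made up for
# illustration only.
def fitch(tree, leaf_states, root="root"):
    """tree: dict internal node -> (left child, right child); leaves absent."""
    score = 0

    def post_order(node):
        nonlocal score
        if node not in tree:                      # leaf
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = post_order(left), post_order(right)
        common = a & b
        if common:
            return common
        score += 1                                # one extra substitution needed
        return a | b

    post_order(root)
    return score

tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
leaf_states = {"A": "G", "B": "T", "C": "G", "D": "G"}
print(fitch(tree, leaf_states))   # parsimony score for this character
```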

  1. Spatial correlation in precipitation trends in the Brazilian Amazon

    Science.gov (United States)

    Buarque, Diogo Costa; Clarke, Robin T.; Mendes, Carlos Andre Bulhoes

    2010-06-01

    A geostatistical analysis of variables derived from Amazon daily precipitation records (trends in annual precipitation totals, trends in annual maximum precipitation accumulated over 1-5 days, trend in length of dry spell, trend in number of wet days per year) gave results that are consistent with those previously reported. Averaged over the Brazilian Amazon region as a whole, trends in annual maximum precipitations were slightly negative, the trend in the length of dry spell was slightly positive, and the trend in the number of wet days in the year was slightly negative. For trends in annual maximum precipitation accumulated over 1-5 days, spatial correlation between trends was found to extend up to a distance equivalent to at least half a degree of latitude or longitude, with some evidence of anisotropic correlation. Time trends in annual precipitation were found to be spatially correlated up to at least ten degrees of separation, in both W-E and S-N directions. Anisotropic spatial correlation was strongly evident in time trends in length of dry spell with much stronger evidence of spatial correlation in the W-E direction, extending up to at least five degrees of separation, than in the S-N. Because the time trends analyzed are shown to be spatially correlated, it is argued that methods at present widely used to test the statistical significance of climate trends over time lead to erroneous conclusions if spatial correlation is ignored, because records from different sites are assumed to be statistically independent.
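
    A minimal sketch of the kind of computation involved: estimate a least-squares trend at each station and then form an empirical correlogram of the trend field against separation distance. The synthetic station coordinates, record length and distance bins below are placeholders, not the Amazon data set analyzed in the record.

```python
import numpy as np

# Minimal sketch: per-site least-squares trends and an empirical correlogram
# of those trends versus separation distance. Data are synthetic placeholders.
rng = np.random.default_rng(1)

n_sites, n_years = 50, 30
lon = rng.uniform(-70, -50, n_sites)
lat = rng.uniform(-10, 5, n_sites)
years = np.arange(n_years)
precip = rng.gamma(2.0, 50.0, size=(n_sites, n_years))   # annual totals

# trend (slope) of annual precipitation at each site
trends = np.polyfit(years, precip.T, deg=1)[0]

# empirical correlogram: mean product of standardized trends per distance bin
z = (trends - trends.mean()) / trends.std()
bins = np.arange(0, 12, 2)            # degrees of separation
sums, counts = np.zeros(len(bins) - 1), np.zeros(len(bins) - 1)
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        d = np.hypot(lon[i] - lon[j], lat[i] - lat[j])
        k = np.searchsorted(bins, d) - 1
        if 0 <= k < len(sums):
            sums[k] += z[i] * z[j]
            counts[k] += 1
print(sums / np.maximum(counts, 1))   # correlation estimate per distance bin
```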

  2. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  3. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  4. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  5. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  6. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  7. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  8. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for the refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which
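
    The core idea of the refinement is easy to sketch: Monte Carlo moves pull a starting Cα trace towards the averaged coordinates through a harmonic pseudo-energy, while a bond-length term keeps the local geometry physical. The energy weights, move size and toy coordinates below are assumptions for illustration, not the published protocol.

```python
import numpy as np

# Minimal sketch of the idea: Monte Carlo moves that pull a starting C-alpha
# trace towards an "averaged" structure via a harmonic pseudo-energy, while a
# bond-length term keeps local geometry realistic. All values are illustrative.
rng = np.random.default_rng(2)

n_res = 20
avg = np.cumsum(rng.normal(0, 2.2, size=(n_res, 3)), axis=0)  # averaged coords
x = avg + rng.normal(0, 2.0, size=avg.shape)                  # starting model
IDEAL_CA_CA, K_HARM, K_BOND, TEMP = 3.8, 1.0, 10.0, 1.0

def energy(coords):
    harm = K_HARM * np.sum((coords - avg) ** 2)               # pull to average
    d = np.linalg.norm(np.diff(coords, axis=0), axis=1)       # Ca-Ca distances
    bond = K_BOND * np.sum((d - IDEAL_CA_CA) ** 2)            # keep geometry sane
    return harm + bond

e = energy(x)
for step in range(20000):
    i = rng.integers(n_res)
    trial = x.copy()
    trial[i] += rng.normal(0, 0.1, size=3)
    e_trial = energy(trial)
    if e_trial < e or rng.random() < np.exp((e - e_trial) / TEMP):
        x, e = trial, e_trial                                  # Metropolis accept
print("final pseudo-energy:", e)
```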

  9. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  10. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
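
    A moving-average construction of a Gaussian random field is straightforward to illustrate: convolve a discretized Gaussian noise basis with a kernel. The sketch below uses a simple power-law kernel on a grid; the exact form and parameters of the proposed one-parameter model are not reproduced here and the values shown are placeholders.

```python
import numpy as np

# Minimal sketch: a moving-average (kernel smoothing) construction of a
# Gaussian random field on a grid, here with Gaussian driving noise and a
# simple power-law kernel. Kernel form and parameters are illustrative only.
rng = np.random.default_rng(3)

n = 256
noise = rng.normal(size=(n, n))                # discretized Gaussian basis

# power kernel k(r) = (1 + r^2)^(-beta), truncated on a local window
beta = 1.5
half = 16
yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
kernel = (1.0 + xx ** 2 + yy ** 2) ** (-beta)
kernel /= np.sqrt(np.sum(kernel ** 2))         # normalize field variance

# moving average = convolution of the kernel with the noise (via FFT)
pad = np.zeros_like(noise)
pad[:kernel.shape[0], :kernel.shape[1]] = kernel
field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(pad)))
print(field.shape, field.std())                # one realization of the field
```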

  11. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  12. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  13. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proven in experiments to provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
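
    The basic time-arrival-difference computation can be sketched as follows: window both sensor records, cross-correlate them, and read the delay off the correlation peak. A generic Hann window stands in for the maximum-likelihood window proposed in the record, and the sampling rate, noise level and true delay are invented for the example.

```python
import numpy as np

# Minimal sketch: estimating a time delay between two sensor signals from the
# peak of their cross-correlation; the leak signal and delay are simulated.
rng = np.random.default_rng(4)

fs = 10_000                      # sampling rate [Hz] (assumed)
n = 4096
true_delay = 37                  # samples
source = rng.normal(size=n + true_delay)
s1 = source[true_delay:] + 0.5 * rng.normal(size=n)   # nearer sensor
s2 = source[:n] + 0.5 * rng.normal(size=n)            # farther sensor (delayed)

# generic window applied before correlation, in the spirit of weighting the
# significant part of the record (a Hann window here, not the ML window)
w = np.hanning(n)
xcorr = np.correlate(s2 * w, s1 * w, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print("estimated delay:", lag, "samples =", lag / fs, "s")
```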

  14. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    Full Text Available The use of maximum length sequence (m-sequence) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial of different orders, the selection of polynomial order can be problematic in practice. Usually, the m-sequence is repetitively delivered in a looped fashion. Ensemble averaging is carried out as the first step and followed by the cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on an additive noise model, theoretical equations have been derived in measuring noise attenuation ratios (NARs) after the averaging and correlation processes in the present study. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as stimulation rate. The present study offers a guideline for m-sequence selections, which can be used to estimate required recording time and signal-to-noise ratio in designing m-sequence experiments.
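
    The processing chain described above (looped m-sequence stimulation, ensemble averaging, then cross-correlation deconvolution) can be sketched directly. The LFSR taps, toy response kernel, noise level and sweep count below are assumptions for illustration; they are not the study's stimulus or data.

```python
import numpy as np

# Minimal sketch: generate a maximum length sequence (m-sequence) with an LFSR,
# average repeated sweeps of a looped stimulus, then recover a (linear)
# response by circular cross-correlation. Taps and kernel are illustrative.
rng = np.random.default_rng(5)

def m_sequence(taps, order):
    """Binary m-sequence of length 2**order - 1 from LFSR feedback taps."""
    state = [1] * order
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

order = 9
m = 2 * m_sequence(taps=(9, 5), order=order) - 1      # +/-1 sequence, length 511

kernel = np.exp(-np.arange(60) / 10.0)                # toy linear "response"
clean = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(kernel, len(m))))

n_sweeps = 64
sweeps = clean + rng.normal(0, 2.0, size=(n_sweeps, len(m)))
avg = sweeps.mean(axis=0)                             # ensemble averaging first

# circular cross-correlation with the m-sequence deconvolves the response
recovered = np.real(np.fft.ifft(np.fft.fft(avg) * np.conj(np.fft.fft(m)))) / len(m)
print(np.allclose(recovered[:60], kernel, atol=0.5))
```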

  15. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  16. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  17. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  18. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
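
    A worked example of the maximization: with a simple illustrative diode-law model for the panel current, the power P(V) = V·I(V) can be scanned (or differentiated) to locate the voltage, current and power at the maximum power point. The model parameters below are assumptions, not measurements from the article.

```python
import numpy as np

# Minimal sketch: locate the maximum power point of an illustrative solar-cell
# I-V model, I(V) = I_L - I_0*(exp(V/V_T) - 1), by maximizing P(V) = V*I(V).
# Parameter values are assumptions, not measured panel data.
I_L, I_0, V_T = 5.0, 1e-9, 0.5    # light current [A], saturation current [A], scaled thermal voltage [V]

def current(v):
    return I_L - I_0 * (np.exp(v / V_T) - 1.0)

v = np.linspace(0.0, 12.0, 100_000)
p = v * current(v)
k = np.argmax(p)
print(f"V_mp = {v[k]:.3f} V, I_mp = {current(v[k]):.3f} A, P_max = {p[k]:.2f} W")

# equivalently, dP/dV = I(V) + V * dI/dV = 0 at the maximum power point
```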

  19. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  20. Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery

    Science.gov (United States)

    Alam, Mohammad S.; Bhuiyan, Sharif M. A.

    2014-01-01

    In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840
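
    A simplified MACH-type filter is easy to sketch in the frequency domain: the numerator is the average training spectrum (raising the average correlation height) and the denominator is the average power spectrum (suppressing the average similarity measure). The synthetic training chips and the omission of the noise-variance term are simplifying assumptions for illustration; the published MACH and DCCF variants add further terms.

```python
import numpy as np

# Minimal sketch of a simplified MACH-type filter built in the frequency
# domain. Training "targets" here are synthetic blobs standing in for FLIR
# target chips; all sizes and shifts are illustrative.
rng = np.random.default_rng(6)

def make_target(size=32, shift=(0, 0)):
    y, x = np.mgrid[:size, :size]
    cy, cx = size // 2 + shift[0], size // 2 + shift[1]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 20.0)

train = [make_target(shift=(rng.integers(-2, 3), rng.integers(-2, 3)))
         for _ in range(10)]
X = np.array([np.fft.fft2(t) for t in train])

mean_X = X.mean(axis=0)                       # average training spectrum
avg_power = (np.abs(X) ** 2).mean(axis=0)     # average similarity measure term
H = mean_X / (avg_power + 1e-6)               # simplified MACH-type filter

# correlate a noisy test frame with the filter; a sharp peak marks the target
test = make_target(shift=(1, -1)) + 0.3 * rng.normal(size=(32, 32))
corr = np.real(np.fft.ifft2(np.fft.fft2(test) * np.conj(H)))
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)
```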

  1. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  2. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
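
    The effect of the "shrink the genetic towards the phenotypic correlation matrix" penalty can be illustrated with a plain convex-combination estimator and a toy loss; this is only a caricature of the penalized maximum likelihood machinery in the paper, and the matrices, sample size and tuning factors below are invented.

```python
import numpy as np

# Minimal sketch: shrinking a noisy "genetic" correlation estimate towards a
# "phenotypic" correlation matrix with a tuning factor t. Values illustrative.
rng = np.random.default_rng(7)

q = 5
true_G_corr = 0.5 * np.eye(q) + 0.5            # true genetic correlations
P_corr = 0.3 * np.eye(q) + 0.7 * true_G_corr   # phenotypic correlations (proxy)

# noisy small-sample estimate of the genetic correlation matrix
samples = rng.multivariate_normal(np.zeros(q), true_G_corr, size=30)
G_hat = np.corrcoef(samples, rowvar=False)

def shrink(G, P, t):
    return (1.0 - t) * G + t * P               # penalized / shrunken estimate

for t in (0.0, 0.3, 0.6):
    loss = np.sum((shrink(G_hat, P_corr, t) - true_G_corr) ** 2)
    print(f"tuning factor {t:.1f}: loss {loss:.3f}")
```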

  3. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  4. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. There are calculations of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls depending on their "height : radius" ratio, as well as the average productivity, degree, and filling time of a horizontally ribbed tank with volume 6×10^-2 m^3 as the central hole diameter of the ribs changes. It has been shown that the growth of the "height / radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing tank average productivity and reducing its filling time. Growth of the H/R ratio of a tank with volume 1.0 m^3 to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and a minimum filling time are reached for the tank with volume 6×10^-2 m^3 having a central hole diameter of the horizontal ribs of 6.4×10^-2 m.

  5. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  6. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  7. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  8. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  9. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  10. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  11. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  12. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.
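
    The flavour of such an iterative bandwidth assignment can be shown with a weighted max-min style calculation: flows needing less than their weighted share keep their input rate, and the leftover capacity is redistributed among the rest. The link speed, weights and input rates are example values, and the actual WFQ model in the paper may differ in its details.

```python
# Minimal sketch of an iterative weighted-fair-share calculation. Flows that
# need less than their weighted share keep their input rate; leftover capacity
# is redistributed among the remaining flows. Numbers are examples only.
def wfq_average_bandwidth(link_rate, weights, input_rates, iterations=20):
    alloc = [0.0] * len(weights)
    capacity, active = link_rate, set(range(len(weights)))
    for _ in range(iterations):
        if not active:
            break
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        satisfied = {i for i in active if input_rates[i] <= share[i]}
        if not satisfied:
            for i in active:
                alloc[i] = share[i]      # remaining flows are capacity-limited
            break
        for i in satisfied:
            alloc[i] = input_rates[i]    # flow gets exactly what it offers
            capacity -= input_rates[i]
        active -= satisfied
    return alloc

# link of 100 Mbit/s, three flows with weights 1:2:3 and given input rates
print(wfq_average_bandwidth(100.0, [1, 2, 3], [10.0, 50.0, 80.0]))
```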

  13. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  14. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy....

  15. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
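
    A heavily simplified version of such robust averaging is sketched below: maps with too large an invalid area are rejected outright, remaining pixels far from the pixel-wise median are pruned, and mean and variability maps are computed from what survives. The thresholds, defect models and rejection criteria are illustrative assumptions, not the algorithm's actual run-time parameters.

```python
import numpy as np

# Minimal sketch of robust phase-map averaging: reject maps with large invalid
# areas, prune per-pixel outliers, then average what remains. Illustrative only.
rng = np.random.default_rng(9)

n_maps, shape = 16, (64, 64)
maps = rng.normal(0, 0.02, size=(n_maps, *shape))        # toy "phase" maps
maps[3, :40, :40] = np.nan                               # large-area void -> reject map
maps[7, 10, 10] = np.nan                                 # small void -> tolerated
maps[5, 30, 30] += 5.0                                   # unwrapping spike -> prune pixel

valid_fraction = np.mean(np.isfinite(maps), axis=(1, 2))
keep = valid_fraction > 0.95                             # reject very bad maps
stack = maps[keep].copy()

median = np.nanmedian(stack, axis=0)
spread = np.nanstd(stack, axis=0) + 1e-12
dev = np.abs(stack - median)
outlier = np.where(np.isfinite(dev), dev, 0.0) > 3.0 * spread
stack[outlier] = np.nan                                  # prune unreliable pixels

avg_map = np.nanmean(stack, axis=0)
var_map = np.nanstd(stack, axis=0)
print("maps kept:", int(keep.sum()), "pixels pruned:", int(outlier.sum()))
```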

  16. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar{sub P}, the average, U-bar, the effective, U{sub eff} or the maximum peak, U{sub P} tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average U-bar or the average peak, U-bar{sub p} voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k{sub PPV,kVp} and the average k{sub PPV,Uav} conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equation and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated - according to the proposed method - PPV values were less than 2%. Practical aspects on the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from U-bar{sub p} and U-bar measurement. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  17. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) could reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results of the samples, the whole length of CCs used in the design of an SFCL can be determined.

  18. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
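
    The first variant builds on the classical pairwise gossip step, which is simple to sketch: a randomly chosen edge "wakes up" and its two endpoints replace their values by the pairwise mean, so the network average is conserved while local values converge towards it. The topology and update count below are illustrative; the reinforcement-learning modifications discussed in the record are not implemented here.

```python
import numpy as np

# Minimal sketch of classical pairwise gossip averaging on a random graph:
# at each step one edge activates and its endpoints average their values.
rng = np.random.default_rng(8)

n = 20
values = rng.uniform(0, 100, size=n)
target = values.mean()

# ring plus random chords (illustrative, keeps the graph connected)
edges = [(i, (i + 1) % n) for i in range(n)] + \
        [tuple(sorted(rng.choice(n, size=2, replace=False))) for _ in range(15)]

for step in range(5000):
    i, j = edges[rng.integers(len(edges))]
    values[i] = values[j] = (values[i] + values[j]) / 2.0   # preserves the sum

print("true average:", round(target, 3),
      "max deviation:", round(float(np.abs(values - target).max()), 6))
```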

  19. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  20. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  1. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  2. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  3. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  4. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

    thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model considers a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, determination of pressure drop was made through an elaborate balancing of the overall pressure drop in the core against the sum of all individual channel pressure drops employing an iterative scheme. Using this model, an accurate estimation of various timely core-averaged hydraulic parameters such as generated power, hydraulic diameters, flow cross area, etc. for each one of the ten fuel circles in the core can be made. Furthermore, the distribution of coolant and fuel temperatures, including the maximum fuel temperature and its location in the core, can now be determined. Correlations among core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various schemes of cooling have been investigated for the purpose of assessing potential benefits on the operational characteristics of the Syrian MNSR reactor. A detailed thermal hydraulic model for the analysis of MNSR has been developed. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool which surrounds the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient. Hence, the maximum operating time of the reactor is extended. The model considers a detailed description of the thermal and hydraulic aspects of cooling the core and its surrounding vessel. Natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The suggested 'micro model

  5. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base located by ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients. 98 patients were treated for cervix carcinomas, 54 for endometrial carcinomas. Methods: Bladder complications were classified using the French Italian Syllabus. The influence of doses and dose rates on complications was tested using a non-parametric t test. Results: On average IRD is 21 Gy +/- 12 Gy, Dmax is 51 Gy +/- 21 Gy, Dmean is 40 Gy +/- 16 Gy. On average Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons on dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p = 0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: Bladder mucosa seems to tolerate much higher doses than previously recorded without increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by 1) customised external irradiation therapy (4 fields, full bladder) and 2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter

  6. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_Λ^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_Λ^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  7. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  8. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  9. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  10. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data Analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
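
    Higuchi's method, used above to estimate the fractal dimension, fits the slope of the average curve length of the series against the time scale on a log-log plot. The sketch below is a minimal illustration of that procedure on synthetic data; the actual sunspot series would have to be obtained separately from SIDC.

        import numpy as np

        def higuchi_fd(x, k_max=10):
            """Estimate the fractal dimension of a 1-D series with Higuchi's method."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            ks, lengths = [], []
            for k in range(1, k_max + 1):
                Lk = []
                for m in range(k):                          # k subsampled series with offsets m
                    idx = np.arange(m, N, k)
                    if len(idx) < 2:
                        continue
                    dist = np.abs(np.diff(x[idx])).sum()
                    norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalisation factor
                    Lk.append(dist * norm / k)
                ks.append(k)
                lengths.append(np.mean(Lk))
            # curve length scales as L(k) ~ k**(-D); the log-log slope gives D
            slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(lengths), 1)
            return slope

        print(higuchi_fd(np.random.randn(2000)))   # white noise gives a dimension close to 2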

  11. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  12. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  13. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  14. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  15. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  16. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  17. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  18. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  19. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  20. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies

  1. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  2. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  3. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  4. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  5. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 (2010-01-01): definition of "on average" under the Mushroom Promotion, Research, and Consumer Information Order (Agricultural Marketing Service, Department of Agriculture), § 1209...

  6. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  7. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  8. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  9. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  10. Correlation of diffusion and perfusion MRI with Ki-67 in high-grade meningiomas.

    Science.gov (United States)

    Ginat, Daniel T; Mangla, Rajiv; Yeaney, Gabrielle; Wang, Henry Z

    2010-12-01

    Atypical and anaplastic meningiomas have a greater likelihood of recurrence than benign meningiomas. The risk for recurrence is often estimated using the Ki-67 labeling index. The purpose of this study was to determine the correlation between Ki-67 and regional cerebral blood volume (rCBV) and between Ki-67 and apparent diffusion coefficient (ADC) in atypical and anaplastic meningiomas. A retrospective review of the advanced imaging and immunohistochemical characteristics of atypical and anaplastic meningiomas was performed. The relative minimum ADC, relative maximum rCBV, and specimen Ki-67 index were measured. Pearson's correlation was used to compare these parameters. There were 23 cases with available ADC maps and 20 cases with available rCBV maps. The average Ki-67 among the cases with ADC maps and rCBV maps was 17.6% (range, 5-38%) and 16.7% (range, 3-38%), respectively. The mean minimum ADC ratio was 0.91 (SD, 0.26) and the mean maximum rCBV ratio was 22.5 (SD, 7.9). There was a significant positive correlation between maximum rCBV and Ki-67 (Pearson's correlation, 0.69; p = 0.00038). However, there was no significant correlation between minimum ADC and Ki-67 (Pearson's correlation, -0.051; p = 0.70). Maximum rCBV correlated significantly with Ki-67 in high-grade meningiomas.
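
    The Pearson correlation reported above is straightforward to reproduce once the per-patient measurements are tabulated; the sketch below uses invented placeholder values, not the study data.

        from scipy.stats import pearsonr

        # hypothetical per-patient values (placeholders, not the published data)
        ki67     = [5, 8, 12, 17, 21, 25, 30, 38]                       # Ki-67 labeling index, %
        rcbv_max = [12.1, 14.0, 18.5, 21.2, 24.8, 27.3, 30.9, 35.6]     # relative maximum rCBV

        r, p = pearsonr(rcbv_max, ki67)
        print(f"Pearson r = {r:.2f}, p = {p:.4f}")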

  11. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  12. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    The performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using the Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which is set to save the average reading every two minutes based on one-second samples. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
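
    Reducing two-minute logger readings to maximum daily readings and monthly averages, as described above, is a simple resampling step; the file name and column layout below are assumptions for illustration.

        import pandas as pd

        # assumed CSV layout: one row per two-minute average, columns 'timestamp' and 'irradiance' (W/m^2)
        df = pd.read_csv("soly2_log.csv", parse_dates=["timestamp"]).set_index("timestamp")

        daily_max   = df["irradiance"].resample("D").max()    # maximum daily readings
        monthly_avg = df["irradiance"].resample("M").mean()   # monthly average readings

        print(daily_max.head())
        print(monthly_avg)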

  13. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10⁻¹⁵ s⁻¹, and temperature measurements from four drill sites within the area indicate average, near-surface geothermal gradients of up to 150 °C km⁻¹ throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km⁻¹ within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km⁻¹ below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes and therefore high average geothermal gradients tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.

  14. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
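
    The force computation described above, maximal bite pressure multiplied by occlusal contact area, is a one-line calculation; the numbers below are illustrative, not the study data.

        # illustrative values only
        max_pressure_mpa = 12.0   # maximal bite pressure, MPa (= N/mm^2)
        contact_area_mm2 = 55.0   # occlusal contact area, mm^2

        max_bite_force_n = max_pressure_mpa * contact_area_mm2   # 12.0 N/mm^2 * 55.0 mm^2 = 660 N
        print(f"maximal bite force = {max_bite_force_n:.0f} N")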

  15. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
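
    Exponential averaging of subsequent periodograms, as analysed above, amounts to a recursive update of the PSD estimate with a forgetting factor; the segment length and smoothing constant below are arbitrary choices for illustration.

        import numpy as np

        def psd_exponential_average(x, seg_len=256, alpha=0.1, fs=1.0):
            """Estimate a PSD by exponentially averaging periodograms of consecutive segments.
            A larger alpha corresponds to a shorter averaging time constant."""
            psd = None
            for start in range(0, len(x) - seg_len + 1, seg_len):
                seg = x[start:start + seg_len]
                periodogram = np.abs(np.fft.rfft(seg)) ** 2 / (seg_len * fs)
                psd = periodogram if psd is None else (1 - alpha) * psd + alpha * periodogram
            freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
            return freqs, psd

        f, p = psd_exponential_average(np.random.randn(50_000))
        print(f[:3], p[:3])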

  16. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  17. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average work productivity across the factors affecting it is conducted by means of the u-substitution method.

  18. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
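
    The maximum likelihood identification described above can be sketched in a few lines: simulate noisy measurements of a system with an unknown parameter, form the likelihood of the data as a function of that parameter, and maximize it (equivalently, minimize the negative log-likelihood). The exponential-decay model, noise level and optimizer below are assumptions for illustration, not MXLKID itself.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # simulated measurements of x(t) = exp(-a*t) with Gaussian noise, true a = 0.7
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 50)
        y = np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)

        def neg_log_likelihood(a, sigma=0.05):
            resid = y - np.exp(-a * t)
            return 0.5 * np.sum(resid ** 2) / sigma ** 2   # Gaussian NLL up to an additive constant

        a_hat = minimize_scalar(neg_log_likelihood, bounds=(0.01, 5.0), method="bounded").x
        print(f"identified parameter a = {a_hat:.3f}")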

  19. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  20. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  1. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  2. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  3. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  4. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  5. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as the Maximum Caliber principle - this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
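
    Written out explicitly, the identity discussed above equates the path-ensemble average of the exponentiated work with the equilibrium free energy difference; in standard notation (with β = 1/k_B T) it reads

        \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
        \qquad\text{so that}\qquad
        \Delta F = -\frac{1}{\beta}\,\ln\left\langle e^{-\beta W} \right\rangle ,

    where the angular brackets denote an average over the ensemble of nonequilibrium paths connecting the two equilibrium states.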

  6. Analysis of the correlation between γ-ray and radio emissions from γ-ray loud Blazar using the discrete correlation function

    International Nuclear Information System (INIS)

    Cheng Yong; Zhang Xiong; Wu Lin; Mao Weiming; You Lisha

    2006-01-01

    The authors collect 119 γ-ray-loud Blazars (97 flat spectrum radio quasars (FSRQs) and 22 BL Lacertae objects (BL Lacs)) and investigate the correlation between the γ-ray emission (maximum, minimum, and average data) at 1 GeV and the radio emission at 8.4 GHz using the discrete correlation function (DCF) method. The main results are as follows: there is a good correlation between the γ-ray emission in the high and average states and the radio emission for the whole sample of 119 Blazars and for the 97 FSRQs, while there is no correlation between the γ-ray and radio emission in the low state. This result shows that the γ-rays are associated with the radio emission from the jet, and that the γ-ray emission is likely to come from a synchrotron self-Compton (SSC) process in this case. (authors)
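
    The discrete correlation function used above handles unevenly sampled light curves by forming all pairwise products of the two (normalised) series and binning them by time lag. The sketch below follows that standard construction; the bin width and the toy light curves are arbitrary choices for illustration.

        import numpy as np

        def discrete_correlation_function(t1, a, t2, b, lag_bins):
            """DCF between two unevenly sampled light curves (t1, a) and (t2, b)."""
            a_res = (a - a.mean()) / a.std()
            b_res = (b - b.mean()) / b.std()
            udcf = np.outer(a_res, b_res)            # unbinned DCF: all pairwise products
            lags = t2[None, :] - t1[:, None]         # corresponding pairwise time lags
            centers, dcf = [], []
            for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
                mask = (lags >= lo) & (lags < hi)
                if mask.any():
                    centers.append(0.5 * (lo + hi))
                    dcf.append(udcf[mask].mean())    # average of the pair products in this lag bin
            return np.array(centers), np.array(dcf)

        # toy example: two noisy, correlated series sampled on different time grids
        rng = np.random.default_rng(1)
        t1 = np.sort(rng.uniform(0, 100, 80)); t2 = np.sort(rng.uniform(0, 100, 60))
        a = np.sin(t1 / 10) + 0.3 * rng.standard_normal(t1.size)
        b = np.sin(t2 / 10) + 0.3 * rng.standard_normal(t2.size)
        print(discrete_correlation_function(t1, a, t2, b, np.arange(-20, 21, 5)))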

  7. The spectrum of R Cygni during its exceptionally low maximum of 1983

    International Nuclear Information System (INIS)

    Wallerstein, G.; Dominy, J.F.; Mattei, J.A.; Smith, V.V.

    1985-01-01

    In 1983 R Cygni experienced its faintest maximum ever recorded. A study of the light curve shows correlations between brightness at maximum and interval from the previous cycle, in the sense that fainter maxima occur later than normal and are followed by maxima that occur earlier than normal. Emission and absorption lines in the optical and near infrared (2.2 μm region) reveal two significant correlations. The amplitude of line doubling is independent of the magnitude at maximum for m_v(max) = 7.1 to 9.8. The velocities of the emission lines, however, correlate with the magnitude at maximum, in that during bright maxima they are negatively displaced by 15 km s⁻¹ with respect to the red component of absorption lines, while during the faintest maximum there is no displacement. (author)

  8. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Science.gov (United States)

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.

  9. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Energy Technology Data Exchange (ETDEWEB)

    Borcherdt, R.D.; Gibbs, J.F.

    1975-01-01

    The intensity data for the California earthquake of Apr 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan formation is intensity = 2.69 - 1.90 log (distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the average horizontal spectral amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is intensity increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan formation, 0.64 for the Great Valley sequence, 0.82 for Santa Clara formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
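
    Taken together, the two empirical relations above give a simple way to estimate intensity at a site from its fault distance and geologic unit (or from a measured AHSA). The sketch below just evaluates them; the assumptions that the logarithms are base 10 and that the unit increment is added on top of the Franciscan distance relation are interpretations made for illustration.

        import math

        # average intensity increments relative to the Franciscan relation (from the abstract)
        INCREMENT = {"granite": -0.29, "Franciscan formation": 0.19, "Great Valley sequence": 0.64,
                     "Santa Clara formation": 0.82, "alluvium": 1.34, "bay mud": 2.43}

        def predicted_intensity(distance_km, unit):
            """1906-type intensity: distance relation plus the geologic-unit increment."""
            return 2.69 - 1.90 * math.log10(distance_km) + INCREMENT[unit]

        def increment_from_ahsa(ahsa):
            """Intensity increment from an average horizontal spectral amplification."""
            return 0.27 + 2.70 * math.log10(ahsa)

        print(predicted_intensity(10.0, "bay mud"), increment_from_ahsa(2.0))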

  10. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  11. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; probably the dynamic properties of the mobile base and mounted manipulator, their actuator limitations and additional constraints applied to resolve the redundancy are the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  12. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  13. Radial behavior of the average local ionization energies of atoms

    International Nuclear Information System (INIS)

    Politzer, P.; Murray, J.S.; Grice, M.E.; Brinck, T.; Ranganathan, S.

    1991-01-01

    The radial behavior of the average local ionization energy Ī(r) has been investigated for the atoms He–Kr, using ab initio Hartree–Fock atomic wave functions. Ī(r) is found to decrease in a stepwise manner with the inflection points serving effectively to define boundaries between electronic shells. There is a good inverse correlation between polarizability and the ionization energy in the outermost region of the atom, suggesting that Ī(r) may be a meaningful measure of local polarizabilities in atoms and molecules

  14. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly, when the disturbance term follows the general covariance matrix structure Cov(w_i, w_j) = Σ with σ_ij ≠ 0 for all i ≠ j. Secondly, when the diagonal elements of Σ are not all identical but σ_ij = 0 for all i ≠ j, i.e. Σ = diag(σ_11, σ_22, …)

  15. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  16. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. Structures of receivers derived from a particular interpretation of maximum-likelihood metrics. Receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, the complexity of which would depend on N.

  17. Relationship Between Selected Strength and Power Assessments to Peak and Average Velocity of the Drive Block in Offensive Line Play.

    Science.gov (United States)

    Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G

    2016-08-01

    Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016-Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include 1 repetition maximum (1RM) squat and 1RM PC along with the vertical jump (VJ) for power. However, little research exists regarding the association between these strength exercises and the velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to the 1RM squat, 1RM PC, the VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired from a 3-point stance into a stationary blocking dummy. Pearson product analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive block effectiveness. It is recommended that these exercises remain the focus of a weight room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.

  18. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    International Nuclear Information System (INIS)

    EI-Nashar, Hassan F.

    2002-09-01

    We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches a maximum. This means that a time-average phase-locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this coupling strength the cluster attracts individual oscillators or other clusters to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied. (author)
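
    A common way to quantify the kind of phase correlation described above is a Kuramoto-style order parameter, the magnitude of the average phase vector over the oscillators in a cluster; the sketch below assumes that definition, which may differ in detail from the order parameter used in the paper.

        import numpy as np

        def order_parameter(phases):
            """Kuramoto-style order parameter r = |<exp(i*theta)>| over a cluster of oscillators.
            r tends to 1 when the phases are locked and to 0 when they are uncorrelated."""
            return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

        print(order_parameter(np.random.uniform(0, 2 * np.pi, 1000)))   # ~0: incoherent phases
        print(order_parameter(0.05 * np.random.randn(1000)))            # ~1: nearly locked phases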

  19. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    CERN Document Server

    Ei-Nashar, H F

    2002-01-01

    We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches a maximum. This means that a time-average phase-locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this coupling strength the cluster attracts individual oscillators or other clusters to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied.

  20. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  1. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  2. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.

  3. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  4. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  5. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  6. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  7. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and the nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  8. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  9. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  10. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  11. Average resonance parameters evaluation for actinides

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    New evaluated ⟨Γn⁰⟩ and ⟨D⟩ values for ²³⁸U, ²³⁷Np, ²⁴³Cm, ²⁴⁵Cm, ²⁴⁶Cm and ²⁴¹Am nuclei in the resolved resonance region are presented. The applied method, based on the idea that experimental resonance missing results in correlated changes of the reduced neutron width and level spacing distributions, is discussed. (author)

  12. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  13. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power-law scaling with the backing pressure ranging from 16 to 50 bar, and the exponent is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm.
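
    The quoted power scaling of the scattered intensity with backing pressure can be extracted by a straight-line fit in log-log coordinates. The numbers below are synthetic stand-ins, not the paper's data; only the fitting procedure is the point.

      import numpy as np

      pressure_bar = np.array([16, 20, 25, 32, 40, 50], dtype=float)
      # Hypothetical intensities generated with exponent 8.4 (the paper's value at Z = 3 mm) plus noise
      rng = np.random.default_rng(1)
      intensity = 3.0e-6 * pressure_bar**8.4 * (1.0 + 0.05 * rng.normal(size=pressure_bar.size))

      slope, _ = np.polyfit(np.log(pressure_bar), np.log(intensity), 1)
      print(f"fitted power-law exponent ~ {slope:.2f}")    # close to 8.4 by construction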

  14. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  15. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  16. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality

  17. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., sample-path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
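
    A quick numerical illustration (not part of the paper, which is analytical) of the equality in question for one ergodic example: the long-run time average of a function of the process agrees with the expectation of that function under the empirical frequency distribution of the visited states.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200000
      x = np.empty(n)
      x[0] = 0.0
      for t in range(1, n):                       # stationary AR(1) process as a sample path
          x[t] = 0.9 * x[t - 1] + rng.normal()

      f = lambda v: v**2                          # any measurable function of the process
      time_average = np.mean(f(x))

      values, counts = np.unique(np.round(x, 2), return_counts=True)   # long-run frequency distribution
      freq_expectation = np.sum(f(values) * counts) / counts.sum()

      print(time_average, freq_expectation)       # agree up to sampling/discretization error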

  18. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
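
    A minimal sketch of the re-weighting idea described above, assuming the dependence of each experiment's error on the central value is known (a purely fractional error model here, which is an illustrative assumption rather than the paper's prescription): weights are derived from the common average instead of from each experiment's own reported value, and the average is iterated to self-consistency.

      import numpy as np

      def combine(measurements, frac_err, n_iter=10):
          """Weighted average when each experiment's uncertainty is a known
          function of the (unknown) true value.  The variances are evaluated at
          the running combined estimate instead of at each measurement, which
          avoids the bias of naive inverse-variance weighting."""
          m = np.asarray(measurements, dtype=float)
          f = np.asarray(frac_err, dtype=float)
          avg = np.mean(m)                         # starting point
          for _ in range(n_iter):                  # iterate for general error-vs-value dependences
              sigma = f * avg                      # error model evaluated at the common estimate
              w = 1.0 / sigma**2
              avg = np.sum(w * m) / np.sum(w)
          return avg

      print(combine([9.0, 10.5, 11.0], [0.10, 0.05, 0.20]))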

  19. Evaluation of Navigation System Accuracy Indexes for Deviation Reading from Average Range

    Directory of Open Access Journals (Sweden)

    Alexey Boykov

    2017-12-01

    Full Text Available The method for estimating the mean square error, kurtosis and error correlation coefficient for deviations from the average range of three navigation parameter indications obtained from the outputs of three information sensors is substantiated and developed.
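
    A toy sketch (not the authors' algorithm) of the quantities being estimated: deviations of three sensors' readings from their common average, and the resulting mean square error, kurtosis and correlation coefficients of those deviations.

      import numpy as np

      rng = np.random.default_rng(0)
      truth = np.cumsum(rng.normal(size=1000))                    # unknown navigation parameter
      readings = truth[:, None] + rng.normal(0.0, [1.0, 1.5, 2.0], size=(1000, 3))   # three sensors

      avg = readings.mean(axis=1, keepdims=True)                  # average of the three indications
      dev = readings - avg                                        # deviations from the average

      mse = np.mean(dev**2, axis=0)
      kurtosis = np.mean(dev**4, axis=0) / np.mean(dev**2, axis=0)**2
      corr = np.corrcoef(dev.T)     # deviations are anti-correlated by construction (they sum to zero)

      print(mse, kurtosis, corr, sep="\n")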

  20. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  1. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our

  2. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
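
    A short Monte-Carlo sketch (the paper treats this analytically) that samples random pure states of a bipartite system, extracts the Schmidt eigenvalues by singular value decomposition and accumulates the average von Neumann entropy; the dimensions, sample size and the choice of the unitarily invariant complex ensemble are assumptions made for illustration.

      import numpy as np

      def average_entropy(dim_a=4, dim_b=4, n_samples=2000, seed=0):
          """Monte-Carlo estimate of the mean entanglement entropy of random
          pure states drawn from the unitarily invariant ensemble."""
          rng = np.random.default_rng(seed)
          total = 0.0
          for _ in range(n_samples):
              psi = rng.normal(size=(dim_a, dim_b)) + 1j * rng.normal(size=(dim_a, dim_b))
              psi /= np.linalg.norm(psi)                      # normalized random pure state
              lam = np.linalg.svd(psi, compute_uv=False)**2   # Schmidt eigenvalues, sum to 1
              lam = lam[lam > 1e-15]
              total += -np.sum(lam * np.log(lam))
          return total / n_samples

      print(average_entropy())   # ~0.92 nats for a 4 x 4 bipartition (ln 4 ~ 1.39 is the maximum)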

  3. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  4. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  5. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  6. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  7. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  8. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  9. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αs = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
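
    For orientation, a sketch of the simplest limit that any such angle average must reduce to: an electron at rest (β = 0), where integrating the Klein-Nishina differential cross section over the scattering angle gives the total Compton cross section. The full average over φ, θ and τ for moving electrons derived in the report is not reproduced here.

      import numpy as np

      R_E = 2.8179403262e-13                 # classical electron radius in cm

      def klein_nishina_dsigma_domega(alpha, theta):
          """Klein-Nishina differential cross section for a free electron at rest;
          alpha is the incident photon energy in units of m0*c^2."""
          ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))    # scattered/incident photon energy
          return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

      def total_cross_section(alpha, n=20001):
          theta = np.linspace(0.0, np.pi, n)
          f = klein_nishina_dsigma_domega(alpha, theta) * 2.0 * np.pi * np.sin(theta)
          return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))   # trapezoidal integration

      print(total_cross_section(0.1))   # ~5.6e-25 cm^2; tends to the Thomson value 6.65e-25 cm^2 as alpha -> 0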

  10. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
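
    A compact sketch (not the authors' code) of the AGDI construction described above: accumulate the absolute differences between adjacent binary silhouette frames and normalize by the number of frame pairs.

      import numpy as np

      def average_gait_differential_image(silhouettes):
          """silhouettes: array of shape (n_frames, H, W) with binary (0/1) values.
          Returns the AGDI: the mean absolute difference between adjacent frames."""
          frames = np.asarray(silhouettes, dtype=float)
          diffs = np.abs(frames[1:] - frames[:-1])      # silhouette change between adjacent frames
          return diffs.mean(axis=0)                     # accumulate and normalize over the sequence

      # Toy example: a 'walking' rectangle that shifts one pixel per frame
      seq = np.zeros((5, 8, 8))
      for t in range(5):
          seq[t, 2:6, t:t + 4] = 1
      print(average_gait_differential_image(seq).round(2))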

  11. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  12. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αs = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  13. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  14. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
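
    A rough sketch of the comparison described above, under strong simplifying assumptions (a fixed fraction q of longest survivors per arm, fully observed survival times, no covariates or bias corrections); it is only meant to make the "equivalent fractions of the longest surviving patients" idea concrete, not to reproduce the authors' estimator.

      import numpy as np

      def balanced_sace(outcome, survival, treated, q=0.5):
          """Difference in mean longitudinal outcome between the top-q fraction of
          longest survivors in the treatment arm and in the control arm."""
          outcome = np.asarray(outcome, dtype=float)
          survival = np.asarray(survival, dtype=float)
          treated = np.asarray(treated, dtype=bool)

          def top_q_mean(mask):
              surv, out = survival[mask], outcome[mask]
              k = max(1, int(np.ceil(q * surv.size)))
              return out[np.argsort(surv)[-k:]].mean()    # the k longest-surviving patients

          return top_q_mean(treated) - top_q_mean(~treated)

      rng = np.random.default_rng(0)
      treated = rng.random(200) < 0.5
      survival = rng.exponential(5.0 + 2.0 * treated)
      outcome = 1.0 * treated + 0.2 * survival + rng.normal(size=200)
      print(balanced_sace(outcome, survival, treated))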

  15. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable to harvest the ambient thermal energy for power-supplying sensors, actuators, biomedical devices etc. in the μW up to several hundreds of Watts range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to the past-proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low-power consumption characteristics is enabled, without being required to employ specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.

  16. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  17. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  18. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs.

  19. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  20. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  1. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities

    KAUST Repository

    Huete-Stauffer, Tamara M.; Arandia-Gorostidi, Nestor; Alonso-Sáez, Laura; Moran, Xose Anxelu G.

    2016-01-01

    Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated to cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatment than in the whole community treatment, thus excluding the role of protistan grazers in our findings. Interestingly, a significant effect of temperature on reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. Cell size and nucleic acid decrease with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and NAC compared with their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  2. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities

    KAUST Repository

    Huete-Stauffer, Tamara M.

    2016-05-23

    Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated to cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatment than in the whole community treatment, thus excluding the role of protistan grazers in our findings. Interestingly, a significant effect of temperature on reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. Cell size and nucleic acid decrease with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and NAC compared with their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  3. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a loss-of-coolant accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ±12%. (author) [pt

  4. On correlations between certain random variables associated with first passage Brownian motion

    International Nuclear Information System (INIS)

    Kearney, Michael J; Pye, Andrew J; Martin, Richard J

    2014-01-01

    We analyse how the area swept out by a Brownian motion up to its first passage time correlates with the first passage time itself, obtaining several exact results in the process. Additionally, we analyse the relationship between the time average of a Brownian motion during a first passage and the maximum value attained. The results, which find various applications, are in excellent agreement with simulations. (paper)
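
    A small simulation sketch, consistent with but not taken from the paper, estimating these correlations for a Brownian motion with a drift toward the absorbing origin (the drift and discretization step are assumptions made to keep the moments finite and the run time short).

      import numpy as np

      rng = np.random.default_rng(0)
      dt, x0, mu = 1e-3, 1.0, -1.0          # start at x0 > 0, drift toward the boundary at 0
      fpt, area, tavg, xmax = [], [], [], []

      for _ in range(1000):
          x, t, a, m = x0, 0.0, 0.0, x0
          while x > 0.0:
              x += mu * dt + np.sqrt(dt) * rng.normal()
              t += dt
              a += x * dt                   # area swept out up to the first-passage time
              m = max(m, x)
          fpt.append(t); area.append(a); tavg.append(a / t); xmax.append(m)

      print("corr(first-passage time, area) =", np.corrcoef(fpt, area)[0, 1])
      print("corr(time average, maximum)    =", np.corrcoef(tavg, xmax)[0, 1])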

  5. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  6. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  7. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  8. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  9. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  10. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Vol. 59 (2003), pp. 459-469. ISSN 0108-7673. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject: Solid Matter Physics; Magnetism. Impact factor: 1.558 (2003).

  11. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  12. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  13. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  14. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW average power, and maximum bandwidth of 1.77-8.66 μm is demonstrated as a result of pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm

  15. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large scale (many m²) processing of materials requires the economical production of laser powers of tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power ~ 1 kW output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  16. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
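
    A toy numerical illustration of the underlying identity (much simpler than the molecular dynamics applications discussed in the abstract): for an assumed two-dimensional potential U(x, y) with x as the coordinate of interest, the derivative of the free energy along x equals the Boltzmann average of ∂U/∂x at fixed x, so integrating the average force reproduces the free energy profile obtained by direct marginalization over y.

      import numpy as np

      kT = 1.0
      U    = lambda x, y: 0.5 * (x**2 - 1.0)**2 + 2.0 * y**2 + 0.5 * x * y   # assumed toy potential
      dUdx = lambda x, y: 2.0 * x * (x**2 - 1.0) + 0.5 * y

      xs = np.linspace(-2.0, 2.0, 81)
      ys = np.linspace(-5.0, 5.0, 2001)
      dy = ys[1] - ys[0]

      mean_force = np.empty_like(xs)
      free_direct = np.empty_like(xs)
      for i, x in enumerate(xs):
          w = np.exp(-U(x, ys) / kT)                        # Boltzmann weight over the other coordinate
          mean_force[i] = -np.sum(dUdx(x, ys) * w) / np.sum(w)
          free_direct[i] = -kT * np.log(np.sum(w) * dy)     # F(x) = -kT ln ∫ exp(-U/kT) dy

      # dF/dx = -<force>, so integrate the average force along x (trapezoidal rule)
      free_from_force = -np.concatenate(([0.0],
          np.cumsum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xs))))
      free_from_force += free_direct[0]                     # fix the arbitrary additive constant
      print(np.max(np.abs(free_from_force - free_direct)))  # should be small (~1e-3 or less)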

  17. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
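
    For context, a minimal sketch of standard randomized pairwise gossip, the baseline whose slow mixing on rings, grids and random geometric graphs motivates the geographic scheme; the geographic routing and resampling steps of the paper are not implemented here.

      import numpy as np

      def pairwise_gossip(values, edges, n_rounds=20000, seed=0):
          """Standard randomized gossip: repeatedly pick a random edge and replace
          both endpoint values by their average; all nodes converge to the global mean."""
          rng = np.random.default_rng(seed)
          x = np.asarray(values, dtype=float).copy()
          edges = np.asarray(edges)
          for _ in range(n_rounds):
              i, j = edges[rng.integers(len(edges))]
              x[i] = x[j] = 0.5 * (x[i] + x[j])
          return x

      n = 20                                           # ring of 20 nodes
      edges = [(i, (i + 1) % n) for i in range(n)]
      x0 = np.random.default_rng(1).random(n)
      x = pairwise_gossip(x0, edges)
      print(x0.mean(), x.max() - x.min())              # mean is preserved, spread shrinks toward zero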

  18. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers that are capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials areas, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  19. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e. the ordinary moments of the distribution of absorbed dose in LET, is presented for ionizing radiation of any kind and any spectrum (even unknown ones). The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one obtains coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It is shown that for numerical estimation of certain effects dependent on the LET of the radiation it is not necessary to know the full dose distribution but only a number of its parameters, i.e. the LET moments. (author)

  20. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  1. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking control (MPPT) technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. A PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW, depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
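
    As a rough illustration of the cost function being tracked, the Python sketch below evaluates the net solar energy gain Q_net = Q_s − W_p/η_e over a range of pump flow rates and climbs toward its maximum with a simple perturb-and-observe step; the collector-gain and pump-power curves are invented placeholders, not the experimentally identified models used in the study.

    import math

    def collector_gain_kw(flow_kg_min):
        # Hypothetical diminishing-returns heat-collection curve Q_s(flow).
        return 14.0 * (1.0 - math.exp(-flow_kg_min / 10.0))

    def pump_power_kw(flow_kg_min):
        # Hypothetical cubic pump-power curve W_p(flow).
        return 0.45 * (flow_kg_min / 31.0) ** 3

    def q_net_kw(flow_kg_min, eta_e=0.25):
        # Net solar energy gain Q_net = Q_s - W_p / eta_e, the MPPT cost function.
        return collector_gain_kw(flow_kg_min) - pump_power_kw(flow_kg_min) / eta_e

    # Perturb-and-observe hill climbing toward the maximum of Q_net.
    flow, step = 31.0, 1.0
    for _ in range(200):
        if q_net_kw(flow + step) > q_net_kw(flow):
            flow += step
        elif q_net_kw(flow - step) > q_net_kw(flow):
            flow -= step
        else:
            step *= 0.5  # no neighbouring flow improves Q_net, so refine the step

    print(f"flow at maximum Q_net ~ {flow:.1f} kg/min, Q_net ~ {q_net_kw(flow):.2f} kW")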

  2. Longitudinal Patterns of Employment and Postsecondary Education for Adults with Autism and Average-Range IQ

    Science.gov (United States)

    Taylor, Julie Lounds; Henninger, Natalie A.; Mailick, Marsha R.

    2015-01-01

    This study examined correlates of participation in postsecondary education and employment over 12 years for 73 adults with autism spectrum disorders and average-range IQ whose families were part of a larger, longitudinal study. Correlates included demographic (sex, maternal education, paternal education), behavioral (activities of daily living,…

  3. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the numbers of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed...

  4. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  5. Averaging problem in general relativity, macroscopic gravity and using Einstein's equations in cosmology.

    Science.gov (United States)

    Zalaletdinov, R. M.

    1998-04-01

    The averaging problem in general relativity is briefly discussed. A new setting of the problem, as that of a macroscopic description of gravitation, is proposed. A covariant space-time averaging procedure is described. The structure of the geometry of macroscopic space-time, which follows from averaging Cartan's structure equations, is described and the correlation tensors present in the theory are discussed. The macroscopic field equations (averaged Einstein's equations) derived in the framework of the approach are presented and their structure is analysed. The correspondence principle for macroscopic gravity is formulated and a definition of the stress-energy tensor for the macroscopic gravitational field is proposed. It is shown that using Einstein's equations with a hydrodynamic stress-energy tensor when looking for cosmological models amounts to neglecting all gravitational field correlations. The system of macroscopic gravity equations to be solved when the correlations are taken into consideration is given and described.

  6. Maximum Aerobic Capacity of Underground Coal Miners in India

    Directory of Open Access Journals (Sweden)

    Ratnadeep Saha

    2011-01-01

    Full Text Available Miners' fitness was assessed in terms of maximum aerobic capacity, determined by an indirect method following a standard step test protocol before descent into the mine, taking into account heart rates (telemetric recording) and oxygen consumption (Oxylog-II) of the subjects during exercise at different working rates. Maximal heart rate was derived as 220 − age. Coal miners showed a maximum aerobic capacity within a range of 35–38.3 mL/kg/min. The oldest miners (50–59 yrs) had the lowest maximal oxygen uptake (34.2±3.38 mL/kg/min) compared to the youngest group (20–29 yrs; 42.4±2.03 mL/kg/min). Maximum aerobic capacity was negatively correlated with age (r=−0.55 and −0.33 for younger and older groups, respectively) and directly associated with the body weight of the subjects (r=0.57–0.68, P≤0.001). Carriers showed the highest cardiorespiratory capacity among the miners. Indian miners' VO2max was found to be lower than that of both their mining counterparts abroad and various non-mining occupational groups in India.

  7. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.) [de

  8. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  9. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  10. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
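
    As a point of reference for the models named above, the short Python sketch below simulates Ornstein-Uhlenbeck motion, one of the simplest members of this class, with an Euler-Maruyama scheme; the parameter values are arbitrary and the sketch is not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    tau, sigma, dt, n = 5.0, 1.0, 0.01, 50_000  # relaxation time, noise scale, step, length (arbitrary)

    x = np.empty(n)
    x[0] = 0.0
    for k in range(1, n):
        # Euler-Maruyama step for dx = -(x / tau) dt + sigma dW (Ornstein-Uhlenbeck motion).
        x[k] = x[k - 1] - (x[k - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

    # The stationary variance of this process is sigma^2 * tau / 2.
    print(f"sample variance: {x[n // 2:].var():.3f}, theory: {sigma ** 2 * tau / 2:.3f}")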

  11. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  12. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...

  13. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1) a surface temperature and pressure compatible with the existence of liquid water, and 2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  14. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
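
    The following is a minimal Python sketch of the bottom-up pass of Fitch's maximum parsimony algorithm on a small rooted binary tree; the tree and the leaf states are made-up examples, and only the ancestral state-set computation is shown, not the reconstruction-accuracy analysis discussed in the paper.

    # Each internal node is a pair (left_subtree, right_subtree); each leaf is a state string.
    tree = ((("A", "A"), ("A", "C")), ("C", "C"))

    def fitch_sets(node):
        """Bottom-up pass of Fitch's algorithm: return (candidate state set, parsimony cost)."""
        if isinstance(node, str):               # leaf: singleton state set, zero cost
            return {node}, 0
        left_set, left_cost = fitch_sets(node[0])
        right_set, right_cost = fitch_sets(node[1])
        common = left_set & right_set
        if common:                              # non-empty intersection: no extra change needed
            return common, left_cost + right_cost
        return left_set | right_set, left_cost + right_cost + 1   # union costs one change

    root_set, changes = fitch_sets(tree)
    print(f"candidate ancestral states at the root: {root_set}, parsimony score: {changes}")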

  15. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  16. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  17. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  18. Extractable Work from Correlations

    Directory of Open Access Journals (Sweden)

    Martí Perarnau-Llobet

    2015-10-01

    Full Text Available Work and quantum correlations are two fundamental resources in thermodynamics and quantum information theory. In this work, we study how to use correlations among quantum systems to optimally store work. We analyze this question for isolated quantum ensembles, where the work can be naturally divided into two contributions: a local contribution from each system and a global contribution originating from correlations among systems. We focus on the latter and consider quantum systems that are locally thermal, so that any extractable work can only come from correlations. We compute the maximum extractable work for general entangled states, separable states, and states with fixed entropy. Our results show that while entanglement gives an advantage for small quantum ensembles, this gain vanishes for a large number of systems.

  19. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE performance of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  20. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly, when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = Σ with σ_ij ≠ 0 for all i ≠ j. Secondly, when the diagonal elements of Σ are not all identical but σ_ij = 0 for all i ≠ j, i.e. Σ = diag(σ_11, σ_22, …, σ_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
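
    To make the setting concrete, the Python sketch below obtains the autocovariances (and hence the ACF) of an MA(q) process numerically from the representation x = Aw, so that Cov(x) = A Σ A' for an arbitrary disturbance covariance Σ; the MA coefficients and the heteroskedastic diagonal Σ are illustrative, and the closed-form expressions derived in the paper are not reproduced.

    import numpy as np

    theta = np.array([1.0, 0.6, -0.3])   # illustrative MA(2) coefficients (theta_0, theta_1, theta_2)
    t, q = 200, len(theta) - 1

    # Build the banded matrix A so that x = A w is the MA(q) filter of the disturbances w.
    A = np.zeros((t, t + q))
    for i in range(t):
        A[i, i:i + q + 1] = theta[::-1]

    # Second case of the paper: heteroskedastic but uncorrelated disturbances, Sigma = diag(s_11, ..., s_tt).
    diag = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0 * np.pi, t + q)) ** 2
    Sigma = np.diag(diag)

    cov_x = A @ Sigma @ A.T               # Cov(x) = A Sigma A'
    mid = t // 2
    acf = cov_x[mid, mid:mid + q + 2] / cov_x[mid, mid]
    print("local ACF at lags 0..q+1:", np.round(acf, 3))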

  1. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018±0.002; validation r = 0.8899±0.005; testing r = 0.8940±0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
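
    As a rough sketch of this modeling setup (not the authors' implementation), the Python snippet below fits a small feed-forward network to predict a daily trauma count from temporal and weather features. It uses scikit-learn's MLPRegressor with a logistic (sigmoid) hidden layer, since Levenberg-Marquardt training is not available there, and synthetic data stand in for the TRACS and NOAA records.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_days = 1096

    # Synthetic stand-in features: day of week, daily high temperature, active precipitation flag.
    day_of_week = rng.integers(0, 7, n_days)
    daily_high = rng.normal(20.0, 8.0, n_days)
    precipitation = rng.integers(0, 2, n_days)
    X = np.column_stack([day_of_week, daily_high, precipitation])

    # Synthetic daily trauma counts loosely tied to the features (purely illustrative).
    y = (8.0 + 0.15 * daily_high - 1.5 * precipitation
         + 2.0 * (day_of_week >= 5) + rng.normal(0.0, 1.5, n_days))

    # Two-layer feed-forward net with 10 sigmoid hidden units, evaluated by k-fold cross validation.
    model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                         solver="lbfgs", max_iter=5000, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("cross-validated R^2 per fold:", np.round(scores, 3))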

  2. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
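
    A minimal Python sketch of the diagonal-averaging step is given below: the sample covariance from a few snapshots is averaged along its subdiagonals to give a Toeplitz-constrained estimate, whose leading eigenvectors form a signal-subspace projector. The maximum-entropy covariance extrapolation described in the paper is omitted, and the array and snapshot parameters are illustrative.

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(1)
    n_sensors, n_snapshots, n_sources = 16, 8, 2

    # Simulate two far-field plane waves in isotropic noise (illustrative data model).
    angles = np.deg2rad([10.0, 22.0])
    steering = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), np.sin(angles)))
    signals = rng.standard_normal((n_sources, n_snapshots)) + 1j * rng.standard_normal((n_sources, n_snapshots))
    noise = 0.3 * (rng.standard_normal((n_sensors, n_snapshots)) + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    data = steering @ signals + noise

    # Sample covariance from the limited number of snapshots.
    R = data @ data.conj().T / n_snapshots

    # Toeplitz constraint: average each subdiagonal of the sample covariance.
    first_col = np.array([np.diag(R, -k).mean() for k in range(n_sensors)])
    R_toep = toeplitz(first_col, first_col.conj())

    # Signal-subspace projector built from the leading eigenvectors of the Toeplitz estimate.
    eigvals, eigvecs = np.linalg.eigh(R_toep)
    signal_vecs = eigvecs[:, -n_sources:]
    projector = signal_vecs @ signal_vecs.conj().T
    print("projector rank:", np.linalg.matrix_rank(projector))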

  3. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)

  4. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  5. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    , as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.......We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors...

  6. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  7. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper in the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movement of index prices. The analysis of drawdown duration is also performed and is the core of the risk measure estimated here.
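
    For concreteness, the short Python sketch below computes the running drawdown and the maximum drawdown of a price series; the random-walk price path is a made-up stand-in for an index during the rising phase of a bubble.

    import numpy as np

    rng = np.random.default_rng(42)
    prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, 1000)))  # synthetic price path

    running_max = np.maximum.accumulate(prices)          # highest price seen so far
    drawdown = (prices - running_max) / running_max      # relative drop from the last peak (<= 0)
    max_drawdown = drawdown.min()

    print(f"maximum drawdown: {max_drawdown:.1%} at day {drawdown.argmin()}")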

  8. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  9. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  10. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  11. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  12. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  13. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  14. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  15. Soil nematodes show a mid-elevation diversity maximum and elevational zonation on Mt. Norikura, Japan.

    Science.gov (United States)

    Dong, Ke; Moroenyane, Itumeleng; Tripathi, Binu; Kerfahi, Dorsaf; Takahashi, Koichi; Yamamoto, Naomichi; An, Choa; Cho, Hyunjun; Adams, Jonathan

    2017-06-08

    Little is known about how nematode ecology differs across elevational gradients. We investigated the soil nematode community along a ~2,200 m elevational range on Mt. Norikura, Japan, by sequencing the 18S rRNA gene. As with many other groups of organisms, nematode diversity showed a high correlation with elevation, with a maximum at mid-elevations. While elevation itself, in the context of the mid-domain effect, could predict the observed unimodal pattern of soil nematode communities along the elevational gradient, mean annual temperature and soil total nitrogen concentration were the best predictors of diversity. We also found that nematode community composition showed strong elevational zonation, indicating that a high degree of ecological specialization may exist in nematodes in relation to elevation-related environmental gradients. Certain nematode OTUs had ranges extending across all elevations, and these generalized OTUs made up a greater proportion of the community at high elevations, such that high-elevation nematode OTUs had broader elevational ranges on average, providing an example consistent with Rapoport's elevational hypothesis. This study reveals the potential for using sequencing methods to investigate elevational gradients of small soil organisms, providing a method for rapid investigation of patterns without specialized knowledge of taxonomic identification.

  16. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  17. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide number of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions of the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
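
    As a generic illustration of the first quantity being computed (not of the closed-form results for Husimi cacti), the Python sketch below estimates the average path length of a small graph by breadth-first search from every node; the example graph is an arbitrary ring with two chords.

    from collections import deque

    def bfs_distances(adj, source):
        """Shortest-path distances (in hops) from the source to every reachable node."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # Arbitrary example: a ring of 12 nodes with two chords added.
    n = 12
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    adj[0].add(6); adj[6].add(0)
    adj[3].add(9); adj[9].add(3)

    total = sum(d for s in adj for d in bfs_distances(adj, s).values())
    apl = total / (n * (n - 1))   # average over ordered pairs of distinct nodes
    print(f"average path length: {apl:.3f}")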

  18. Waveform correlation and coherence of short-period seismic noise within Gauribidanur array with implications for event detection

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Arora, S.K.

    1995-01-01

    In continuation with our effort to model the short-period microseismic noise at the seismic array at Gauribidanur (GBA), we have examined in detail the time-correlation and spectral coherence of the noise field within the array space. This has implications for the maximum possible improvement in signal-to-noise ratio (SNR) relevant to event detection. The basis of this study is about a hundred representative wide-band noise samples collected from GBA throughout the year 1992. Both the time-structured correlation and the coherence of the noise waveforms are found to be practically independent of the inter-element distances within the array, and they exhibit strong temporal and spectral stability. It turns out that the noise is largely incoherent at frequencies ranging upwards from 2 Hz; the coherency coefficient tends to increase in the lower frequency range, attaining a maximum of 0.6 close to 0.5 Hz. While the maximum absolute cross-correlation also diminishes with increasing frequency, the zero-lag cross-correlation is found to be insensitive to frequency filtering regardless of the pass band. An extremely small value of -0.01 for the zero-lag correlation and a comparatively higher year-round average estimate of 0.15 for the maximum absolute time-lagged correlation yield an SNR improvement varying between a probable high of 4.1 and a low of 2.3 for the full 20-element array. 19 refs., 6 figs
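
    A back-of-the-envelope way to relate such noise-correlation estimates to SNR improvement is the standard array-gain expression for delay-and-sum beamforming of a coherent signal in equally correlated noise, sketched below in Python; this formula and the correlation values plugged in are illustrative assumptions, not the year-round statistics from which the study derived its quoted gains.

    import math

    def snr_amplitude_gain(n_sensors, noise_corr):
        """Amplitude SNR gain of an n-element sum when every noise pair has correlation rho."""
        # The coherent signal power grows as n^2; the summed noise power is n * (1 + (n - 1) * rho).
        noise_power = n_sensors * (1.0 + (n_sensors - 1) * noise_corr)
        return math.sqrt(n_sensors ** 2 / noise_power)

    for rho in (-0.01, 0.0, 0.15):
        print(f"rho = {rho:+.2f}: SNR gain ~ {snr_amplitude_gain(20, rho):.2f}")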

  19. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has been recently introduced. We investigate the controllability of ME networks and Barabási–Albert preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver node sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver node sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  20. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model - a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  1. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  2. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  3. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ~ 2 D_ν t^(β) Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements ⟨x²⟩ ~ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy ⟨v²⟩ ~ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
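
    For readers who want to reproduce the quantity being discussed, the Python sketch below evaluates the time-averaged mean-squared displacement of a single recorded trajectory over several lag times, here for ordinary Brownian motion used as a placeholder for the anomalous processes analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(7)
    dt, n_steps, D = 0.01, 10_000, 1.0
    x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps))   # Brownian trajectory

    def time_averaged_msd(traj, lag):
        """Time-averaged MSD at a given lag (in samples): mean squared increment along the path."""
        disp = traj[lag:] - traj[:-lag]
        return np.mean(disp ** 2)

    for lag in (10, 30, 100, 300, 1000):
        msd = time_averaged_msd(x, lag)
        # For ordinary Brownian motion the time-averaged MSD grows as 2 * D * (lag * dt).
        print(f"lag = {lag * dt:6.2f}: TA-MSD = {msd:.4f}, 2*D*lag*dt = {2.0 * D * lag * dt:.4f}")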

  4. Superadditive correlation

    International Nuclear Information System (INIS)

    Giraud, B.G.; Heumann, J.M.; Lapedes, A.S.

    1999-01-01

    The fact that correlation does not imply causation is well known. Correlation between variables at two sites does not imply that the two sites directly interact, because, e.g., correlation between distant sites may be induced by chaining of correlation between a set of intervening, directly interacting sites. Such 'noncausal correlation' is well understood in statistical physics: an example is long-range order in spin systems, where spins which have only short-range direct interactions, e.g., the Ising model, display correlation at a distance. It is less well recognized that such long-range 'noncausal' correlations can in fact be stronger than the magnitude of any causal correlation induced by direct interactions. We call this phenomenon superadditive correlation (SAC). We demonstrate this counterintuitive phenomenon by explicit examples in (i) a model spin system and (ii) a model continuous variable system, where both models are such that two variables have multiple intervening pathways of indirect interaction. We apply the technique known as decimation to explain SAC as an additive, constructive interference phenomenon between the multiple pathways of indirect interaction. We also explain the effect using a definition of the collective mode describing the intervening spin variables. Finally, we show that the SAC effect is mirrored in information theory, and is true for mutual information measures in addition to correlation measures. Generic complex systems typically exhibit multiple pathways of indirect interaction, making SAC a potentially widespread phenomenon. This affects, e.g., attempts to deduce interactions by examination of correlations, as well as, e.g., hierarchical approximation methods for multivariate probability distributions, which introduce parameters based on successive orders of correlation. copyright 1999 The American Physical Society

  5. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those of the MODIS AOD (correlation coefficient 0.82, root-mean-square error 0.19, and mean bias 0.059). In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of the MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is close to the accuracy in regions where both MODIS and SeaWiFS have valid observations.

  6. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water

  7. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y. [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France); Banerjee, S. [University of Louisville, Louisville, KY (United States); Ben-Haim, E. [Universite Paris Diderot, CNRS/IN2P3, LPNHE, Universite Pierre et Marie Curie, Paris (France); Bernlochner, F.; Dingfelder, J.; Duell, S. [University of Bonn, Bonn (Germany); Bozek, A. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Bozzi, C. [INFN, Sezione di Ferrara, Ferrara (Italy); Chrzaszcz, M. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Gersabeck, M. [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Gershon, T. [University of Warwick, Department of Physics, Coventry (United Kingdom); Gerstel, D.; Serrano, J. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Goldenzweig, P. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Harr, R. [Wayne State University, Detroit, MI (United States); Hayasaka, K. [Niigata University, Niigata (Japan); Hayashii, H. [Nara Women' s University, Nara (Japan); Kenzie, M. [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); Kuhr, T. [Ludwig-Maximilians-University, Munich (Germany); Leroy, O. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Lusiani, A. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Lyu, X.R. [University of Chinese Academy of Sciences, Beijing (China); Miyabayashi, K. [Niigata University, Niigata (Japan); Naik, P. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Nanut, T. [J. Stefan Institute, Ljubljana (Slovenia); Oyanguren Campos, A. [Centro Mixto Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Patel, M. [Imperial College London, London (United Kingdom); Pedrini, D. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Petric, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rama, M. [INFN, Sezione di Pisa, Pisa (Italy); Roney, M. [University of Victoria, Victoria, BC (Canada); Rotondo, M. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Schneider, O. [Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne (Switzerland); Schwanda, C. [Institute of High Energy Physics, Vienna (Austria); Schwartz, A.J. [University of Cincinnati, Cincinnati, OH (United States); Shwartz, B. [Budker Institute of Nuclear Physics (SB RAS), Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Tesarek, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Tonelli, D. [INFN, Sezione di Pisa, Pisa (Italy); Trabelsi, K. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama (Japan); Urquijo, P. [School of Physics, University of Melbourne, Melbourne, VIC (Australia); Van Kooten, R. [Indiana University, Bloomington, IN (United States); Yelton, J. [University of Florida, Gainesville, FL (US); Zupanc, A. [J. Stefan Institute, Ljubljana (SI); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (SI); Collaboration: Heavy Flavor Averaging Group (HFLAV)

    2017-12-15

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo-Kobayashi-Maskawa matrix elements. (orig.)

  8. Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  9. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    The theorem like Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only on matrices of rank one. Examples are given.

  10. Time averaging procedure for calculating the mass and energy transfer rates in adiabatic two phase flow

    International Nuclear Information System (INIS)

    Boccaccini, L.V.

    1986-07-01

    To take advantage of the semi-implicit computer models used to solve the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases, the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time averaging procedure is developed with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.) [de

  11. Forecasting Kp from solar wind data: input parameter study using 3-hour averages and 3-hour range values

    Science.gov (United States)

    Wintoft, Peter; Wik, Magnus; Matzka, Jürgen; Shprits, Yuri

    2017-11-01

    We have developed neural network models that predict Kp from upstream solar wind data. We study the importance of various input parameters, starting with the magnetic component Bz, particle density n, and velocity V, and then adding the total field B and the By component. As we also notice a seasonal and UT variation in average Kp, we include functions of day-of-year and UT. Finally, as Kp is a global representation of the maximum range of geomagnetic variation over 3-hour UT intervals, we conclude that sudden changes in the solar wind can have a big effect on Kp, even though it is a 3-hour value. Therefore, 3-hour solar wind averages will not always appropriately represent the solar wind conditions, and we introduce 3-hour maxima and minima values to address this problem to some degree. We find that introducing the total field B and the 3-hour maxima and minima, derived from 1-minute solar wind data, has a great influence on the performance. Due to the low number of samples for high Kp values, there can be considerable variation in predicted Kp for different networks with similar validation errors. We address this issue by using an ensemble of networks, from which we use the median predicted Kp. The models (ensembles of networks) provide prediction lead times in the range of 20-90 min, given by the time it takes a solar wind structure to travel from L1 to Earth. Two models are implemented that can be run with real-time data: (1) IRF-Kp-2017-h3 uses the 3-hour averages of the solar wind data, and (2) IRF-Kp-2017 uses, in addition to the averages, also the minima and maxima values. The IRF-Kp-2017 model has an RMS error of 0.55 and a linear correlation of 0.92 based on an independent test set with final Kp covering 2 years using ACE Level 2 data. The IRF-Kp-2017-h3 model has RMSE = 0.63 and correlation = 0.89. We also explore the errors when tested on another two-year period with real-time ACE data, which gives RMSE = 0.59 for IRF-Kp-2017 and RMSE = 0.73 for IRF-Kp-2017-h3. The errors as function
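
    The sketch below illustrates the ensemble-median idea described above (it is not the IRF-Kp-2017 code; the input layout, network sizes, and training data are placeholders): several small networks are trained on the same solar-wind inputs and the median of their outputs is taken as the final Kp estimate.

```python
# Minimal sketch (an assumption, not the authors' implementation) of predicting
# Kp with an ensemble of small networks and reporting the ensemble median.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical training set: columns stand for 3-h averages of Bz, n, V, B, By
# plus 3-h minima/maxima of selected quantities (layout is an assumption).
X = rng.normal(size=(1000, 7))
kp = np.clip(3.0 + 0.8 * X[:, 0] + 0.5 * X[:, 2]
             + rng.normal(scale=0.5, size=1000), 0.0, 9.0)

ensemble = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=seed).fit(X, kp) for seed in range(5)]

def predict_kp(x_new):
    """Median over the ensemble, used as the final Kp estimate."""
    preds = np.array([net.predict(x_new) for net in ensemble])
    return np.median(preds, axis=0)

print(predict_kp(X[:3]))
```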

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  13. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different transfer laws affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  14. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  15. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta)·exp[-((1/lambda)+(1/theta))t] with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
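
    A minimal plug-in sketch of the estimator described above (illustrative only; the sample values are hypothetical): with exponential time-to-failure and time-to-repair models, the maximum likelihood estimates of lambda and theta are the sample means, and A(t) follows by substitution.

```python
# Sketch (not the paper's code) of the plug-in maximum likelihood estimate of
# availability under exponential time-to-failure (mean lambda) and
# time-to-repair (mean theta) models, as stated in the abstract above.
import numpy as np

def availability_mle(x, y, t):
    """MLE of A(t) from n observed failure times x and repair times y."""
    lam = np.mean(x)    # MLE of the mean time to failure
    theta = np.mean(y)  # MLE of the mean time to repair
    steady = lam / (lam + theta)
    return steady + (theta / (lam + theta)) * np.exp(-((1/lam) + (1/theta)) * t)

# Hypothetical failure/repair cycles (hours)
rng = np.random.default_rng(1)
x = rng.exponential(scale=500.0, size=30)
y = rng.exponential(scale=24.0, size=30)
print(availability_mle(x, y, t=100.0))          # instantaneous availability at t = 100 h
print(np.mean(x) / (np.mean(x) + np.mean(y)))   # steady-state availability A(infinity)
```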

  16. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  17. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  18. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs
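
    The configurational averaging step can be illustrated with a short sketch (an illustration only, not the paper's DFT workflow; the energies and coordination numbers are hypothetical): each sampled configuration contributes to the average with a Boltzmann weight exp(-E_i/kT).

```python
# Minimal sketch of configurational Boltzmann averaging: a property is averaged
# over sampled configurations with weights exp(-E_i / kT).
import numpy as np

def boltzmann_average(energies_eV, properties, temperature_K):
    kB = 8.617333262e-5                                   # Boltzmann constant (eV/K)
    e = np.asarray(energies_eV, dtype=float)
    w = np.exp(-(e - e.min()) / (kB * temperature_K))     # shifted for numerical stability
    return np.sum(w * np.asarray(properties, dtype=float)) / np.sum(w)

# Hypothetical per-configuration energies (eV) and Cu coordination numbers
print(boltzmann_average([0.00, 0.05, 0.12], [4.0, 3.6, 3.2], temperature_K=700.0))
```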

  19. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile patterns of ¹⁴C-photosynthate along the transporting pathway in crops were monitored by pulse-labelling a mature leaf with ¹⁴CO₂. The progressive spreading of the translocation profile pattern along the sheath or stem indicates that the translocation of photosynthate along the sheath or stem proceeds with a range of velocities rather than with just a single velocity. A method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops. The weighted average velocity and the maximum velocity of photosynthate translocation along the sheath in rice and maize were then measured. (4 figs., 3 tabs.)

  1. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    Science.gov (United States)

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings drive a conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has been previously argued. PMID:27828977

  2. The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity

    Science.gov (United States)

    Shaltout, M.

    The last 22 years of total solar irradiance (TSI) observations from space by artificial satellites show that TSI is negatively correlated with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but positively correlated with the same activity from year to year (on the basis of the annual average of each). Also, the solar constant estimated from ground-station observations of beam solar radiation during the 20th century coincides with the phases of the 11-year cycles. It is known from sunspot observations (250 years) and from C14 analysis that there are other long-term cycles of solar activity longer than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding; the analysis of about 1300 years of Nile level observations at Cairo shows periodicities in the Nile flooding similar to those of solar activity. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on climatic changes in the East African tropics. The civilization of the ancient Egyptians was highly correlated with the Nile flooding, since the river Nile was, and still is, the source of life in the Valley and Delta within a highly arid desert area. The study depends on long-term historical data for Carbon 14 (more than five thousand years) and a chronological scan of all the elements of the ancient Egyptian civilization from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, one of the brilliant periods, corresponds to maximum solar activity, whereas the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity. The decline

  3. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter through pulse-width modulation (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
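
    A minimal sketch of the maximum-current-searching idea (an assumption for illustration, not the authors' controller; the hardware interfaces are replaced by a toy current model): the duty factor is perturbed and the perturbation direction is kept whenever the measured SPE-side current increases.

```python
# Hill-climbing sketch of maximum-current tracking: perturb the DC-DC duty
# factor and keep the direction that increases the measured SPE-side current.
def mppt_max_current(read_current, set_duty, d0=0.5, step=0.01, iters=200):
    d = d0
    set_duty(d)
    i_prev = read_current()
    direction = 1
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)
        set_duty(d)
        i_now = read_current()
        if i_now < i_prev:          # current dropped: reverse the perturbation direction
            direction = -direction
        i_prev = i_now
    return d

# Toy stand-in for the hardware: SPE-side current peaks at a duty factor of 0.62.
state = {"d": 0.0}
set_duty = lambda d: state.update(d=d)
read_current = lambda: 10.0 - 40.0 * (state["d"] - 0.62) ** 2
print(mppt_max_current(read_current, set_duty))   # converges near 0.62
```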

  4. An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2015-01-01

    Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation to calculate the MMP for both thin oil and heavy oil and is expressed as a function of reservoir temperature, the C7+ molecular weight of the crude oil, and the mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of the crude oil. Compared against the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, using another nine groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of CO2-oil MMP experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%, respectively.
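
    The two error metrics quoted above can be computed with a short helper (an assumed illustration, not code from the paper; the sample MMP values are hypothetical):

```python
# Percentage average (%AARE) and maximum (%MARE) absolute relative errors.
import numpy as np

def aare_mare(mmp_measured, mmp_predicted):
    rel = np.abs((np.asarray(mmp_predicted, dtype=float)
                  - np.asarray(mmp_measured, dtype=float))
                 / np.asarray(mmp_measured, dtype=float)) * 100.0
    return rel.mean(), rel.max()

# Hypothetical measured vs. predicted MMP values (MPa)
print(aare_mare([20.1, 15.3, 30.6], [21.0, 14.8, 33.1]))
```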

  5. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Factors determining the average body size of geographically separated Arctodiaptomus salinus (Daday, 1885) populations

    OpenAIRE

    Anufriieva, Elena V.; Shadrin, Nickolai V.

    2014-01-01

    Arctodiaptomus salinus inhabits water bodies across Eurasia and North Africa. Based on our own data and that from the literature, we analyzed the influences of several factors on the intra- and inter-population variability of this species. A strong negative linear correlation between temperature and average body size in the Crimean and African populations was found, in which the parameters might be influenced by salinity. Meanwhile, a significant negative correlation between female body size a...

  7. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  8. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  9. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are limited to two-class classification, which cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternative optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the proposed algorithm achieves acceptable performance for hyperspectral data clustering.

  10. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  11. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3

  12. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.

  13. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Walker, William C. [ORNL

    2018-02-01

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
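
    A minimal numerical sketch of the radial averaging described above (an illustration under assumed geometry, not the report's derivation): the direction-dependent effective impact area is averaged over all approach headings, which yields a value below the single-direction maximum.

```python
# Sketch of averaging a direction-dependent effective impact area over all
# approach headings; the footprint model and dimensions are assumptions.
import numpy as np

def radially_averaged_area(effective_area, n=3600):
    """Average A_eff(theta) over headings theta in [0, 2*pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean([effective_area(t) for t in theta])

# Hypothetical rectangular building with an assumed aircraft wingspan: the
# effective area varies with heading, so the radial average is below the
# single-direction maximum.
length, width = 60.0, 20.0   # assumed building dimensions (m)
wingspan = 30.0              # assumed aircraft wingspan (m)
area = lambda t: (length * abs(np.cos(t)) + width * abs(np.sin(t))) * wingspan
print(radially_averaged_area(area))
```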

  14. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  15. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  16. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  17. Quantum games with correlated noise

    International Nuclear Information System (INIS)

    Nawaz, Ahmad; Toor, A H

    2006-01-01

    We analyse quantum games with correlated noise through a generalized quantization scheme. Four different combinations on the basis of entanglement of initial quantum state and the measurement basis are analysed. It is shown that the quantum player only enjoys an advantage over the classical player when both the initial quantum state and the measurement basis are in entangled form. Furthermore, it is shown that for maximum correlation the effects of decoherence diminish and it behaves as a noiseless game

  18. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  19. Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations

    Science.gov (United States)

    2017-08-21

    number of neurons. Time is discretized and we assume any neuron can spike no more than once in a time bin. We have ν ≤ µ because ν is the probability of a...

  20. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (ie, when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  1. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  2. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  3. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  4. Linear intra-bone geometry dependencies of the radius: Radius length determination by maximum distal width

    International Nuclear Information System (INIS)

    Baumbach, S.F.; Krusche-Mandl, I.; Huf, W.; Mall, G.; Fialka, C.

    2012-01-01

    Purpose: The aim of the study was to investigate possible linear intra-bone geometry dependencies by determining the relation between the maximum radius length and maximum distal width in two independent populations and test for possible gender or age effects. A strong correlation can help develop more representative fracture models and osteosynthetic devices as well as aid gender and height estimation in anthropologic/forensic cases. Methods: First, maximum radius length and distal width of 100 consecutive patients, aged 20–70 years, were digitally measured on standard lower arm radiographs by two independent investigators. Second, the same measurements were performed ex vivo on a second cohort, 135 isolated, formalin-fixed radii. Standard descriptive statistics as well as correlations were calculated and possible gender and age influences tested for both populations separately. Results: The radiographic dataset resulted in a correlation of radius length and width of r = 0.753 (adj. R² = 0.563, p 2 = 0.592) and side no influence on the correlation. Radius length–width correlation for the isolated radii was r = 0.621 (adj. R² = 0.381, p 2 = 0.598). Conclusion: A relatively strong radius length–distal width correlation was found in two different populations, indicating that linear body proportions might not only apply to body height and axial length measurements of long bones but also to proportional dependency of bone shapes in general.

  5. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  6. ship between 18-month mating mass and average lifetime repro

    African Journals Online (AJOL)

    1976; Elliol, Rae & Wickham, 1979; Napier, et al., 1980). Although being in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to be equal to the highest comparable estimates in the ...

  7. Reformers, Batting Averages, and Malpractice: The Case for Caution in Value-Added Use

    Science.gov (United States)

    Gleason, Daniel

    2014-01-01

    The essay considers two analogies that help to reveal the limitations of value-added modeling: the first, a comparison with batting averages, shows that the model's reliability is quite limited even though year-to-year correlation figures may seem impressive; the second, a comparison between medical malpractice and so-called educational…

  8. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  9. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  10. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  11. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  13. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  14. Spatial correlations in compressible granular flows

    NARCIS (Netherlands)

    van Noije, T.P.C.; Ernst, M.H.; Brito, R.

    The clustering instability in freely evolving granular fluids manifests itself in the density-density correlation function and structure factor. These functions are calculated from fluctuating hydrodynamics. As time increases, the structure factor of density fluctuations develops a maximum, which

  15. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  16. A preliminary study to find out maximum occlusal bite force in Indian individuals

    DEFF Research Database (Denmark)

    Jain, Veena; Mathur, Vijay Prakash; Pillai, Rajath

    2014-01-01

    PURPOSE: This preliminary hospital based study was designed to measure the mean maximum bite force (MMBF) in healthy Indian individuals. An attempt was made to correlate MMBF with body mass index (BMI) and some of the anthropometric features. METHODOLOGY: A total of 358 healthy subjects in the ag...

  17. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode and a good correlation coefficient of 0.98 in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  18. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2D imaging mode, whereas a good correlation (correlation coefficient of 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34 % higher than for 2D imaging mode for patients examined with the same CBT.

  19. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  20. An Average Daily Number of Steps Negatively Correlates with an Average Glycemic Value in Type I Diabetic Patients: Comparison between CGM and Pedometer Records

    Czech Academy of Sciences Publication Activity Database

    Brož, J.; Holubová, A.; Mužík, J.; Oulická, M.; Mužný, M.; Poláček, M.; Fiala, D.; Arsand, E.; Brabec, Marek; Kvapil, M.

    2016-01-01

    Vol. 18, Suppl. 1 (2016), A70-A70, ISSN 1520-9156. [ATTD 2016. International Conference on Advanced Technologies & Treatments for Diabetes /9./. 03.02.2016-06.02.2016, Milan] Institutional support: RVO:67985807. Subject RIV: BB - Applied Statistics, Operational Research

  1. Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer

    Science.gov (United States)

    Lee, Jae Nyung

    2008-10-01

    Statistically significant climate responses to the solar variability are found in the Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with the ModelE version of the chemistry-coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low-frequency, large-scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and their dynamical significance in the stratosphere and troposphere in the GISS ModelE are shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from the NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and observation suggests that the summer stratosphere is more "summer-like" when the solar activity is near a maximum. This means that the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented in the model and in the observation, both of which show lower/higher NAM index in solar maximum/minimum conditions. The summer NAM in the troposphere obtained from NCEP/NCAR reanalysis has a dipolar zonal structure with maximum

  2. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  3. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  4. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were assessed for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (Intraclass Correlation Coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical teachers, drawing on the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  5. Feedback Limits to Maximum Seed Masses of Black Holes

    International Nuclear Information System (INIS)

    Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea

    2017-01-01

    The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  6. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  7. Comparison of helical, maximum intensity projection (MIP), and averaged intensity (AI) 4D CT imaging for stereotactic body radiation therapy (SBRT) planning in lung cancer

    International Nuclear Information System (INIS)

    Bradley, Jeffrey D.; Nofal, Ahmed N.; El Naqa, Issam M.; Lu, Wei; Liu, Jubei; Hubenschmidt, James; Low, Daniel A.; Drzymala, Robert E.; Khullar, Divya

    2006-01-01

    Background and Purpose: To compare helical, MIP and AI 4D CT imaging for the purpose of determining the best CT-based volume definition method for encompassing the mobile gross tumor volume (mGTV) within the planning target volume (PTV) for stereotactic body radiation therapy (SBRT) in stage I lung cancer. Materials and methods: Twenty patients with medically inoperable peripheral stage I lung cancer were planned for SBRT. Free-breathing helical and 4D image datasets were obtained for each patient. Two composite images, the MIP and AI, were automatically generated from the 4D image datasets. The mGTV contours were delineated for the MIP, AI and helical image datasets for each patient. The volume for each was calculated and compared using analysis of variance and the Wilcoxon rank test. A spatial analysis for comparing center of mass (COM) (i.e. isocenter) coordinates for each imaging method was also performed using multivariate analysis of variance. Results: The MIP-defined mGTVs were significantly larger than both the helical- (p < 0.001) and AI-defined mGTVs (p = 0.012). A comparison of COM coordinates demonstrated no significant spatial difference in the x-, y-, and z-coordinates for each tumor as determined by helical, MIP, or AI imaging methods. Conclusions: In order to incorporate the extent of tumor motion from breathing during SBRT, MIP is superior to either helical or AI images for defining the mGTV. The spatial isocenter coordinates for each tumor were not altered significantly by the imaging methods

  8. Surface temperature evolution and the location of maximum and average surface temperature of a lithium-ion pouch cell under variable load profiles

    DEFF Research Database (Denmark)

    Goutam, Shovon; Timmermans, Jean-Marc; Omar, Noshin

    2014-01-01

    This experimental work attempts to determine the surface temperature evolution of large (20 Ah-rated capacity) commercial Lithium-Ion pouch cells for the application of rechargeable energy storage of plug in hybrid electric vehicles and electric vehicles. The cathode of the cells is nickel...

  9. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ɛ) and the correlation dimension (d) of the chaotic attractor: T ∼ ɛ^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
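    As a concrete (though schematic) illustration of measuring orbit lengths in finite precision, the Python sketch below rounds each iterate to a grid of spacing ɛ and counts the steps until a state repeats, with and without random switching between two maps. The logistic and tent maps, the seed, and the grid spacing are placeholder assumptions rather than the Robust Chaos family introduced in the paper, and with switching the measured quantity is a recurrence length rather than a strict period.

      # Recurrence length of rounded chaotic iterates, with and without
      # random switching between two placeholder maps.
      import random

      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      def tent(x, mu=1.9999):
          return mu * min(x, 1.0 - x)

      def recurrence_length(maps, x0=0.1234, eps=1e-4, max_iter=200_000, switch=False):
          """Iterate with states rounded to a grid of spacing eps; return the
          number of steps in the loop that is eventually revisited."""
          seen = {}
          x = round(x0 / eps) * eps
          for n in range(max_iter):
              if x in seen:
                  return n - seen[x]
              seen[x] = n
              f = random.choice(maps) if switch else maps[0]
              x = round(f(x) / eps) * eps
          return max_iter  # no repeat found within the iteration budget

      random.seed(0)
      print("single map:", recurrence_length([logistic]))
      print("switched  :", recurrence_length([logistic, tent], switch=True))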

  10. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality

  11. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
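    As a toy illustration of the two standard hourly-value types compared above, the following Python sketch builds hourly "spot" values and simple 1-h "boxcar" averages from synthetic 1-min samples; the sinusoidal test signal stands in for real observatory data.

      # Hourly spot values versus 1-h boxcar averages from 1-min samples.
      import math

      minutes = range(24 * 60)  # one synthetic day of 1-min data
      signal = [50.0 * math.sin(2 * math.pi * m / 360.0) for m in minutes]  # 6-h period

      spot = [signal[h * 60] for h in range(24)]                             # value on the hour
      boxcar = [sum(signal[h * 60:(h + 1) * 60]) / 60.0 for h in range(24)]  # simple 1-h mean

      for h in (0, 6, 12):
          print(f"hour {h:2d}  spot={spot[h]:7.2f}  boxcar={boxcar[h]:7.2f}")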

  12. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  13. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  14. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  15. Mapping Comparison and Meteorological Correlation Analysis of the Air Quality Index in Mid-Eastern China

    Directory of Open Access Journals (Sweden)

    Zhichen Yu

    2017-02-01

    With the continuous progress of human production and life, air quality has become the focus of attention. In this paper, Beijing, Tianjin, Hebei, Shanxi, Shandong and Henan provinces were taken as the study area, where there are 58 air quality monitoring stations from which daily and monthly data are obtained. Firstly, the temporal characteristics of the air quality index (AQI) are explored. Then, the spatial distribution of the AQI is mapped by the inverse distance weighted (IDW) method, the ordinary kriging (OK) method and the Bayesian maximum entropy (BME) method. Additionally, cross-validation is utilized to evaluate the mapping results of these methods with two indexes: mean absolute error and root mean square interpolation error. Furthermore, the correlation analysis of meteorological factors, including precipitation anomaly percentage, precipitation, mean wind speed, average temperature, average water vapor pressure and average relative humidity, potentially affecting the AQI was carried out on both daily and monthly scales. In the study area and period, the AQI shows a clear periodicity, although overall it has a downward trend. The peak of the AQI appeared in November, December and January. BME interpolation has a higher accuracy than OK, and IDW has the maximum error. Overall, the AQI in winter (November) and spring (February) is much worse than in summer (May) and autumn (August). Additionally, the air quality has improved during the study period. The most polluted areas are concentrated in Beijing, the southern part of Tianjin, the central-southern part of Hebei, the central-northern part of Henan and the western part of Shandong. The average wind speed and average relative humidity show a real correlation with the AQI. The effect of meteorological factors such as wind, precipitation and humidity on the AQI is presumed to have a temporal lag of varying extent. The AQI of cities with poor air quality will fluctuate more than that of others when weather
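    To make the interpolation and cross-validation step concrete, the Python sketch below applies inverse distance weighting to invented station values and scores it by leave-one-out mean absolute error and root mean square error; the station coordinates, AQI values, and the power exponent of 2 are assumptions for illustration, and the OK and BME methods are not reproduced here.

      # IDW interpolation with leave-one-out cross-validation (MAE and RMSE).
      import math, random

      random.seed(1)
      stations = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(40, 160))
                  for _ in range(30)]  # invented (x, y, AQI) triples

      def idw(x, y, data, power=2.0):
          num = den = 0.0
          for sx, sy, sv in data:
              d = math.hypot(x - sx, y - sy)
              if d < 1e-12:
                  return sv  # exact hit on a station
              w = 1.0 / d ** power
              num += w * sv
              den += w
          return num / den

      errors = []
      for i, (sx, sy, sv) in enumerate(stations):
          rest = stations[:i] + stations[i + 1:]     # leave this station out
          errors.append(idw(sx, sy, rest) - sv)

      mae = sum(abs(e) for e in errors) / len(errors)
      rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
      print(f"IDW leave-one-out  MAE={mae:.2f}  RMSE={rmse:.2f}")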

  16. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites; two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, since it was the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4

  17. Average and local structure of selected metal deuterides

    International Nuclear Information System (INIS)

    Soerby, Magnus H.

    2004-01-01

    elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites; two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, since it was the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4 at ambient and low

  18. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  19. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  20. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  1. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  2. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  3. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  4. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  5. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
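    A quick arithmetic check of the stated value (not part of the original record): 620160/8! works out to roughly 15.38 comparisons per input permutation, slightly above the information-theoretic lower bound log2(8!).

      from math import factorial, log2

      perms = factorial(8)            # 40320 permutations of 8 elements
      print(620160 / perms)           # minimum average depth ≈ 15.3810
      print(log2(perms))              # information-theoretic bound ≈ 15.2995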

  6. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  7. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe

  8. A New MPPT Control for Photovoltaic Panels by Instantaneous Maximum Power Point Tracking

    Science.gov (United States)

    Tokushima, Daiki; Uchida, Masato; Kanbei, Satoshi; Ishikawa, Hiroki; Naitoh, Haruo

    This paper presents a new maximum power point tracking control for photovoltaic (PV) panels. The control can be categorized as a Perturb and Observe (P&O) method. It utilizes instantaneous voltage ripples at the PV panel output terminals, caused by the switching of a chopper connected to the panel, in order to identify the direction toward the maximum power point (MPP). Tracking of the MPP is achieved by feedback control of the average terminal voltage of the panel. Appropriate use of the instantaneous and the average values of the PV voltage for these separate purposes enables both a quick transient response and good convergence with almost no ripple. The tracking capability is verified experimentally with a 2.8 W PV panel under a controlled experimental setup. A numerical comparison with a conventional P&O confirms that the proposed control extracts much more power from the PV panel.

  9. Application of the maximum entropy method to dynamical fermion simulations

    Science.gov (United States)

    Clowser, Jonathan

    This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric-phase resonances are presented. UKQCD N_f = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found from the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as from the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, which is in agreement with the experimental value of M_a0 = 985 MeV.

  10. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices and with a good performance in all cases. (paper)

  11. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

    For most schools of nursing, grade point average is the most important criterion for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average, and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with both performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and grade point average of first-year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. Total emotional intelligence scores (r = .24) and one subscore, experiential emotional intelligence (r = .25), correlated significantly (p < .05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower-than-average-range results on several emotional intelligence scores. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional

  12. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
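    For orientation, the classical Hershfield estimate that the revised method builds on can be sketched as below: the frequency factor is the maximum deviation from the mean of the annual-maximum series (with the largest value withheld), scaled by the corresponding standard deviation. The 24-h rainfall series is invented for illustration, and the revised procedure of Lan et al. (2017) used in the study differs in its details.

      # Classical Hershfield-type PMP sketch with an invented annual-maximum series.
      import statistics as st

      annual_max_24h = [112, 95, 130, 88, 160, 105, 142, 99, 123, 178, 90, 150]  # mm

      x_max = max(annual_max_24h)
      rest = [x for x in annual_max_24h if x != x_max]   # series without the largest value

      # Frequency factor: maximum deviation from the trimmed mean, scaled by the trimmed std.
      k_m = (x_max - st.mean(rest)) / st.stdev(rest)

      pmp = st.mean(annual_max_24h) + k_m * st.stdev(annual_max_24h)
      print(f"K_m = {k_m:.2f}, 24-h PMP estimate ≈ {pmp:.0f} mm")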

  13. Maximum Power Point Tracking (MPPT Pada Sistem Pembangkit Listrik Tenaga Angin Menggunakan Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    In this paper, the implementation of the Maximum Power Point Tracking (MPPT) technique is developed using a buck-boost converter. A perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging the battery. The model used in this study is a Variable Speed Wind Turbine (VSWT) with a Permanent Magnet Synchronous Generator (PMSG). Analysis, design, and modeling of the wind energy conversion system have been done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency that can be achieved by the proposed system in transferring the maximum power into the battery is 90.56%.
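    The perturb-and-observe logic itself is simple: nudge the converter operating point, keep the direction of the nudge if output power rose, and reverse it if power fell. The Python sketch below illustrates this loop against a made-up power-versus-duty-cycle curve; it is not the paper's VSWT/PMSG/buck-boost model, and the peak location and step size are assumptions.

      # Minimal perturb-and-observe (P&O) loop on a placeholder power curve.
      def output_power(duty):
          # Placeholder source: power peaks at duty = 0.55 with a 100 W maximum.
          return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

      duty, step, direction = 0.30, 0.01, +1
      prev_power = output_power(duty)

      for _ in range(60):
          duty += direction * step
          power = output_power(duty)
          if power < prev_power:      # power dropped: reverse the perturbation
              direction = -direction
          prev_power = power

      print(f"duty ≈ {duty:.2f}, power ≈ {prev_power:.1f} W")  # settles near the peak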

  14. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A 3D surface scanner Fiore and its software were used to acquire the 3D scans of the faces while 3D Rugle3 and locally-developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups; European and Japanese and from children with three previous genetic disorders; Williams syndrome, achondroplasia and Sotos syndrome as well as the normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in the spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
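    The averaging step described above reduces, in the simplest reading, to an element-wise mean of registered depth maps; the Python sketch below illustrates this with random stand-in "faces" on a small common grid (14 of them, echoing the study's minimum), not real scan data.

      # Element-wise averaging of depth (z) maps sampled on a common (x, y) grid.
      import random

      random.seed(0)
      ROWS, COLS, N_FACES = 4, 5, 14

      faces = [[[random.gauss(50.0, 3.0) for _ in range(COLS)] for _ in range(ROWS)]
               for _ in range(N_FACES)]  # stand-in depth values in mm

      average_face = [[sum(face[r][c] for face in faces) / N_FACES
                       for c in range(COLS)] for r in range(ROWS)]

      print([f"{z:.1f}" for z in average_face[0]])  # first row of the averaged depth map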

  15. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
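    A minimal numerical sketch of the contrast between the composite average and a minimum-mean-squared-error linear estimate is given below; the zero-mean signal, exponential covariance, noise level, window length, and sampling times are all assumptions chosen for illustration, not values from the chlorophyll analysis.

      # Composite average vs. minimum-MSE linear estimate of a time average
      # from irregularly spaced, noisy observations of a correlated signal.
      import numpy as np

      rng = np.random.default_rng(0)
      T, lam, sig_s, sig_e = 30.0, 5.0, 1.0, 0.5    # window, corr. scale, signal/noise std
      t_obs = np.sort(rng.uniform(0.0, T, size=6))  # irregular observation times

      def cov(a, b):
          """Exponential signal covariance between time arrays a and b."""
          return sig_s**2 * np.exp(-np.abs(np.subtract.outer(a, b)) / lam)

      C_yy = cov(t_obs, t_obs) + sig_e**2 * np.eye(len(t_obs))  # obs covariance (signal + noise)

      grid = np.linspace(0.0, T, 601)               # fine grid approximating the window average
      c_ybar = cov(t_obs, grid).mean(axis=1)        # Cov(observation, window average)
      var_sbar = cov(grid, grid).mean()             # Var(window average)

      w_opt = np.linalg.solve(C_yy, c_ybar)            # weights of the optimal estimate
      w_comp = np.full(len(t_obs), 1.0 / len(t_obs))   # composite-average weights

      def expected_mse(w):
          return w @ C_yy @ w - 2.0 * w @ c_ybar + var_sbar

      print("composite-average MSE:", round(float(expected_mse(w_comp)), 4))
      print("optimal-estimate  MSE:", round(float(expected_mse(w_opt)), 4))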

  16. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  17. Correlation spectrometer

    Science.gov (United States)

    Sinclair, Michael B [Albuquerque, NM; Pfeifer, Kent B [Los Lunas, NM; Flemming, Jeb H [Albuquerque, NM; Jones, Gary D [Tijeras, NM; Tigges, Chris P [Albuquerque, NM

    2010-04-13

    A correlation spectrometer can detect a large number of gaseous compounds, or chemical species, with a species-specific mask wheel. In this mode, the spectrometer is optimized for the direct measurement of individual target compounds. Additionally, the spectrometer can measure the transmission spectrum from a given sample of gas. In this mode, infrared light is passed through a gas sample and the infrared transmission signature of the gasses present is recorded and measured using Hadamard encoding techniques. The spectrometer can detect the transmission or emission spectra in any system where multiple species are present in a generally known volume.

  18. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

    Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = 270 °C and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  19. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  20. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is