WorldWideScience

Sample records for maximum sampling rate

  1. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
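
    The scale of this worst-case inflation is easy to reproduce numerically. Below is a minimal Monte Carlo sketch, not the authors' computation: it assumes a one-sided one-sample z-test at level 0.025, a pre-planned stage-1 size of 50, and an adversary who, after seeing the unblinded interim z-value, picks the second-stage size from an arbitrary grid so as to maximize the conditional type 1 error; the final analysis naively pools both stages as if no adaptation had occurred.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        c = norm.ppf(0.975)                  # one-sided critical value, alpha = 0.025
        n1 = 50                              # pre-planned stage-1 sample size
        n2_grid = np.array([1.0, 10, 25, 50, 100, 200, 400])  # allowed stage-2 sizes

        n_sim = 200_000
        z1 = rng.standard_normal(n_sim)      # interim z-statistics under H0

        # Conditional type 1 error of the naive pooled z-test given z1, for each
        # candidate n2; the worst-case rule picks the maximizing n2 every time.
        cond = 1 - norm.cdf((c * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1[:, None])
                            / np.sqrt(n2_grid))
        n2 = n2_grid[np.argmax(cond, axis=1)]

        z2 = rng.standard_normal(n_sim)      # stage-2 z-statistics under H0
        z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
        print("empirical type 1 error:", np.mean(z_final > c))  # well above 0.025

    Restricting the grid, for example forbidding decreases below the pre-planned size, shrinks the attainable inflation, mirroring the constrained scenarios discussed in the abstract.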

  2. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  3. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that are sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example fixing the sample size of the control group, leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  5. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  6. Releasable activity and maximum permissible leakage rate within a transport cask of Tehran Research Reactor fuel samples

    Directory of Open Access Journals (Sweden)

    Rezaeian Mahdi

    2015-01-01

    Full Text Available Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.

  7. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    Science.gov (United States)

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in exercise tests of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by Sheffield's table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied with the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, against the values estimated by the two methods: the 220-age formula versus Sheffield's table. The maximum heart rate was similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason Sheffield's table has been excluded from our clinical practice.

  8. Exercise-induced maximum metabolic rate scaled to body mass by ...

    African Journals Online (AJOL)

    Exercise-induced maximum metabolic rate scaled to body mass by the fractal ... rate scaling is that exercise-induced maximum aerobic metabolic rate (MMR) is ... muscle stress limitation, and maximized oxygen delivery and metabolic rates.

  9. 5 CFR 531.221 - Maximum payable rate rule.

    Science.gov (United States)

    2010-01-01

    ... before the reassignment. (ii) If the rate resulting from the geographic conversion under paragraph (c)(2... previous rate (i.e., the former special rate after the geographic conversion) with the rates on the current... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum payable rate rule. 531.221...

  10. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  11. 44 CFR 208.12 - Maximum Pay Rate Table.

    Science.gov (United States)

    2010-10-01

    ...) Physicians. DHS uses the latest Special Salary Rate Table Number 0290 for Medical Officers (Clinical... Personnel, in which case the Maximum Pay Rate Table would not apply. (3) Compensation for Sponsoring Agency... organizations, e.g., HMOs or medical or engineering professional associations, under the revised definition of...

  12. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  13. Dinosaur Metabolism and the Allometry of Maximum Growth Rate.

    Science.gov (United States)

    Myhrvold, Nathan P

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings drive a conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has been previously argued.
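
    The dependence on the choice of independent variable is a generic property of least squares, independent of any particular growth dataset: regressing y on x and inverting a regression of x on y give different slopes whenever the correlation is imperfect. A self-contained illustration with synthetic log mass and log growth rate values (slope, range and noise level are arbitrary assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        log_mass = rng.uniform(0, 6, 200)                     # synthetic log10 body mass
        log_rate = 0.75 * log_mass + rng.normal(0, 0.5, 200)  # noisy power law, slope 0.75

        b_yx = np.polyfit(log_mass, log_rate, 1)[0]  # growth rate regressed on mass
        b_xy = np.polyfit(log_rate, log_mass, 1)[0]  # mass regressed on growth rate

        print(f"y-on-x slope:          {b_yx:.3f}")      # close to the true 0.75
        print(f"inverted x-on-y slope: {1 / b_xy:.3f}")  # inflated by a factor 1/r^2

    Because the inverted fit inflates the slope by 1/r^2, the variable chosen as independent materially changes the estimated allometric exponent, which is the shear effect the reanalysis describes.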

  14. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    Science.gov (United States)

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings drive a conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has been previously argued. PMID:27828977

  15. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
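
    As a rough sketch of the reconstruction step (the paper's exact software procedure is not reproduced here), the fragment below recovers a signal that is sparse in the Fourier basis from a few randomly timed samples on a fine time grid, using orthogonal matching pursuit; the grid length, sparsity and number of kept samples are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        N, K, M = 512, 4, 64      # fine-grid length, number of tones, samples kept

        k_true = rng.choice(N, K, replace=False)
        c_true = rng.standard_normal(K) + 1j * rng.standard_normal(K)

        n = np.arange(N)
        A_full = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # Fourier dictionary
        x = A_full[:, k_true] @ c_true                  # signal on the fine time grid

        idx = np.sort(rng.choice(N, M, replace=False))  # random sampling instants
        y, A = x[idx], A_full[idx, :]                   # sub-Nyquist measurements

        # Orthogonal matching pursuit: greedily pick the atom most correlated with
        # the residual, then re-fit all selected atoms by least squares.
        support, r = [], y.copy()
        for _ in range(K):
            support.append(int(np.argmax(np.abs(A.conj().T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef

        x_hat = A_full[:, support] @ coef
        print("recovered tones:", sorted(support), "true:", sorted(k_true.tolist()))
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))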

  16. 47 CFR 1.1507 - Rulemaking on maximum rates for attorney fees.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Rulemaking on maximum rates for attorney fees... § 1.1507 Rulemaking on maximum rates for attorney fees. (a) If warranted by an increase in the cost of... types of proceedings), the Commission may adopt regulations providing that attorney fees may be awarded...

  17. Allometries of Maximum Growth Rate versus Body Mass at Maximum Growth Indicate That Non-Avian Dinosaurs Had Growth Rates Typical of Fast Growing Ectothermic Sauropsids

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case’s study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either interpretation.

  18. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of the two interpretations.

  19. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Directory of Open Access Journals (Sweden)

    Jan Werner

    Full Text Available We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either interpretation.

  20. 5 CFR 9901.312 - Maximum rates of base salary and adjusted salary.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Maximum rates of base salary and adjusted salary. 9901.312 Section 9901.312 Administrative Personnel DEPARTMENT OF DEFENSE HUMAN RESOURCES....312 Maximum rates of base salary and adjusted salary. (a) Subject to § 9901.105, the Secretary may...

  1. The scaling of maximum and basal metabolic rates of mammals and birds

    Science.gov (United States)

    Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.

    2006-01-01

    Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as M^(6/7), maximum heart rate as M^(-1/7), and muscular capillary density as M^(-1/7), in agreement with data.

  2. Calculation of the maximum dpa rate in the CNA-II pressure vessel

    International Nuclear Information System (INIS)

    Mascitti, J. A

    2012-01-01

    The maximum dpa rate was calculated for the reactor in the following state: fresh fuel, no xenon, a boron concentration of 15.3 ppm, critical state, its control rods in the criticality position, hot, at full power (2160 MW). It was determined that the maximum dpa rate under such conditions is 3.54(2)×10^12 s^-1, located at the positions corresponding to θ = 210° in the azimuthal direction and z = 20 cm and -60 cm respectively in the axial direction, with the calculation mesh centered at half the height of the fuel element (FE) active length. The dpa rate spectrum was determined, as well as the contributions to it from 4 energy groups: a thermal group, two epithermal groups and a fast one. The maximum dpa rate considering photo-neutron production from the (γ,n) reaction in the heavy water of the coolant and moderator was 3.93(4)×10^12 s^-1, which is 11% greater than that obtained without photo-neutrons. This significant difference between the two cases suggests that photo-neutrons should not be ignored in large heavy-water reactors such as CNA-II. The maximum dpa rate in the first mm of the reactor pressure vessel was also calculated, giving a value of 4.22(6)×10^12 s^-1. It should be added that the calculation was carried out with a complete, accurate model of the reactor, with no approximations in the spatial or energy variables. Each value has, between parentheses, a percentage relative error representing the statistical uncertainty due to the probabilistic Monte Carlo method used to estimate it. More representative values may be obtained with this method if the equilibrium burn-up distribution is used. (author)

  3. Exercise-induced maximum metabolic rate scaled to body mass by ...

    African Journals Online (AJOL)

    2016-10-27

  4. Maximum production rate optimization for sulphuric acid decomposition process in tubular plug-flow reactor

    International Nuclear Information System (INIS)

    Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui

    2016-01-01

    A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactants pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the target product SO2 and the optimal exterior wall temperature profile and reactants pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO2 in the optimal reactor increases by more than 7%. Optimization of the temperature profile has little influence on the production rate, while optimization of the reactants pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the target product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of the optimal reactor increases by 7%.

  5. Maximum discharge rate of liquid-vapor mixtures from vessels

    International Nuclear Information System (INIS)

    Moody, F.J.

    1975-09-01

    A discrepancy exists in theoretical predictions of the two-phase equilibrium discharge rate from pipes attached to vessels. Theory which predicts critical flow data in terms of pipe exit pressure and quality severely overpredicts flow rates in terms of vessel fluid properties. This study shows that the discrepancy is explained by the flow pattern. Due to decompression and flashing as fluid accelerates into the pipe entrance, the maximum discharge rate from a vessel is limited by choking of a homogeneous bubbly mixture. The mixture tends toward a slip flow pattern as it travels through the pipe, finally reaching a different choked condition at the pipe exit

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model.

  7. On the equivalence between the minimum entropy generation rate and the maximum conversion rate for a reactive system

    International Nuclear Information System (INIS)

    Bispo, Heleno; Silva, Nilton; Brito, Romildo; Manzi, João

    2013-01-01

    Highlights: • The minimum entropy generation (MEG) principle improved the reaction performance. • The equivalence between the MEG rate and the maximum conversion rate has been analyzed. • Temperature and residence time are used to establish the validity domain of MEG. • Satisfying the temperature and residence time relationship results in optimal performance. - Abstract: The analysis of the equivalence between the minimum entropy generation (MEG) rate and the maximum conversion rate for a reactive system is the main purpose of this paper. While being used as a strategy of optimization, minimum entropy production was applied to the production of propylene glycol in a Continuous Stirred-Tank Reactor (CSTR) with a view to determining the best operating conditions, and under such conditions, a high conversion rate was found. The effects of the key variables and restrictions on the validity domain of MEG were investigated, which raises issues that are included within a broad discussion. The results from simulations indicate that, from the chemical reaction standpoint, a maximum conversion rate can be considered as equivalent to MEG. Such a result can be clearly explained by examining the classical Maxwell–Boltzmann distribution, where the molecules of the reactive system under the condition of the MEG rate present a distribution of energy with reduced dispersion, resulting in better-quality collisions between molecules and a higher conversion rate.

  8. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  9. Fumigant dosages below maximum label rate control some soilborne pathogens

    Directory of Open Access Journals (Sweden)

    Shachaf Triky-Dotan

    2016-08-01

    Full Text Available The activity of commercial soil fumigants on some key soilborne pathogens was assessed in sandy loam soil under controlled conditions. Seven soil fumigants that are registered in California or are being or have been considered for registration were used in this study: dimethyl disulfide (DMDS) mixed with chloropicrin (Pic) (79% DMDS and 21% Pic), Tri-Con (50% methyl bromide and 50% Pic), Midas Gold (33% methyl iodide [MI] and 67% Pic), Midas Bronze (50% MI and 50% Pic), Midas (MI, active ingredient [a.i.] 97.8%), Pic (a.i. 99% trichloronitromethane) and Pic-Clor 60 (57% Pic and 37% 1,3-dichloropropene [1,3-D]). Dose-response models were calculated for pathogen mortality after 24 hours of exposure to fumigants. Overall, the tested fumigants achieved good efficacy with dosages below the maximum label rate against the tested pathogens. In this study, Pythium ultimum and citrus nematode were sensitive to all the fumigants and Verticillium dahliae was resistant. For most fumigants, California regulations restrict application rates to less than the maximum (federal) label rate, meaning that it is possible that the fumigants may not control major plant pathogens. This research provides information on the effectiveness of these alternatives at these lower application rates. The results from this study will help growers optimize application rates for registered fumigants (such as Pic and 1,3-D) and will help accelerate the adoption of new fumigants (such as DMDS) if they are registered in California.

  10. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
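
    The recursion that makes the conventional likelihood slow is nevertheless short to write down. The sketch below implements the classical Luria-Delbrück likelihood via the Ma-Sandri-Sarkar recursion and maximizes it with a bounded scalar search; it illustrates the conventional estimator that MLE-BD is benchmarked against, not the proposed birth-death variant, and the mutant counts are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def ld_pmf(m, n_max):
            """Luria-Delbrueck mutant-count probabilities p_0..p_{n_max} for an
            expected number of mutations m (Ma-Sandri-Sarkar recursion)."""
            p = np.zeros(n_max + 1)
            p[0] = np.exp(-m)
            for n in range(1, n_max + 1):
                k = np.arange(n)
                p[n] = (m / n) * np.sum(p[k] / ((n - k) * (n - k + 1.0)))
            return p

        def neg_loglik(m, counts):
            p = ld_pmf(m, int(counts.max()))
            return -np.sum(np.log(p[counts]))

        # Hypothetical mutant counts from parallel cultures (illustrative only).
        counts = np.array([0, 1, 0, 3, 0, 0, 7, 1, 0, 2, 0, 45, 0, 1, 2])
        res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), args=(counts,),
                              method="bounded")
        # Dividing the fitted m by the final cell count would give a per-division
        # mutation rate.
        print(f"MLE of expected mutations per culture: m = {res.x:.3f}")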

  11. 13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.

    Science.gov (United States)

    2010-01-01

    ... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.845 Maximum... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum rate of amortization on...

  12. Methodological aspects of crossover and maximum fat-oxidation rate point determination.

    Science.gov (United States)

    Michallet, A-S; Tonini, J; Regnier, J; Guinot, M; Favre-Juvin, A; Bricout, V; Halimi, S; Wuyam, B; Flore, P

    2008-11-01

    Indirect calorimetry during exercise provides two metabolic indices of substrate oxidation balance: the crossover point (COP) and maximum fat oxidation rate (LIPOXmax). We aimed to study the effects of the analytical device, protocol type and ventilatory response on variability of these indices, and the relationship with lactate and ventilation thresholds. After maximum exercise testing, 14 relatively fit subjects (aged 32+/-10 years; nine men, five women) performed three submaximum graded tests: one was based on a theoretical maximum power (tMAP) reference; and two were based on the true maximum aerobic power (MAP). Gas exchange was measured concomitantly using a Douglas bag (D) and an ergospirometer (E). All metabolic indices were interpretable only when obtained by the D reference method and MAP protocol. Bland and Altman analysis showed overestimation of both indices with E versus D. Despite no mean differences between COP and LIPOXmax whether tMAP or MAP was used, the individual data clearly showed disagreement between the two protocols. Ventilation explained 10-16% of the metabolic index variations. COP was correlated with ventilation (r=0.96, P<0.01) and the rate of increase in blood lactate (r=0.79, P<0.01), and LIPOXmax correlated with the ventilation threshold (r=0.95, P<0.01). This study shows that, in fit healthy subjects, the analytical device, reference used to build the protocol and ventilation responses affect metabolic indices. In this population, and particularly to obtain interpretable metabolic indices, we recommend a protocol based on the true MAP or one adapted to include the transition from fat to carbohydrate. The correlation between metabolic indices and lactate/ventilation thresholds suggests that shorter, classical maximum progressive exercise testing may be an alternative means of estimating these indices in relatively fit subjects. However, this needs to be confirmed in patients who have metabolic defects.
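
    For readers unfamiliar with how these indices fall out of gas-exchange data, here is a minimal sketch. It uses the widely cited Frayn (1983) stoichiometric equations with urinary nitrogen neglected, approximate caloric equivalents, and invented VO2/VCO2 values per workload; it is not the study's device- or protocol-specific procedure.

        import numpy as np

        # Illustrative gas-exchange data per graded workload (L/min), not study data.
        power = np.array([ 60,   90,  120,  150,  180,  210])   # W
        vo2   = np.array([1.20, 1.60, 2.00, 2.40, 2.80, 3.20])
        vco2  = np.array([0.96, 1.30, 1.72, 2.16, 2.66, 3.20])

        # Frayn (1983) equations, protein oxidation neglected (g/min).
        fat = 1.67 * vo2 - 1.67 * vco2
        cho = 4.55 * vco2 - 3.21 * vo2

        lipoxmax = power[np.argmax(fat)]   # workload with maximum fat oxidation

        # Crossover point: first workload where carbohydrate supplies more energy
        # than fat, using rough caloric equivalents (~4 kcal/g CHO, ~9 kcal/g fat).
        cop = power[np.argmax(4.0 * cho > 9.0 * fat)]

        print(f"LIPOXmax at ~{lipoxmax} W, crossover point at ~{cop} W")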

  13. Disentangling the effects of alternation rate and maximum run length on judgments of randomness

    Directory of Open Access Journals (Sweden)

    Sabine G. Scholl

    2011-08-01

    Full Text Available Binary sequences are characterized by various features. Two of these characteristics, alternation rate and run length, have repeatedly been shown to influence judgments of randomness. The two characteristics, however, have usually been investigated separately, without controlling for the other feature. Because the two features are correlated but not identical, it seems critical to analyze their unique impact, as well as their interaction, so as to understand more clearly what influences judgments of randomness. To this end, two experiments on the perception of binary sequences orthogonally manipulated alternation rate and maximum run length (i.e., the length of the longest run within the sequence). Results show that alternation rate consistently exerts a unique effect on judgments of randomness, but that the effect of alternation rate is contingent on the length of the longest run within the sequence. The effect of maximum run length was found to be small and less consistent. Together, these findings extend prior randomness research by integrating literature from the realms of perception, categorization, and prediction, as well as by showing the unique and joint effects of alternation rate and maximum run length on judgments of randomness.
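
    Because the two features are easy to conflate, small helpers make their definitions concrete; the function names and the example sequence below are ours, not the authors'.

        def alternation_rate(seq):
            """Fraction of adjacent symbol pairs that differ; 0.5 is 'random-like'."""
            changes = sum(a != b for a, b in zip(seq, seq[1:]))
            return changes / (len(seq) - 1)

        def max_run_length(seq):
            """Length of the longest run of identical symbols."""
            longest = run = 1
            for a, b in zip(seq, seq[1:]):
                run = run + 1 if a == b else 1
                longest = max(longest, run)
            return longest

        s = "HHTHTTTHHT"
        print(alternation_rate(s), max_run_length(s))  # 0.556 (5/9 pairs differ), 3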

  14. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used ...

  15. 19 CFR 212.07 - Rulemaking on maximum rates for attorney fees.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Rulemaking on maximum rates for attorney fees. 212.07 Section 212.07 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT General Provisions...

  16. Low reproducibility of maximum urinary flow rate determined by portable flowmetry

    NARCIS (Netherlands)

    Sonke, G. S.; Kiemeney, L. A.; Verbeek, A. L.; Kortmann, B. B.; Debruyne, F. M.; de la Rosette, J. J.

    1999-01-01

    To evaluate the reproducibility of the maximum urinary flow rate (Qmax) in men with lower urinary tract symptoms (LUTS) and to determine the number of flows needed to obtain a specified reliability in mean Qmax, 212 patients with LUTS (mean age, 62 years) referred to the University Hospital Nijmegen,

  17. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This Two-Stage System would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.

  18. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
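
    A compact sketch of the core ODBA computation: the static (gravitational) component of each axis is estimated with a running mean and subtracted, and the absolute dynamic remainders are summed across axes. The window length and the synthetic trace below are illustrative assumptions, not values from the study.

        import numpy as np

        def odba(acc, fs, window_s=2.0):
            """Overall dynamic body acceleration from an (n, 3) accelerometer trace.

            acc: raw tri-axial acceleration (g); fs: sampling frequency (Hz).
            The static component per axis is a centered running mean; ODBA is
            the sum over axes of the absolute dynamic component.
            """
            w = max(1, int(window_s * fs))
            kernel = np.ones(w) / w
            static = np.column_stack([np.convolve(acc[:, i], kernel, mode="same")
                                      for i in range(3)])
            return np.abs(acc - static).sum(axis=1)

        fs = 10.0              # Hz; should be >= 2x the maximum tail-beat frequency
        t = np.arange(0, 30, 1 / fs)
        tailbeat = 2.5         # Hz, safely below the 5 Hz Nyquist limit at fs = 10
        acc = np.column_stack([0.3 * np.sin(2 * np.pi * tailbeat * t),
                               0.1 * np.cos(2 * np.pi * tailbeat * t),
                               np.ones_like(t)])   # gravity on the z-axis
        print("mean ODBA:", odba(acc, fs).mean().round(3))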

  19. Conifers in cold environments synchronize maximum growth rate of tree-ring formation with day length.

    Science.gov (United States)

    Rossi, Sergio; Deslauriers, Annie; Anfodillo, Tommaso; Morin, Hubert; Saracino, Antonio; Motta, Renzo; Borghetti, Marco

    2006-01-01

    Intra-annual radial growth rates and durations in trees are reported to differ greatly in relation to species, site and environmental conditions. However, very similar dynamics of cambial activity and wood formation are observed in temperate and boreal zones. Here, we compared weekly xylem cell production and variation in stem circumference in the main northern hemisphere conifer species (genera Picea, Pinus, Abies and Larix) from 1996 to 2003. Dynamics of radial growth were modeled with a Gompertz function, defining the upper asymptote (A), x-axis placement (beta) and rate of change (kappa). A strong linear relationship was found between the constants beta and kappa for both types of analysis. The slope of the linear regression, which corresponds to the time at which maximum growth rate occurred, appeared to converge towards the summer solstice. The maximum growth rate occurred around the time of maximum day length, and not during the warmest period of the year as previously suggested. The achievements of photoperiod could act as a growth constraint or a limit after which the rate of tree-ring formation tends to decrease, thus allowing plants to safely complete secondary cell wall lignification before winter.
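
    The link the authors draw between beta, kappa and the timing of maximum growth can be made explicit. For the Gompertz curve y(t) = A exp(-exp(beta - kappa t)), the growth rate dy/dt peaks where exp(beta - kappa t) = 1, i.e. at t* = beta/kappa, with maximum rate A kappa/e; a linear beta-versus-kappa relationship with slope t* therefore means that all series reach their maximum growth rate at the same calendar time. A quick numerical check with arbitrary parameter values:

        import numpy as np

        A, beta, kappa = 100.0, 6.0, 0.05          # arbitrary Gompertz parameters

        def gompertz(t):
            return A * np.exp(-np.exp(beta - kappa * t))

        t = np.linspace(0.0, 250.0, 100_001)
        rate = np.gradient(gompertz(t), t)               # numerical dy/dt
        print("numerical argmax:", t[np.argmax(rate)])   # ~ beta/kappa = 120.0
        print("analytic argmax: ", beta / kappa)
        print("max rate vs A*kappa/e:", rate.max(), A * kappa / np.e)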

  20. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, less data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and the Bayesian inference which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of uncertainty parameters ...

  1. Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians

    DEFF Research Database (Denmark)

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.

    2014-01-01

    Human vibrato is mainly characterized by two parameters: vibrato extent and vibrato rate. These parameters have been found to exhibit an interaction both in physical recordings of singers' voices and in listener's preference ratings. This study was concerned with the way in which the maximum ... and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate for which larger vibrato excursions were favored. The observed interaction between maximum excursion thresholds and vibrato rate may be due to the listeners' judgments relying on cues provided by the rate of frequency changes (RFC) rather than excursion per se. Further studies are needed to evaluate ...

  2. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
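
    The core of such a likelihood contrast is compact: under a pure-birth model, the waiting time between branching events while k lineages exist is exponential with rate k*lambda, so rate-shift models can be scored by splitting the event sequence. The sketch below is a simplified single-shift, pure-birth illustration of the idea, written in Python rather than R; it is not the LASER code, and the simulated rates are arbitrary.

        import numpy as np

        def loglik(waits, ks, lam):
            # waits[i]: waiting time while ks[i] lineages exist, ~ Exp(ks[i] * lam)
            return np.sum(np.log(ks * lam) - ks * lam * waits)

        def fit_const(waits, ks):
            lam = len(waits) / np.sum(ks * waits)   # MLE of a constant rate
            return lam, loglik(waits, ks, lam)

        rng = np.random.default_rng(7)
        ks = np.arange(2, 42)                       # 2..41 lineages through time
        rates = np.where(ks < 20, 0.5, 0.1)         # a diversification-rate shift
        waits = rng.exponential(1.0 / (ks * rates))

        _, ll1 = fit_const(waits, ks)               # constant-rate model
        ll2 = max(fit_const(waits[:s], ks[:s])[1] + fit_const(waits[s:], ks[s:])[1]
                  for s in range(2, len(ks) - 1))   # best single shift point
        print("likelihood ratio statistic:", 2 * (ll2 - ll1))  # large => rates changed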

  3. Sub-symbol-rate sampling for PDM-QPSK signals in super-Nyquist WDM systems using quadrature poly-binary shaping.

    Science.gov (United States)

    Xu, Cheng; Gao, Guanjun; Chen, Sai; Zhang, Jie; Luo, Ming; Hu, Rong; Yang, Qi

    2016-11-14

    We compare the performance of sub-symbol-rate sampling for polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) signals in a super-Nyquist wavelength division multiplexing (WDM) system using quadrature duo-binary (QDB) and quadrature four-level poly-binary (4PB) shaping together with maximum likelihood sequence estimation (MLSE). PDM-16QAM is adopted in the simulation to be compared with PDM-QPSK. The numerical simulations show that, for a software-defined communication system, the level number of quadrature poly-binary modulation should be adjusted to achieve the optimal performance according to channel spacing, required OSNR and the sampling rate restrictions of the optics. In the experiment, we demonstrate 3-channel 12-Gbaud PDM-QPSK transmission with 10-GHz channel spacing and an ADC sampling rate as low as 8.4 GSa/s. By using QDB or 4PB shaping with 3-tap MLSE, the sampling rate can be reduced to the signal baud rate (1 sample per symbol) without penalty.

  4. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
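
    The co-primality condition has a simple number-theoretic reading: if the ADC takes one sample every P Nyquist periods while the symbol repeats every N Nyquist periods, the sample phases within the symbol are j*P mod N, which sweep all N grid positions exactly when gcd(P, N) = 1. A toy check with arbitrarily chosen N and P:

        from math import gcd

        N, P = 31, 10           # symbol length in Nyquist samples; undersampling factor
        assert gcd(N, P) == 1   # the condition required by the scheme

        phases = {(j * P) % N for j in range(N)}  # phase of each slow sample
        print(len(phases) == N)  # True: the repetitions cover every grid position

        # Counterexample: gcd(32, 10) = 2, so half the positions are never sampled.
        print(len({(j * 10) % 32 for j in range(32)}))  # 16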

  5. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  6. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  7. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation, as it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation yields an unbiased estimator in the limit. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative effect between rubber price and exchange rate for all selected countries.
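
    As a concrete sketch of the estimation machinery, the fragment below fits a two-component normal mixture by maximum likelihood using the EM algorithm on synthetic data; it shows the generic algorithm, not the paper's rubber-price analysis, and all starting values are arbitrary.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

        # Initial guesses for (weight, means, standard deviations).
        w, mu, sd = 0.5, np.array([-2.0, 3.0]), np.array([1.0, 1.0])

        for _ in range(200):
            # E-step: posterior probability that each point came from component 0.
            d0 = w * norm.pdf(x, mu[0], sd[0])
            d1 = (1 - w) * norm.pdf(x, mu[1], sd[1])
            r = d0 / (d0 + d1)
            # M-step: weighted maximum likelihood updates.
            w = r.mean()
            mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
            sd = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=r),
                                   np.average((x - mu[1]) ** 2, weights=1 - r)]))

        print(f"weights {w:.2f}/{1 - w:.2f}, means {mu.round(2)}, sds {sd.round(2)}")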

  8. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.

  9. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, less data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and the Bayesian inference which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of uncertainty parameters more accurately, for it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environment conditions. Two numerical examples are investigated to demonstrate the proposed method.

  10. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition can be violated when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve overall good channel estimation performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that a large reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
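
    The co-prime restriction is easy to see numerically: with P interleaved pulses and N desired-rate sampling periods between pulses, the slow ADC lands on sample phases (k*N) mod P, and these cover all P phases exactly when gcd(N, P) = 1. A small sketch (our illustration, not code from the paper):

      from math import gcd

      def covered_phases(N, P):
          # Phases of the desired fast grid hit by P slow-rate acquisitions.
          return sorted({(k * N) % P for k in range(P)})

      P = 5
      print(gcd(8, P), covered_phases(8, P))    # 1 -> [0, 1, 2, 3, 4]: full coverage
      print(gcd(10, P), covered_phases(10, P))  # 5 -> [0]: aliasing, fidelity lost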

  11. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  12. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  13. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
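
    For illustration, the ratio-reweighting variant reduces to the classical ratio estimator: scale the sample mean of the presence indicator by the ratio of the field-wide mean of the auxiliary variable to its sample mean. The sketch below uses synthetic stand-in data; the arrays and the helper name ratio_estimate are our own, not the authors'.

      import numpy as np

      def ratio_estimate(y, a_sample, a_field_mean):
          # Classical ratio estimator of the mean presence rate.
          return y.mean() * a_field_mean / a_sample.mean()

      rng = np.random.default_rng(1)
      a_field = rng.gamma(2.0, 0.002, 10_000)   # simulated gene-flow model output
      idx = rng.choice(a_field.size, 200, replace=False)
      a_sample = a_field[idx]
      y = rng.binomial(1, np.clip(a_sample * 2.0, 0.0, 1.0))  # synthetic presence data
      print(ratio_estimate(y, a_sample, a_field.mean()))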

  14. 7 CFR 4290.845 - Maximum rate of amortization on Loans and Debt Securities.

    Science.gov (United States)

    2010-01-01

    ...) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL BUSINESS INVESTMENT COMPANY ("RBIC") PROGRAM Financing of Enterprises by RBICs Structuring RBIC Financing of Eligible Enterprises-Types of Financings § 4290.845 Maximum rate of amortization on Loans and Debt Securities. The...

  15. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  16. New England observed and predicted August stream/river temperature maximum daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted August stream/river temperature maximum negative rate of change in New England based on a...

  17. Analysis of reaction schemes using maximum rates of constituent steps

    Science.gov (United States)

    Motagamwala, Ali Hussain; Dumesic, James A.

    2016-01-01

    We show that the steady-state kinetics of a chemical reaction can be analyzed analytically in terms of proposed reaction schemes composed of series of steps with stoichiometric numbers equal to unity by calculating the maximum rates of the constituent steps, rmax,i, assuming that all of the remaining steps are quasi-equilibrated. Analytical expressions can be derived in terms of rmax,i to calculate degrees of rate control for each step to determine the extent to which each step controls the rate of the overall stoichiometric reaction. The values of rmax,i can be used to predict the rate of the overall stoichiometric reaction, making it possible to estimate the observed reaction kinetics. This approach can be used for catalytic reactions to identify transition states and adsorbed species that are important in controlling catalyst performance, such that detailed calculations using electronic structure calculations (e.g., density functional theory) can be carried out for these species, whereas more approximate methods (e.g., scaling relations) are used for the remaining species. This approach to assess the feasibility of proposed reaction schemes is exact for reaction schemes where the stoichiometric coefficients of the constituent steps are equal to unity and the most abundant adsorbed species are in quasi-equilibrium with the gas phase and can be used in an approximate manner to probe the performance of more general reaction schemes, followed by more detailed analyses using full microkinetic models to determine the surface coverages by adsorbed species and the degrees of rate control of the elementary steps. PMID:27162366
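
    For reference, the two central quantities can be stated compactly in standard notation (ours, not quoted from the paper): r_max,i is the rate of step i evaluated with all other steps taken as quasi-equilibrated, and the degree of rate control of step i follows Campbell's definition,

      \[ X_{\mathrm{RC},i} \;=\; \frac{k_i}{r}\left(\frac{\partial r}{\partial k_i}\right)_{K_i,\,k_{j\neq i}}, \]

    where k_i is the forward rate constant of step i, K_i its equilibrium constant (held fixed), and r the overall rate; a step with X_RC,i close to 1 controls the overall rate, while quasi-equilibrated steps have X_RC,i close to 0.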

  18. Estimation of the players maximum heart rate in real game situations in team sports: a practical propose

    Directory of Open Access Journals (Sweden)

    Jorge Cuadrado Reyes

    2011-05-01

    This research developed an algorithm for calculating the maximum heart rate (max. HR) of players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3 years) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to help find a max. HR prediction equation from the max. HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure the max. HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
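
    A minimal sketch of the regression idea, with hypothetical numbers standing in for the study's data: predict each player's max. HR from the peak heart rates recorded in the highest-intensity sessions.

      import numpy as np

      # Peak HR (bpm) in the three highest-intensity sessions, per player,
      # and the max. HR measured by the Course Navette test (invented values).
      session_peaks = np.array([[186, 189, 191],
                                [178, 180, 183],
                                [192, 195, 197]])
      navette_max = np.array([195, 187, 201])

      x = session_peaks.mean(axis=1)              # predictor: mean session peak
      slope, intercept = np.polyfit(x, navette_max, 1)
      print("predicted max. HR:", slope * x + intercept)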

  19. AREA EFFICIENT FRACTIONAL SAMPLE RATE CONVERSION ARCHITECTURE FOR SOFTWARE DEFINED RADIOS

    Directory of Open Access Journals (Sweden)

    Latha Sahukar

    2014-09-01

    Modern software defined radios (SDRs) use complex signal processing algorithms to realize efficient wireless communication schemes. Several such algorithms require a specific symbol-to-sample ratio to be maintained, which makes the fractional rate converter (FRC) a crucial block in the receiver part of an SDR. This paper presents an area-optimized dynamic FRC block for low-power SDR applications. The limitations of the conventional cascaded interpolator and decimator architecture for the FRC are also presented. An extension of the sinc-function-interpolation-based architecture towards high area optimization, providing run-time configuration through a time register, is presented. The area and speed analyses are carried out with Xilinx FPGA synthesis tools. An area occupancy of only 15% with a maximum clock speed of 133 MHz is reported on a Spartan-6 LX45 Field Programmable Gate Array (FPGA).
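
    The underlying operation is band-limited interpolation: evaluate the sinc reconstruction of the input at fractional output instants. A floating-point reference model (our sketch, not the FPGA implementation) looks like this:

      import numpy as np

      def sinc_resample(x, ratio, n_out):
          # Band-limited reconstruction of x evaluated at t = k / ratio,
          # i.e. output rate = ratio * input rate.
          n = np.arange(len(x))
          t = np.arange(n_out) / ratio
          return np.array([np.sum(x * np.sinc(tk - n)) for tk in t])

      x = np.sin(2 * np.pi * 3.0 * np.arange(64) / 48.0)  # 3 kHz tone at 48 kHz
      y = sinc_resample(x, ratio=44.1 / 48.0, n_out=58)   # 48 kHz -> 44.1 kHz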

  20. New England observed and predicted August stream/river temperature maximum positive daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted August stream/river temperature maximum positive daily rate of change in New England based on a...

  1. New England observed and predicted July stream/river temperature maximum positive daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted July stream/river temperature maximum positive daily rate of change in New England based on a...

  2. New England observed and predicted July maximum negative stream/river temperature daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted July stream/river temperature maximum negative daily rate of change in New England based on a...

  3. 30 CFR 75.601-3 - Short circuit protection; dual element fuses; current ratings; maximum values.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 Short circuit protection; dual element fuses... Trailing Cables § 75.601-3 Short circuit protection; dual element fuses; current ratings; maximum values. Dual element fuses having adequate current-interrupting capacity shall meet the requirements for short...

  4. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
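
    For reference, the information entropy production of a stationary Markov chain with transition matrix P and stationary distribution π is conventionally written as (the standard definition, not a formula quoted from the paper)

      \[ e_p \;=\; \tfrac{1}{2} \sum_{i,j} \left( \pi_i P_{ij} - \pi_j P_{ji} \right) \ln\frac{\pi_i P_{ij}}{\pi_j P_{ji}} \;\ge\; 0, \]

    which vanishes exactly when detailed balance π_i P_{ij} = π_j P_{ji} holds, i.e. when the chain is reversible.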

  5. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return

  6. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. http://www.atgc-montpellier.fr/ReplacementMatrix/ olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  7. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    The recent sophistication of mobile systems and sensor networks demands ever more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. The principle is to intelligently exploit the local characteristics of the signal, which are usually never considered, to filter only the relevant signal parts with filters of the appropriate order. This idea leads to a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
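
    The core mechanism is simple to sketch: a sample is recorded only when the signal crosses one of a set of predefined levels, so the effective sampling rate follows the signal's local activity. A minimal illustration (ours, not the authors' code):

      import numpy as np

      def level_crossing_sample(t, x, levels):
          samples = []
          for i in range(1, len(x)):
              for L in levels:
                  if (x[i - 1] - L) * (x[i] - L) < 0:   # sign change -> crossing
                      # Linearly interpolate the crossing instant.
                      tc = t[i - 1] + (L - x[i - 1]) * (t[i] - t[i - 1]) / (x[i] - x[i - 1])
                      samples.append((tc, L))
          return samples

      t = np.linspace(0.0, 1.0, 2000)
      x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)    # decaying test signal
      print(len(level_crossing_sample(t, x, np.linspace(-1, 1, 9))))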

  8. Emigration Rates From Sample Surveys: An Application to Senegal.

    Science.gov (United States)

    Willekens, Frans; Zinn, Sabine; Leuchter, Matthias

    2017-12-01

    What is the emigration rate of a country, and how reliable is that figure? Answering these questions is not at all straightforward. Most data on international migration are census data on foreign-born population. These migrant stock data describe the immigrant population in destination countries but offer limited information on the rate at which people leave their country of origin. The emigration rate depends on the number leaving in a given period and the population at risk of leaving, weighted by the duration at risk. Emigration surveys provide a useful data source for estimating emigration rates, provided that the estimation method accounts for sample design. In this study, emigration rates and confidence intervals are estimated from a sample survey of households in the Dakar region in Senegal, which was part of the Migration between Africa and Europe survey. The sample was a stratified two-stage sample with oversampling of households with members abroad or return migrants. A combination of methods of survival analysis (time-to-event data) and replication variance estimation (bootstrapping) yields emigration rates and design-consistent confidence intervals that are representative for the study population.
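
    The estimation strategy reduces to an occurrence/exposure rate (events per person-year) whose uncertainty is assessed by resampling. A simplified sketch with synthetic stand-in data (it ignores the stratification and sampling weights the actual survey design requires):

      import numpy as np

      rng = np.random.default_rng(2)
      n_households = 500
      events = rng.poisson(0.3, n_households)          # emigrations per household
      exposure = rng.uniform(5.0, 15.0, n_households)  # person-years at risk

      def rate(ev, ex):
          return ev.sum() / ex.sum()

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n_households, n_households)  # resample households
          boot.append(rate(events[idx], exposure[idx]))
      print(rate(events, exposure), np.percentile(boot, [2.5, 97.5]))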

  9. Maximum heart rate in brown trout (Salmo trutta fario) is not limited by firing rate of pacemaker cells.

    Science.gov (United States)

    Haverinen, Jaakko; Abramochkin, Denis V; Kamkin, Andre; Vornanen, Matti

    2017-02-01

    Temperature-induced changes in cardiac output (Q̇) in fish are largely dependent on thermal modulation of heart rate (f_H), and at high temperatures Q̇ collapses due to heat-dependent depression of f_H. This study tests the hypothesis that the firing rate of sinoatrial pacemaker cells sets the upper thermal limit of f_H in vivo. To this end, the temperature dependence of the action potential (AP) frequency of enzymatically isolated pacemaker cells (pacemaker rate, f_PM), the spontaneous beating rate of isolated sinoatrial preparations (f_SA), and the in vivo f_H of the cold-acclimated (4°C) brown trout (Salmo trutta fario) were compared under acute thermal challenges. With rising temperature, f_PM steadily increased because of the acceleration of diastolic depolarization and shortening of AP duration up to the break point temperature (T_BP) of 24.0 ± 0.37°C, at which point the electrical activity abruptly ceased. The maximum f_PM at T_BP was much higher [193 ± 21.0 beats per minute (bpm)] than the peak f_SA (94.3 ± 6.0 bpm at 24.1°C) or peak f_H (76.7 ± 2.4 bpm at 15.7 ± 0.82°C), indicating that the firing rate of the sinoatrial pacemaker cells does not limit the maximum heart rate of brown trout in vivo. Copyright © 2017 the American Physiological Society.

  10. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Science.gov (United States)

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be

  11. Clinical evaluation of a simple uroflowmeter for categorization of maximum urinary flow rate

    Directory of Open Access Journals (Sweden)

    Simon Pridgeon

    2007-01-01

    Objective: To evaluate the accuracy and diagnostic usefulness of a disposable flowmeter consisting of a plastic funnel with a spout divided into three chambers. Materials and Methods: Men with lower urinary tract symptoms (LUTS) voided sequentially into a standard flowmeter and the funnel device, recording maximum flow rate (Q_max) and voided volume (V_void). The device was precalibrated such that filling of the bottom, middle and top chambers categorized maximum input flows as <10, 10-15 and >15 ml s-1, respectively. Subjects who agreed to use the funnel device at home obtained readings of flow category and V_void twice daily for seven days. Results: A single office reading in 46 men using the device showed good agreement with the standard measurement of Q_max for V_void > 150 ml (Kappa = 0.68). All 14 men whose void reached the top chamber had standard Q_max > 15 ml s-1 (PPV = 100%, NPV = 72%), whilst eight of 12 men whose void remained in the bottom chamber had standard Q_max < 10 ml s-1 (PPV = 70%, NPV = 94%). During multiple home use by 14 men the device showed moderate repeatability (Kappa = 0.58) and correctly categorized Q_max in comparison to standard measurement for 12 (87%) men. Conclusions: This study suggests that the device has sufficient accuracy and reliability for initial flow rate assessment in men with LUTS. The device can provide a single measurement or, alternatively, multiple home measurements to categorize men with Q_max < 15 ml s-1.

  12. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
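
    The estimator at the heart of this record is the generalized least squares combination of correlated estimates of a common mean. A minimal numerical sketch (the estimates and covariance matrix are invented for illustration):

      import numpy as np

      k = np.array([0.997, 1.001, 0.999])   # hypothetical eigenvalue estimates
      Sigma = 1e-6 * np.array([[4.0, 1.5, 1.0],
                               [1.5, 3.0, 0.8],
                               [1.0, 0.8, 2.5]])   # their covariance matrix

      w = np.linalg.solve(Sigma, np.ones_like(k))
      k_ml = w @ k / w.sum()        # minimum-variance (ML) combined estimate
      var_ml = 1.0 / w.sum()        # its variance
      print(k_ml, np.sqrt(var_ml))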

  13. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  14. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
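
    The block-level step admits a closed form. Writing the quadratic model with assumed coefficient names (α, β, γ fitted per block and B the block bit budget; this notation is ours, not the paper's):

      \[ \mathrm{PSNR}(b) \approx \alpha b^2 + \beta b + \gamma, \qquad \frac{\mathrm{d}\,\mathrm{PSNR}}{\mathrm{d}b} = 2\alpha b + \beta = 0 \;\Rightarrow\; b^{*} = -\frac{\beta}{2\alpha}, \]

    with α < 0 for a concave fit; the number of CS samples then follows from the budget as m* ≈ B / b*.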

  15. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Directory of Open Access Journals (Sweden)

    Sebastian Höhna

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear

  16. AEROSOL NUCLEATION AND GROWTH DURING LAMINAR TUBE FLOW: MAXIMUM SATURATIONS AND NUCLEATION RATES. (R827354C008)

    Science.gov (United States)

    An approximate method of estimating the maximum saturation, the nucleation rate, and the total number nucleated per second during the laminar flow of a hot vapour–gas mixture along a tube with cold walls is described. The basis of the approach is that the temperature an...

  17. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.- or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  18. Test-Retest Reliability of Rating of Perceived Exertion and Agreement With 1-Repetition Maximum in Adults.

    Science.gov (United States)

    Bove, Allyn M; Lynch, Andrew D; DePaul, Samantha M; Terhorst, Lauren; Irrgang, James J; Fitzgerald, G Kelley

    2016-09-01

    Study Design: Clinical measurement. Background: It has been suggested that rating of perceived exertion (RPE) may be a useful alternative to one-repetition maximum (1RM) for determining proper resistance exercise dosage. However, the test-retest reliability of RPE for resistance exercise has not been determined, and prior research regarding the relationship between 1RM and RPE is conflicting. Objectives: The purpose of this study was to (1) determine the test-retest reliability of RPE related to resistance exercise and (2) assess agreement between percentages of 1RM and RPE during quadriceps resistance exercise. Methods: A sample of participants with and without knee pathology completed a series of knee extension exercises and rated the perceived difficulty of each exercise on a 0-to-10 RPE scale, then repeated the procedure 1 to 2 weeks later for test-retest reliability. To determine agreement between RPE and 1RM, participants completed knee extension exercises at various percentages of their 1RM (10% to 130% of predicted 1RM) and rated the perceived difficulty of each exercise on a 0-to-10 RPE scale. Percent agreement was calculated between the 1RM and RPE at each resistance interval. Results: The intraclass correlation coefficient indicated excellent test-retest reliability of RPE for quadriceps resistance exercises (intraclass correlation coefficient = 0.895; 95% confidence interval: 0.866, 0.918). Overall percent agreement between RPE and 1RM was 60%, but agreement was poor within the ranges that would typically be used for training (50% 1RM and less for muscle endurance, 70% 1RM and greater for strength). Conclusion: Test-retest reliability of perceived exertion during quadriceps resistance exercise was excellent. However, agreement between the RPE and 1RM was poor, especially in common training zones for knee extensor strengthening. J Orthop Sports Phys Ther 2016;46(9):768-774. Epub 5 Aug 2016. doi:10.2519/jospt.2016.6498.

  19. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  20. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
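
    The deterministic bound described above can be written compactly, together with the standard moment-to-magnitude conversion (notation assumed here, consistent with common usage rather than quoted from the abstract):

      \[ M_{0,\max} \;=\; G\,\Delta V, \qquad M_{\mathrm{w}} \;=\; \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right), \]

    where G is the shear modulus (Pa), ΔV the net injected fluid volume (m³) and M_0 the seismic moment (N m). For example, G = 30 GPa and ΔV = 10^4 m³ give M_0,max = 3 × 10^14 N m, i.e. M_w ≈ 3.6.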

  1. Maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere in a milk producing area

    Energy Technology Data Exchange (ETDEWEB)

    Bryant, P M

    1963-01-01

    A method is given for calculating, for design purposes, the maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere with respect to milk contamination. In the absence of authoritative advice from the Medical Research Council, provisional working levels for the concentration of phosphorus-32 and sulphur-35 in milk are derived, and details are given of the agricultural assumptions involved in the calculation of the relationship between the amount of the nuclide deposited on grassland and that to be found in milk. The agricultural and meteorological conditions assumed are applicable as an annual average to England and Wales. The results (in mc/day) for phosphorus-32 and sulphur-35 for a number of stack heights and distances are shown graphically; typical values, quoted in a table, include 20 mc/day of phosphorus-32 and 30 mc/day of sulphur-35 as the maximum permissible continuous release rates with respect to ground level releases at a distance of 200 metres from pastureland.

  2. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25% sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25%, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
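
    For context, the rates in question come from fitting a mono-exponential decay to the series of peak heights. A minimal sketch with synthetic delays and heights (stand-ins, not the paper's data):

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, i0, r):
          # Peak height model I(t) = I0 * exp(-R * t).
          return i0 * np.exp(-r * t)

      t = np.array([0.01, 0.03, 0.05, 0.09, 0.15, 0.25])  # relaxation delays (s)
      rng = np.random.default_rng(3)
      heights = decay(t, 1.0, 12.0) * (1 + 0.02 * rng.standard_normal(t.size))

      (i0_hat, r_hat), pcov = curve_fit(decay, t, heights, p0=(1.0, 10.0))
      print(r_hat, np.sqrt(np.diag(pcov)))   # rate estimate and standard errors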

  3. Effects of data sampling rate on image quality in fan-beam-CT system

    International Nuclear Information System (INIS)

    Iwata, Akira; Yamagishi, Nobutoshi; Suzumura, Nobuo; Horiba, Isao.

    1984-01-01

    The relationship between spatial resolution or artifacts and the data sampling rate was investigated by computer simulation in order to identify the causes of degradation of CT image quality. First the generation of projection data and the reconstruction calculating process are described, and then results are shown for the relation between the angular sampling interval and spatial resolution or artifacts, and for the relation between the projection data sampling interval and spatial resolution or artifacts. It was clarified that the formulation of the relationship between spatial resolution and data sampling rate performed so far for parallel X-ray beams can also be applied to fan beams. As a conclusion, when other reconstruction parameters are the same in fan-beam CT systems, spatial resolution is determined by the projection data sampling rate rather than the angular sampling rate. The mechanism of artifact generation due to an insufficient number of angular samples was made clear. It was also made clear that there is a definite relationship among the measuring region, the angular sampling rate and the projection data sampling rate, and that the amount of artifacts depending upon the projection data sampling rate is proportional to the amount of spatial frequency components (aliasing components) of a test object above the Nyquist frequency of the projection data. (Wakatsuki, Y.)

  4. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

    Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.

  5. Application of HDF5 in long-pulse quasi-steady state data acquisition at high sampling rate

    International Nuclear Information System (INIS)

    Chen, Y.; Wang, F.; Li, S.; Xiao, B.J.; Yang, F.

    2014-01-01

    Highlights: • The new data-acquisition system supports long-pulse EAST data acquisition. • The new data-acquisition system can handle most of the high frequency signals of EAST experiments. • The system's total throughput is about 500 MB/s. • The system uses HDF5 to store data. - Abstract: A new high-sampling-rate quasi-steady-state data-acquisition system has been designed for the microwave reflectometry diagnostic of EAST experiments. In order to meet the requirements of long-pulse discharges and high sampling rates, it is based on PXI Express technology. A high-performance National Instruments PXIe-5122 digitizer with two synchronous analog input channels and a maximum sampling rate of 100 MHz has been adopted. Two PXIe-5122 boards at 60 MSPS and one PXIe-6368 board at 2 MSPS are used in the system, and the total throughput is about 500 MB/s. To guarantee that the large amounts of data are saved continuously during the long-pulse discharge, an NI HDD-8265 external hard-disk data-stream enclosure with a sustained read/write speed of 700 MB/s is used; in RAID-5 mode its storage capacity is 80% of the total. The raw data first stream continuously into the NI HDD-8265 during the discharge, and are then transferred to the data server automatically and converted into the HDF5 file format. HDF5 is an open source file format for data storage and management which has been widely used in various fields and is suitable for long-term use. The details of the system are described in the paper.
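
    The storage pattern implied above, appending blocks of digitizer output to an extendable HDF5 dataset, can be sketched with h5py as follows (our illustration, not the EAST system's actual code; names and sizes are invented):

      import numpy as np
      import h5py

      with h5py.File("shot_demo.h5", "w") as f:
          dset = f.create_dataset("reflectometry/ch0", shape=(0,),
                                  maxshape=(None,), chunks=(1 << 20,),
                                  dtype="int16", compression="gzip")
          for _ in range(10):   # stand-in for the acquisition loop
              block = np.random.randint(-2048, 2048, 1 << 20, dtype=np.int16)
              dset.resize(dset.shape[0] + block.size, axis=0)
              dset[-block.size:] = block   # append the new block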

  6. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.

  7. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures are described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of the linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased noticeably after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.

  8. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing algorithm on an inexpensive microprocessor. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for Remote Area Power Supply systems of relatively small rating. The advantages are even greater for larger temperature variations and higher power rated systems. Other advantages include optimal sizing and system monitoring and control.
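
    The hill-climbing (perturb and observe) principle mentioned above takes only a few lines of logic: perturb the operating point, keep going while power rises, reverse when it falls. A toy sketch (panel_power is a hypothetical stand-in for a real measurement):

      def panel_power(v):
          # Toy power curve with its maximum at v = 17.5 V (illustrative only).
          return max(0.0, 100.0 - (v - 17.5) ** 2)

      def mppt_hill_climb(v=12.0, step=0.2, iters=200):
          p_prev = panel_power(v)
          direction = 1.0
          for _ in range(iters):
              v += direction * step       # perturb the operating voltage
              p = panel_power(v)
              if p < p_prev:              # power fell: reverse direction
                  direction = -direction
              p_prev = p
          return v

      print(mppt_hill_climb())   # settles near the maximum power point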

  9. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
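
    For concreteness, the conventional CI that the ME prior is meant to improve is a Beta posterior interval: with a uniform Beta(1, 1) prior and k errors observed in n test examples, the posterior over the error rate is Beta(k + 1, n - k + 1). A minimal sketch with invented counts:

      from scipy.stats import beta

      k, n = 12, 80   # hypothetical holdout result: 12 errors in 80 examples
      lo, hi = beta.ppf([0.025, 0.975], k + 1, n - k + 1)
      print(f"95% credibility interval for the error rate: ({lo:.3f}, {hi:.3f})")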

  10. Formal comment on: Myhrvold (2016) Dinosaur metabolism and the allometry of maximum growth rate. PLoS ONE; 11(11): e0163205.

    Science.gov (United States)

    Griebeler, Eva Maria; Werner, Jan

    2018-01-01

    In his 2016 paper, Myhrvold criticized our 2014 paper on maximum growth rates (Gmax, the maximum gain in body mass observed within a time unit throughout an individual's ontogeny) and thermoregulation strategies (ectothermy, endothermy) of 17 dinosaurs. In our paper, we showed that Gmax values of similar-sized extant ectothermic and endothermic vertebrates overlap. This strongly questions a correct assignment of a thermoregulation strategy to a dinosaur based only on its Gmax and (adult) body mass (M). By contrast, Gmax separated similar-sized extant reptiles and birds (Sauropsida), and the Gmax values of our studied dinosaurs were similar to those seen in extant similar-sized (if necessary scaled-up) fast-growing ectothermic reptiles. Myhrvold examined two hypotheses (H1 and H2) regarding our study. However, we neither inferred dinosaurian thermoregulation strategies from group-wide averages (H1), nor were our results based on the assumption that Gmax and metabolic rate (MR) are related (H2). In order to assess whether single dinosaurian Gmax values fit those of extant endotherms (birds) or of ectotherms (reptiles), we had already used a method suggested by Myhrvold to avoid H1, and we only discussed the pros and cons of a relation between Gmax and MR and did not apply it (H2). We appreciate Myhrvold's efforts to eliminate the correlation between Gmax and M in order to statistically improve vertebrate scaling regressions on maximum gain in body mass. However, we show here that his mass-specific maximum growth rate (kC) replacing Gmax (= MkC) does not model the expected higher mass gain in larger than in smaller species for any set of species. We also comment on why we considered extant reptiles and birds as reference models for extinct dinosaurs and why we used phylogenetically-informed regression analysis throughout our study. Finally, we question several arguments given by Myhrvold in support of his results.

  11. Data-driven soft sensor design with multiple-rate sampled data

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.

    2007-01-01

    Multi-rate systems are common in industrial processes where quality measurements have a slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples, including numerical interpolation, polynomial transformation, data lifting and weighted partial least squares (WPLS). Two modifications to the original data lifting approach are proposed in this paper: reformulating the extraction of a fast model as an optimization problem and ensuring the desired model properties through Tikhonov Regularization. A comparative investigation of the four approaches is performed in this paper. Their applicability, accuracy and robustness to process noise are evaluated on a single-input single-output (SISO) system. The regularized data lifting and WPLS approaches...
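
    The abstract's Tikhonov step can be illustrated generically: a ridge penalty stabilizes an ill-conditioned least-squares fit such as the fast-model extraction mentioned above. A minimal sketch with stand-in matrices, not the paper's formulation or data:

```python
# Sketch of Tikhonov (ridge) regularization for an ill-posed least-squares
# model-extraction step, in the spirit of the approach described above.
# The matrices here are random stand-ins, not process data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))      # regressor matrix
A[:, 9] = A[:, 8] + 1e-6 * rng.standard_normal(30)   # near-collinear columns
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)

lam = 1e-2                              # regularization weight (tuning parameter)
# Solve min ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations.
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

print("ridge solution norm :", np.linalg.norm(x_ridge))
print("plain LS norm       :", np.linalg.norm(x_ls))
```

    The penalty trades a small amount of bias for a large reduction in variance, which is how the desired model properties are enforced when the unregularized problem is nearly singular.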

  12. Supernova rates from the SUDARE VST-Omegacam search II. Rates in a galaxy sample

    Science.gov (United States)

    Botticella, M. T.; Cappellaro, E.; Greggio, L.; Pignata, G.; Della Valle, M.; Grado, A.; Limatola, L.; Baruffolo, A.; Benetti, S.; Bufano, F.; Capaccioli, M.; Cascone, E.; Covone, G.; De Cicco, D.; Falocco, S.; Haeussler, B.; Harutyunyan, V.; Jarvis, M.; Marchetti, L.; Napolitano, N. R.; Paolillo, M.; Pastorello, A.; Radovich, M.; Schipani, P.; Tomasella, L.; Turatto, M.; Vaccari, M.

    2017-02-01

    Aims: This is the second paper of a series in which we present measurements of the supernova (SN) rates from the SUDARE survey. The aim of this survey is to constrain the core collapse (CC) and Type Ia SN progenitors by analysing the dependence of their explosion rate on the properties of the parent stellar population averaging over a population of galaxies with different ages in a cosmic volume and in a galaxy sample. In this paper, we study the trend of the SN rates with the intrinsic colours, the star formation activity and the masses of the parent galaxies. To constrain the SN progenitors we compare the observed rates with model predictions assuming four progenitor models for SNe Ia with different distribution functions of the time intervals between the formation of the progenitor and the explosion, and a mass range of 8-40 M⊙ for CC SN progenitors. Methods: We considered a galaxy sample of approximately 130 000 galaxies and a SN sample of approximately 50 events. The wealth of photometric information for our galaxy sample allows us to apply the spectral energy distribution (SED) fitting technique to estimate the intrinsic rest frame colours, the stellar mass and star formation rate (SFR) for each galaxy in the sample. The galaxies have been separated into star-forming and quiescent galaxies, exploiting both the rest frame U-V vs. V-J colour-colour diagram and the best fit values of the specific star formation rate (sSFR) from the SED fitting. Results: We found that the SN Ia rate per unit mass is higher by a factor of six in the star-forming galaxies with respect to the passive galaxies, identified as such both on the U-V vs. V-J colour-colour diagram and for their sSFR. The SN Ia rate per unit mass is also higher in the less massive galaxies that are also younger. These results suggest a distribution of the delay times (DTD) less populated at long delay times than at short delays. The CC SN rate per unit mass is proportional to both the sSFR and the galaxy

  13. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    CERN Document Server

    Foust, D J

    2000-01-01

    This document provides calculated estimates of the dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.

  14. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    International Nuclear Information System (INIS)

    FOUST, D.J.

    2000-01-01

    This document provides calculated estimates of the dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.

  15. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
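
    The need for a higher critical value when maximizing over several parameter values can be illustrated generically: the maximum of several correlated test statistics has a heavier null tail than any single one. A hedged sketch, with plain correlated z-statistics standing in for LOD scores and no connection to the specific genetic models above:

```python
# Sketch: maximizing a test statistic over several candidate parameter values
# inflates the null distribution, so a higher critical value is needed.
# Correlated z-statistics stand in for LOD scores at fixed genetic-parameter
# sets; this is an illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_params = 100_000, 3
rho = 0.6                                    # assumed correlation between statistics
cov = np.full((n_params, n_params), rho) + (1 - rho) * np.eye(n_params)
z = rng.multivariate_normal(np.zeros(n_params), cov, size=n_sim)

crit_single = np.quantile(z[:, 0], 0.95)     # ~1.645 for one statistic
crit_max = np.quantile(z.max(axis=1), 0.95)  # noticeably larger
print(f"single-statistic 95% critical value   : {crit_single:.3f}")
print(f"maximized-statistic 95% critical value: {crit_max:.3f}")
```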

  16. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    ...boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  17. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often-neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, either with or without controlling for numerous variables that have previously been linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  18. Effects of adipose tissue distribution on maximum lipid oxidation rate during exercise in normal-weight women.

    Science.gov (United States)

    Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N

    2014-06-01

    Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipox(max)) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg.m(-2)) were separated into two groups of either a low or high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipox(max) and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipox(max) and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO(2max) vs 54 ± 4% VO(2max), and 4.8 ± 0.6 mg min(-1)kg FFM(-1) vs 8.4 ± 0.9 mg min(-1)kg FFM(-1), respectively; P < 0.05). In normal-weight women, a predominantly abdominal fat mass distribution compared with a predominantly peripheral fat mass distribution is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by their lower Lipox(max) and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  19. Determination of alpha-dose rates and chronostratigraphical study of travertine samples

    International Nuclear Information System (INIS)

    Oufni, L.; University Moulay Ismail, Errachidia; Misdaq, M.A.; Boudad, L.; Kabiri, L.

    2001-01-01

    Uranium and thorium contents in different layers of stratigraphical sedimentary deposits have been evaluated by using LR-115 type II and CR-39 solid state nuclear track detectors (SSNTD). A method has been developed for determining the alpha-dose rates of the sedimentary travertine samples. Using the U/Th dating method, we succeeded in dating a carbonate level sampled in the sedimentary deposits. The correlation between stratigraphy, alpha-dose rates and age has been investigated. (author)

  20. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition can be violated when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
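
    The co-prime condition can be checked with a few lines: if the interpulse interval Q (in units of the desired sampling period) is co-prime with the decimation factor P, the P low-rate observations fall on P distinct phases of the fine grid. A sketch under these assumptions (Q and P are illustrative values, not from the paper):

```python
# Sketch: why the interpulse interval must be co-prime with P.
# With ADC decimation factor P and interpulse interval Q (in fine-grid
# sampling periods), pulse p contributes samples at fine-grid phase
# (p * Q) mod P. All P phases are covered iff gcd(Q, P) == 1.
from math import gcd

def covered_phases(P: int, Q: int) -> set:
    return {(p * Q) % P for p in range(P)}

P = 8
for Q in (3, 6):                      # gcd(3, 8) = 1, gcd(6, 8) = 2
    phases = covered_phases(P, Q)
    print(f"Q={Q}: {len(phases)}/{P} phases covered "
          f"(co-prime: {gcd(Q, P) == 1})")
```

    With gcd(Q, P) = 1 every fine-grid phase is visited exactly once per P pulses, which is the condition the abstract imposes on the interpulse interval; clock drift perturbs the pulse locations and can break this coverage.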

  1. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  2. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
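
    As a concrete illustration of the principle itself: the maximum-entropy distribution over a finite set subject to a mean constraint takes the exponential (Gibbs) form, and the Lagrange multiplier can be found numerically. A generic sketch, not tied to any drug-discovery pipeline; the support and target mean are arbitrary:

```python
# Sketch: maximum-entropy distribution on {0,...,5} subject to a fixed mean.
# The MaxEnt solution has the Gibbs form p_i ∝ exp(-lam * x_i); we solve
# for the multiplier lam that matches the mean constraint.
import numpy as np
from scipy.optimize import brentq

x = np.arange(6)
target_mean = 1.5                      # the only information we impose

def mean_for(lam: float) -> float:
    w = np.exp(-lam * x)
    p = w / w.sum()
    return float(p @ x)

lam = brentq(lambda l: mean_for(l) - target_mean, -10, 10)
w = np.exp(-lam * x)
p = w / w.sum()
print("lambda:", round(lam, 4))
print("MaxEnt probabilities:", np.round(p, 4), "mean:", round(p @ x, 4))
```

    Any distribution other than this one would encode information beyond the stated mean constraint, which is exactly the "least bias" property the article builds on.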

  3. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure time data. The paper addresses the problem of estimating the parameters of the Burr type III distribution based on maximum likelihood estimation (MLE) when the samples are left censored. As closed-form expressions for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parameter values. A real-life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
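
    A hedged sketch of the kind of iterative MLE the abstract describes, for synthetic left-censored Burr III data: censored observations contribute the CDF at the detection limit to the likelihood. The parameter values, detection limit and optimizer are illustrative assumptions, not the paper's settings.

```python
# Sketch: maximum-likelihood estimation for Burr type III data that are
# left-censored at a detection limit. Burr III CDF: F(x) = (1 + x**-c)**-k.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
c_true, k_true, limit = 3.0, 1.5, 0.6

# Simulate via the inverse CDF: x = (u**(-1/k) - 1)**(-1/c).
u = rng.uniform(size=200)
x = (u ** (-1.0 / k_true) - 1.0) ** (-1.0 / c_true)
censored = x < limit                     # only "below limit" is recorded

def neg_log_lik(theta):
    c, k = np.exp(theta)                 # enforce positivity via log-params
    obs = x[~censored]
    # log pdf: log c + log k - (c+1) log x - (k+1) log(1 + x**-c)
    ll_obs = (np.log(c) + np.log(k) - (c + 1) * np.log(obs)
              - (k + 1) * np.log1p(obs ** -c))
    ll_cens = censored.sum() * (-k * np.log1p(limit ** -c))  # log F(limit)
    return -(ll_obs.sum() + ll_cens)

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("estimated (c, k):", np.round(np.exp(res.x), 3))
```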

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  5. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
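
    The Jensen's-Inequality effect described in these two records can be reproduced in a few lines: estimating vital rates from small samples and propagating them through a projection matrix biases the mean estimate of lambda. A toy sketch with made-up vital rates, not the study's data:

```python
# Sketch: Jensen's-inequality bias in the population growth rate (lambda)
# when vital rates are estimated from few individuals. A toy 2-stage
# matrix model; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
surv, grow, fec = 0.5, 0.3, 1.2          # "true" vital rates

def lam(s, g, f):
    A = np.array([[s * (1 - g), f],
                  [s * g,       s]])
    return np.max(np.abs(np.linalg.eigvals(A)))   # dominant eigenvalue

true_lambda = lam(surv, grow, fec)
for n in (10, 50, 500):                   # individuals sampled per stage
    est = []
    for _ in range(2000):
        s_hat = rng.binomial(n, surv) / n          # binomial sampling error
        g_hat = rng.binomial(n, grow) / n
        est.append(lam(s_hat, g_hat, fec))
    bias = np.mean(est) - true_lambda
    print(f"n={n:4d}: mean lambda bias = {bias:+.4f}")
```

    The bias shrinks quickly as n grows, mirroring the authors' conclusion that matrix models are robust at the sample sizes typical of demographic studies.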

  6. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  7. The mechanics of granitoid systems and maximum entropy production rates.

    Science.gov (United States)

    Hobbs, Bruce E; Ord, Alison

    2010-01-13

    A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10(4)-10(7) years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and as ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy- and topography-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling plutons of finite thickness to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate. This journal is © 2010 The Royal Society

  8. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  9. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
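
    For intuition, the standard-error calculation behind such CIs can be sketched by treating the systematically sampled units as an approximately random sample, an assumption the authors' variance and covariance formulas refine. Synthetic deforestation fractions stand in for the map data:

```python
# Sketch: approximating the confidence interval of an area estimate from a
# systematic sample of fixed-size squares, treating units as a simple random
# sample (a common, if imperfect, approximation for systematic designs).
import numpy as np

rng = np.random.default_rng(4)
N = 2500                                   # population of candidate sample units
defor = rng.beta(0.3, 6.0, size=N)         # deforestation fraction per unit

n = 100                                    # units actually sampled (e.g. a 1-deg grid)
sample = defor[rng.choice(N, n, replace=False)]

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)       # standard error of the mean
ci95 = 1.96 * se
print(f"estimate {mean:.4f} +/- {ci95:.4f} "
      f"(relative CI ~ {100 * ci95 / mean:.0f}%)")
print(f"true mean {defor.mean():.4f}")
```

    Because deforestation is highly skewed among units, the relative CI stays large until n is big, which is why the national-level CIs in the study remain wide at coarse sampling densities.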

  10. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum X_m of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle, and its energy dependence is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of maximum per decade of E_0, i.e. D_e = dX_m/d(log10 E_0). Invoking the superposition model approximation, i.e. assuming that a heavy primary (A) has the same shower elongation rate as a proton but scaled to energies E_0/A, one can write X_m = X_init + D_e·log10(E_0/A). In 1977 an indirect approach to studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distributions of EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflect the path-length distribution of the muons travelling from the locus of production (near the axis) to the locus of observation. The basic a priori assumption is that we can associate the mean value or median T of the time distribution with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival-time quantities, some knowledge is required about F = -(∂T/∂X_m)_X / (∂T/∂X)_(X_m), in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10(E_0)|_X = -F·D_e·(1/X_v)·∂T/∂sec(θ)|_(E_0). In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂sec(θ)|_(E_0), with F_σ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle

  11. Method of measuring the disintegration rate of beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1977-01-01

    A method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample is described. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding pulse counts (CPM) to determine the pulse count for a normalized pulse height value of zero and hence the sample disintegration rate

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
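
    A hedged sketch of the procedure's step-size idea for a two-component normal mixture, estimating means only: the usual EM-type fixed-point update corresponds to step-size 1, and the deflected-gradient generalization takes a step of size w along the update direction. Mixing weights and variances are assumed known here to keep the illustration short; this simplifies, rather than reproduces, the paper's setting.

```python
# Sketch: successive-approximations (EM-type) update for a two-component
# normal mixture, generalized with a step-size w in (0, 2). w = 1 recovers
# the standard fixed-point iteration discussed in the abstract.
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])
pi1, pi2, sd = 0.3, 0.7, 1.0              # weights/scale assumed known

def em_step(mu):
    m1, m2 = mu
    r1 = pi1 * np.exp(-0.5 * ((x - m1) / sd) ** 2)
    r2 = pi2 * np.exp(-0.5 * ((x - m2) / sd) ** 2)
    g = r1 / (r1 + r2)                    # posterior responsibility, comp. 1
    return np.array([np.sum(g * x) / np.sum(g),
                     np.sum((1 - g) * x) / np.sum(1 - g)])

w = 1.5                                   # step-size in (0, 2)
mu = np.array([-1.0, 1.0])
for _ in range(50):
    mu = mu + w * (em_step(mu) - mu)      # deflected-gradient form
print("estimated means:", np.round(mu, 3))
```

    Step-sizes above 1 can accelerate convergence when the components are well separated, which is the sense in which the optimal step-size is tied to the "separation" of the component densities.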

  13. Influence of heart rate in nonlinear HRV indices as a sampling rate effect evaluated on supine and standing

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2016-11-01

    Full Text Available The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR on nonlinear heart rate variability (HRV indices (correlation dimension, sample and approximate entropy as a consequence of being the HR the intrinsic sampling rate of HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS modulation.First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another one based on interpolating RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database.The simulation study showed the HR-dependence of non-linear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate mean HR influence. Analysis in a body position changes database shows that correlation dimension was reduced around 21% in median values in standing with respect to supine position (p < 0.05, concomitant with a 28% increase in mean HR (p < 0.05. After HR-correction, correlation dimension decreased around 18% in standing with respect to supine position, being the decrease still significant. Sample and approximate entropy showed similar trends.HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.

  14. DEVELOPING AN EXCELLENT SEDIMENT RATING CURVE FROM ONE HYDROLOGICAL YEAR SAMPLING PROGRAMME DATA: APPROACH

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

    Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily-midway and intermittent cross-section sediment samples from a Depth Integrating sampler (D-74) were used to calibrate the subdaily automatic sediment pumping sampler (ISCO 6712) near-bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The result suggests that an excellent rating curve could be developed from one hydrological year of sediment sampling programme data. This study has also found that an uncorrected rating curve underestimates sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
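
    A generic sketch of a rating-curve fit with one common bias correction (Duan's smearing estimator) applied to the log-log regression; the data are synthetic and the choice of correction is an assumption, not necessarily the statistical estimator used in the paper:

```python
# Sketch: fitting a sediment rating curve C = a * Q**b by log-log regression,
# then applying a non-parametric (Duan smearing) bias-correction factor,
# one standard way to address the underestimation mentioned above.
import numpy as np

rng = np.random.default_rng(7)
Q = rng.lognormal(2.0, 0.8, 200)                 # discharge (synthetic)
C = 0.5 * Q ** 1.6 * rng.lognormal(0, 0.4, 200)  # sediment concentration

b, log_a = np.polyfit(np.log(Q), np.log(C), 1)   # slope, intercept
resid = np.log(C) - (log_a + b * np.log(Q))
cf = np.mean(np.exp(resid))                      # Duan smearing estimator

C_naive = np.exp(log_a) * Q ** b                 # back-transformed, uncorrected
C_corr = cf * C_naive
print(f"b = {b:.3f}, a = {np.exp(log_a):.3f}, correction factor = {cf:.3f}")
print(f"naive load sum underestimates by {100*(1 - C_naive.sum()/C.sum()):.1f}%")
```

    The underestimation arises because the back-transformed regression predicts the median, not the mean, of a log-normal scatter; the correction factor restores the mean.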

  15. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: • The paper demonstrated a proper method of propagating the uncertainty of count-rate differences. • The standard count-rate detection limits were in error. • Count-time allocation methods for minimum uncertainty were in error. • The paper presented the correct form of the count-rate detection limit. • The paper discussed the confusion between count-rate uncertainty and count uncertainty.
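
    The propagation the first highlight refers to can be stated in a few lines for Poisson counting: the variance of the net rate is the sum of the variances of the two rates. Illustrative numbers only:

```python
# Sketch: correct propagation of uncertainty for a net count rate
# (sample minus blank), the quantity at issue in the detection-limit
# discussion above. Counts are Poisson, so var(N) = N and
# var(N/t) = N / t**2.
import numpy as np

Ns, ts = 480, 600.0        # gross sample counts and counting time (s)
Nb, tb = 300, 600.0        # blank counts and counting time (s)

net_rate = Ns / ts - Nb / tb
sigma = np.sqrt(Ns / ts**2 + Nb / tb**2)
print(f"net rate = {net_rate:.4f} +/- {sigma:.4f} counts/s")
```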

  16. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  17. Measurements of astrophysical reaction rates for radioactive samples

    International Nuclear Information System (INIS)

    Koehler, P.E.; O'Brien, H.A.; Bowman, C.D.

    1987-01-01

    Reaction rates for both big-bang and stellar nucleosynthesis can be obtained from the measurement of (n,p) and (n,γ) cross sections for radioactive nuclei. In the past, large backgrounds associated with the sample activity limited these types of measurements to radioisotopes with very long half-lives. The advent of the low-energy, high-intensity neutron source at the Los Alamos Neutron Scattering Center (LANSCE) has greatly increased the number of nuclei which can be studied. Results of (n,p) measurements on samples with half-lives as short as fifty-three days will be given. The astrophysics to be learned from these data will be discussed. Additional difficulties are encountered when making (n,γ) rather than (n,p) measurements. However, with a properly designed detector, and with the high peak neutron intensities now available, (n,γ) measurements can be made for nuclei with half-lives as short as several weeks. Progress on the Los Alamos (n,γ) cross-section measurement program for radioactive samples will be discussed. 25 refs., 3 figs., 1 tab

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  19. Method for measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    1977-01-01

    A method is described for measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero-threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero-threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding counts (CPM) to determine the pulse count for a normalized pulse height value of zero and hence the sample disintegration rate.

  20. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    Science.gov (United States)

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa . A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using
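
    The "factor of about 10" criterion and the resampling step lend themselves to a short sketch. Everything here (the toy accelerogram, the assumed anti-alias corner f_saa) is illustrative, and scipy's Fourier-method resample stands in for sinc interpolation:

```python
# Sketch: the usability check described above -- compare the peak of the
# Fourier acceleration spectrum (FAS) with the FAS at the anti-alias corner
# f_saa -- plus Fourier-method (sinc-like) resampling to a higher rate.
import numpy as np
from scipy.signal import resample

dt, n = 0.01, 4096                       # 100 samples/s record
t = np.arange(n) * dt
rng = np.random.default_rng(8)
acc = rng.standard_normal(n) * np.exp(-((t - 10) / 5) ** 2)  # toy motion

freqs = np.fft.rfftfreq(n, dt)
fas = np.abs(np.fft.rfft(acc)) * dt      # Fourier acceleration spectrum

f_saa = 30.0                             # assumed anti-alias corner (Hz)
ratio = fas.max() / fas[np.argmin(np.abs(freqs - f_saa))]
verdict = "PSA above f_saa little affected" if ratio > 10 else "treat with caution"
print(f"FAS(max)/FAS(f_saa) = {ratio:.1f}  ->  {verdict}")

acc_up = resample(acc, 4 * n)            # resample to 400 samples/s
print("upsampled dt:", dt / 4)
```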

  1. Trapped field measurements on MgB{sub 2} bulk samples

    Energy Technology Data Exchange (ETDEWEB)

    Koblischka, Michael; Karwoth, Thomas; Zeng, XianLin; Hartmann, Uwe [Institute of Experimental Physics, Saarland University, P. O. Box 151150, D-66041 Saarbruecken (Germany); Berger, Kevin; Douine, Bruno [University of Lorraine, GREEN, 54506 Vandoeuvre-les-Nancy (France)

    2016-07-01

    Trapped field measurements were performed on bulk, polycrystalline MgB{sub 2} samples stemming from different sources, with an emphasis on developing applications such as superconducting permanent magnets ('supermagnets') and electric motors. We describe the setup for the trapped field measurements and the experimental procedure (field cooling, zero-field cooling, field sweep rates). The trapped field measurements were conducted using a cryocooling system to cool the bulk samples to the desired temperatures, and a low-loss cryostat equipped with a room-temperature bore and a maximum field of ±5 T was employed to provide the external magnetic field. The superconducting coil of this cryostat is operated using a bidirectional power supply. Various sweep rates of the external magnetic field, ranging between 1 mT/s and 40 mT/s, were used to generate the applied field. The measurements were performed with one sample and with two samples stacked together. A maximum trapped field of 7 T was recorded. We discuss the results obtained and the problems arising due to flux jumping, which is often seen for MgB{sub 2} samples cooled to temperatures below 10 K.

  2. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  3. Method for measuring the decay rate of a radionuclide emitting β-rays in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.

    1977-01-01

    With this method the decay rate of a radionuclide emitting β-rays, e.g. 3 H or 14 C, in a liquid sample can be measured by means of liquid scintillation counters, at least two differently quenched versions of the sample being used (quenching shifts the Compton spectrum). For this purpose each sample is counted with and without a radioactive standard source, e.g. 137 Cs. A pulse height is then determined corresponding to a selected point in the pulse height spectrum obtained when the standard source is present. A zero-threshold sample count rate is determined by counting the sample in a counting window. In addition, normalized values of the measured pulse heights are derived and related mathematically to the corresponding pulse count rates; the pulse count rate at a normalized pulse height value of zero then gives the sample decay rate.

  4. Measurement of radon exhalation rate in various building materials and soil samples

    Science.gov (United States)

    Bala, Pankaj; Kumar, Vinod; Mehra, Rohit

    2017-03-01

    Indoor radon is considered one of the potentially dangerous radioactive elements. Common building materials and soil are the major sources of this radon gas in the indoor environment. In the present study, the radon exhalation rate in soil and building material samples from the Una and Hamirpur districts of Himachal Pradesh has been measured with solid state alpha track detectors (LR-115 type-II plastic track detectors). The radon exhalation rate for the soil samples varies from 39.1 to 91.2 mBq kg-1 h-1 with a mean value of 59.7 mBq kg-1 h-1. The radium concentration of the studied area was also determined; it varies from 30.6 to 51.9 Bq kg-1 with a mean value of 41.6 Bq kg-1. The exhalation rate for the building material samples varies from 40.72 (sandstone) to 81.40 mBq kg-1 h-1 (granite) with a mean value of 59.94 mBq kg-1 h-1.

  5. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One effort to resolve the separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in a binary probit regression model estimated by the MLE method and by Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are carried out using simulation, under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is relatively similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
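
    Separation, as defined in the abstract, can be checked directly: a covariate that perfectly splits the two response categories. A small simulation sketch (a logistic link is used for data generation purely for convenience; the sample size and effect size are arbitrary assumptions):

```python
# Sketch: how often "separation" occurs in small binary-regression samples.
# A predictor that perfectly splits the response makes the MLE diverge;
# here we simply count complete separation by a single covariate.
import numpy as np

rng = np.random.default_rng(9)

def separates(x, y):
    # Complete separation: all x for y=0 lie on one side of all x for y=1.
    return (x[y == 0].max() < x[y == 1].min()
            or x[y == 1].max() < x[y == 0].min())

n_sep = 0
for _ in range(1000):
    x = rng.standard_normal(15)              # small sample size
    p = 1 / (1 + np.exp(-2 * x))             # strong covariate effect
    y = rng.binomial(1, p)
    # A one-class sample also breaks the MLE; check it before separates().
    if y.min() == y.max() or separates(x, y):
        n_sep += 1
print(f"separation occurred in {n_sep / 10:.1f}% of simulated small samples")
```

    Rerunning with a larger sample size makes the separation frequency collapse, matching the abstract's finding that the problem is mainly a small-sample one.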

  6. Maximum Evaporation Rates of Water Droplets Approaching Obstacles in the Atmosphere Under Icing Conditions

    Science.gov (United States)

    Lowell, H. H.

    1953-01-01

    When a closed body or a duct envelope moves through the atmosphere, air pressure and temperature rises occur ahead of the body or, under ram conditions, within the duct. If cloud water droplets are encountered, droplet evaporation will result because of the air-temperature rise and the relative velocity between the droplet and stagnating air. It is shown that the solution of the steady-state psychrometric equation provides evaporation rates which are the maximum possible when droplets are entrained in air moving along stagnation lines under such conditions. Calculations are made for a wide variety of water droplet diameters, ambient conditions, and flight Mach numbers. Droplet diameter, body size, and Mach number effects are found to predominate, whereas wide variation in ambient conditions are of relatively small significance in the determination of evaporation rates. The results are essentially exact for the case of movement of droplets having diameters smaller than about 30 microns along relatively long ducts (length at least several feet) or toward large obstacles (wings), since disequilibrium effects are then of little significance. Mass losses in the case of movement within ducts will often be significant fractions (one-fifth to one-half) of original droplet masses, while very small droplets within ducts will often disappear even though the entraining air is not fully stagnated. Wing-approach evaporation losses will usually be of the order of several percent of original droplet masses. Two numerical examples are given of the determination of local evaporation rates and total mass losses in cases involving cloud droplets approaching circular cylinders along stagnation lines. The cylinders chosen were of 3.95-inch (10.0+ cm) diameter and 39.5-inch (100+ cm) diameter. The smaller is representative of icing-rate measurement cylinders, while with the larger will be associated an air-flow field similar to that ahead of an airfoil having a leading-edge radius

  7. 78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans

    Science.gov (United States)

    2013-03-04

    ..., cost-plus, flat-rate, or market based) to price guaranteed loans, provided the rates do not exceed the... (LIBOR) or the 5-year Treasury note rate, unless the lender uses a formal written risk-based pricing... cost in the form of a lower interest rate than the borrower would otherwise receive. Therefore, the FSA...

  8. Study of Acoustic Emission and Mechanical Characteristics of Coal Samples under Different Loading Rates

    Directory of Open Access Journals (Sweden)

    Huamin Li

    2015-01-01

    Full Text Available To study the effect of loading rate on the mechanical properties and acoustic emission characteristics of coal samples collected from Sanjiaohe Colliery, uniaxial compression tests were carried out at loading rates of 0.001 mm/s, 0.002 mm/s, and 0.005 mm/s, using an AE-win E1.86 acoustic emission instrument and an RMT-150C rock mechanics test system. The results indicate that the loading rate has a strong impact on the peak stress and peak strain of the coal samples, but the effect of loading rate on the elastic modulus is relatively small. When the loading rate increases from 0.001 mm/s to 0.002 mm/s, the peak stress increases from 22.67 MPa to 24.99 MPa, an increase of 10.23%; under the same conditions the peak strain increases from 0.006191 to 0.007411, an increase of 19.71%. Similarly, when the loading rate increases from 0.002 mm/s to 0.005 mm/s, the peak stress increases from 24.99 MPa to 28.01 MPa (an increase of 12.08%) and the peak strain increases from 0.007411 to 0.008203 (an increase of 10.69%). Acoustic emission activity is positively correlated with loading rate, whereas the cumulative acoustic emission counts are negatively correlated with loading rate during the rupture process of the coal samples.

  9. Low-sampling-rate M-ary multiple access UWB communications in multipath channels

    KAUST Repository

    Alkhodary, Mohammad T.

    2015-08-31

    The desirable characteristics of ultra-wideband (UWB) technology are challenged by formidable sampling frequency, performance degradation in the presence of multi-user interference, and complexity of the receiver due to the channel estimation process. In this paper, a low-rate-sampling technique is used to implement M-ary multiple access UWB communications, in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancelation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.

  10. Low-sampling-rate M-ary multiple access UWB communications in multipath channels

    KAUST Repository

    Alkhodary, Mohammad T.; Ballal, Tarig; Al-Naffouri, Tareq Y.; Muqaibel, Ali H.

    2015-01-01

    The desirable characteristics of ultra-wideband (UWB) technology are challenged by formidable sampling frequency, performance degradation in the presence of multi-user interference, and complexity of the receiver due to the channel estimation process. In this paper, a low-rate-sampling technique is used to implement M-ary multiple access UWB communications, in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancelation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.

  11. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    ...abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  12. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  13. Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system.

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng

    2017-08-01

    This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed to adapt to the sampling rate discrepancy. Between samples of the slow observation data, the observation noise can be regarded as infinite; the Kalman gain is unknown and approaches zero, the residual is unknown, and therefore the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state-error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation of the state error is established to propagate the quantity from moments with observations to moments without them. In addition, a multiplicative adjustment factor is introduced as the Kalman gain, which acts on the residual. The filter's estimated state can then be compensated even when no visual observation data are available. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained by using the proposed multi-rate CKF with inter-sample residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
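
    A minimal linear sketch of the multi-rate idea described above (prediction at the fast inertial rate, correction only when a vision sample arrives) is given below. It uses a plain linear Kalman filter rather than the paper's cubature filter with residual compensation, and all rates, matrices, and the synthetic measurement are illustrative assumptions.

    ```python
    import numpy as np

    # Toy 1-D constant-velocity model: fast "inertial" prediction at 100 Hz,
    # slow "vision" position update at 10 Hz (illustrative rates only).
    dt = 0.01                                  # fast-rate step
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
    Q = 1e-4 * np.eye(2)                       # process noise
    H = np.array([[1.0, 0.0]])                 # vision observes position only
    R = np.array([[1e-2]])                     # observation noise

    x = np.zeros(2)                            # state estimate
    P = np.eye(2)                              # estimate covariance
    rng = np.random.default_rng(0)

    for k in range(1000):
        # Fast-rate predict step (runs every inertial sample).
        x = F @ x
        P = F @ P @ F.T + Q
        # Slow-rate update: vision data arrive only every 10th step;
        # between arrivals the gain is effectively zero and no correction occurs.
        if k % 10 == 0:
            z = np.array([1.0]) + 0.05 * rng.standard_normal(1)  # noisy fix of a static target
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
    print(x)                                   # position estimate near 1.0
    ```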

  14. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  15. Experimental technique to measure thoron generation rate of building material samples using RAD7 detector

    International Nuclear Information System (INIS)

    Csige, I.; Szabó, Zs.; Szabó, Cs.

    2013-01-01

    Thoron (²²⁰Rn) is the second most abundant radon isotope in our living environment. In some dwellings it is present in significant amounts, which calls for its identification and remediation. Indoor thoron originates mainly from building materials. In this work we have developed and tested an experimental technique to measure the thoron generation rate in building material samples using the RAD7 radon-thoron detector. The mathematical model of the measurement technique provides the thoron concentration response of the RAD7 as a function of sample thickness. For experimental validation of the technique, an adobe building material sample was selected and the thoron concentration was measured at nineteen different sample thicknesses. By fitting the parameters of the model to the measurement results, both the generation rate and the diffusion length of thoron were estimated. We have also determined the optimal sample thickness for estimating the thoron generation rate from a single measurement. -- Highlights: • RAD7 is used for the determination of thoron generation rate (emanation). • The described model takes into account thoron decay and attenuation. • The model describes the experimental results well. • A single-point measurement method is offered at a determined sample thickness
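
    The abstract does not give the model in closed form; a common one-dimensional diffusion ansatz has the thoron signal saturating with sample thickness as tanh(d/L). The sketch below fits a generation-rate scale G and diffusion length L to synthetic thickness-response data under that assumed form.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def response(d, G, L):
        """Assumed saturating response of the thoron signal vs sample
        thickness d: generation rate G sets the plateau, diffusion
        length L sets how quickly the plateau is approached."""
        return G * L * np.tanh(d / L)

    # Synthetic stand-in for the nineteen thickness measurements in the paper.
    d = np.linspace(0.5, 10.0, 19)                  # sample thickness, cm
    rng = np.random.default_rng(1)
    y = response(d, G=50.0, L=2.0) * (1 + 0.05 * rng.standard_normal(d.size))

    (G_fit, L_fit), _ = curve_fit(response, d, y, p0=(10.0, 1.0))
    print(f"generation rate ~ {G_fit:.1f}, diffusion length ~ {L_fit:.2f} cm")
    ```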

  16. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
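
    A toy instance of sampling a projected PDF (not the paper's case study): with an i.i.d. standard-normal reference and the sample mean as the feature T(x), drawing z from an assumed p(z) and then x from the Gaussian conditional given T(x) = z produces high-dimensional samples whose feature exactly follows p(z).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8                                   # dimension of x
    num_samples = 10000

    # Assumed low-dimensional feature law p(z): a two-component Gaussian mixture.
    comp = rng.integers(0, 2, num_samples)
    z = np.where(comp == 0,
                 rng.normal(-2.0, 0.5, num_samples),
                 rng.normal( 2.0, 0.5, num_samples))

    # Under an iid N(0,1) reference g(x) with feature T(x) = mean(x),
    # the conditional g(x | mean = z) is z plus mean-centered Gaussian noise.
    eps = rng.standard_normal((num_samples, n))
    x = z[:, None] + (eps - eps.mean(axis=1, keepdims=True))

    # Check: the feature of the generated x reproduces p(z) exactly.
    assert np.allclose(x.mean(axis=1), z)
    ```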

  17. Radon mass exhalation rate in soil samples at South Bengaluru city, Karnataka, India

    International Nuclear Information System (INIS)

    Poojitha, C.G.; Pranesha, T.S.; Ganesh, K.E.; Sahoo, B.K.; Sapra, B.K.

    2017-01-01

    Radon mass exhalation rates in soil samples collected from different locations of South Bengaluru city were measured using the scintillation-based Smart Radon Thoron Monitor (RnDuo). The mass exhalation rate estimated from the radon concentration in the soil samples ranges from 39.18 to 265.58 mBq/kg/h, with an average value of 115.64 mBq/kg/h. Finally, we compare our results with similar investigations from different parts of India. (author)

  18. Analytical methods of leakage rate estimation from a containment under a LOCA

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

    The three most prominent maximum flow rate formulas are identified from many existing models. Outlines of the three limiting mass flow rate models are given, along with computational procedures to estimate the approximate amount of fission products released from a containment to the environment for a given characteristic hole size for containment-isolation failure and for the containment pressure and temperature under a loss of coolant accident. Sample calculations are performed using the critical ideal gas flow rate model and Moody's graphs for the maximum two-phase flow rates, and the results are compared with the values obtained from the mass leakage rate formula of the CONTEMPT-LT code for a converging nozzle and sonic flow. It is shown that the critical ideal gas flow rate formula gives results comparable to those obtained from Moody's model. It is also found that a more conservative approach to estimating the leakage rate from a containment under a LOCA is to use the maximum ideal gas flow rate equation rather than the mass leakage rate formula of CONTEMPT-LT. (author)
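
    For reference, the critical (choked) ideal-gas mass flow rate follows from the isentropic relations; the sketch below evaluates the standard formula for an illustrative hole size and containment state (the specific numbers are assumptions, not values from the paper).

    ```python
    import math

    def choked_mass_flow(A, p0, T0, gamma=1.4, R=287.0, Cd=1.0):
        """Critical (maximum) ideal-gas mass flow rate [kg/s] through an
        orifice of area A [m^2] at stagnation pressure p0 [Pa] and
        stagnation temperature T0 [K]."""
        return (Cd * A * p0 * math.sqrt(gamma / (R * T0))
                * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

    # Illustrative post-LOCA containment state: 1 cm^2 hole, 3 bar, 400 K air.
    print(choked_mass_flow(A=1e-4, p0=3e5, T0=400.0))  # ~0.06 kg/s
    ```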

  19. Effect of milk sample delivery methods and arrival conditions on bacterial contamination rates.

    Science.gov (United States)

    Dinsmore, R P; English, P B; Matthews, J C; Sears, P M

    1990-07-01

    A cross-sectional study was performed of factors believed to contribute to the contamination of bovine milk sample cultures submitted to the Ithaca Regional Laboratory of the Quality Milk Promotion Services/New York State Mastitis Control. Of 871 samples entered in the study, 137 (15.7%) were contaminated. There were interactions between the sample source (veterinarian vs dairyman), delivery method, and time between sample collection and arrival at the laboratory. If only those samples collected and hand delivered by the dairyman within 1 day of collection were compared to a like subset of samples collected and hand delivered by veterinarians, no statistically significant differences in milk sample contamination rate (MSCR) were found. Samples were delivered to the laboratory by hand, US Postal Service, United Parcel Service, via the New York State College of Veterinary Medicine Diagnostic Laboratory, or Northeast Dairy Herd Improvement Association Courier. The MSCR was only 7.6% for hand delivered samples, while 26% of Postal Service samples were contaminated. These rates differed significantly from other delivery methods (P less than 0.0001). The USPS samples arrived a longer time after sampling than did samples sent by other routes, and time had a significant effect on MSCR (0 to 1 day, 8.9%; greater than 1 day, 25.9%; P less than 0.01). Samples packaged with ice packs sent by routes other than the Postal Service had a lower MSCR than those not packaged with ice packs, but ice packs did not reduce the MSCR for samples sent by the Postal Service.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
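
    A minimal gradient-ascent sketch of the regularized MCC idea for a linear predictor with a Gaussian correntropy kernel is shown below; the kernel width, step size, squared-norm regularizer, and synthetic data are illustrative choices, not the paper's alternating algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 5
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = np.sign(X @ w_true)
    y[rng.choice(n, 20, replace=False)] *= -1      # flip 10% of labels (noise)

    sigma, lam, lr = 1.0, 0.1, 0.05
    w = np.zeros(d)
    for _ in range(500):
        r = y - X @ w                              # residuals
        g = np.exp(-r**2 / (2 * sigma**2))         # correntropy weights:
        # outliers (large residuals) get exponentially small influence
        grad = (X.T @ (g * r)) / (sigma**2 * n) - lam * w
        w += lr * grad                             # ascend regularized correntropy
    print("sign agreement:", np.mean(np.sign(X @ w) == np.sign(X @ w_true)))
    ```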

  2. Radium and radon exhalation rate in soil samples of Hassan district of South Karnataka, India

    International Nuclear Information System (INIS)

    Jagadeesha, B.G.; Narayana, Y.

    2016-01-01

    The radon exhalation rate was measured in 32 soil samples collected from the Hassan district of South Karnataka. The radon exhalation rate of the soil samples was measured using the can technique. The results show variation of the radon exhalation rate with the radium content of the soil samples. A strong correlation was observed between effective radium content and radon exhalation rate. In the present work, an attempt was made to assess the levels of radon in the environment of Hassan. Radon activities were found to vary from 2.25±0.55 to 270.85±19.16 Bq m⁻³, and effective radium contents vary from 12.06±2.98 to 1449.56±102.58 mBq kg⁻¹. Surface exhalation rates of radon vary from 1.55±0.47 to 186.43±18.57 mBq m⁻² h⁻¹, and mass exhalation rates of radon vary from 0.312±0.07 to 37.46±2.65 mBq kg⁻¹ h⁻¹. (authors)

  3. Petroleum production at Maximum Efficient Rate Naval Petroleum Reserve No. 1 (Elk Hills), Kern County, California

    International Nuclear Information System (INIS)

    1993-07-01

    This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility covers approximately 17,409 acres (74 square miles) and is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles, in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS)

  4. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  5. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.

  6. Study of sampling rate influence on neutron-gamma discrimination with stilbene coupled to a silicon photomultiplier.

    Science.gov (United States)

    Zhang, Jinglong; Moore, Michael E; Wang, Zhonghai; Rong, Zhou; Yang, Chaowen; Hayward, Jason P

    2017-10-01

    Choosing a digitizer with an appropriate sampling rate is often a trade-off between performance and economy. The influence of sampling rate on neutron-gamma pulse shape discrimination (PSD) with a solid stilbene scintillator coupled to a silicon photomultiplier was investigated in this work. Sampling rates from 125 MSPS to 2 GSPS from a 10-bit digitizer were used to collect detector pulses produced by the interactions of a Cf-252 source. Due to the decreased signal-to-noise ratio (SNR), the PSD performance degraded with reduced sampling rates; the reason for this degradation is discussed. An efficient combination of filtering and digital signal processing (DSP) was then applied to suppress the timing noise and electronic background noise. The results demonstrate improved PSD performance, especially at low sampling rates, down to 125 MSPS. Using filtering and DSP, the Figure of Merit (FOM) at 125 keVee (±10 keVee) increased from 0.95 to 1.02 at 125 MSPS. At 300 keVee and above, all FOMs are better than 2.00. Our study suggests that 250 MSPS is a sufficient sampling rate for neutron-gamma discrimination in this system in order to be sensitive to neutrons at and above ~125 keVee. Copyright © 2017 Elsevier Ltd. All rights reserved.
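
    The FOM quoted above is conventionally defined as the separation between the neutron and gamma peaks of the PSD-parameter histogram divided by the sum of their FWHMs; a sketch under that assumption, with synthetic Gaussian peaks:

    ```python
    import numpy as np

    def psd_fom(gamma_psd, neutron_psd):
        """FOM = peak separation / (FWHM_gamma + FWHM_neutron),
        estimating each FWHM from the sample standard deviation
        (FWHM = 2.355 * sigma for a Gaussian-shaped peak)."""
        sep = abs(np.mean(neutron_psd) - np.mean(gamma_psd))
        fwhm_sum = 2.355 * (np.std(gamma_psd) + np.std(neutron_psd))
        return sep / fwhm_sum

    rng = np.random.default_rng(0)
    gammas = rng.normal(0.20, 0.02, 5000)    # illustrative PSD-parameter values
    neutrons = rng.normal(0.35, 0.03, 2000)
    print(f"FOM = {psd_fom(gammas, neutrons):.2f}")   # ~1.27 here
    ```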

  7. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect of rubber prices on stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
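
    A minimal EM sketch for maximum likelihood fitting of a two-component univariate normal mixture is given below; the data are synthetic stand-ins, not the stock market and rubber price series.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for the data: two latent classes.
    x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

    # EM for a two-component normal mixture (maximum likelihood).
    pi, mu, sd = 0.5, np.array([-0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of component 0 for each point
        # (the common 1/sqrt(2*pi) factor cancels in the ratio).
        p0 = pi * np.exp(-0.5 * ((x - mu[0]) / sd[0])**2) / sd[0]
        p1 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sd[1])**2) / sd[1]
        r = p0 / (p0 + p1)
        # M-step: update mixing weight, means, and standard deviations.
        pi = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.array([np.sqrt(np.average((x - mu[0])**2, weights=r)),
                       np.sqrt(np.average((x - mu[1])**2, weights=1 - r))])
    print(f"pi={pi:.2f}, mu={mu.round(2)}, sd={sd.round(2)}")
    ```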

  8. Sampling and chemical analysis of urban street runoff

    International Nuclear Information System (INIS)

    Daub, J.; Striebel, T.; Robien, A.; Herrmann, R.

    1993-01-01

    In order to characterize the environmentally relevant physical and chemical properties of urban street runoff, an automatic sampling device was developed. Precipitation samples were collected together with runoff samples. Organic and inorganic compounds were analysed in the runoff, with dissolved and particle-bound substances analysed separately. The concentrations in runoff are generally considerably higher than in precipitation. Concentrations of lead, fluoranthene and benzo(a)pyrene, in particular, are higher in runoff at sites with high traffic densities than at sites with low traffic densities. The preceding dry period normally has no effect on the measured concentrations. The typical chemograph of a dissolved substance shows a maximum at the beginning of the event, dropping quickly to a minimum that often coincides with the maximum in runoff rate; a slight rise is observed with decreasing runoff rates at the end of the event. With a mathematical model, chemographs may be described by three terms: - Relatively large amounts of easily soluble material at the beginning of the event, which decrease with increasing runoff (conservative behaviour is assumed). - A part which varies inversely with the runoff rate; this term assumes zero-order kinetics, the amount dissolved from surfaces being constant with time. - A small constant term. Concentrations of particle-bound substances correlate with the amount of total suspended solids. Frequently, a negative correlation between the specific concentration of substances and the concentration of total suspended solids is observed. (orig.)
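
    The three-term description can be cast as a simple function of the runoff rate Q(t) and the cumulative runoff volume; the functional forms below (exponential washoff of a first-flush pool, an inverse-Q dissolution term, and a constant) are one plausible reading of the abstract, with all coefficients illustrative.

    ```python
    import numpy as np

    def chemograph(t, Q, A=5.0, k=0.8, B=2.0, C=0.1):
        """Dissolved concentration c(t) as the sum of three terms:
        1) a first-flush pool A washed out with cumulative runoff V(t),
        2) a zero-order dissolution term varying inversely with Q(t),
        3) a small constant background C."""
        V = np.cumsum(Q) * (t[1] - t[0])          # cumulative runoff volume
        return A * np.exp(-k * V) + B / Q + C

    t = np.linspace(0.1, 10.0, 100)               # hours
    Q = 1.0 + 4.0 * np.exp(-((t - 3.0) / 1.5)**2) # bell-shaped runoff event
    c = chemograph(t, Q)                          # peaks early, dips near peak flow
    ```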

  9. The RBANS Effort Index: base rates in geriatric samples.

    Science.gov (United States)

    Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G

    2011-01-01

    The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.

  10. Should measurement of maximum urinary flow rate and residual urine volume be a part of a "minimal care" assessment programme in female incontinence?

    DEFF Research Database (Denmark)

    Sander, Pia; Mouritsen, L; Andersen, J Thorup

    2002-01-01

    OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume, as part of a "minimal care" assessment programme for women with urinary incontinence, in detecting clinically significant bladder emptying problems. MATERIAL AND METHODS.... Twenty-six per cent had a maximum flow rate less than 15 ml/s, but only 4% at a voided volume > or =200 ml. Residual urine of more than 149 ml was found in 6%. Two women had chronic retention with overflow incontinence. Both had typical symptoms with continuous leakage, stranguria and chronic cystitis

  11. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    Energy Technology Data Exchange (ETDEWEB)

    Liang, A K; Koniczek, M; Antonuk, L E; El-Mohri, Y; Zhao, Q [University of Michigan, Ann Arbor, MI (United States)]

    2016-06-15

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm²) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit
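
    The count-loss figure described above can be illustrated with the textbook dead-time models; the sketch below compares paralyzable and non-paralyzable losses as a function of input rate (the paper itself relies on detailed circuit simulation, so this is only an analogy with an assumed dead time).

    ```python
    import numpy as np

    tau = 0.5e-6                        # assumed per-event dead time, s
    n = np.logspace(4, 7, 4)            # true input rates per pixel, counts/s

    m_nonpar = n / (1.0 + n * tau)      # non-paralyzable output rate
    m_par = n * np.exp(-n * tau)        # paralyzable output rate

    for true, out_np, out_p in zip(n, m_nonpar, m_par):
        print(f"input {true:9.0f} cps -> non-paralyzable {out_np:9.0f} cps, "
              f"paralyzable {out_p:9.0f} cps")
    # At 1 Mcps with tau = 0.5 us, roughly a third of the counts are lost
    # in the non-paralyzable case.
    ```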

  12. The mineralogical phase transformation of invisible gold-concentrate by microwave heating, and enhancement of their gold leaching rate

    Science.gov (United States)

    Bak, Geonyoung; Kim, Bongju; Choi, Nagchoul; Park, Cheonyoung

    2015-04-01

    In this study, in order to obtain the maximum Au leaching rate, an invisible-gold concentrate sample was microwave-treated and a thiourea leaching experiment was performed. Observation with optical and electron microscopes confirmed that the gold exists in invisible form. The longer the invisible-gold concentrate sample was exposed to microwaves, the more its temperature and weight loss increased and the more its S content decreased. The conditions for the maximum Au leaching rate and the fastest leaching were a particle size of -325×400 mesh, exposure to microwaves for 70 minutes, 1.0 g of thiourea, 0.0504 g of sodium sulfite and 0.425 g of ferric sulfate. When the conditions under which Au was leached out to the maximum were applied to the control sample, however, its Au leaching rate reached only 78% to 88%. These results suggest that sodium sulfite and ferric sulfate were more effective in the microwave-treated sample than in the control sample. Therefore, it was confirmed that complete and very fast Au leaching can be achieved by means of microwave pretreatment of invisible-gold concentrate.

  13. Constant strain rate experiments and constitutive modeling for a class of bitumen

    Science.gov (United States)

    Reddy, Kommidi Santosh; Umakanthan, S.; Krishnan, J. Murali

    2012-08-01

    The mechanical properties of bitumen vary with the nature of the crude source and the processing methods employed. To understand the role played by the processing conditions in the mechanical properties, bitumen samples derived from the same crude source but processed differently (blown and blended) are investigated. The samples are subjected to constant strain rate experiments in a parallel plate rheometer. The torque applied to realize the prescribed angular velocity for the top plate and the normal force applied to maintain the gap between the top and bottom plate are measured. It is found that when the top plate is held stationary, the time taken by the torque to be reduced by a certain percentage of its maximum value is different from the time taken by the normal force to decrease by the same percentage of its maximum value. Further, the time at which the maximum torque occurs is different from the time at which the maximum normal force occurs. Since the existing constitutive relations for bitumen cannot capture the difference in the relaxation times for the torque and normal force, a new rate-type constitutive model, incorporating this response, is proposed. Although the blended and blown bitumen samples used in this study correspond to the same grade, the mechanical responses of the two samples are not the same. This is also reflected in the difference in the values of the material parameters in the model proposed. The differences in the mechanical properties between the differently processed bitumen samples increase further with aging. This has implications for the long-term performance of the pavement.

  14. Developments in the Frequency of Ratings and Evaluation Tendencies: A Review of German Physician Rating Websites.

    Science.gov (United States)

    McLennan, Stuart; Strech, Daniel; Reimann, Swantje

    2017-08-25

    Physician rating websites (PRWs) have been developed to allow all patients to rate, comment, and discuss physicians' quality online as a source of information for others searching for a physician. At the beginning of 2010, a sample of 298 randomly selected physicians from the physician associations in Hamburg and Thuringia was searched for on 6 German PRWs to examine the frequency of ratings and evaluation tendencies. The objective of this study was to examine (1) the number of identifiable physicians on German PRWs; (2) the number of rated physicians on German PRWs; (3) the average and maximum number of ratings per physician on German PRWs; (4) the average rating on German PRWs; (5) the website visitor ranking positions of German PRWs; and (6) how these data compare with 2010 results. A random stratified sample of 298 selected physicians from the physician associations in Hamburg and Thuringia was generated. Every selected physician was searched for on the 6 PRWs (Jameda, Imedo, Docinsider, Esando, Topmedic, and Medführer) used in the 2010 study and a PRW, Arztnavigator, launched by Allgemeine Ortskrankenkasse (AOK). The results were as follows: (1) Between 65.1% (194/298) on Imedo and 94.6% (282/298) on AOK-Arztnavigator of the physicians were identified on the selected PRWs. (2) Between 16.4% (49/298) on Esando and 83.2% (248/298) on Jameda of the sample had been rated at least once. (3) The average number of ratings per physician ranged from 1.2 (Esando) to 7.5 (AOK-Arztnavigator). The maximum number of ratings per physician ranged from 3 (Esando) to 115 (Docinsider), indicating an increase compared with the ratings of 2 to 27 in the 2010 study sample. (4) The average converted standardized rating (1=positive, 2=neutral, and 3=negative) ranged from 1.0 (Medführer) to 1.2 (Jameda and Topmedic). (5) Only Jameda (position 317) and Medführer (position 9796) were placed among the top 10,000 visited websites in Germany. Whereas there has been an overall increase in

  16. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized without aliasing. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
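
    The separated ranges of admissible sample rates mentioned above have a closed form: for a band [fL, fH], any fs with 2fH/n ≤ fs ≤ 2fL/(n−1), for integer n up to ⌊fH/(fH−fL)⌋, avoids aliasing. A small enumeration sketch:

    ```python
    import math

    def valid_bandpass_rates(f_low, f_high):
        """Enumerate the admissible sample-rate ranges (Hz) for alias-free
        uniform sampling of a bandpass signal occupying [f_low, f_high]."""
        bandwidth = f_high - f_low
        n_max = math.floor(f_high / bandwidth)
        ranges = []
        for n in range(1, n_max + 1):
            lo = 2.0 * f_high / n
            hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
            if lo <= hi:
                ranges.append((lo, hi))
        return ranges

    # Example: a 1 MHz-wide band from 10 to 11 MHz.
    for lo, hi in valid_bandpass_rates(10e6, 11e6):
        hi_str = "inf" if math.isinf(hi) else f"{hi / 1e6:.2f}"
        print(f"fs in [{lo / 1e6:.2f}, {hi_str}] MHz")
    ```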

  17. The tropical lapse rate steepened during the Last Glacial Maximum

    NARCIS (Netherlands)

    Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A

    The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate

  18. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential

  19. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    Science.gov (United States)

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442

  20. Experimental procedure for the determination of counting efficiency and sampling flow rate of a grab-sampling working level meter

    International Nuclear Information System (INIS)

    Grenier, M.; Bigu, J.

    1982-07-01

    The calibration procedures used for a working level meter (WLM) of the grab-sampling type are presented in detail. The WLM tested is a Pylon WL-1000C working level meter, and it was calibrated for radon/thoron daughter counting efficiency (E), sampling pump flow rate (Q) and other variables of interest. For the instrument calibrated at the Elliot Lake Laboratory, E was 0.22 ± 0.01 while Q was 4.50 ± 0.01 L/min

  1. Investigation of the Maximum Spin-Up Coefficients of Friction Obtained During Tests of a Landing Gear Having a Static-Load Rating of 20,000 Pounds

    Science.gov (United States)

    Batterson, Sidney A.

    1959-01-01

    An experimental investigation was made at the Langley landing loads track to obtain data on the maximum spin-up coefficients of friction developed by a landing gear having a static-load rating of 20,000 pounds. The forward speeds ranged from 0 to approximately 180 feet per second and the sinking speeds, from 2.7 feet per second to 9.4 feet per second. The results indicated the variation of the maximum spin-up coefficient of friction with forward speed and vertical load. Data obtained during this investigation are also compared with some results previously obtained for nonrolling tires to show the effect of forward speed.

  2. Pressure Stimulated Currents (PSC) in marble samples

    Directory of Open Access Journals (Sweden)

    F. Vallianatos

    2004-06-01

    Full Text Available The electrical behaviour of marble samples from Penteli Mountain was studied while they were subjected to uniaxial stress. The application of consecutive impulsive variations of uniaxial stress to thirty connatural samples produced Pressure Stimulated Currents (PSC). The linear relationship between the recorded PSC and the applied variation rate was investigated. The main results are the following: as long as the samples were under pressure corresponding to their elastic region, the maximum PSC value obeyed a linear law with respect to the pressure variation. In the plastic region, deviations were observed which were due to variations of Young's modulus. Furthermore, a special burst form of PSC recordings during failure is presented; the latter is emitted when irregular longitudinal splitting is observed during failure.

  3. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  4. System Identification of a Non-Uniformly Sampled Multi-Rate System in Aluminium Electrolysis Cells

    Directory of Open Access Journals (Sweden)

    Håkon Viumdal

    2014-07-01

    Full Text Available Standard system identification algorithms are usually designed to generate mathematical models with equidistant sampling instants that are equal for both input and output variables. Unfortunately, real industrial data sets are often disrupted by missing samples, variations of sampling rates in the different variables (also known as multi-rate systems), and intermittent measurements. In industries with event-based maintenance or manual operational measures, intermittent measurements are performed, leading to uneven sampling rates. Such is the case with aluminium smelters, where in addition the materials fed into the cell create even more irregularity in sampling; both measurements and feeding are mostly manually controlled. A simplified simulation of the metal level in an aluminium electrolysis cell is performed based on mass balance considerations. System identification methods based on Prediction Error Methods (PEM), such as Ordinary Least Squares (OLS), the sub-space method combined Deterministic and Stochastic system identification and Realization (DSR), and its variants, are applied to the model of a single electrolysis cell as found in aluminium smelters. Aliasing phenomena due to large sampling intervals can be crucial in avoiding unsuitable models, but with knowledge about the system dynamics it is easier to optimize the sampling performance and hence achieve successful models. The simulation studies of molten aluminium height in the cells show that the various algorithms yield models that tally well with the synthetic data sets used. System identification on a smaller data set from a real plant is also implemented in this work. Finally, some concrete suggestions are made for using these models in the smelters.

  5. Comparison of leach results from field and laboratory prepared samples

    International Nuclear Information System (INIS)

    Oblath, S.B.; Langton, C.A.

    1985-01-01

    The leach behavior of saltstone prepared in the laboratory agrees well with that of samples mixed in the field using the Littleford mixer. Leach rates of nitrates and cesium from the current reference formulation saltstone were compared. The laboratory samples were prepared using simulated salt solution; those in the field used Tank 50 decontaminated supernate. For both nitrate and cesium, the field and laboratory samples showed nearly identical leach rates for the first 30 to 50 days. For the remaining period of the test, the field samples showed higher leach rates, with the maximum difference being less than a factor of three. Ruthenium and antimony were present in the Tank 50 supernate in known amounts. Antimony-125 was observed in the leachate, and its fractional leach rate was calculated to be at least a factor of ten less than that of ¹³⁷Cs. No ¹⁰⁶Ru was observed in the leachate, and its release rate was not calculated; however, based on the detection limits of the analysis, the ruthenium leach rate must also be at least a factor of ten less than that of cesium. These data are the first measurements of the leach rates of Ru and Sb from saltstone. The nitrate leach rates for these samples were 5 × 10⁻⁵ grams of nitrate per square cm per day, after 100 days for the laboratory samples and after 200 days for the field samples. These values are consistent with previously measured leach rates for reference formulation saltstone. The relative standard deviation in the leach rate is about 15% for the field samples, which were all produced from one batch of saltstone, and about 35% for the laboratory samples, which came from different batches. These are the first recorded estimates of the error in leach rates for saltstone

  6. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    International Nuclear Information System (INIS)

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities such as the mean value, and of the contour map, obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps constructed using the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 nGy/h and areas over 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent number), and the standard deviation had an accuracy less than 1/4-1/3 of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent number), but that of the contour map only 0.502-0.770. (K.H.)

  7. To the elementary theory of critical (maximum) flow rate of two-phase mixture in channels with various sections

    International Nuclear Information System (INIS)

    Nigmatulin, B.I.; Soplenkov, K.I.

    1978-01-01

    On the basis of the concept of two-phase dispersive flow with various structures (bubble, vapour-drop, etc.), within the framework of a two-speed, two-temperature, one-dimensional stationary model of the flow with provision for phase transitions, the conditions have been determined under which a critical (maximum) flow rate of a two-phase mixture is achieved during its outflow from a channel with a given geometry. It is shown that, for the chosen set of two-phase flow equations with known deceleration and structure parameters, one of two critical conditions is satisfied: either the solution of the set of equations corresponding to a critical flow rate is a singular one, i.e. it passes through a singular point located between the minimum and outlet channel sections where the carrying-phase velocity approaches the speed of sound in the decelerated mixture; or the determinant of the initial set of equations equals zero at the outlet channel section, i.e. the gradients of the main flow parameters tend to ±infinity in this section, and the carrying-phase velocity again approaches the speed of sound in the decelerated mixture

  8. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low-rate samples of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR), which makes it robust against room reverberations. The RIR is considered a sparse phenomenon, and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low-rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR, and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
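
    Once the two room impulse responses have been estimated (by orthogonal clustering in the paper; the sparse reconstruction itself is not reproduced here), extracting the delay reduces to locating the first significant tap of each RIR; a sketch with synthetic sparse RIRs and an assumed 16 kHz rate:

    ```python
    import numpy as np

    def first_arrival(rir, threshold_ratio=0.3):
        """Index of the first tap exceeding a fraction of the RIR's peak,
        taken as the direct-path arrival time (in samples)."""
        return int(np.argmax(np.abs(rir) >= threshold_ratio * np.abs(rir).max()))

    fs = 16000                                   # assumed sampling rate, Hz
    rng = np.random.default_rng(0)

    def synth_rir(direct_idx, length=512):
        """Sparse synthetic RIR: strong direct path plus weak late reflections."""
        h = np.zeros(length)
        h[direct_idx] = 1.0
        taps = rng.integers(direct_idx + 20, length, 10)
        h[taps] = 0.2 * rng.standard_normal(10)
        return h

    h1, h2 = synth_rir(40), synth_rir(55)        # mic 2 hears the source later
    tdoa = (first_arrival(h2) - first_arrival(h1)) / fs
    print(f"estimated time delay: {tdoa * 1e3:.3f} ms")   # ~0.938 ms
    ```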

  9. Calibration of track detectors and measurement of radon exhalation rate from solid samples

    International Nuclear Information System (INIS)

    Singh, Ajay Kumar; Jojo, P.J.; Prasad, Rajendra; Khan, A.J.; Ramachandran, T.V.

    1997-01-01

    CR-39 and LR-115 type II track detectors to be used for radon exhalation measurements have been calibrated. The configurations fitted with detectors in the can technique in the open-cup mode are a cylindrical plastic cup (PC) and a conical plastic cup (CPC). The experiment was performed in a radon exposure chamber containing monodisperse aerosols of 0.2 μm size, to find the relationship between track density and radon concentration. The calibration factors for the PC and CPC type dosimeters with the LR-115 type II detector were found to be 0.056 and 0.083 tracks cm⁻² d⁻¹ (Bq m⁻³)⁻¹, respectively, while with the CR-39 detector the values were 0.149 and 0.150 tracks cm⁻² d⁻¹ (Bq m⁻³)⁻¹. Employing the can technique, measurements of exhalation rates from solid samples used as construction materials are undertaken. The radon exhalation rate is found to be lowest in cement samples, while in fly ash it is not enhanced compared to coal samples. (author)
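
    Given a calibration factor K in tracks cm⁻² d⁻¹ per Bq m⁻³, a measured track density converts directly to the time-averaged radon concentration; a sketch using the factors reported above (the reading and exposure time are hypothetical):

    ```python
    # Calibration factors from the abstract, in tracks cm^-2 d^-1 per Bq m^-3.
    K = {
        ("LR-115", "PC"):  0.056,
        ("LR-115", "CPC"): 0.083,
        ("CR-39",  "PC"):  0.149,
        ("CR-39",  "CPC"): 0.150,
    }

    def radon_concentration(track_density, detector, cup, exposure_days):
        """Time-averaged radon concentration (Bq m^-3) from the net track
        density (tracks cm^-2) accumulated over exposure_days."""
        return track_density / (K[(detector, cup)] * exposure_days)

    # Hypothetical reading: 500 tracks/cm^2 on CR-39 in a PC cup over 90 days.
    print(radon_concentration(500.0, "CR-39", "PC", 90.0))  # ~37 Bq m^-3
    ```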

  10. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors has two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%. To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high frequency noise signals.

  11. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having its own strengths and limitations. However, not only the marker but also the methodology may vary in many ways, including the use of urinary or plasma clearance and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or ⁵¹Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed concordance within 10% of the multiple-sample method in 66.4%, 83.6%, 91.4% and 96.0% of cases at time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend is in parallel with increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  12. GAS SURFACE DENSITY, STAR FORMATION RATE SURFACE DENSITY, AND THE MAXIMUM MASS OF YOUNG STAR CLUSTERS IN A DISK GALAXY. II. THE GRAND-DESIGN GALAXY M51

    International Nuclear Information System (INIS)

    González-Lópezlira, Rosa A.; Pflamm-Altenburg, Jan; Kroupa, Pavel

    2013-01-01

    We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For that flocculent galaxy, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet traveled too far from their birth sites, the poor resolution of the radio data compared to the physical sizes of the clusters results in measured Σ that are likely quite diluted compared to the actual densities relevant for the formation of the clusters.

  13. Study on closed pressure vessel test. Effect of heat rate, sample weight and vessel size on pressure rise due to thermal decomposition; Mippeigata atsuryoku yoki shiken ni kansuru kenkyu. Atsuryoku hassei kyodo ni oyobosu kanetsusokudo, shiryoryo oyobi youki saizu no eikyo

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, Kenji; Akutsu, Yoshiaki; Arai, Mitsuru; Tamura, Masamitsu [The University of Tokyo, Tokyo (Japan). School of Engineering]

    1999-02-28

    We have attempted to devise a new closed pressure vessel test apparatus in order to evaluate the violence of thermal decomposition of self-reactive materials, and have examined some influencing factors, such as heat rate, sample weight, filling factor (sample weight/vessel size) and vessel size, on Pmax (maximum pressure rise) and dP/dt (rate of pressure rise) due to thermal decomposition. As a result, the following decreasing orders of Pmax and dP/dt were found: Pmax: ADCA > BPZ > AIBN > TCP; dP/dt: AIBN > BPZ > ADCA > TCP. Moreover, Pmax was almost unaffected by heat rate, while dP/dt increased with an increase in heat rate in the case of BPZ. Pmax and dP/dt increased with an increase in sample weight, and the degree of increase depended on the kind of material. In addition, it was shown that Pmax and dP/dt increased with an increase in vessel size at a constant filling factor. (author)

  14. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    Science.gov (United States)

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
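
    As a rough illustration of the idea (not the paper's LMI-based design), the sketch below adapts the sampling period from a crude packet-disorder statistic; the window, gain, target, and period limits are invented for the example.

```python
import numpy as np

def disorder_fraction(seq_nums):
    """Fraction of packets arriving with a smaller sequence number than
    their immediate predecessor -- a crude stand-in for the statistical
    characteristics of packet disordering used in the paper."""
    s = np.asarray(seq_nums)
    return float(np.mean(s[1:] < s[:-1])) if s.size > 1 else 0.0

def adapt_period(h, seq_window, target=0.05, gain=0.5,
                 h_min=0.01, h_max=0.5):
    """Lengthen the sampling period when disordering is heavy (congested
    network), shorten it again when the network calms down."""
    d = disorder_fraction(seq_window)
    return float(np.clip(h * (1.0 + gain * (d - target)), h_min, h_max))

# A congested window: several packets arrive out of order
window = [1, 2, 5, 3, 4, 8, 6, 7, 9, 12, 10, 11]
print(adapt_period(0.05, window))  # period grows above 0.05 s
```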

  15. Impact of marine reserve on maximum sustainable yield in a traditional prey-predator system

    Science.gov (United States)

    Paul, Prosenjit; Kar, T. K.; Ghorai, Abhijit

    2018-01-01

    Multispecies fisheries management requires managers to consider the impact of fishing activities on several species, as fishing affects both targeted and non-targeted species, directly or indirectly, in several ways. The intended goal of traditional fisheries management is to achieve maximum sustainable yield (MSY) from the targeted species, which on many occasions affects the targeted species as well as the entire ecosystem. Marine reserves are often acclaimed as a marine ecosystem management tool. Few attempts have been made to generalize the ecological effects of marine reserves on MSY policy. We examine here how MSY and population levels in a prey-predator system are affected by low, medium, and high reserve sizes under different possible scenarios. Our simulations show that for a low reserve area, the value of MSY for prey exploitation is maximum when both prey and predator species have fast movement rates. For a medium reserve size, our analysis revealed that the maximum value of MSY for prey exploitation is obtained when the prey population has a fast movement rate and the predator population has a slow movement rate. For a high reserve area, the maximum value of MSY for prey exploitation is very low compared to its value for low and medium reserves. On the other hand, for low and medium reserve areas, MSY for predator exploitation is maximum when both species have fast movement rates.

  16. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the G^0 law. This paper deals with amplitude data, so the G_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments, and based on order statistics) of the parameters of the G_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data are successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
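
    The alternate (coordinate-wise) optimization idea can be sketched generically: hold one parameter fixed, run a 1-D maximization over the other, and swap. The snippet below applies it to a gamma likelihood as a simple two-parameter stand-in, since the G_A^0 density is considerably more involved; the data and starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

rng = np.random.default_rng(0)
data = gamma.rvs(a=3.0, scale=2.0, size=50, random_state=rng)

def nll(a, scale):
    """Negative log-likelihood of the two-parameter gamma model."""
    return -np.sum(gamma.logpdf(data, a=a, scale=scale))

# Alternate bounded 1-D minimizations over each parameter in turn,
# which avoids the instabilities of a joint 2-D search on small samples.
a, scale = 1.0, 1.0
for _ in range(30):
    a = minimize_scalar(lambda v: nll(v, scale),
                        bounds=(0.01, 50), method='bounded').x
    scale = minimize_scalar(lambda s: nll(a, s),
                            bounds=(0.01, 50), method='bounded').x
print(a, scale)   # should land near the true (3.0, 2.0)
```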

  17. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  18. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  19. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  20. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  1. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    Science.gov (United States)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.

  2. The role of abnormal fetal heart rate in scheduling chorionic villus sampling.

    Science.gov (United States)

    Yagel, S; Anteby, E; Ron, M; Hochner-Celnikier, D; Achiron, R

    1992-09-01

    To assess the value of fetal heart rate (FHR) measurements in predicting spontaneous fetal loss in pregnancies scheduled for chorionic villus sampling (CVS). A prospective descriptive study. Two hospital departments of obstetrics and gynaecology in Israel. 114 women between 9 and 11 weeks gestation scheduled for chorionic villus sampling (CVS). Fetal heart rate was measured by transvaginal Doppler ultrasound and compared with a nomogram established from 75 fetuses. Whenever a normal FHR was recorded, CVS was performed immediately. 106 women had a normal FHR and underwent CVS; two of these pregnancies ended in miscarriage. In five pregnancies no fetal heart beats could be identified and fetal death was diagnosed. In three pregnancies an abnormal FHR was recorded and CVS was postponed; all three pregnancies ended in miscarriage within 2 weeks. Determination of FHR correlated with crown-rump length could be useful in predicting spontaneous miscarriage before performing any invasive procedure late in the first trimester.

  3. Passive Acoustic Source Localization at a Low Sampling Rate Based on a Five-Element Cross Microphone Array

    Directory of Open Access Journals (Sweden)

    Yue Kan

    2015-06-01

    Full Text Available Accurate acoustic source localization at a low sampling rate (less than 10 kHz) is still a challenging problem for small portable systems, especially for a multitasking micro-embedded system. A modification of the generalized cross-correlation (GCC) method with the up-sampling (US) theory is proposed and defined as the US-GCC method, which can improve the accuracy of the time delay of arrival (TDOA) and source location at a low sampling rate. In this work, through the US operation, an input signal with a certain sampling rate can be converted into another signal with a higher sampling rate. Furthermore, the optimal interpolation factor for the US operation is derived according to localization computation time and the standard deviation (SD) of target location estimations. On the one hand, simulation results show that absolute errors of the source locations based on the US-GCC method with an interpolation factor of 15 are approximately from 1/15- to 1/12-times those based on the GCC method, when the initial sampling rates of both methods are 8 kHz. On the other hand, a simple and small portable passive acoustic source localization platform composed of a five-element cross microphone array has been designed and set up in this paper. The experiments on the established platform, which accurately locate a three-dimensional (3D) near-field target at a low sampling rate, demonstrate that the proposed method is workable.
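
    A minimal sketch of the US-GCC idea in Python/NumPy: Fourier (sinc) up-sampling of both channels before cross-correlation lets the correlation peak fall on a finer lag grid, so a sub-sample delay at 8 kHz becomes resolvable. The signal model, noise level, and interpolation factor below are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import resample, fftconvolve

fs, L = 8000, 1024                # low sampling rate (Hz), frame length
true_delay = 11.4 / fs            # non-integer-sample delay in seconds
rng = np.random.default_rng(1)
src = rng.standard_normal(L)

# Delayed copy built by sinc interpolation, plus a little sensor noise
n = np.arange(L)
x1 = src
x2 = np.array([np.dot(src, np.sinc(n - (k - true_delay * fs)))
               for k in range(L)]) + 0.01 * rng.standard_normal(L)

def tdoa_us_gcc(x1, x2, fs, up=15):
    """Cross-correlation TDOA after Fourier up-sampling by factor `up`."""
    u1 = resample(x1, len(x1) * up)
    u2 = resample(x2, len(x2) * up)
    cc = fftconvolve(u2, u1[::-1], mode='full')   # correlation via FFT
    lag = np.argmax(cc) - (len(u1) - 1)
    return lag / (fs * up)

print(tdoa_us_gcc(x1, x2, fs) * 1e3, "ms")  # near 11.4/8000 = 1.425 ms
```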

  4. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
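
    The Lomb-Scargle step translates directly to SciPy; the sketch below estimates HR from a randomly subsampled (roughly 10x "compressed") synthetic pulse waveform. The PPG rate, pulse frequency, and noise level are invented for the example.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
fs, dur, hr_hz = 125, 4.0, 1.5            # PPG rate, 4 s window, 90 bpm
t_full = np.arange(int(fs * dur)) / fs
ppg = np.sin(2 * np.pi * hr_hz * t_full) + 0.3 * rng.standard_normal(t_full.size)

# "Compressive" acquisition: keep a random 1-in-10 subset of the samples
keep = np.sort(rng.choice(t_full.size, size=t_full.size // 10, replace=False))
t_cs, y_cs = t_full[keep], ppg[keep]

# Least-squares spectral fit over a physiological HR band (30-180 bpm)
freqs_hz = np.linspace(0.5, 3.0, 500)
pgram = lombscargle(t_cs, y_cs - y_cs.mean(), 2 * np.pi * freqs_hz)
print(60 * freqs_hz[np.argmax(pgram)])    # estimated HR in bpm, near 90
```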

  5. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    Science.gov (United States)

    Habershon, Scott

    2016-04-12

    In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.

  6. Maximum Smoke Temperature in Non-Smoke Model Evacuation Region for Semi-Transverse Tunnel Fire

    OpenAIRE

    B. Lou; Y. Qiu; X. Long

    2017-01-01

    Smoke temperature distribution in the non-smoke evacuation region under different mechanical smoke exhaust rates in a semi-transverse tunnel fire was studied by FDS numerical simulation in this paper. The effects of fire heat release rate (10 MW, 20 MW, and 30 MW) and exhaust rate (from 0 to 160 m3/s) on the maximum smoke temperature in the non-smoke evacuation region were discussed. Results show that the maximum smoke temperature in the non-smoke evacuation region decreased with smoke exhaust rate. Plug-holing was obse...

  7. Forward flux sampling calculation of homogeneous nucleation rates from aqueous NaCl solutions.

    Science.gov (United States)

    Jiang, Hao; Haji-Akbari, Amir; Debenedetti, Pablo G; Panagiotopoulos, Athanassios Z

    2018-01-28

    We used molecular dynamics simulations and the path sampling technique known as forward flux sampling to study homogeneous nucleation of NaCl crystals from supersaturated aqueous solutions at 298 K and 1 bar. Nucleation rates were obtained for a range of salt concentrations for the Joung-Cheatham NaCl force field combined with the Extended Simple Point Charge (SPC/E) water model. The calculated nucleation rates are significantly lower than the available experimental measurements. The estimates for the nucleation rates in this work do not rely on classical nucleation theory, but the pathways observed in the simulations suggest that the nucleation process is better described by classical nucleation theory than by an alternative interpretation based on Ostwald's step rule, in contrast to some prior simulations of related models. In addition to the size of the NaCl nucleus, we find that the crystallinity of a nascent cluster plays an important role in the nucleation process. Nuclei with high crystallinity were found to have higher growth probability and longer lifetimes, possibly because they are less exposed to hydration water.

  8. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)


    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  9. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method.

    Science.gov (United States)

    Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A

    2016-04-01

    Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase such as the dim light melatonin onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and the total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in the study; they were required to sleep on a fixed baseline schedule for a week, after which they visited the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from the 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.
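
    With an absolute threshold, computing a DLMO reduces to finding the first up-crossing of the melatonin profile and interpolating between the bracketing samples. A minimal sketch, with invented hourly data:

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold=4.0):
    """First time melatonin rises through `threshold`, found by linear
    interpolation between the bracketing samples (absolute-threshold
    method; 3 or 4 pg/mL in the study)."""
    t = np.asarray(times_h, float)
    m = np.asarray(melatonin_pg_ml, float)
    for i in range(1, len(m)):
        if m[i - 1] < threshold <= m[i]:
            frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    return np.nan  # threshold never crossed within the sampling window

# Hourly sampling, 5 h before to 1 h after a 23:00 bedtime
times = np.arange(18, 25)                      # clock hours
mel = np.array([0.8, 1.1, 2.0, 3.5, 6.0, 11.0, 15.0])
print(dlmo(times, mel))                        # 21.2, i.e. about 21:12
```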

  10. Maximum Aerobic Capacity of Underground Coal Miners in India

    Directory of Open Access Journals (Sweden)

    Ratnadeep Saha

    2011-01-01

    Full Text Available Miners' fitness was assessed in terms of maximum aerobic capacity, determined by an indirect method following a standard step test protocol before the miners went down into the mine, taking into consideration the heart rates (telemetric recording) and oxygen consumption (Oxylog-II) of the subjects during exercise at different working rates. Maximal heart rate was derived as 220 − age. Coal miners showed a maximum aerobic capacity within a range of 35-38.3 mL/kg/min. It was also revealed that the oldest miners (50-59 yrs) had the lowest maximal oxygen uptake (34.2 ± 3.38 mL/kg/min) compared to the youngest group (20-29 yrs; 42.4 ± 2.03 mL/kg/min). Maximum aerobic capacity was found to be negatively correlated with age (r = −0.55 and −0.33 for the younger and older groups, respectively) and directly associated with the body weight of the subjects (r = 0.57-0.68, P ≤ 0.001). Carriers showed the maximum cardiorespiratory capacity compared to other miners. Indian miners' VO2max was found to be lower than that of both their mining counterparts abroad and various other non-mining occupational working groups in India.

  11. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  12. Study of radon exhalation and emanation rates from fly ash samples

    International Nuclear Information System (INIS)

    Raj Kumari; Jain, Ravinder; Kant, Krishan; Gupta, Nitin; Garg, Maneesha; Yadav, Mani Kant

    2013-01-01

    Fly ash, a by-product of burnt coal, is a technologically important material used for manufacturing bricks, sheets, and cement, for land filling, etc. The increased interest in measuring radon exhalation and emanation rates in fly ash samples is due to the associated health hazards and environmental pollution, and the same have been measured to assess the radiological impact of radon emanated from fly ash disposal sites. Samples of fly ash from different thermal power stations in northern India and the National Council for Cement and Building Materials (NCB) were collected and analysed for the measurements. For the measurement, alpha-sensitive LR-115 type II plastic track detectors were used. Gamma spectrometry and the can technique were used for the measurements. The experimental data show that fly ash samples emanate radon in significant amounts; consequently, this may result in increased radon levels in dwellings built using fly ash bricks and excessive radiation exposure to workers residing in the surroundings of fly ash dumping sites. (author)

  13. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling of historic height data as well. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  14. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
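
    The r-power concept is easy to approximate by the Monte Carlo strategy the authors compare against. The sketch below does this for independent one-sided z-tests with a single-step Bonferroni adjustment, a simplification of the procedures the package covers; the effect sizes and target power are invented.

```python
import numpy as np
from scipy.stats import norm

def r_power_bonferroni(n_per_arm, deltas, r, alpha=0.05, reps=20000):
    """Monte Carlo r-power: probability of rejecting at least r false
    null hypotheses among m one-sided two-sample z-tests at a
    Bonferroni-adjusted level (independent endpoints assumed)."""
    rng = np.random.default_rng(0)
    deltas = np.asarray(deltas, float)          # standardized effects
    ncp = np.sqrt(n_per_arm / 2.0) * deltas     # noncentrality per test
    z = rng.standard_normal((reps, deltas.size)) + ncp
    zcrit = norm.ppf(1 - alpha / deltas.size)
    return np.mean((z > zcrit).sum(axis=1) >= r)

# Sample-size search: smallest n giving 80% 2-power for three endpoints
for n in range(40, 201, 10):
    if r_power_bonferroni(n, [0.5, 0.5, 0.4], r=2) >= 0.80:
        print(n)
        break
```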

  15. Estimated neutron-activation data for TFTR. Part II. Biological dose rate from sample-materials activation

    International Nuclear Information System (INIS)

    Ku, L.; Kolibal, J.G.

    1982-06-01

    The neutron-induced material activation dose rate data are summarized for TFTR operation. This report marks the completion of the second phase of the systematic study of the activation problem on the TFTR. The estimates of the neutron-induced activation dose rates were made for spherical and slab objects, based on a point kernel method, for a wide range of materials. The dose rates as a function of cooling time for standard samples are presented for a number of typical neutron spectra expected during TFTR DD and DT operations. The factors which account for the variations of the pulsing history, the characteristic size of the object, and the distance of observation relative to the standard samples are also presented

  16. Comparison of dechlorination rates for field DNAPL vs synthetic samples: effect of sample matrix

    Science.gov (United States)

    O'Carroll, D. M.; Sakulchaicharoen, N.; Herrera, J. E.

    2015-12-01

    Nanometals have received significant attention in recent years due to their ability to rapidly destroy numerous priority source zone contaminants in controlled laboratory studies. This has led to great optimism surrounding nanometal particle injection for in situ remediation. Reported dechlorination rates vary widely among different investigators. These differences have been ascribed to differences in the iron types (granular, micro-, or nano-sized iron), matrix solution chemistry, and the morphology of the nZVI surface. Among these, the effects of solution chemistry on rates of reductive dechlorination of various chlorinated compounds have been investigated in several short-term laboratory studies. Variables investigated include the effect of anions or groundwater solutes such as SO4-2, Cl-, NO3-, pH, natural organic matter (NOM), surfactants, and humic acid on the dechlorination of various chlorinated compounds such as TCE, carbon tetrachloride (CT), and chloroform (CF). These studies have normally centered on the assessment of nZVI reactivity toward dechlorination of an isolated individual contaminant spiked into a groundwater sample under ideal conditions, with limited work conducted using real field samples. In this work, the DNAPL used for the dechlorination study was obtained from a contaminated site. This approach was selected to adequately simulate a condition where the nZVI suspension was in direct contact with DNAPL and to isolate the dechlorination activity shown by the nZVI from groundwater matrix effects. An ideal system, a "synthetic DNAPL" composed of a mixture of chlorinated compounds mimicking the composition of the actual DNAPL, was also dechlorinated to evaluate the DNAPL "matrix effect" on nZVI dechlorination activity. This approach allowed us to evaluate the effect of the presence of different types of organic compounds (volatile fatty acids and humic acids) found in the actual DNAPL on nZVI dechlorination activity. This presentation will

  17. TEM sample preparation by femtosecond laser machining and ion milling for high-rate TEM straining experiments

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Thomas; Grapes, Michael D. [Dept. of Materials Science and Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Zhang, Yong [Dept. of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Lorenzo, Nicholas; Ligda, Jonathan; Schuster, Brian [US Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD 21005 (United States); Weihs, Timothy P. [Dept. of Materials Science and Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States)

    2017-04-15

    To model mechanical properties of metals at high strain rates, it is important to visualize and understand their deformation at the nanoscale. Unlike post mortem Transmission Electron Microscopy (TEM), which allows one to analyze defects within samples before or after deformation, in situ TEM is a powerful tool that enables imaging and recording of deformation and the associated defect motion during mechanical loading. Unfortunately, all current in situ TEM mechanical testing techniques are limited to quasi-static strain rates. In this context, we are developing a new test technique that utilizes a rapid straining stage and the Dynamic TEM (DTEM) at the Lawrence Livermore National Laboratory (LLNL). The new straining stage can load samples in tension at strain rates as high as 4×10³/s using two piezoelectric actuators operating in bending, while the DTEM at LLNL can image in movie mode with a time resolution as short as 70 ns. Given that the piezoelectric actuators are limited in force, speed, and displacement, we have developed a method for fabricating TEM samples with small cross-sectional areas to increase the applied stresses and short gage lengths to raise the applied strain rates and to limit the areas of deformation. In this paper, we present our effort to fabricate such samples from bulk materials. The new sample preparation procedure combines femtosecond laser machining and ion milling to obtain 300 µm wide samples with control of both the size and location of the electron transparent area, as well as the gage cross-section and length. - Highlights: • Tensile straining TEM specimens made by femtosecond laser machining and ion milling. • Accurate positioning of the electron transparent area within a controlled gauge region. • Optimization of femtosecond laser and ion milling parameters. • Fast production of numerous samples with a highly repeatable geometry.

  18. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  19. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Full Text Available Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we sinc-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
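
    For intuition, a bivariate Granger test reduces to comparing restricted and full least-squares AR fits with an F-statistic. This generic sketch (not the InI analysis pipeline) shows the mechanics on synthetic data where x drives y; the lag order and coefficients are invented.

```python
import numpy as np

def granger_f(x, y, p=4):
    """F-statistic for 'x Granger-causes y', from least-squares fits of
    the restricted (own lags only) and full (own + x lags) models."""
    n = len(y)
    Y = y[p:]
    Xr = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    Xf = np.column_stack([Xr] + [x[p - k:n - k] for k in range(1, p + 1)])
    add1 = lambda A: np.column_stack([np.ones(len(A)), A])
    rss = lambda A: np.sum(
        (Y - add1(A) @ np.linalg.lstsq(add1(A), Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    df2 = len(Y) - 2 * p - 1
    return ((rss_r - rss_f) / p) / (rss_f / df2)

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):                 # y driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.5 * rng.standard_normal()
print(granger_f(x, y), granger_f(y, x))  # large vs ~1: x -> y only
```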

  20. ON THE MAXIMUM MASS OF STELLAR BLACK HOLES

    International Nuclear Information System (INIS)

    Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.

    2010-01-01

    We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest-mass BHs observed in the Galaxy, M_bh ∼ 15 M_sun, in the high-metallicity environment (Z = Z_sun = 0.02) can be explained with stellar models and the wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ∼10^-4 M_sun yr^-1 and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_sun = 0.006) is M_bh,max = 30 M_sun. This is a rather striking finding, as the mass of the most massive known stellar BH is M_bh = 23-34 M_sun and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in the very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_sun (Z = 0.01 Z_sun = 0.0002). It is interesting to note that the X-ray luminosity from Eddington-limited accretion onto an 80 M_sun BH is of the order of ∼10^40 erg s^-1 and is comparable to the luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

  1. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...

  2. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    Science.gov (United States)

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.
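
    A compact sketch of the fuzzy variant: the hard tolerance test of ordinary sample entropy is replaced by a smooth Gaussian membership, which is what stabilizes the statistic on short records. The parameter choices below (m = 2, r = 0.2 after normalization) are conventional defaults, not the paper's settings.

```python
import numpy as np

def fuzzy_sampen(x, m=2, r=0.2):
    """Fuzzy similarity entropy: like sample entropy, but the binary
    tolerance test is replaced by the membership exp(-(d/r)^2)."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)

    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        sim = np.exp(-(d / r) ** 2)
        np.fill_diagonal(sim, 0.0)        # exclude self-matches
        return sim.sum() / (len(templ) * (len(templ) - 1))

    return -np.log(phi(m + 1) / phi(m))

rng = np.random.default_rng(4)
print(fuzzy_sampen(rng.standard_normal(500)))        # high complexity
print(fuzzy_sampen(np.sin(np.arange(500) * 0.1)))    # low complexity
```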

  3. Baseline heart rate, sensation seeking, and aggression in young adult women: a two-sample examination.

    Science.gov (United States)

    Wilson, Laura C; Scarpa, Angela

    2013-01-01

    Although substantial literature discusses sensation seeking as playing a role in the relationship between baseline heart rate and aggression, few published studies have tested the relationships among these variables. Furthermore, most prior studies have focused on risk factors of aggression in men and have largely ignored this issue in women. Two samples (n = 104; n = 99) of young adult women completed measures of resting heart rate, sensation seeking, and aggression. Across the two samples of females there was no evidence for the relationships of baseline heart rate with sensation seeking or with aggression that has been consistently shown in males. Boredom susceptibility and disinhibition subscales of sensation seeking were consistently significantly correlated with aggression. The lack of significance and the small effect sizes indicate that other mechanisms are also at work in affecting aggression in young adult women. Finally, it is important to consider the type of sensation seeking in relation to aggression, as only boredom susceptibility and disinhibition were consistently replicated across samples. © 2013 Wiley Periodicals, Inc.

  4. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk-run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate versus VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk-run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...
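
    The linear-extrapolation predictor ("predicted L/E") can be sketched in a few lines: fit the submaximal HR-VO2 line and evaluate it at the age-predicted maximal heart rate (220 − age). The stage values below are invented for illustration.

```python
import numpy as np

# Submaximal cycle test: VO2 rises roughly linearly with heart rate,
# so fit a line to the submaximal stages and extrapolate to HRmax.
age = 22
hr  = np.array([110, 125, 140, 155])   # beats/min at four workloads
vo2 = np.array([1.6, 2.0, 2.4, 2.8])   # L/min measured at those stages

slope, intercept = np.polyfit(hr, vo2, 1)
hr_max = 220 - age                      # age-predicted maximal HR
vo2max = slope * hr_max + intercept
print(f"predicted VO2max ~ {vo2max:.1f} L/min at HRmax {hr_max}")
```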

  5. A tube seepage meter for in situ measurement of seepage rate and groundwater sampling

    Science.gov (United States)

    Solder, John; Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip

    2016-01-01

    We designed and evaluated a “tube seepage meter” for point measurements of vertical seepage rates (q), collecting groundwater samples, and estimating vertical hydraulic conductivity (K) in streambeds. Laboratory testing in artificial streambeds show that seepage rates from the tube seepage meter agreed well with expected values. Results of field testing of the tube seepage meter in a sandy-bottom stream with a mean seepage rate of about 0.5 m/day agreed well with Darcian estimates (vertical hydraulic conductivity times head gradient) when averaged over multiple measurements. The uncertainties in q and K were evaluated with a Monte Carlo method and are typically 20% and 60%, respectively, for field data, and depend on the magnitude of the hydraulic gradient and the uncertainty in head measurements. The primary advantages of the tube seepage meter are its small footprint, concurrent and colocated assessments of q and K, and that it can also be configured as a self-purging groundwater-sampling device.
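
    The Monte Carlo propagation is straightforward to reproduce in outline: draw seepage rates and head differences from their measurement-error distributions and push them through Darcy's law. All numbers below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# Illustrative field values: seepage rate q, head difference dh over a
# vertical piezometer spacing dz, with assumed 1-sigma errors.
q  = rng.normal(0.50, 0.05, N)     # m/day (~10% measurement error)
dh = rng.normal(0.02, 0.005, N)    # m (head-reading uncertainty)
dz = 0.30                          # m, spacing (treated as exact)

K = q / (dh / dz)                  # Darcy's law: q = K * (dh/dz)

for name, v in (("q", q), ("K", K)):
    print(f"{name}: mean {np.mean(v):.2f}, relative uncertainty "
          f"{100 * np.std(v) / np.mean(v):.0f}%")
```

    The design point the abstract makes falls out directly: K inherits the relative errors of both q and the head gradient, so its uncertainty is always larger than that of q alone.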

  6. Optical ages indicate the southwestern margin of the Green Bay Lobe in Wisconsin, USA, was at its maximum extent until about 18,500 years ago

    Science.gov (United States)

    Attig, J.W.; Hanson, P.R.; Rawling, J.E.; Young, A.R.; Carson, E.C.

    2011-01-01

    Samples for optical dating were collected to estimate the time of sediment deposition in small ice-marginal lakes in the Baraboo Hills of Wisconsin. These lakes formed high in the Baraboo Hills when drainage was blocked by the Green Bay Lobe when it was at or very near its maximum extent. Therefore, these optical ages provide control for the timing of the thinning and recession of the Green Bay Lobe from its maximum position. Sediment that accumulated in four small ice-marginal lakes was sampled and dated. Difficulties with field sampling and estimating dose rates made the interpretation of optical ages derived from samples from two of the lake basins problematic. Samples from the other two lake basins (South Bluff and Feltz basins) responded well during laboratory analysis and showed reasonably good agreement between the multiple ages produced at each site. These ages averaged 18.2 ka (n = 6) and 18.6 ka (n = 6), respectively. The optical ages from these two lake basins where we could carefully select sediment samples provide firm evidence that the Green Bay Lobe stood at or very near its maximum extent until about 18.5 ka. The persistence of ice-marginal lakes in these basins high in the Baraboo Hills indicates that the ice of the Green Bay Lobe had not experienced significant thinning near its margin prior to about 18.5 ka. These ages are the first to directly constrain the timing of the maximum extent of the Green Bay Lobe and the onset of deglaciation in the area for which the Wisconsin Glaciation was named. © 2011 Elsevier B.V.

  7. Maximum Urine Flow Rate of Less than 15ml/Sec Increasing Risk of Urine Retention and Prostate Surgery among Patients with Alpha-1 Blockers: A 10-Year Follow Up Study.

    Directory of Open Access Journals (Sweden)

    Hsin-Ho Liu

    Full Text Available The aim of this study was to determine the subsequent risk of acute urine retention and prostate surgery in patients receiving alpha-1 blocker treatment and having a maximum urinary flow rate of less than 15 ml/sec. We identified patients who were diagnosed with benign prostate hyperplasia (BPH) and had a maximum uroflow rate of less than 15 ml/sec between 1 January 2002 and 31 December 2011 from Taiwan's National Health Insurance Research Database as the study group (n = 303). The control cohort included four BPH/LUTS patients without 5ARI use for each study group patient, randomly selected from the same dataset (n = 1,212). Each patient was monitored to identify those who subsequently underwent prostate surgery or developed acute urine retention. Prostate surgery or acute urine retention was detected in 5.9% of the control group and 8.3% of the study group during the 10-year follow-up. Compared with the control group, there was an increase in the risk of prostate surgery and acute urine retention in the study group (HR = 1.83, 95% CI: 1.16 to 2.91) after adjusting for age, comorbidities, geographic region, and socioeconomic status. A maximum urine flow rate of less than 15 ml/sec is a risk factor for urinary retention and subsequent prostate surgery in BPH patients receiving alpha-1 blocker therapy. This result can provide a reference for clinicians.

  8. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
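
    Two building blocks of such a design are easy to sketch: estimating the proportion of true nulls from the interim p-values (here with Storey's estimator) and counting Benjamini-Hochberg rejections at the chosen FDR level. The simulated interim data and effect sizes are invented; the paper's actual reassessment rule is more elaborate.

```python
import numpy as np
from scipy.stats import norm

def storey_pi0(pvals, lam=0.5):
    """Storey's estimate of the proportion of true null hypotheses."""
    p = np.asarray(pvals)
    return min(1.0, np.mean(p > lam) / (1.0 - lam))

def bh_rejections(pvals, q=0.05):
    """Number of Benjamini-Hochberg rejections at FDR level q."""
    p = np.sort(pvals)
    m = p.size
    ok = np.nonzero(p <= q * np.arange(1, m + 1) / m)[0]
    return 0 if ok.size == 0 else int(ok[-1]) + 1

# Interim (first-stage) data: m genes, 10% with a standardized effect
rng = np.random.default_rng(6)
m, n1, delta = 2000, 10, 1.0
z1 = rng.standard_normal(m)
z1[: m // 10] += delta * np.sqrt(n1)   # shifted alternatives
p1 = norm.sf(z1)                       # one-sided first-stage p-values
print("estimated pi0:", storey_pi0(p1),
      " BH rejections:", bh_rejections(p1))
```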

  9. ORIGINAL ARTICLES Surgical practice in a maximum security prison

    African Journals Online (AJOL)

    Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) .... HIV positivity rate and the use of the rectum to store foreign objects. ... fruit in sunlight. Other positive health-promoting factors may also play a role.

  10. Measurement of Radon Exhalation Rate in Sand Samples from Gopalpur and Rushikulya Beach Orissa, Eastern India

    Science.gov (United States)

    Mahur, Ajay Kumar; Sharma, Anil; Sonkawade, R. G.; Sengupta, D.; Sharma, A. C.; Prasad, Rajendra

    Natural radioactivity is widespread in the earth's environment and exists in various geological formations such as soils, rocks, water, and sand. The measurement of the activities of the naturally occurring radionuclides 226Ra, 232Th, and 40K is important for the estimation of radiation risk and has been a subject of interest for research scientists all over the world. Building construction materials and the soil beneath a house are the main sources of radon inside dwellings. The radon exhalation rate from building materials such as cement, sand, and concrete is a major source of radiation to the inhabitants. In the present study, radon exhalation rates in sand samples collected from the Gopalpur and Rushikulya beach placer deposits in Orissa were measured using the "sealed can technique" with LR-115 type II nuclear track detectors. Samples from Rushikulya beach show radon activities varying from 389 ± 24 to 997 ± 38 Bq m⁻³ with an average value of 549 ± 28 Bq m⁻³. Surface exhalation rates in these samples are found to vary from 140 ± 9 to 359 ± 14 mBq m⁻² h⁻¹ with an average value of 197 ± 10 mBq m⁻² h⁻¹, whereas mass exhalation rates vary from 5 ± 0.3 to 14 ± 0.5 mBq kg⁻¹ h⁻¹ with an average value of 8 ± 0.4 mBq kg⁻¹ h⁻¹. In samples from Gopalpur, radon activities are found to vary from 371 ± 23 to 800 ± 34 Bq m⁻³ with an average value of 549 ± 28 Bq m⁻³. Surface exhalation rates in these samples are found to vary from 133 ± 8 to 288 ± 12 mBq m⁻² h⁻¹ with an average value of 197 ± 10 mBq m⁻² h⁻¹, whereas mass exhalation rates vary from 5 ± 0.3 to 11 ± 1 mBq kg⁻¹ h⁻¹ with an average value of 8 ± 0.4 mBq kg⁻¹ h⁻¹.

  11. Rating Movies and Rating the Raters Who Rate Them.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth

    2009-11-01

    The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data.
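
    The abstract does not spell out the model, so the sketch below uses a simple additive stand-in (overall mean + rater bias + movie effect) fit by alternating closed-form updates on the observed entries. It captures the flavor of re-ranking movies, flagging quirky raters, and imputing missing ratings, without claiming to be the authors' EM/MM algorithm; all sizes and noise levels are invented.

```python
import numpy as np

# Toy additive model: r_ij ~ mu + a_i (rater bias) + b_j (movie effect)
rng = np.random.default_rng(7)
n_raters, n_movies = 200, 50
a_true = rng.normal(0, 0.5, n_raters)
b_true = rng.normal(0, 1.0, n_movies)
R = 3.0 + a_true[:, None] + b_true[None, :] \
    + rng.normal(0, 0.3, (n_raters, n_movies))
mask = rng.random(R.shape) < 0.2          # only 20% of ratings observed

mu, a, b = R[mask].mean(), np.zeros(n_raters), np.zeros(n_movies)
for _ in range(50):                        # alternating least squares
    resid = np.where(mask, R - mu - b[None, :], 0.0)
    a = resid.sum(1) / np.maximum(mask.sum(1), 1)
    resid = np.where(mask, R - mu - a[:, None], 0.0)
    b = resid.sum(0) / np.maximum(mask.sum(0), 1)

pred = mu + a[:, None] + b[None, :]        # imputed ratings, all pairs
print(np.corrcoef(b, b_true)[0, 1])        # movie effects recovered
```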

  12. Accretion rate of extraterrestrial ⁴¹Ca in Antarctic snow samples

    Energy Technology Data Exchange (ETDEWEB)

    Gómez-Guzmán, J.M., E-mail: jose.gomez@ph.tum.de [Technische Universität München, Fakultät für Physik, James-Franck-Strasse 1, 85748 Garching (Germany); Bishop, S.; Faestermann, T.; Famulok, N.; Fimiani, L.; Hain, K.; Jahn, S.; Korschinek, G.; Ludwig, P. [Technische Universität München, Fakultät für Physik, James-Franck-Strasse 1, 85748 Garching (Germany); Rodrigues, D. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica (Argentina)

    2015-10-15

    Interplanetary Dust Particles (IDPs) are small grains, generally less than a few hundred micrometers in size. Their main source is the Asteroid Belt, located at 3 AU from the Sun, between Mars and Jupiter. During their flight from the Asteroid Belt to the Earth they are irradiated by galactic and solar cosmic rays (GCR and SCR), and thus radionuclides are formed, such as ⁴¹Ca and ⁵³Mn. Therefore, ⁴¹Ca (T_1/2 = 1.03 × 10⁵ yr) can be used as a key tracer to determine the accretion rate of IDPs onto the Earth, because there are no significant terrestrial sources for this radionuclide. The first step of this study consisted of calculating the production rate of ⁴¹Ca in IDPs accreted by the Earth during their travel from the Asteroid Belt. This production rate, used together with the ⁴¹Ca/⁴⁰Ca ratios that will be measured in snow samples from Antarctica, will be used to calculate the amount of extraterrestrial material accreted by the Earth per year. The challenges for this project are, first, that the flight time needed by the IDPs to travel from the Asteroid Belt to the Earth is much longer than the ⁴¹Ca half-life, which yields an early saturation of the ⁴¹Ca/⁴⁰Ca ratio, and second, the importance of selecting the correct sampling site to avoid a high influx of natural ⁴⁰Ca, preventing dilution of the ⁴¹Ca/⁴⁰Ca ratio, the quantity measured by AMS.

  13. Evaluation of IOM personal sampler at different flow rates.

    Science.gov (United States)

    Zhou, Yue; Cheng, Yung-Sung

    2010-02-01

    The Institute of Occupational Medicine (IOM) personal sampler is usually operated at a flow rate of 2.0 L/min, the rate at which it was designed and calibrated, for sampling the inhalable mass fraction of airborne particles in occupational environments. In an environment of low aerosol concentrations only small amounts of material are collected, and that may not be sufficient for analysis. Recently, a new sampling pump with a flow rate up to 15 L/min became available for personal samplers, with the potential of operating at higher flow rates. The flow rate of a Leland Legacy sampling pump, which operates at high flow rates, was evaluated and calibrated, and its maximum flow was found to be 10.6 L/min. IOM samplers were placed on a mannequin, and sampling was conducted in a large aerosol wind tunnel at wind speeds of 0.56 and 2.22 m/s. Monodisperse aerosols of oleic acid tagged with sodium fluorescein in the size range of 2 to 100 µm were used in the test. The IOM samplers were operated at flow rates of 2.0 and 10.6 L/min. Results showed that the IOM samplers mounted in the front of the mannequin had a higher sampling efficiency than those mounted at the side and back, regardless of the wind speed and flow rate. For the wind speed of 0.56 m/s, the direction-averaged (the average value of all orientations facing the wind direction) sampling efficiency of the samplers operated at 2.0 L/min was slightly higher than that at 10.6 L/min. For the wind speed of 2.22 m/s, the sampling efficiencies at both flow rates were similar for particles < 60 µm. The results also show that the IOM's sampling efficiency at these two different flow rates follows the inhalable mass curve for particles in the size range of 2 to 20 µm. The test results indicate that the IOM sampler can be used at higher flow rates.

  14. Assessment of glomerular filtration rate measurement with plasma sampling: a technical review.

    Science.gov (United States)

    Murray, Anthony W; Barnfield, Mark C; Waller, Michael L; Telford, Tania; Peters, A Michael

    2013-06-01

    This article reviews available radionuclide-based techniques for glomerular filtration rate (GFR) measurement, focusing on clinical indications for GFR measurement, ideal GFR radiopharmaceutical tracer properties, and the 2 most common tracers in clinical use. Methods for full, 1-compartment, and single-sample renal clearance characterization are discussed. GFR normalization and the role of GFR measurement in chemotherapy dosing are also considered.

  15. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version]

    Science.gov (United States)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
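
    Under the Gaussian sensor-noise assumption implied by a maximum-likelihood treatment, estimating the total mass from force and acceleration telemetry reduces to a least-squares fit of F ≈ (m0 + ms)·a. A minimal sketch; the spacecraft mass, noise level and acceleration profile below are hypothetical, not the report's values:

        import numpy as np

        rng = np.random.default_rng(1)
        m_bus = 500.0                        # known spacecraft mass, kg (hypothetical)
        m_sample = 1.0                       # true collected sample mass, kg
        a = rng.uniform(0.01, 0.05, 200)     # thruster-induced accelerations, m/s^2
        f = (m_bus + m_sample) * a           # ideal force-sensor readings, N
        f += rng.normal(0.0, 0.2, a.size)    # additive Gaussian sensor noise, N

        # With Gaussian noise, the ML estimate of the total mass is least squares:
        m_total_hat = np.dot(a, f) / np.dot(a, a)
        print(f"estimated sample mass: {1e3 * (m_total_hat - m_bus):.0f} g")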

  16. Extracting volatility signal using maximum a posteriori estimation

    Science.gov (United States)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in its realizations, and hence heavy-tailed marginal distributions of the log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
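
    Because the state is an HMM, a MAP path can be computed exactly on a discretized log-volatility grid by Viterbi-style dynamic programming, where the double-exponential (Laplace) prior becomes an L1 penalty on state jumps. A minimal sketch under that discretization; the Gaussian observation cost, the grid, sigma and lambda are illustrative assumptions, not the paper's exact specification:

        import numpy as np

        def map_logvol(y, grid, sigma=0.5, lam=5.0):
            # Minimize sum (y_t - h_t)^2 / (2 sigma^2) + lam * sum |h_t - h_{t-1}|
            # over grid-valued paths h; the L1 term is the Laplace state prior.
            T, K = len(y), len(grid)
            jump = lam * np.abs(grid[:, None] - grid[None, :])      # prev x next cost
            obs = (y[:, None] - grid[None, :])**2 / (2 * sigma**2)  # T x K data cost
            cost, back = obs[0].copy(), np.zeros((T, K), dtype=int)
            for t in range(1, T):
                tot = cost[:, None] + jump
                back[t] = np.argmin(tot, axis=0)
                cost = tot[back[t], np.arange(K)] + obs[t]
            path = np.zeros(T, dtype=int)
            path[-1] = np.argmin(cost)
            for t in range(T - 1, 0, -1):
                path[t - 1] = back[t][path[t]]
            return grid[path]

        # Usage on a crude log-volatility proxy built from returns r:
        # h_hat = map_logvol(np.log(r**2 + 1e-12), np.linspace(-12.0, -2.0, 81))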

  17. A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines

    DEFF Research Database (Denmark)

    Fischer, Asja; Igel, Christian

    2015-01-01

    Training of restricted Boltzmann machines (RBMs) relies heavily on sampling. Parallel tempering (PT), an MCMC method that maintains several replicas of the original chain at higher temperatures, has been successfully applied for RBM training. We present the first analysis of the convergence rate of PT for sampling from binary RBMs. The resulting bound on the rate of convergence of the PT Markov chain shows an exponential dependency on the size of one layer and the absolute values of the RBM parameters. It is minimized by a uniform spacing of the inverse temperatures, which is often used in practice. Similarly to the derivation of bounds on the approximation error for contrastive divergence learning, our bound on the mixing time implies an upper bound on the error of the gradient approximation when the method is used for RBM training.

  18. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    Science.gov (United States)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to the requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution that builds a wireless SHM technique around acoustic emission (AE), with field deployment on the structure of a wind turbine. The solution requires only a sampling rate below the Nyquist rate. In addition, features extracted from the aliased AE signals, rather than signals reconstructed on board the wireless nodes, are used to monitor AE events, such as wind, rain, strong hail, and bird strikes, under different environmental conditions and in conjunction with artificial AE sources. A time-domain feature extraction algorithm, together with principal component analysis (PCA), is used to extract and classify the relevant information, which in turn is used to recognise the test condition represented by the response signals. The proposed technique yields a significant data reduction during the monitoring of wind turbine blades.
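
    A minimal sketch of the feature-then-PCA pipeline described above: simple time-domain features are computed per (possibly aliased) AE frame, projected onto principal components, and classified by nearest class centroid. The particular feature set and the centroid classifier are illustrative assumptions, not the chapter's exact algorithm:

        import numpy as np

        def frame_features(x):
            # A few cheap time-domain features of one aliased AE frame.
            zc = (np.diff(np.signbit(x).astype(int)) != 0).mean()  # zero crossings
            return np.array([x.max() - x.min(),            # peak-to-peak
                             np.sqrt(np.mean(x**2)),       # RMS energy
                             np.mean(np.abs(np.diff(x))),  # mean absolute slope
                             zc])

        def fit_pca(F, ncomp=2):
            # PCA of the feature matrix F (n_frames x n_features) via SVD.
            mu = F.mean(axis=0)
            Vt = np.linalg.svd(F - mu, full_matrices=False)[2]
            return mu, Vt[:ncomp]

        # Training: scores = (F - mu) @ W.T, one centroid per labelled event class
        # (wind, rain, hail, artificial source, ...); a new frame is assigned to
        # the class whose centroid is nearest in the PCA score space.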

  19. Walking, body mass index, and self-rated health in a representative sample of Spanish adults

    Directory of Open Access Journals (Sweden)

    Vicente Romo-Perez

    2016-01-01

    Obesity and physical inactivity (PI) are risk factors for chronic diseases and are associated with lifestyle and environmental factors. The study tested the association between PI, body mass index (BMI), and self-rated health in a representative sample of the Spanish adult population (N = 21,486). The sample included 41.5% men, with mean age 52.3 years (± 18.03) and age range 20-82 years. Prevalence of overweight/obesity was 34.2%/12.7% in women and 52.1%/12.7% in men (p < 0.001 for obesity in both sexes). 53% of women and 57.5% of men met recommended levels of physical activity by walking (≥ 150 minutes/week). According to logistic regression analysis, individuals who walked less had a higher risk of overweight or obesity. Data from this population-based surveillance study support suggestions that regular walking by adults is associated with positive self-rated health and a better BMI profile, whereas obesity and low/very low self-rated health are associated with lower rates of meeting the walking recommendations.

  20. Suicidal Behaviors among Adolescents in Puerto Rico: Rates and Correlates in Clinical and Community Samples

    Science.gov (United States)

    Jones, Jennifer; Ramirez, Rafael Roberto; Davies, Mark; Canino, Glorisa; Goodwin, Renee D.

    2008-01-01

    This study examined rates and correlates of suicidal behavior among youth on the island of Puerto Rico. Data were drawn from two probability samples, one clinical (n = 736) and one community-based sample (n = 1,896), of youth ages 12 to 17. Consistent with previous studies in U.S. mainland adolescent populations, our results demonstrate that most…

  1. Ageing effects of polymers at very low dose-rates

    International Nuclear Information System (INIS)

    Chenion, J.; Armand, X.; Berthet, J.; Carlin, F.; Gaussens, G.; Le Meur, M.

    1987-10-01

    The irradiation dose-rate to equipment inside the containment varies from 10{sup -6} to 10{sup -4} gray per second for the most exposed materials. During qualification, safety equipment in France is subjected to dose-rates of around 0.28 gray per second. The purpose of this study is to know whether such a large increase in irradiation dose-rate is reasonable. Three elastomeric materials used in electrical cables, O-ring seals and connectors were exposed to a very wide range of dose-rates, between 2.1 × 10{sup -4} and 1.4 gray per second, up to a dose of 49 kGy. This work was carried out over 3.5 years. Measurement of the oxygen consumption of the air in contact with the polymer materials, as well as measurement of mechanical properties, shows that: at very low dose-rate, oxygen consumption reaches its maximum at the same time (1.4 years) for the three elastomeric samples, and mechanical properties change simultaneously with oxygen consumption. At very low dose-rate and low irradiation doses, oxygen consumption is at least 10 times greater than is observed when irradiation is carried out at the usual material qualification dose-rate. At very low dose-rate, oxygen consumption decreases as the irradiation dose absorbed by the samples increases. The irradiation dose received by the polymer samples (49 kGy) is not yet sufficient to determine with certainty, for the three chosen polymer materials, the acceptable limit for accelerating irradiation during nuclear qualification tests [fr]

  2. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  3. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
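
    The quantity being maximized can be estimated, as the abstract describes, from entropies of an empirical joint distribution. A minimal plug-in sketch; the histogram discretization of the continuous classifier response is an assumption made here for illustration, not necessarily the authors' entropy estimator:

        import numpy as np

        def mutual_information(scores, labels, bins=10):
            # Plug-in estimate I(r; y) = H(r) + H(y) - H(r, y), with real-valued
            # classifier responses r discretized into histogram bins; labels are
            # integer class indices (0, 1, ...).
            r = np.digitize(scores, np.histogram_bin_edges(scores, bins))
            joint = np.zeros((r.max() + 1, labels.max() + 1))
            for ri, yi in zip(r, labels):
                joint[ri, yi] += 1.0
            p = joint / joint.sum()

            def H(q):
                q = q[q > 0]
                return float(-np.sum(q * np.log(q)))

            return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

        # A large value means the response is highly informative about the label;
        # the paper's regularizer pushes the classifier toward such responses.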

  4. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  5. Microbiopsies versus Bergström needle for skeletal muscle sampling: impact on maximal mitochondrial respiration rate.

    Science.gov (United States)

    Isner-Horobeti, M E; Charton, A; Daussin, F; Geny, B; Dufour, S P; Richard, R

    2014-05-01

    Microbiopsies are increasingly used as an alternative to the standard Bergström technique for skeletal muscle sampling. The potential impact of these two different procedures on the mitochondrial respiration rate is unknown. The objective of this work was to compare the microbiopsy and Bergström procedures with respect to mitochondrial respiration in skeletal muscle. 52 vastus lateralis muscle samples were obtained from 13 anesthetized pigs, either with a Bergström [6 gauge (G)] needle or with microbiopsy needles (12, 14, 18G). Maximal mitochondrial respiration (V GM-ADP) was assessed using an oxygraphic method on permeabilized fibers. The weight of the muscle samples and V GM-ADP decreased with increasing needle gauge. A positive nonlinear relationship was observed between the weight of the muscle sample and the level of maximal mitochondrial respiration (r = 0.99), with the microbiopsy needles yielding lower maximal respiration than the standard Bergström needle. Therefore, the higher the gauge (i.e. the smaller the size) of the microbiopsy needle, the lower the maximal rate of respiration. Microbiopsies of skeletal muscle underestimate the maximal mitochondrial respiration rate, and this finding needs to be highlighted for adequate interpretation and comparison with literature data.

  6. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  7. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
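
    A minimal end-to-end sketch of the random-demodulator idea described above: a tone sparse in the DFT basis is multiplied by a pseudo-random ±1 chip sequence (the four switches), low-pass filtered by integrate-and-dump, sampled far below the tone frequency, and recovered by a greedy sparse solver. The grid sizes and the use of orthogonal matching pursuit are illustrative assumptions, not the authors' exact reconstruction algorithm:

        import numpy as np

        rng = np.random.default_rng(2)
        N, M = 512, 64                      # Nyquist-grid length vs. low-rate samples
        k_true, amp, phi = 37, 1.3, 0.7     # single excitation tone on DFT bin 37
        n = np.arange(N)
        s = amp * np.cos(2 * np.pi * k_true * n / N + phi)   # converter output

        chips = rng.choice([-1.0, 1.0], N)           # pseudo-random +/-1 modulation
        D = np.kron(np.eye(M), np.ones(N // M))      # integrate-and-dump low-pass
        y = D @ (chips * s)                          # M low-rate ADC samples

        F = np.fft.ifft(np.eye(N))                   # inverse-DFT sparsifying basis
        A = D @ (chips[:, None] * F)                 # full sensing matrix

        def omp(A, y, k):
            # Orthogonal matching pursuit: greedily pick k atoms, refit each time.
            res, idx = y.astype(complex), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(A.conj().T @ res))))
                coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
                res = y - A[:, idx] @ coef
            return idx, coef

        idx, coef = omp(A, y, 2)   # a real tone occupies two conjugate DFT bins
        print(sorted(b if b <= N // 2 else N - b for b in idx))  # -> [37, 37]
        print(2 * abs(coef[0]) / N)                              # ~ amp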

  8. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  9. Measurement of 222Rn and 220Rn exhalation rate from soil samples of Kumaun Hills, India

    Science.gov (United States)

    Semwal, Poonam; Singh, Kuldeep; Agarwal, T. K.; Joshi, Manish; Pant, Preeti; Kandari, Tushar; Ramola, R. C.

    2018-03-01

    The source terms, i.e., exhalation and emanation from soil and building materials are the primary contributors to the radon (222Rn)/thoron (220Rn) concentration levels in the dwellings, while the ecological constraints like ventilation rate, temperature, pressure, humidity, etc., are the influencing factors. The present study is focused on Almora District of Kumaun, located in Himalayan belt of Uttarakhand, India. For the measurement of 222Rn and 220Rn exhalation rates, 24 soil samples were collected from different locations. Gamma radiation level was measured at each of these locations. Chamber technique associated with Smart Rn Duo portable monitor was employed for the estimation of 222Rn and 220Rn exhalation rates. Radionuclides (226Ra, 232Th and 40K) concentrations were also measured in soil samples using NaI(Tl) scintillation based gamma ray spectrometry. The mass exhalation rate for 222Rn was varying between 16 and 54 mBq/kg/h, while the 220Rn surface exhalation rate was in the range of 0.65-6.43 Bq/m2/s. Measured gamma dose rate for the same region varied from 0.10 to 0.31 µSv/h. Inter-correlation of exhalation rates and intra-correlation with background gamma levels were studied.
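
    In the chamber technique mentioned above, the mass exhalation rate follows from the initial linear build-up of the {sup 222}Rn concentration in the closed loop: J = (dC/dt) × V/m. A minimal sketch with hypothetical chamber data and geometry, not the study's measurements:

        import numpy as np

        t_h = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # elapsed time, h
        c = np.array([5.0, 6.1, 7.0, 8.2, 9.1, 10.2])    # 222Rn conc., Bq/m^3
        V_eff = 0.01    # chamber + monitor loop effective volume, m^3 (assumed)
        m_kg = 0.5      # soil sample mass, kg (assumed)

        slope, _ = np.polyfit(t_h, c, 1)   # Bq/m^3 per hour, early linear regime
        J_mass = slope * V_eff / m_kg      # Bq/kg/h
        print(f"mass exhalation rate ~ {1e3 * J_mass:.0f} mBq/kg/h")   # ~21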

  10. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimation.

  11. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between neighbouring observations. Results and Conclusions. Functional MAF outperforms functional PCA in concentrating the 'interesting' spectral/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially varying phenomena.

  12. Thermodynamic and structural models compared with the initial dissolution rates of SON glass samples

    International Nuclear Information System (INIS)

    Tovena, I.; Advocat, T.; Ghaleb, D.; Vernaz, E.

    1993-01-01

    The experimentally determined initial dissolution rate R{sub 0} of nuclear glass was correlated with thermodynamic parameters and structural parameters. The initial corrosion rates of six ''R7T7'' glass samples measured at 100 deg C in a Soxhlet device were correlated with the glass free hydration energy and the glass formation enthalpy. These correlations were then tested on a group of 26 SON glasses selected for their wide diversity of compositions. The thermodynamic models provided a satisfactory approximation of the initial dissolution rate determined under Soxhlet conditions for SON glass samples that include up to 15 wt% of boron and some alumina. Conversely, these models are inaccurate if the boron concentration exceeds 15 wt% and the glass contains no alumina. Possible correlations between R{sub 0} and structural parameters, such as the boron coordination number and the number of nonbridging oxygen atoms, were also investigated. The authors show that R{sub 0} varies inversely with the number of 4-coordinate boron atoms; conversely, the results do not substantiate published reports of a correlation between R{sub 0} and the number of nonbridging oxygen atoms. (authors). 13 refs., 2 figs., 4 tabs

  13. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a length or area attribute, respectively, of the individuals being sampled.

  14. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration rates and the corresponding leaf water potentials are then compared across plant functional types and climates.

  15. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    Energy Technology Data Exchange (ETDEWEB)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.br, E-mail: rebello@ime.eb.br, E-mail: vellozo@cbpf.br, E-mail: renatoguedes@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.br [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.br [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for the conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus, this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the treated items, so that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  16. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    International Nuclear Information System (INIS)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G.; Silva, Ademir X.

    2011-01-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for the conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus, this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the treated items, so that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  17. Method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1980-01-01

    A novel liquid scintillation counting method of measuring the disintegration rate of a beta-emitting radionuclide is described which involves counting the sample at at least two different quench levels. (UK)

  18. Self-rated health in relation to rape and mental health disorders in a national sample of women.

    Science.gov (United States)

    Amstadter, Ananda B; McCauley, Jenna L; Ruggiero, Kenneth J; Resnick, Heidi S; Kilpatrick, Dean G

    2011-04-01

    Overall health status is associated with long-term physical morbidity and mortality. Existing research on the mental health effects of rape suggests that rape victims are at higher risk for poor overall health status. Little is known, however, about how different rape tactics may relate to health status in rape victims. Our aim was to examine the prevalence and correlates of self-rated health in a community sample of women, with particular emphasis on lifetime rape history (distinguishing between rape tactics), psychopathology, and substance use outcomes. A nationally representative sample of 3,001 U.S. women (age range: 18-86 years) residing in households with a telephone participated in a structured telephone interview. Poor self-rated health was endorsed by 11.4% of the sample. Final multivariable models showed that poor self-rated health was associated with older age, PTSD, major depressive episode (MDE; p = .01), and history of forcible rape (p = .01). Self-rated health was thus associated with three potentially modifiable variables (forcible rape, PTSD, and MDE). Therefore, trauma-focused interventions for rape victims should include collaboration on treatment or prevention modules that specifically address both mental and physical health. © 2011 American Orthopsychiatric Association.

  19. Maximum rates of climate change are systematically underestimated in the geological record.

    Science.gov (United States)

    Kemp, David B; Eichenseer, Kilian; Kiessling, Wolfgang

    2015-11-10

    Recently observed rates of environmental change are typically much higher than those inferred for the geological past. At the same time, the magnitudes of ancient changes were often substantially greater than those established in recent history. The most pertinent disparity, however, between recent and geological rates is the timespan over which the rates are measured, which typically differs by several orders of magnitude. Here we show that rates of marked temperature changes inferred from proxy data in Earth history scale with measurement timespan as an approximate power law across nearly six orders of magnitude (10{sup 2} to >10{sup 7} years). This scaling reveals how climate signals measured in the geological record alias transient variability, even during the most pronounced climatic perturbations of the Phanerozoic. Our findings indicate that the true attainable pace of climate change on timescales of greatest societal relevance is underestimated in geological archives.
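
    The scaling claim is easy to reproduce in miniature: on synthetic (timespan, apparent-rate) pairs that follow rate ∝ timespan^β, an ordinary log-log regression recovers the exponent. A sketch with invented data; the exponent −0.9 is a placeholder, not the paper's fitted value:

        import numpy as np

        rng = np.random.default_rng(3)
        span = 10**rng.uniform(2, 7, 300)    # measurement timespans, 10^2..10^7 yr
        rate = 5.0 * span**-0.9 * 10**rng.normal(0.0, 0.2, 300)  # degC/yr + scatter

        beta, log_a = np.polyfit(np.log10(span), np.log10(rate), 1)
        print(f"fitted exponent ~ {beta:.2f}")   # recovers ~ -0.9
        # A negative exponent means short observational records register faster
        # apparent rates than multi-millennial geological averages of the same signal.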

  20. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system's operating performance in both local data acquisition and remote process control

  1. The potential effect of differential ambient and deployment chamber temperatures on PRC derived sampling rates with polyurethane foam (PUF) passive air samplers

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Karen, E-mail: k.kennedy@uq.edu.au [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia); Hawker, Darryl W. [Griffith University, School of Environment, Nathan QLD 4111 (Australia); Bartkow, Michael E. [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia); Carter, Steve [Queensland Health Forensic and Scientific Services, Coopers Plains QLD 4108 (Australia); Ishikawa, Yukari; Mueller, Jochen F. [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia)

    2010-01-15

    Performance reference compound (PRC) derived sampling rates were determined for polyurethane foam (PUF) passive air samplers in both sub-tropical and temperate locations across Australia. These estimates were on average 2.7 times higher in summer than in winter. The known effects of wind speed and temperature on mass transfer coefficients could not account for this observation. Sampling rates are often derived using ambient temperatures, not the actual temperatures within deployment chambers. If deployment chamber temperatures are in fact higher than ambient temperatures, estimated sampler-air partition coefficients would be greater than actual partition coefficients, resulting in an overestimation of PRC derived sampling rates. Sampling rates determined under measured ambient temperatures and estimated deployment chamber temperatures in summer ranged from 7.1 to 10 m{sup 3} day{sup -1} and 2.2-6.8 m{sup 3} day{sup -1} respectively. These results suggest that potential differences between ambient and deployment chamber temperatures should be considered when deriving PRC-based sampling rates. - Internal deployment chamber temperatures rather than ambient temperatures may be required to accurately estimate PRC-based sampling rates.
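
    The PRC arithmetic underlying these estimates is compact: assuming first-order PRC loss, the fraction retained after deployment is f = exp(−Rs·t/(K·V)), which inverts to Rs = −ln(f)·K·V/t. A sketch with hypothetical values; K (the PUF-air partition coefficient) is the temperature-sensitive term at the heart of the paper's argument:

        import numpy as np

        def prc_sampling_rate(f_retained, K_puf_air, V_puf_m3, t_days):
            # Invert f = exp(-Rs * t / (K * V))  ->  Rs in m^3/day.
            return -np.log(f_retained) * K_puf_air * V_puf_m3 / t_days

        # Hypothetical deployment: 30% PRC loss over 28 days.
        Rs = prc_sampling_rate(f_retained=0.70,
                               K_puf_air=6.0e5,    # dimensionless, T-dependent
                               V_puf_m3=2.0e-4,
                               t_days=28.0)
        print(f"Rs ~ {Rs:.1f} m^3/day")   # ~1.5; overestimating K inflates Rs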

  2. Experimental verification of air flow rate measurement for representative isokinetic air sampling in ventilation stacks

    International Nuclear Information System (INIS)

    Okruhlica, P.; Mrtvy, M.; Kopecky, Z.

    2009-01-01

    Nuclear facilities are obliged to monitor the influence of their discharges on the environment. The main monitored fractions in NPP ventilation stacks are usually noble gases, particulates and iodine. These fractions are monitored in air sampled from the ventilation stack by means of a sampling rosette and bypass, followed by on-line measuring monitors and balance sampling devices with laboratory evaluations. Correct air flow rate measurement and a representative iso-kinetic air sampling system are essential for a physically correct and metrologically accurate evaluation of the discharge influence on the environment. Pairs of measuring sensors (anemometer, pressure gauge, thermometer and humidity meter) are placed symmetrically in the horizontal cross-section of the stack at positions based on the measured air flow velocity distribution. Analogously, the diameters of the sampling rosette nozzles, placed in the middle of 6-7 annuli, are calculated to ensure representative iso-kinetic sampling. (authors)
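
    The nozzle sizing mentioned at the end reduces to matching the nozzle inlet velocity to the local duct velocity. A minimal sketch of that isokinetic condition; the flow, velocity and nozzle count are illustrative, not the facility's values:

        import math

        def nozzle_diameter_mm(q_total_lpm, duct_velocity_ms, n_nozzles):
            # Isokinetic condition: per-nozzle flow / nozzle area == duct velocity.
            q_per_nozzle = q_total_lpm / n_nozzles / 1000.0 / 60.0   # m^3/s
            area = q_per_nozzle / duct_velocity_ms                   # m^2
            return 2000.0 * math.sqrt(area / math.pi)                # diameter, mm

        # e.g. 60 L/min total through a 6-nozzle rosette in a 10 m/s stack flow:
        print(f"{nozzle_diameter_mm(60.0, 10.0, 6):.1f} mm")   # ~4.6 mm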

  3. Experimental verification of air flow rate measurement for representative isokinetic air sampling in ventilation stacks

    International Nuclear Information System (INIS)

    Okruhlica, P.; Mrtvy, M.; Kopecky, Z.

    2008-01-01

    Nuclear facilities are obliged to monitor the influence of their discharges on the environment. The main monitored fractions in NPP ventilation stacks are usually noble gases, particulates and iodine. These fractions are monitored in air sampled from the ventilation stack by means of a sampling rosette and bypass, followed by on-line measuring monitors and balance sampling devices with laboratory evaluations. Correct air flow rate measurement and a representative iso-kinetic air sampling system are essential for a physically correct and metrologically accurate evaluation of the discharge influence on the environment. Pairs of measuring sensors (anemometer, pressure gauge, thermometer and humidity meter) are placed symmetrically in the horizontal cross-section of the stack at positions based on the measured air flow velocity distribution. Analogously, the diameters of the sampling rosette nozzles, placed in the middle of 6-7 annuli, are calculated to ensure representative iso-kinetic sampling. (authors)

  4. Tip Speed Ratio Based Maximum Power Tracking Control of Variable Speed Wind Turbines; A Comprehensive Design

    Directory of Open Access Journals (Sweden)

    Murat Karabacak

    2017-08-01

    The most primitive control method for wind turbines used to generate electric energy from wind is the fixed speed control method. With this method, it is not possible to transfer turbine input power to the grid at the maximum rate. For this reason, Maximum Power Tracking (MPT) schemes are proposed. In order to implement MPT, the propeller has to rotate at a different speed for every different wind speed. This situation has led MPT based systems to be called Variable Speed Wind Turbine (VSWT) systems. In VSWT systems, turbine input power can be transferred to the grid at rates close to the maximum power. When MPT based control of VSWT systems is the case, two important processes come into prominence: instantaneous determination and tracking of the MPT point. In this study, using a Maximum Power Point Tracking (MPPT) method based on the tip speed ratio, the power available in the wind is transferred into the grid over a back-to-back converter at the maximum rate via a VSWT system with a permanent magnet synchronous generator (PMSG). Besides, a physical wind turbine simulator is modelled and simulated. Results show that a time-varying MPPT point is tracked with high performance.
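
    The core of tip-speed-ratio MPPT is a one-line set-point law: keep λ = ωR/v at the value that maximizes the power coefficient, i.e. ω_ref = λ_opt·v/R. A minimal sketch; the rotor radius, λ_opt and Cp values below are generic illustrative constants, not the paper's turbine:

        import math

        RHO = 1.225          # air density, kg/m^3
        R = 2.0              # rotor radius, m (assumed)
        LAMBDA_OPT = 7.5     # tip speed ratio at the Cp peak (assumed)
        CP_MAX = 0.45        # peak power coefficient (assumed)

        def omega_ref(v_wind):
            # Rotor speed set-point (rad/s) that holds the optimal tip speed ratio.
            return LAMBDA_OPT * v_wind / R

        def p_available(v_wind):
            # Power extracted at the MPPT point, W.
            return 0.5 * RHO * math.pi * R**2 * CP_MAX * v_wind**3

        for v in (4.0, 8.0, 12.0):
            print(f"v = {v:4.1f} m/s -> omega_ref = {omega_ref(v):5.1f} rad/s, "
                  f"P ~ {p_available(v) / 1e3:6.2f} kW")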

  5. Radon exhalation rates from slate stone samples in Aravali Range in Haryana

    International Nuclear Information System (INIS)

    Upadhyay, S.B.; Kant, K.; Chakarvarti, S.K.

    2012-01-01

    Slate stone tiles are very popular for covering the walls of rooms. Radon is released into ambient air from slate stones due to the ubiquitous uranium and radium in them, thus increasing the airborne radon concentration. The radioactivity in slate stones is related to the radioactivity in the rocks from which the slate stone tiles are formed. In the present investigation, the radon emanating from slate stone samples collected from different slate mines in the Aravali range of hills in the Haryana state of Northern India has been estimated. For the measurement of the radon concentration emanating from these samples, alpha-sensitive LR-115 type II plastic track detectors have been used. The alpha particles emitted by the radon form tracks in these detectors. After chemical etching, the density of registered tracks is used to calculate the radon concentration and the radon exhalation rates using the required formulae. The measurements indicate normal to somewhat elevated levels of radon emanating from the slate stone samples collected from the Aravali range of hills in north India. The results will be discussed in the full paper. (author)

  6. Effect of Impact Angle on the Erosion Rate of Coherent Granular Soil, with a Chernozemic Soil as an Example

    Science.gov (United States)

    Larionov, G. A.; Bushueva, O. G.; Gorobets, A. V.; Dobrovol'skaya, N. G.; Kiryukhina, Z. P.; Krasnov, S. F.; Kobylchenko Kuksina, L. V.; Litvin, L. F.; Sudnitsyn, I. I.

    2018-02-01

    It has been shown in experiments in a hydraulic flume with a knee-shaped bend that the rate of soil erosion more than doubles as the flow impact angle against the channel side increases from 0° to 50°. At larger channel bends, the experiment could not be performed because of backwater. Results of erosion by a water stream approaching the sample surface at angles between 2° and 90° are also reported. It has been found that the maximum erosion rate is observed at flow impact angles of about 45°, and the minimum rate at 90°. The minimum soil erosion rate is five times lower than the maximum erosion rate. This is due to the difference in the rate of free water penetration into the upper soil layer, and to the impact of the hydrodynamic pressure, which is maximum at an impact angle of 90°. The penetration of water into the interaggregate space results in the breaking of bonds between aggregates, which is the main condition for the capture of particles by the flow.

  7. Is there a maximum star formation rate in high-redshift galaxies?

    International Nuclear Information System (INIS)

    Barger, A. J.; Cowie, L. L.; Chen, C.-C.; Casey, C. M.; Lee, N.; Sanders, D. B.; Williams, J. P.; Owen, F. N.; Wang, W.-H.

    2014-01-01

    We use the James Clerk Maxwell Telescope's SCUBA-2 camera to image a 400 arcmin{sup 2} area surrounding the GOODS-N field. The 850 μm rms noise ranges from a value of 0.49 mJy in the central region to 3.5 mJy at the outside edge. From these data, we construct an 850 μm source catalog to 2 mJy containing 49 sources detected above the 4σ level. We use an ultradeep (11.5 μJy at 5σ) 1.4 GHz image obtained with the Karl G. Jansky Very Large Array together with observations made with the Submillimeter Array to identify counterparts to the submillimeter galaxies. For most cases of multiple radio counterparts, we can identify the correct counterpart from new and existing Submillimeter Array data. We have spectroscopic redshifts for 62% of the radio sources in the 9' radius highest sensitivity region (556/894) and 67% of the radio sources in the GOODS-N region (367/543). We supplement these with a modest number of additional photometric redshifts in the GOODS-N region (30). We measure millimetric redshifts from the radio to submillimeter flux ratios for the unidentified submillimeter sample, assuming an Arp 220 spectral energy distribution. We find a radio-flux-dependent K–z relation for the radio sources, which we use to estimate redshifts for the remaining radio sources. We determine the star formation rates (SFRs) of the submillimeter sources based on their radio powers and their submillimeter fluxes and find that they agree well. The radio data are deep enough to detect star-forming galaxies with SFRs >2000 M{sub ☉} yr{sup -1} to z ∼ 6. We find galaxies with SFRs up to ∼6000 M{sub ☉} yr{sup -1} over the redshift range z = 1.5-6, but we see evidence for a turn-down in the SFR distribution function above 2000 M{sub ☉} yr{sup -1}.

  8. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
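
    The paper's positive result can still be exercised numerically: once the location and scale parameters of the parent normal are assumed, the truncated distribution is straightforward to sample by rejection. A minimal sketch; the parameters below are arbitrary placeholders, and (as the Highlights note) they are not the mean and covariance of the truncated result:

        import numpy as np

        def sample_truncated_mvn(loc, cov, n, rng=np.random.default_rng()):
            # Rejection-sample N(loc, cov) restricted to the positive orthant.
            out = []
            while len(out) < n:
                draw = rng.multivariate_normal(loc, cov, size=n)
                out.extend(draw[(draw > 0).all(axis=1)])
            return np.array(out[:n])

        x = sample_truncated_mvn([1.0, 0.5], [[0.20, 0.05], [0.05, 0.10]], 10000)
        print(x.mean(axis=0))   # lies above loc: truncation shifts the mean upward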

  9. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions

    International Nuclear Information System (INIS)

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L.; Alvarez, David A.; Patterson, Howard

    2012-01-01

    The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (R{sub s}) was evaluated in microcosms containing different levels of total organic carbon (TOC). The effect of hydrodynamics was studied by comparing R{sub s} values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM. - Highlights: ► We assessed the effect of TOC and stirring on pesticide sampling rates by POCIS. ► Total organic carbon (TOC) had no effect on the sampling rates. ► Water flow and stirring significantly increased the magnitude of the sampling rates. ► The sampling rates generated are directly applicable to field conditions. - This study provides POCIS sampling rate data that can be used to estimate freely dissolved concentrations of toxic pesticides in natural waters.
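
    For context, this is how a calibrated R{sub s} is applied in the field: while uptake stays in the linear (integrative) regime, the time-weighted average water concentration is C{sub w} = N/(R{sub s}·t), with N the analyte mass accumulated by the sampler. A sketch with hypothetical numbers:

        def twa_concentration_ng_per_l(n_accumulated_ng, rs_l_per_day, t_days):
            # Time-weighted average water concentration from a POCIS extract,
            # valid while uptake remains linear (integrative sampling).
            return n_accumulated_ng / (rs_l_per_day * t_days)

        # Hypothetical deployment: 120 ng accumulated over 21 days with a
        # laboratory-calibrated sampling rate of 0.24 L/day:
        print(f"{twa_concentration_ng_per_l(120.0, 0.24, 21.0):.1f} ng/L")  # ~23.8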

  10. Future changes over the Himalayas: Maximum and minimum temperature

    Science.gov (United States)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projection of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum climatology and its long-term trend under different RCPs, along with the elevation dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trend and probability distribution function, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition to this, a wide range of spatial variability and disagreements in the magnitude of the trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area.

  11. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

    In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE's. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  12. Microcrack Evolution and Associated Deformation and Strength Properties of Sandstone Samples Subjected to Various Strain Rates

    Directory of Open Access Journals (Sweden)

    Chong-Feng Chen

    2018-05-01

    The evolution of micro-cracks in rocks under different strain rates is of great importance for a better understanding of the mechanical properties of rocks under complex stress states. In the present study, a series of tests was carried out under various strain rates, ranging from creep tests to intermediate strain rate tests, so as to observe the evolution of micro-cracks in rock and to investigate the influence of the strain rate on the deformation and strength properties of rocks. Thin sections from rock samples at pre- and post-failure were compared and analyzed at the microscale using an optical microscope. The results demonstrate that the main crack propagation in the rock is intergranular at creep strain rates and transgranular at higher strain rates. However, intergranular cracks appear mainly around the quartz, and most of the punctured grains are quartz. Furthermore, the intergranular and transgranular cracks exhibit large differences in the different loading directions. In addition, uniaxial compressive tests were conducted on the unbroken rock samples from the creep tests. A comparison of the stress–strain curves of the creep tests and the intermediate strain rate tests indicates that Young's modulus and the peak strength increase with the strain rate. In addition, more deformation energy is released by the generation of transgranular cracks than by the generation of intergranular cracks. This study illustrates that the conspicuous crack evolution under different strain rates helps in understanding crack development at the microscale, and explains the relationship between the micro- and macro-behaviors of rock before collapse under different strain rates.

  13. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  14. 75 FR 52947 - Maximum Per Diem Rates for the Continental United States (CONUS)

    Science.gov (United States)

    2010-08-30

    ... per diem rate setting process enhances the government's ability to obtain policy-compliant lodging where it is needed. In conjunction with the annual lodging study, GSA identified five new non-standard... diem localities and updates the standard CONUS rate. The CONUS per diem rates prescribed in Bulletin 11...

  15. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite-smoothness limit of an appropriate Bayesian field theory.

  16. Phase identification of quasi-periodic flow measured by particle image velocimetry with a low sampling rate

    International Nuclear Information System (INIS)

    Pan, Chong; Wang, Hongping; Wang, Jinjun

    2013-01-01

    This work mainly deals with the proper orthogonal decomposition (POD) time coefficient method used for extracting phase information from quasi-periodic flow. The mathematical equivalence between this method and the traditional cross-correlation method is firstly proved. A two-dimensional circular cylinder wake flow measured by time-resolved particle image velocimetry within a range of Reynolds numbers is then used to evaluate the reliability of this method. The effect of both the sampling rate and Reynolds number on the identification accuracy is finally discussed. It is found that the POD time coefficient method provides a convenient alternative for phase identification, whose feasibility in low-sampling-rate measurement has additional advantages for experimentalists. (paper)
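
    A sketch of why the first two POD time coefficients carry the phase: for a quasi-periodic flow they trace a closed orbit, so each snapshot's phase is simply atan2(a2, a1), regardless of when it was sampled. Synthetic snapshots of a convecting wave stand in for the PIV fields here:

        import numpy as np

        rng = np.random.default_rng(4)
        x = np.linspace(0.0, 2 * np.pi, 64)
        t = np.sort(rng.uniform(0.0, 40.0, 120))      # sparse, low-rate snapshot times
        U = np.sin(x[None, :] - 2.0 * t[:, None]) \
            + 0.05 * rng.normal(size=(120, 64))       # rows = flattened "PIV" fields

        Um = U - U.mean(axis=0)                       # remove the mean field
        a = np.linalg.svd(Um, full_matrices=False)[0] # columns ~ POD time coefficients
        phase = np.arctan2(a[:, 1], a[:, 0])          # phase of each snapshot

        # Snapshots can now be sorted or binned by `phase` for phase-averaged
        # statistics; no temporal resolution of the shedding cycle is required.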

  17. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

    For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. Firstly, the dual-rate sampling process with output delay is transformed, using lifting technology, into a discrete state-space model at the slow sampling rate without time delay; then an output-information-based fault-tolerant iterative learning control scheme is designed and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and a design method for the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.

  18. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  19. Success and failure rates of tumor genotyping techniques in routine pathological samples with non-small-cell lung cancer.

    Science.gov (United States)

    Vanderlaan, Paul A; Yamaguchi, Norihiro; Folch, Erik; Boucher, David H; Kent, Michael S; Gangadharan, Sidharta P; Majid, Adnan; Goldstein, Michael A; Huberman, Mark S; Kocher, Olivier N; Costa, Daniel B

    2014-04-01

    Identification of some somatic molecular alterations in non-small-cell lung cancer (NSCLC) has become evidence-based practice. However, the success and failure rates of commercially available tumor genotyping techniques in routine day-to-day NSCLC pathology samples are not well described. We sought to evaluate the success and failure rates of EGFR mutation, KRAS mutation, and ALK FISH testing in a cohort of lung cancers subjected to routine clinical tumor genotyping. Clinicopathologic data and tumor genotyping success and failure rates were retrospectively compiled and analyzed from 381 patient-tumor samples. Among these 381 patients with lung cancer, the mean age was 65 years; 61.2% were women, 75.9% were white, 27.8% were never smokers, 73.8% had advanced NSCLC and 86.1% had adenocarcinoma histology. The tumor tissue was obtained from surgical specimens in 48.8%, core needle biopsies in 17.9%, and cell blocks from aspirates or fluid in 33.3% of cases. Anatomic sites for tissue collection included lung (49.3%), lymph nodes (22.3%), pleura (11.8%), bone (6.0%) and brain (6.0%), among others. The overall success rate was 94.2% for EGFR mutation analysis, 91.6% for KRAS mutation and 91.6% for ALK FISH. The highest failure rates were observed when the tissue was obtained from image-guided percutaneous transthoracic core-needle biopsies (31.8%, 27.3%, and 35.3% for EGFR, KRAS, and ALK tests, respectively) and bone specimens (23.1%, 15.4%, and 23.1%, respectively). In specimens obtained from bone, the failure rates were significantly higher for biopsies than for resection specimens (40% vs. 0%, p=0.024 for EGFR) and for decalcified compared to non-decalcified samples (60% vs. 5.5%, p=0.021 for EGFR). Tumor genotyping techniques are feasible in most samples, apart from small image-guided percutaneous transthoracic core-needle biopsies and decalcified bone samples from core biopsies, and therefore expansion of routine tumor genotyping into the care of patients with NSCLC may not require special

  20. Production of aerosols by optical catapulting: Imaging, performance parameters and laser-induced plasma sampling rate

    International Nuclear Information System (INIS)

    Abdelhamid, M.; Fortes, F.J.; Fernández-Bravo, A.; Harith, M.A.; Laserna, J.J.

    2013-01-01

    Optical catapulting (OC) is a sampling and manipulation method that has been extensively studied in applications ranging from single cells in heterogeneous tissue samples to the analysis of explosive residues in human fingerprints. Specifically, analysis of the catapulted material by means of laser-induced breakdown spectroscopy (LIBS) offers a promising approach for the inspection of solid particulate matter. In this work, we focus our attention on the experimental parameters to be optimized for proper aerosol generation while increasing the particle density in the focal region sampled by LIBS. For this purpose we use shadowgraphy visualization as a diagnostic tool. Shadowgraphic images were acquired to study the evolution and dynamics of solid aerosols produced by OC. Aluminum silicate particles (0.2–8 μm) were ejected from the substrate using a Q-switched Nd:YAG laser at 1064 nm, while time-resolved images recorded the propagation of the generated aerosol. For LIBS analysis and shadowgraphy visualization, Q-switched Nd:YAG lasers at 1064 nm and 532 nm, respectively, were employed. Several parameters, such as the time delay between pulses and the effect of laser fluence on aerosol production, have also been investigated. After optimization, the particle density in the sampling focal volume increases, improving the aerosol sampling rate to ca. 90%. - Highlights: • Aerosol generation by optical catapulting has been successfully optimized. • We study the evolution and dynamics of solid aerosols produced by OC. • We use shadowgraphy visualization as a diagnostic tool. • Effects of temporal conditions and laser fluence on the elevation of the aerosol cloud have been investigated. • The observed LIBS sampling rate increased from the 50% reported before to approximately 90%

  1. Compton suppression gamma-counting: The effect of count rate

    Science.gov (United States)

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence-shielded Ge(Li) spectrometers enhance the signal-to-background ratio for gamma photopeaks situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates.
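
    The calibration-curve idea can be sketched as follows, with entirely hypothetical rate/ratio data standing in for the measured A/C dependence:

```python
import numpy as np

# Hypothetical calibration of the anti-coincidence/coincidence (A/C)
# photopeak ratio versus count rate; values below are made up.
rates = np.array([1e3, 5e3, 1e4, 2e4, 5e4])      # total counts/s
ac_ratio = np.array([4.0, 3.8, 3.5, 3.1, 2.4])   # measured A/C at each rate

coeffs = np.polyfit(rates, ac_ratio, deg=2)       # empirical calibration curve

def expected_ac(rate):
    """Expected A/C at a given count rate, from the fitted curve."""
    return np.polyval(coeffs, rate)

# Normalize a photopeak area measured at 30 kcounts/s to the low-rate ratio
area_A = 1.2e4
scale = ac_ratio[0] / expected_ac(3e4)
print("rate-corrected area:", area_A * scale)
```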

  2. On the role of the gas environment, electron-dose-rate, and sample on the image resolution in transmission electron microscopy

    DEFF Research Database (Denmark)

    Ek, Martin; Jespersen, Sebastian Pirel Fredsgaard; Damsgaard, Christian Danvad

    2016-01-01

    The introduction of gaseous atmospheres in transmission electron microscopy offers the possibility of studying materials in situ under chemically relevant environments. The presence of a gas environment can degrade the resolution. Surprisingly, this phenomenon has been shown to depend on the electron-dose-rate. In this article, we demonstrate that both the total and areal electron-dose-rates work as descriptors for the dose-rate-dependent resolution and are related through the illumination area. Furthermore, the resolution degradation was observed to occur gradually over time after initializing the illumination of the sample and gas by the electron beam. The resolution was also observed to be sensitive to the electrical conductivity of the sample. These observations can be explained by a charge buildup over the electron-illuminated sample area, caused by the beam–gas–sample interaction...

  3. STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The factors limiting maximum specific sludge activity (diffusion, substrate sort, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.

  4. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
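
    A minimal sketch of the "borrow strength" penalty idea, shrinking a noisy genetic correlation matrix toward the phenotypic one with a tuning factor; the paper chooses the factor by cross-validation or a likelihood-deviation rule, and the matrices below are hypothetical:

```python
import numpy as np

def shrink_genetic_correlation(R_g, R_p, psi):
    """Shrink an estimated genetic correlation matrix toward the phenotypic one.

    R_g : estimated genetic correlation matrix (noisy for small samples)
    R_p : phenotypic correlation matrix (estimated much more precisely)
    psi : tuning factor in [0, 1]; 0 = no penalty, 1 = full shrinkage
    """
    return (1.0 - psi) * R_g + psi * R_p

R_g = np.array([[1.0, 0.9], [0.9, 1.0]])   # hypothetical, poorly estimated
R_p = np.array([[1.0, 0.4], [0.4, 1.0]])   # hypothetical phenotypic matrix
print(shrink_genetic_correlation(R_g, R_p, psi=0.3))
```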

  5. Type Ibn Supernovae Show Photometric Homogeneity and Spectral Diversity at Maximum Light

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinzadeh, Griffin; Arcavi, Iair; McCully, Curtis; Howell, D. Andrew [Las Cumbres Observatory, 6740 Cortona Dr Ste 102, Goleta, CA 93117-5575 (United States); Valenti, Stefano [Department of Physics, University of California, 1 Shields Ave, Davis, CA 95616-5270 (United States); Johansson, Joel [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Sollerman, Jesper; Fremling, Christoffer; Karamehmetoglu, Emir [Oskar Klein Centre, Department of Astronomy, Stockholm University, Albanova University Centre, SE-106 91 Stockholm (Sweden); Pastorello, Andrea; Benetti, Stefano; Elias-Rosa, Nancy [INAF-Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy); Cao, Yi; Duggan, Gina; Horesh, Assaf [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Mail Code 249-17, Pasadena, CA 91125 (United States); Cenko, S. Bradley [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Clubb, Kelsey I.; Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Corsi, Alessandra [Department of Physics, Texas Tech University, Box 41051, Lubbock, TX 79409-1051 (United States); Fox, Ori D., E-mail: griffin@lco.global [Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218 (United States); and others

    2017-02-20

    Type Ibn supernovae (SNe) are a small yet intriguing class of explosions whose spectra are characterized by low-velocity helium emission lines with little to no evidence for hydrogen. The prevailing theory has been that these are the core-collapse explosions of very massive stars embedded in helium-rich circumstellar material (CSM). We report optical observations of six new SNe Ibn: PTF11rfh, PTF12ldy, iPTF14aki, iPTF15ul, SN 2015G, and iPTF15akq. This brings the sample size of such objects in the literature to 22. We also report new data, including a near-infrared spectrum, on the Type Ibn SN 2015U. In order to characterize the class as a whole, we analyze the photometric and spectroscopic properties of the full Type Ibn sample. We find that, despite the expectation that CSM interaction would generate a heterogeneous set of light curves, as seen in SNe IIn, most Type Ibn light curves are quite similar in shape, declining at rates around 0.1 mag day⁻¹ during the first month after maximum light, with a few significant exceptions. Early spectra of SNe Ibn come in at least two varieties, one that shows narrow P Cygni lines and another dominated by broader emission lines, both around maximum light, which may be an indication of differences in the state of the progenitor system at the time of explosion. Alternatively, the spectral diversity could arise from viewing-angle effects or merely from a lack of early spectroscopic coverage. Together, the relative light curve homogeneity and narrow spectral features suggest that the CSM consists of a spatially confined shell of helium surrounded by a less dense extended wind.

  6. Measurement of Passive Uptake Rates for Volatile Organic Compounds on Commercial Thermal Desorption Tubes and the Effect of Ozone on Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, Randy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Parra, Amanda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Russell, Marion [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Wen-Yee [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-05-01

    Diffusive or passive sampling methods using commercially filled axial-sampling thermal desorption tubes are widely used for measuring volatile organic compounds (VOCs) in air. The passive sampling method provides a robust, cost effective way to measure air quality with time-averaged concentrations spanning up to a week or more. Sampling rates for VOCs can be calculated using tube geometry and Fick’s Law for ideal diffusion behavior or measured experimentally. There is evidence that uptake rates deviate from ideal and may not be constant over time. Therefore, experimentally measured sampling rates are preferred. In this project, a calibration chamber with a continuous stirred tank reactor design and constant VOC source was combined with active sampling to generate a controlled dynamic calibration environment for passive samplers. The chamber air was augmented with a continuous source of 45 VOCs ranging from pentane to diethyl phthalate representing a variety of chemical classes and physiochemical properties. Both passive and active samples were collected on commercially filled Tenax TA thermal desorption tubes over an 11-day period and used to calculate passive sampling rates. A second experiment was designed to determine the impact of ozone on passive sampling by using the calibration chamber to passively load five terpenes on a set of Tenax tubes and then exposing the tubes to different ozone environments with and without ozone scrubbers attached to the tube inlet. During the sampling rate experiment, the measured diffusive uptake was constant for up to seven days for most of the VOCs tested but deviated from linearity for some of the more volatile compounds between seven and eleven days. In the ozone experiment, both exposed and unexposed tubes showed a similar decline in terpene mass over time indicating back diffusion when uncapped tubes were transferred to a clean environment but there was no indication of significant loss by ozone reaction.
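
    The ideal (Fick's-law) uptake-rate calculation mentioned above can be sketched as follows; the tube dimensions and diffusion coefficient are assumptions of typical magnitude, not values from this study:

```python
# Ideal diffusive uptake rate for an axial thermal desorption tube:
# U = D * A / L, with D the analyte's diffusion coefficient in air,
# A the tube cross-sectional area, and L the diffusion gap between the
# tube mouth and the sorbent bed.
import math

D = 0.087          # cm^2/s, toluene in air (literature-order value)
radius = 0.25      # cm, inner tube radius (assumed)
L = 1.5            # cm, diffusion path length (assumed)

A = math.pi * radius ** 2
U_cm3_per_s = D * A / L
U_mL_per_min = U_cm3_per_s * 60.0
print(f"ideal uptake rate ≈ {U_mL_per_min:.3f} mL/min")
```

    Experimentally measured rates are preferred precisely because real tubes deviate from this ideal value and may drift over multi-day exposures, as the study observes for the more volatile compounds.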

  7. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
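
    As a minimal illustration of the maximum entropy side of this correspondence, the sketch below fits a maxent density on a grid subject to the first two sample moments (for which the maxent solution is Gaussian) by minimizing the convex dual objective; Bayesian field theory, per the abstract, recovers such estimates in its infinite-smoothness limit. The grid, data, and names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=500)
x = np.linspace(-3.0, 7.0, 400)
dx = x[1] - x[0]
features = np.stack([x, x ** 2])                     # constraint functions f(x)
targets = np.array([data.mean(), (data ** 2).mean()])

def dual(lam):
    """Convex dual log Z(lam) - lam . targets; its minimizer gives the
    multipliers of the maxent density p(x) ~ exp(lam . f(x))."""
    logq = lam @ features
    m = logq.max()
    log_z = m + np.log(np.sum(np.exp(logq - m)) * dx)
    return log_z - lam @ targets

lam = minimize(dual, x0=np.zeros(2)).x
density = np.exp(lam @ features)
density /= density.sum() * dx                        # normalize on the grid
mean = (density * x).sum() * dx
var = (density * x ** 2).sum() * dx - mean ** 2
print("fitted mean/var:", mean, var)
```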

  9. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  10. Importance of participation rate in sampling of data in population based studies, with special reference to bone mass in Sweden.

    OpenAIRE

    Düppe, H; Gärdsell, P; Hanson, B S; Johnell, O; Nilsson, B E

    1996-01-01

    OBJECTIVE: To study the effects of participation rate in sampling on "normative" bone mass data. DESIGN: This was a comparison between two randomly selected samples from the same population. The participation rates in the two samples were 61.9% and 83.6%. Measurements were made of bone mass at different skeletal sites and of muscle strength, as well as an assessment of physical activity. SETTING: Malmö, Sweden. SUBJECTS: There were 230 subjects (117 men, 113 women), aged 21 to 42 years. RESUL...

  11. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available A photovoltaic traffic light system is a significant application of renewable energy. The development of such systems is an alternative effort by local authorities to reduce the fees paid to power suppliers whose power comes from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, maximum power point tracking (MPPT) control is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate; the battery then supplies power at night or on cloudy days. The MPPT is implemented as a DC-DC converter that can step the voltage up or down to achieve the maximum power, using Pulse Width Modulation (PWM) control. From experiment, the operating voltage obtained using MPPT was 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, 16.9 V. Based on this result, the MPPT control works successfully to deliver the maximum power from the PV module to the battery.
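
    A minimal sketch of the hill-climbing (perturb-and-observe) control loop that such MPPT chargers typically run is given below; the hardware-interface callbacks and the toy PV model are hypothetical placeholders, not the cited system's firmware:

```python
def perturb_and_observe(read_voltage, read_current, set_reference,
                        v_ref=16.0, step=0.1, iterations=1000):
    """Step the operating voltage in the direction that increased power;
    reverse direction whenever the measured power drops."""
    last_power = read_voltage() * read_current()
    direction = +1
    for _ in range(iterations):
        v_ref += direction * step
        set_reference(v_ref)
        power = read_voltage() * read_current()
        if power < last_power:      # stepped past the maximum: reverse
            direction = -direction
        last_power = power
    return v_ref

# Hypothetical use against a toy PV model with its MPP near 19 V
state = {"v": 16.0}
def read_voltage(): return state["v"]
def read_current(): return max(0.0, 5.0 - 5e-8 * (2.718 ** (state["v"] / 1.2)))
def set_reference(v): state["v"] = v
print("converged V_ref ≈", round(perturb_and_observe(
    read_voltage, read_current, set_reference), 2))
```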

  12. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  13. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  14. Estimation of radon exhalation rate, natural radioactivity and radiation doses in fly ash samples from NTPC Dadri, (UP) India

    International Nuclear Information System (INIS)

    Gupta, Mamta; Verma, K.D.; Mahur, A.K.; Rajendra Prasad; Sonkawade, R.G.

    2010-01-01

    Fly ash produced by coal burning in thermal power stations has become a subject of worldwide interest in recent years because of its diverse uses in building materials such as bricks, sheets, cement and land filling. Knowledge of the radionuclides in fly ash plays an important role in health physics. Natural radioactivity and radon exhalation rates in fly ash samples collected from NTPC (National Thermal Power Corporation) Dadri (UP), India, have been studied. A high-resolution gamma-ray spectroscopic system was used for the measurement of natural radioactivity. The activity concentrations of the natural radionuclides radium (²²⁶Ra), thorium (²³²Th) and potassium (⁴⁰K) were measured and radiological parameters were calculated. The radium concentration was found to vary from (81.01 ± 3.25) to (177.33 ± 10.00) Bq kg⁻¹. The activity concentration of thorium varied from (111.57 ± 3.21) to (178.50 ± 3.96) Bq kg⁻¹. Potassium activity was not significant in some samples, whereas other samples showed potassium activity varying from (365.98 ± 4.85) to (495.95 ± 6.23) Bq kg⁻¹. Radon exhalation rates in these samples were also measured by the 'sealed can technique' using LR-115 type II detectors and found to vary from (80 ± 9) to (243 ± 16) mBq m⁻² h⁻¹ with an average value of (155 ± 13) mBq m⁻² h⁻¹. This study also presents estimates of the effective dose equivalent from the exhalation rate, the radium equivalent, absorbed gamma dose rates, the external annual effective dose rate and external hazard index values for the fly ash samples. (author)
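
    For reference, the radium equivalent and external hazard index mentioned above are commonly computed from the standard weightings shown in this sketch; the example activities are mid-range values from the ranges quoted above, not reported results:

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Radium equivalent activity (Bq/kg) from the standard weighting
    Ra-eq = A_Ra + 1.43*A_Th + 0.077*A_K."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def external_hazard_index(a_ra, a_th, a_k):
    """Common external hazard index; values below 1 are considered safe."""
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0

# Mid-range activities (Bq/kg) taken from the ranges reported above
print("Ra-eq:", radium_equivalent(129.0, 145.0, 430.0))
print("H-ex: ", external_hazard_index(129.0, 145.0, 430.0))
```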

  15. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
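
    The β/b estimation step can be illustrated with the classical (untapered) Gutenberg-Richter maximum likelihood estimator of Aki (1965); the study itself fits the tapered GR with a tectonic moment-rate constraint, which this sketch does not attempt. The magnitudes below are made up:

```python
import numpy as np

def aki_b_value(mags, m_min, dm=0.1):
    """Maximum likelihood b-value for an (untapered) Gutenberg-Richter law
    (Aki 1965), with the usual half-bin correction for binned magnitudes."""
    mags = np.asarray(mags, float)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

# The moment-space exponent relates to b as beta = (2/3) * b
b = aki_b_value([5.1, 5.3, 5.0, 6.2, 5.5, 5.8, 5.0, 5.2], m_min=5.0)
print("b =", round(b, 2), " beta =", round(2.0 * b / 3.0, 2))
```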

  16. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life financially, environmentally and in terms of security. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, yielding better predictions of maximum river flow in Sabah.
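
    A minimal random-walk Metropolis-Hastings sketch for the GEV parameters is shown below; the priors, proposal step sizes and synthetic data are illustrative assumptions, not those of the study:

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(42)
# Synthetic annual-maximum flows (scipy's shape c corresponds to -xi)
data = genextreme.rvs(c=-0.1, loc=100, scale=20, size=60, random_state=rng)

def log_post(theta):
    mu, log_sigma, xi = theta
    ll = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
    # Weakly informative normal priors (assumed)
    lp = (norm.logpdf(mu, 100, 100) + norm.logpdf(log_sigma, 3, 2)
          + norm.logpdf(xi, 0, 0.5))
    return ll + lp

theta = np.array([np.median(data), np.log(data.std()), 0.05])
samples, cur = [], log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, [2.0, 0.05, 0.05])
    cand = log_post(prop)
    if np.log(rng.random()) < cand - cur:      # Metropolis accept/reject
        theta, cur = prop, cand
    samples.append(theta.copy())
print("posterior means (mu, log sigma, xi):", np.mean(samples[1000:], axis=0))
```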

  17. Fragile-to-fragile liquid transition at Tg and stable-glass phase nucleation rate maximum at the Kauzmann temperature TK

    International Nuclear Information System (INIS)

    Tournier, Robert F.

    2014-01-01

    An undercooled liquid is unstable. The driving force of the glass transition at Tg is a change in the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for crystal formation is supplemented with an enthalpy saving. The crystal-growth critical nucleus is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change −Vm×Δp at Tg, where Vm is the molar volume. A stable glass–liquid transition model predicts the specific heat jump of fragile liquids at T ≤ Tg; the Kauzmann temperature TK, where the liquid entropy excess with regard to the crystal goes to zero; the equilibrium enthalpy between TK and Tg; the maximum nucleation rate at TK of superclusters containing magic atom numbers; and the equilibrium latent heats at Tg and TK. Strong-to-fragile and strong-to-strong liquid transitions at Tg are also described, and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched into the amorphous state, which do not undergo a liquid–liquid transition during the heating that precedes their crystallization, is predicted. Long ageing times leading to the formation at TK of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates. A fragile-to-fragile liquid transition occurs at Tg without stable-glass formation, while a strong glass is stable after transition

  18. Measuring maximum and standard metabolic rates using intermittent-flow respirometry: a student laboratory investigation of aerobic metabolic scope and environmental hypoxia in aquatic breathers.

    Science.gov (United States)

    Rosewarne, P J; Wilson, J M; Svendsen, J C

    2016-01-01

    Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology. © 2016 The Fisheries Society of the British Isles.
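
    The core oxygen-uptake calculation in such an exercise can be sketched as follows (a simplified version that ignores background respiration and the animal's own volume; the data are hypothetical):

```python
import numpy as np

def oxygen_uptake(o2_mg_per_l, minutes, resp_volume_l, fish_mass_kg):
    """Oxygen consumption rate (mg O2 / kg / h) from one closed phase of an
    intermittent-flow respirometry cycle: the slope of the O2 decline times
    the water volume, scaled by body mass."""
    slope = np.polyfit(minutes, o2_mg_per_l, 1)[0]      # mg/L per minute
    return -slope * 60.0 * resp_volume_l / fish_mass_kg

# Hypothetical 10-min closed phase for a 0.2 kg trout in a 5 L chamber
t = np.arange(0, 11)
o2 = 9.0 - 0.02 * t
print(oxygen_uptake(o2, t, resp_volume_l=5.0, fish_mass_kg=0.2))
# SMR is typically taken as a low quantile of many such measurements, MMR as
# the highest rate after exhaustive exercise, and AMS = MMR - SMR.
```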

  19. Comparison of maximum viscosity and viscometric methods for identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscometric and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speed and irradiation dose, and the trend was similar for maximum viscosity. The regression coefficients and fitted expressions of viscosity and maximum viscosity against irradiation dose were 0.9823 (y = 335.02e^(−0.3366x)) at 120 rpm and 0.9939 (y = −42.544x + 730.26), respectively. This trend in viscosity was similar for all stirring speeds. Parameter A, B and C values showed a dose-dependent relation and were better parameters for detecting irradiation treatment than maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. The authors therefore propose the maximum viscosity method as one of the new methods to detect irradiation treatment of sweet potato starch

  20. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Full Text Available Abstract Background Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/, and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  1. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
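
    The bootstrap-support procedure described above can be sketched generically as follows; the actual inference step (maximum parsimony network reconstruction, as in NEPAL) is left as a user-supplied black box:

```python
import numpy as np
from collections import Counter

def bootstrap_support(alignment, infer_events, n_boot=100, seed=0):
    """Generic sketch of bootstrap support for inferred reticulation events.

    alignment: (n_taxa, n_sites) array of characters.
    infer_events: user-supplied function returning a set of hashable
        reticulation-event descriptors for an alignment (in the paper this
        is the maximum parsimony network inference; here it is a black box).
    Returns the fraction of bootstrap replicates in which each event appears.
    """
    rng = np.random.default_rng(seed)
    n_sites = alignment.shape[1]
    counts = Counter()
    for _ in range(n_boot):
        cols = rng.integers(0, n_sites, size=n_sites)   # resample columns
        counts.update(infer_events(alignment[:, cols]))
    return {event: c / n_boot for event, c in counts.items()}
```

    Events recurring in a large fraction of replicates are retained; this replaces the ad hoc threshold on parsimony-length improvement with a resampling-based criterion.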

  2. Radon exhalation rates from soil and sand samples collected from the vicinity of Yamuna river

    International Nuclear Information System (INIS)

    Garg, A.K.; Sushil Kumar; Chauhan, Pooja; Chauhan, R.P.

    2011-01-01

    Soil, sand and stones are the most popular building materials for Indian dwellings. Radon is released into ambient air from these materials due to the ubiquitous uranium and radium in them, thus increasing the airborne radon concentration. The radioactivity in sand and soils is related to the radioactivity in the rocks from which they are formed, and these materials contain varying amounts of uranium. In the present investigation, the radon emanating from soil and sand samples collected at different locations in the vicinity of the Yamuna river has been estimated. The sample collection sites extend from Yamunanagar in Haryana to Delhi. The radon concentration in the different samples has been calculated and, based upon these data, the mass and surface exhalation rates of the emanated radon have also been calculated.

  3. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    Science.gov (United States)

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and the two were highly correlated with each other (r ≥ 0.89, p < .001), although individual hourly-derived DLMOs could deviate by more than 30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
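
    Both threshold rules reduce to finding the first upward crossing of a melatonin profile, which the following sketch implements with linear interpolation (the profile below is hypothetical):

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold=None):
    """Dim light melatonin onset via linear interpolation of the first
    upward threshold crossing. If threshold is None, use the '3k' rule
    (mean + 2 SD of the first three low daytime samples); otherwise pass
    e.g. 3.0 for the fixed 3 pg/mL criterion."""
    t = np.asarray(times_h, float)
    m = np.asarray(melatonin_pg_ml, float)
    if threshold is None:
        threshold = m[:3].mean() + 2.0 * m[:3].std(ddof=1)
    above = np.nonzero(m >= threshold)[0]
    i = above[above > 0][0]              # first crossing after the start
    frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Hypothetical half-hourly profile (pg/mL) starting at 18:00
times = np.arange(18.0, 23.5, 0.5)
mel = np.array([0.8, 1.1, 0.9, 1.3, 2.1, 3.6, 5.9, 9.4, 13.0, 16.2, 18.1])
print("DLMO (3 pg/mL):", dlmo(times, mel, threshold=3.0))
print("DLMO (3k):     ", dlmo(times, mel))
```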

  4. A comparison of optimum and maximum reproduction using the rat ...

    African Journals Online (AJOL)

    of pigs to increase the reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.

  5. Influence of phase separation on the anaerobic digestion of glucose: maximum COD turnover rate during continuous operation

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, A; Van Andel, J G; Breure, A M; Van Deursen, A

    1980-01-01

    A mineral medium containing 1% of glucose as the main carbon source was subjected to one-phase and two-phase anaerobic digestion processes under comparable conditions. The one-phase system combined acidogenic and methanogenic populations allowing a complete conversion of the carbon source into gaseous end products and biomass. The two-phase system consists of an acid reactor and a methane reactor connected in series allowing sequential acidogenesis and methanogenesis. Performance of the one-phase system is compared with that of the two-phase system. Maximum turnover of COD was determined for each system. Maximum specific sludge loading of the two-phase system was more than three times higher than that of the one-phase system. Effects of overloading each system were determined. The eco-physiological significance of phase separation is discussed briefly. (2 diagrams, 5 graphs, 41 references, 5 tables)

  6. Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions

    NARCIS (Netherlands)

    Herman, P.M.J.; Heip, C.H.R.

    1999-01-01

    In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available

  7. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  8. Maximum power point tracking: a cost saving necessity in solar energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Enslin, J H.R. [Stellenbosch Univ. (South Africa). Dept. of Electrical and Electronic Engineering

    1992-12-01

    A well-engineered renewable remote energy system utilizing the principle of Maximum Power Point Tracking (MPPT) can improve cost effectiveness, offers higher reliability and can improve the quality of life in remote areas. A high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximizing the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of between 15 and 25% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost is achievable for relatively small Remote Area Power Supply (RAPS) systems. The advantages are much greater for systems with large temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control. (author).

  9. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and avoids the retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  10. The response of the southern Greenland ice sheet to the Holocene thermal maximum

    DEFF Research Database (Denmark)

    Larsen, Nicolaj Krog; Kjaer, Kurt H.; Lecavalier, Benoit

    2015-01-01

    contribution of 0.16 m sea-level equivalent from the entire Greenland ice sheet, with a centennial ice loss rate of as much as 100 Gt/yr for several millennia during the Holocene thermal maximum. Our results provide an estimate of the long-term rates of volume loss that can be expected in the future...

  11. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  12. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power; these quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
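
    The differentiation approach can be sketched numerically with a single-diode PV model; all model parameters below are assumptions of typical magnitude, not values from the article:

```python
import numpy as np

# Single-diode PV model (assumed): I(V) = I_L - I_0 * (exp(V / V_t) - 1).
# Power P(V) = V * I(V) is maximized where dP/dV = 0.
I_L, I_0, V_t = 5.0, 8e-8, 1.2   # photocurrent (A), saturation current (A),
                                  # modified thermal voltage (V); all assumed
V = np.linspace(0.0, 21.5, 2000)
I = I_L - I_0 * (np.exp(V / V_t) - 1.0)
P = V * I
dPdV = np.gradient(P, V)
k = np.argmin(np.abs(dPdV))       # dP/dV = 0 at the maximum power point
print(f"V_mp ≈ {V[k]:.2f} V, I_mp ≈ {I[k]:.2f} A, P_max ≈ {P[k]:.1f} W")
```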

  13. Is There a Maximum Star Formation Rate in High-redshift Galaxies?

    Science.gov (United States)

    Barger, A. J.; Cowie, L. L.; Chen, C.-C.; Owen, F. N.; Wang, W.-H.; Casey, C. M.; Lee, N.; Sanders, D. B.; Williams, J. P.

    2014-03-01

    We use the James Clerk Maxwell Telescope's SCUBA-2 camera to image a 400 arcmin² area surrounding the GOODS-N field. The 850 μm rms noise ranges from a value of 0.49 mJy in the central region to 3.5 mJy at the outside edge. From these data, we construct an 850 μm source catalog to 2 mJy containing 49 sources detected above the 4σ level. We use an ultradeep (11.5 μJy at 5σ) 1.4 GHz image obtained with the Karl G. Jansky Very Large Array together with observations made with the Submillimeter Array to identify counterparts to the submillimeter galaxies. For most cases of multiple radio counterparts, we can identify the correct counterpart from new and existing Submillimeter Array data. We have spectroscopic redshifts for 62% of the radio sources in the 9' radius highest sensitivity region (556/894) and 67% of the radio sources in the GOODS-N region (367/543). We supplement these with a modest number of additional photometric redshifts in the GOODS-N region (30). We measure millimetric redshifts from the radio to submillimeter flux ratios for the unidentified submillimeter sample, assuming an Arp 220 spectral energy distribution. We find a radio-flux-dependent K–z relation for the radio sources, which we use to estimate redshifts for the remaining radio sources. We determine the star formation rates (SFRs) of the submillimeter sources based on their radio powers and their submillimeter fluxes and find that they agree well. The radio data are deep enough to detect star-forming galaxies with SFRs >2000 M⊙ yr⁻¹ to z ~ 6. We find galaxies with SFRs up to ~6000 M⊙ yr⁻¹ over the redshift range z = 1.5-6, but we see evidence for a turn-down in the SFR distribution function above 2000 M⊙ yr⁻¹. The James Clerk Maxwell Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada, and (until 2013 March 31) the Netherlands Organisation for Scientific

  14. Exploring the Legionella pneumophila positivity rate in hotel water samples from Antalya, Turkey.

    Science.gov (United States)

    Sepin Özen, Nevgün; Tuğlu Ataman, Şenay; Emek, Mestan

    2017-05-01

    The genus Legionella comprises fastidious Gram-negative bacteria widely distributed in natural waters and man-made water supply systems. Legionella pneumophila is the aetiological agent of approximately 90% of reported legionellosis cases, and serogroup 1 is the most frequent cause of infections. Legionnaires' disease is often associated with travel and continues to be a public health concern. Correct water quality management practices and rapid methods for analyzing Legionella species in environmental water are key to preventing Legionnaires' disease outbreaks. This study aimed to evaluate the positivity rates and serotyping of Legionella species in water samples from the region of Antalya, Turkey, an important tourism center. During January-December 2010, a total of 1403 water samples collected from various hotels (n = 56) located in Antalya were investigated for Legionella pneumophila. All samples were screened for L. pneumophila by the culture method according to "ISO 11731-2" criteria. The culture-positive Legionella strains were serologically identified by latex agglutination test. A total of 142 Legionella pneumophila isolates were recovered from 21 (37.5%) of the 56 hotels. The overall frequency of L. pneumophila isolation from water samples was 10.1%. Serological typing of the 142 Legionella isolates by latex agglutination revealed that strains belonging to L. pneumophila serogroups 2-14 predominated in the examined samples (85%), while strains of L. pneumophila serogroup 1 were less numerous (15%). To our knowledge, our study, with the largest number of water samples from Turkey, demonstrates that L. pneumophila serogroups 2-14 are the most common isolates. Rapid isolation of L. pneumophila from environmental water samples is essential for the investigation of travel-related outbreaks and their possible sources. Further studies are needed to gather epidemiological data and to determine the types of L

  15. Radon concentration and exhalation rates in building material samples from crushing zone in Shivalik Foot Hills

    International Nuclear Information System (INIS)

    Pundir, Anil; Kamboj, Sunil; Bansal, Vakul; Chauhan, R.P.; Rana, Rajinder Singh

    2012-01-01

    Radon (²²²Rn) is an inert radioactive gas in the decay chain of uranium (²³⁸U). It continuously emanates from soil to the atmosphere, and radon and its progeny are the major natural radioactive sources of ambient radioactivity on Earth. A number of studies on radon were performed in recent decades focusing on its transport and movement in the atmosphere under different meteorological conditions. Building materials are the main source of radon inside buildings. Some construction materials are naturally more radioactive, and removal of such material from the earth's crust and its subsequent use in the construction of buildings further enhances the radioactivity level. Knowledge of the radioactivity levels in building materials informs the management, guidelines and standards for building construction. The main objective of the present investigation is to measure the radon concentration and exhalation rates in samples collected from the crushing zone of the Shivalik foot hills. Different types of materials are used in the northern part of India for the construction of dwellings. For the measurement of radon concentration and its exhalation rates in building materials, LR-115 detectors were exposed in closed plastic canisters for three months. At the end of the exposure time, the detectors were subjected to a chemical etching process in 2.5N NaOH solution. The tracks produced by the alpha particles were observed and counted under an optical Olympus microscope at 600X. The measured track density was converted into radon concentration using a calibration factor. The surface and mass exhalation rates of radon have also been calculated from the present data. The results indicate that the radon concentration varies appreciably from sample to sample and was generally found to satisfy the safety criteria, although some samples show higher radon concentrations that may enhance indoor radiation levels when used as building construction materials. (author)

  16. Evaluation of the DSM-5 severity ratings for anorexia nervosa in a clinical sample.

    Science.gov (United States)

    Dakanalis, Antonios; Alix Timko, C; Colmegna, Fabrizia; Riva, Giuseppe; Clerici, Massimo

    2018-04-01

    We examined the validity and utility of the DSM-5 severity ratings for anorexia nervosa (AN) in a clinical (treatment-seeking) sample (N = 273; 95.6% women). Participants classified with mild, moderate, severe, and extreme severity of AN based on their measured body mass index, differed significantly from each other in eating disorder features, putative maintenance factors, and illness-specific functional impairment (medium effect sizes). However, they were statistically indistinguishable in psychiatric-disorder comorbidity and distress, demographics, and age-of-AN onset. The implications of our findings, providing limited support for the DSM-5 severity ratings for AN, and directions for future research are outlined. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Electrochemical techniques implementation for corrosion rate measurement in function of humidity level in grounding systems (copper and stainless steel) in soil samples from Tunja (Colombia)

    Science.gov (United States)

    Salas, Y.; Guerrero, L.; Blanco, J.; Jimenez, C.; Vera-Monroy, S. P.; Mejía-Camacho, A.

    2017-12-01

    In this work, DC electrochemical techniques were used to determine the corrosion rate of copper and stainless-steel electrodes used in grounding, varying the level of humidity, in sandy loam and clay loam soils. The maximum corrosion potentials were −211 and −236 mV for copper and −252 and −281 mV for stainless steel, in sandy loam and clay loam respectively, showing that the values in sandy loam are higher by about 30 mV. Corrosion of the steel is diffusion-controlled, whereas in copper it proceeds by combined mass and charge transfer; this affects the corrosion rate, which reached a maximum value of 5 mm/yr in copper and 0.8 mm/yr in steel, as determined by Tafel approximations. The behaviour of the corrosion rate was fitted to an asymptotic model that explains the corrosion rate as a function of humidity well; however, it is still necessary to define the relation between the factor established in the model and specific characteristics of the soil, such as the permeability or the quantity of ions present.
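
    As a rough illustration of how polarization measurements of this kind translate into the quoted penetration rates (mm/yr), the sketch below chains the Stern-Geary relation to an ASTM G102-style Faraday conversion. The Tafel slopes, polarization resistance and material constants are assumed illustrative numbers, not the paper's measurements.

        # Stern-Geary constant B = ba*bc / (2.303*(ba+bc)); i_corr = B / Rp
        def stern_geary_current(beta_a, beta_c, rp):
            """Corrosion current density (uA/cm^2) from Tafel slopes (V/decade)
            and polarization resistance rp (ohm cm^2)."""
            b = beta_a * beta_c / (2.303 * (beta_a + beta_c))   # volts
            return b / rp * 1e6                                 # A -> uA

        def corrosion_rate_mm_per_year(i_corr_ua, equiv_weight, density):
            """Faraday conversion: CR (mm/yr) = 3.27e-3 * i_corr * EW / rho."""
            return 3.27e-3 * i_corr_ua * equiv_weight / density

        i_cu = stern_geary_current(beta_a=0.12, beta_c=0.12, rp=450.0)
        print(corrosion_rate_mm_per_year(i_cu, equiv_weight=31.77, density=8.96))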

  18. Dose rate in a deactivated uranium mine

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Wagner S.; Kelecom, Alphonse G.A.C.; Silva, Ademir X.; Marques, José M.; Carmo, Alessander S. do; Dias, Ayandra O., E-mail: pereiraws@gmail.com, E-mail: wspereira@inb.gov.br, E-mail: lararapls@hotmail.com, E-mail: Ademir@nuclear.ufrj.br, E-mail: marqueslopes@yahoo.com.br [Universidade Veiga de Almeida (UVA), Rio de Janeiro, RJ (Brazil); Indústrias Nucleares do Brasil (COMAP.N/FCN/INB), Resende RJ (Brazil). Fábrica de Combustível Nuclear. Coordenação de Meio Ambiente e Proteção Radiológica Ambiental; Universidade Federal Fluminense (LARARA-PLS/UFF), Niterói, RJ (Brazil). Laboratório de Radiobiologia e Radiometria; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    The Ore Treatment Unit is a deactivated uranium mining and milling facility situated in Caldas, MG, Brazil. Although deactivated, some areas are still considered controlled or supervised from the radiological point of view. In these areas it is necessary to keep an occupational monitoring program to ensure the workers' safety and to prevent the dispersion of radioactive material. Over the years 2013 to 2016, the dose rate was measured with Geiger-Müller (GM) area monitors and personal electronic monitors of GM type, in μSv·h⁻¹, and by thermoluminescence dosimetry (TLD), in mSv·month⁻¹. For area monitoring, 577 records were collected; for personal electronic dosimeters, 2,656; and for TLD monitoring, 5,657. The area monitoring showed a mean dose rate of 6.42 μSv·h⁻¹ with a standard deviation of 48 μSv·h⁻¹ and a maximum recorded value of 685 μSv·h⁻¹; 96% of the records were below the derived hourly limit for workers (10 μSv·h⁻¹). For the personal electronic monitoring, the mean was 15.86 μSv·h⁻¹ with a standard deviation of 61.74 μSv·h⁻¹; 80% of the records were below the derived limit and the maximum recorded was 1,220 μSv·h⁻¹. Finally, the TLD showed a mean of 0.01 mSv·month⁻¹ (the TLD detection limit is 0.2 mSv·month⁻¹) with a standard deviation of 0.08 mSv·month⁻¹; 98% of the registered values were below 0.2 mSv·month⁻¹, and less than 2% of the measurements were above the detection limit. The records indicate areas with low risk of external exposure, as shown by the TLD evaluation. Specific areas with greater risk of contamination have already been identified, as well as operations at higher risk; in these cases the use of the individual electronic dosimeter is justified for more effective monitoring. Radioprotection identified all risks and was able to extend individual electronic monitoring to all risk operations, even with the use of the TLD

  19. Dose rate in a deactivated uranium mine

    International Nuclear Information System (INIS)

    Pereira, Wagner S.; Kelecom, Alphonse G.A.C.; Silva, Ademir X.; Marques, José M.; Carmo, Alessander S. do; Dias, Ayandra O.; Indústrias Nucleares do Brasil; Universidade Federal Fluminense; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia

    2017-01-01

    The Ore Treatment Unit is a deactivated uranium mining and milling facility situated in Caldas, MG, Brazil. Although deactivated, some areas are still considered controlled or supervised from the radiological point of view. In these areas it is necessary to keep an occupational monitoring program to ensure the workers' safety and to prevent the dispersion of radioactive material. Over the years 2013 to 2016, the dose rate was measured with Geiger-Müller (GM) area monitors and personal electronic monitors of GM type, in μSv·h⁻¹, and by thermoluminescence dosimetry (TLD), in mSv·month⁻¹. For area monitoring, 577 records were collected; for personal electronic dosimeters, 2,656; and for TLD monitoring, 5,657. The area monitoring showed a mean dose rate of 6.42 μSv·h⁻¹ with a standard deviation of 48 μSv·h⁻¹ and a maximum recorded value of 685 μSv·h⁻¹; 96% of the records were below the derived hourly limit for workers (10 μSv·h⁻¹). For the personal electronic monitoring, the mean was 15.86 μSv·h⁻¹ with a standard deviation of 61.74 μSv·h⁻¹; 80% of the records were below the derived limit and the maximum recorded was 1,220 μSv·h⁻¹. Finally, the TLD showed a mean of 0.01 mSv·month⁻¹ (the TLD detection limit is 0.2 mSv·month⁻¹) with a standard deviation of 0.08 mSv·month⁻¹; 98% of the registered values were below 0.2 mSv·month⁻¹, and less than 2% of the measurements were above the detection limit. The records indicate areas with low risk of external exposure, as shown by the TLD evaluation. Specific areas with greater risk of contamination have already been identified, as well as operations at higher risk; in these cases the use of the individual electronic dosimeter is justified for more effective monitoring. Radioprotection identified all risks and was able to extend individual electronic monitoring to all risk operations, even with the use of the TLD

  20. Petroleum production at Maximum Efficient Rate Naval Petroleum Reserve No. 1 (Elk Hills), Kern County, California. Final Supplemental Environmental Impact Statement

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-01

    This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (the Act). The document also provides a similar analysis of alternatives to the proposed action, which likewise involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility covers approximately 17,409 acres (74 square miles) and is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles, in the south-central portion of the state. The environmental analysis presented herein supplements the NPR-1 Final Environmental Impact Statement issued by DOE in 1979 (the 1979 EIS); as such, this document is a Supplemental Environmental Impact Statement (SEIS).

  1. The determination of volatile chlorinated hydrocarbons in air. Sampling rate and efficiency of diffusive samplers

    Energy Technology Data Exchange (ETDEWEB)

    Giese, U.; Stenner, H.; Kettrup, A.

    1989-05-01

    When applying diffusive sampling systems to workplace air monitoring, it is necessary to know how the diffusive uptake rate and the sampling efficiency depend on concentration, exposure time and the type of pollutant. For mixtures of pollutants in particular, adverse effects through competition and mutual displacement are possible. The diffusive uptake rate and recovery for CH₂Cl₂ and CHCl₃ were investigated using two different types of diffusive samplers. For this purpose it was necessary to develop suitable devices for standard gas generation and for exposing the diffusive samplers to a standard gas mixture. (orig.)

  2. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behaviour of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.
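
    A minimal Monte Carlo sketch of such a sampling study, restricted to the plain maximum-likelihood estimators of the power function distribution F(x) = (x/theta)^gamma on (0, theta) (the paper's modified estimators are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(1)
        gamma_true, theta_true, n, reps = 2.0, 5.0, 50, 10_000

        est = np.empty((reps, 2))
        for r in range(reps):
            x = theta_true * rng.uniform(size=n) ** (1.0 / gamma_true)  # inverse CDF
            theta_hat = x.max()                                # MLE of theta
            gamma_hat = n / np.log(theta_hat / x).sum()        # MLE of gamma
            est[r] = gamma_hat, theta_hat

        bias = est.mean(axis=0) - (gamma_true, theta_true)
        mse = ((est - (gamma_true, theta_true)) ** 2).mean(axis=0)
        print(bias, mse)   # bias and MSE for (gamma, theta)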

  3. Self-Rated Health in Relation to Rape and Mental Health Disorders in a National Sample of College Women

    Science.gov (United States)

    Zinzow, Heidi M.; Amstadter, Ananda B.; McCauley, Jenna L.; Ruggiero, Kenneth J.; Resnick, Heidi S.; Kilpatrick, Dean G.

    2011-01-01

    Objective: The purpose of this study was to employ a multivariate approach to examine the correlates of self-rated health in a college sample of women, with particular emphasis on sexual assault history and related mental health outcomes. Participants: A national sample of 2,000 female college students participated in a structured phone interview…

  4. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-sequencing (RNA-seq) experiments have been widely applied to transcriptome studies in recent years, but such experiments are still relatively costly, so they often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data: there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis, and the false discovery rate (FDR), instead of the family-wise type I error rate, is controlled for the multiple testing error. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method, which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that, for RNA-seq data with sample size calculated by our method, the actual power of several popularly applied tests for differential expression is close to the desired power. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package, ssizeRNA, that implements the method and can be downloaded from the Comprehensive R Archive Network (http://cran.r-project.org).
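
    The overall logic of an FDR-aware sample size calculation can be sketched with a simplified normal approximation: convert the target FDR into an equivalent per-test significance level (a Jung-style conversion using the expected numbers of true and false null hypotheses) and size a two-group comparison on the log scale. This is a hedged sketch, not the authors' voom-based procedure; the effect size, standard deviation and gene counts are illustrative.

        from math import ceil
        from scipy.stats import norm

        def per_test_alpha(fdr, m, m1, power):
            # alpha = m1 * power * fdr / ((m - m1) * (1 - fdr))
            return m1 * power * fdr / ((m - m1) * (1.0 - fdr))

        def n_per_group(delta, sigma, alpha, power):
            # two-sample z-test size for a mean difference delta on the log scale
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * sigma / delta) ** 2)

        alpha = per_test_alpha(fdr=0.05, m=20_000, m1=1_000, power=0.8)
        print(n_per_group(delta=1.0, sigma=0.8, alpha=alpha, power=0.8))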

  5. Maximum heat flux in boiling in a large volume

    International Nuclear Information System (INIS)

    Bergmans, Dzh.

    1976-01-01

    Relationships are derived for the maximum heat flux q_max without relying on the assumptions of a critical vapour velocity corresponding to zero growth rate or of a planar interface. To this end, a Helmholtz instability analysis of the vapour column has been made, and the results have been used to find the maximum heat flux for spherical, cylindrical and flat-plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in that case can be explained by inadequate removal of the vapour output from the heater (by the force of gravity for cylindrical heaters and by surface tension for spherical ones). For the flat-plate heater, the q_max value can be explained with the help of the hydrodynamic theory

  6. Detection of silver nanoparticles in parsley by solid sampling high-resolution-continuum source atomic absorption spectrometry.

    Science.gov (United States)

    Feichtmeier, Nadine S; Leopold, Kerstin

    2014-06-01

    In this work, we present a fast and simple approach for the detection of silver nanoparticles (AgNPs) in biological material (parsley) by solid sampling high-resolution continuum source atomic absorption spectrometry (HR-CS AAS). A novel evaluation strategy was developed in order to distinguish AgNPs from ionic silver and to size the AgNPs: the atomisation delay was introduced as a significant indicator of AgNPs, whereas atomisation rates allow the distinction of 20-, 60- and 80-nm AgNPs. Atomisation delays were found to be higher for samples containing silver ions than for samples containing silver nanoparticles. A maximum difference in atomisation delay, normalised by the sample weight, of 6.27 ± 0.96 s mg⁻¹ was obtained after optimisation of the furnace program of the AAS; for this purpose, a multivariate experimental design was used, varying the atomisation temperature, atomisation heating rate and pyrolysis temperature. Atomisation rates were calculated as the slope at the first inflection point of the absorbance signals and correlated with the size of the AgNPs in the biological sample. Hence, solid sampling HR-CS AAS proved to be a promising tool for identifying and distinguishing silver nanoparticles from ionic silver directly in solid biological samples.
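
    The two signal features used above can be extracted from a digitized absorbance-time trace roughly as in the sketch below; the onset threshold, the synthetic sigmoid trace and the sample weight are illustrative assumptions, not the paper's data processing.

        import numpy as np

        def atomisation_features(t, absorbance, sample_mg, onset_frac=0.05):
            a = absorbance - absorbance[:10].mean()        # baseline correction
            onset = np.argmax(a > onset_frac * a.max())    # first threshold crossing
            delay_per_mg = t[onset] / sample_mg            # s per mg of sample
            d1 = np.gradient(a, t)
            d2 = np.gradient(d1, t)
            rising = np.arange(onset, np.argmax(a))
            # first inflection on the rising edge: sign change of 2nd derivative
            infl = rising[np.argmax(np.diff(np.sign(d2[rising])) != 0)]
            return delay_per_mg, d1[infl]                  # (s/mg, absorbance/s)

        t = np.linspace(0.0, 8.0, 801)
        trace = 0.6 / (1 + np.exp(-(t - 3.0) * 4.0))       # synthetic signal
        print(atomisation_features(t, trace, sample_mg=0.5))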

  7. Noncircular Chainrings Do Not Influence Maximum Cycling Power.

    Science.gov (United States)

    Leong, Chee-Hoi; Elmer, Steven J; Martin, James C

    2017-12-01

    Noncircular chainrings could increase cycling power by prolonging the powerful leg extension/flexion phases and curtailing the low-power transition phases. We compared maximal cycling power-pedaling rate relationships, and joint-specific kinematics and powers, across 3 chainring eccentricities (CON = 1.0; LOWecc = 1.13; HIGHecc = 1.24). Part I: Thirteen cyclists performed maximal inertial-load cycling under the 3 chainring conditions. Maximum cycling power and optimal pedaling rate were determined. Part II: Ten cyclists performed maximal isokinetic cycling (120 rpm) under the same 3 chainring conditions. Pedal and joint-specific powers were determined using pedal forces and limb kinematics. Neither maximal cycling power nor optimal pedaling rate differed across chainring conditions (all p > .05). Peak ankle angular velocity for HIGHecc was less than for CON (p < .05). The pedal system allowed cyclists to manipulate ankle angular velocity to maintain their preferred knee and hip actions, suggesting that maximizing extension/flexion and minimizing transition phases may be counterproductive for maximal power.

  8. Tank Farm WM-182 and WM-183 Heel Slurry Samples PSD Results

    International Nuclear Information System (INIS)

    Batcheller, T.A.; Huestis, G.M.

    2000-01-01

    Particle size distribution (PSD) analysis of INTEC Tank Farm WM-182 and WM-183 heel slurry samples was performed using a modified Horiba LA-300 PSD analyzer at the RAL facility. Two types of testing were performed: typical PSD analysis and settling rate testing. Although the heel slurry samples were obtained from two separate vessels, the particle size distribution results were quite similar. The slurry solids ranged from a minimum particle size of approximately 0.5 µm to a maximum of 230 µm, with about 90% of the material between 2 and 133 µm and the cumulative 50% value at approximately 20 µm. This testing also revealed that high-frequency sonication with an ultrasonic element may break up larger particles in the WM-182 and WM-183 tank farm heel slurries, which is useful information regarding ultimate tank heel waste processing. Settling rate testing results were also fairly consistent between the two vessels: most of the mass of solids settles to an agglomerated yet easily redispersed layer at the bottom. Dispersed and suspended material remained in the "clear" layer above the settled layer after about half an hour of settling time; this material had a statistical mode of approximately 5 µm and a maximum particle size of 30 µm

  9. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  10. Measurement of radon exhalation rates in some soil samples collected near the international monument Taj Mahal, Agra

    International Nuclear Information System (INIS)

    Sharma, Jyoti; Kumar, Rupesh; Indolia, R.S.; Swarup, R.; Mahur, A.K.; Singh, Hargyan; Sonkawade, R.G.

    2011-01-01

    Human beings are exposed to ionizing radiation from natural sources due to the occurrence of natural radioactive elements in soil, rocks, sand, etc. used as building construction materials, and to internal exposure from radioactive elements taken in through food, water and air. The radon exhalation rate is of prime importance for estimating the radiation risk from various materials. In the present study, soil samples were collected near the Taj Mahal, Agra, and the sealed-can technique was adopted for the radon exhalation measurements. All collected soil samples were ground, dried and sieved through a 100 mesh sieve. An equal amount (100 g) of each sieved sample (100 µm grain size) was placed at the base of cans of 7.5 cm height and 7.0 cm diameter, similar to those used in the calibration experiment (Singh et al., 1997). An LR-115 type II plastic track detector (2 cm x 2 cm) was fixed inside the top of each cylindrical can. The radon exhalation rate varies from 529 mBq m⁻² h⁻¹ to 1254 mBq m⁻² h⁻¹. The results will be presented. (author)

  11. Growth, chamber building rate and reproduction time of Palaeonummulites venosus under natural conditions.

    Science.gov (United States)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-04-01

    Investigation of Palaeonummulites venosus using the natural laboratory approach to determine chamber building rate, test diameter increase rate, reproduction time and longevity is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' over the 15-month investigation period was used to calculate Michaelis-Menten functions, which were applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. Individual dates of birth were estimated using the inverse of the averaged chamber building rate, or of the diameter increase rate, fitted with the individual chamber number or test diameter at the sampling date. Distributions of frequencies and densities (i.e. frequency divided by sediment weight) based on both rates indicated continuous reproduction throughout the year with two peaks: the stronger in May/June, taken as the beginning of the summer generation (generation 1), and the weaker in November, taken as the beginning of the winter generation (generation 2). This reproduction scheme explains the coexistence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between an individual's birth date and the sampling date, appears to be roughly one year according to both estimations, based on the chamber building rate and on the diameter increase rate.
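
    The growth-function step can be sketched as follows: fit a Michaelis-Menten curve to averaged chamber counts over age, then invert it to date an individual from its chamber number. The data points and starting values are invented for illustration; the paper's decomposition of frequency distributions into normal components is not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(t, a, b):
            return a * t / (b + t)          # mean chamber number at age t (days)

        t_obs = np.array([30, 60, 120, 180, 270, 360], dtype=float)
        n_obs = np.array([12, 20, 30, 36, 41, 44], dtype=float)

        (a, b), _ = curve_fit(michaelis_menten, t_obs, n_obs, p0=(50.0, 100.0))

        def age_from_chambers(n):
            return b * n / (a - n)          # inverse of the fit, valid for n < a

        print(age_from_chambers(25.0))      # estimated age (d) of a 25-chamber test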

  12. Measurement of radon activity, exhalation rate and radiation dose in fly ash and coal samples from NTPC, Badarpur, Delhi, India

    International Nuclear Information System (INIS)

    Gupta, Mamta; Verma, K.D.; Mahur, A.K.; Prasad, R.; Sonkawade, R.G.

    2013-01-01

    In the present study, radon activities and exhalation rates of fly ash and coal samples from the NTPC (National Thermal Power Corporation) plant situated at Badarpur, Delhi, India, have been measured using the sealed-can technique with LR-115 type II track detectors. In the fly ash samples, the radon activity was found to vary from 400.0 ± 34.7 to 483.9 ± 38.1 Bq m⁻³ with an average value of 447.1 ± 36.6 Bq m⁻³, and in the coal samples from 504.0 ± 39.0 to 932.1 ± 52.9 Bq m⁻³ with an average value of 687.2 ± 45.2 Bq m⁻³. The radon exhalation rate from coal is found to be higher than that from its ash products, whereas the opposite would be expected. The indoor inhalation exposure (radon) effective dose has also been estimated. (author)

  13. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
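
    A hedged sketch of the main ingredients (not the authors' formulae): approximate the power of a noninferiority test of the log rate ratio using the Fisher-information variance of log(rate) with subject-level follow-up times. Comparing variable follow-up against "everyone gets the mean follow-up" illustrates the underestimation the abstract warns about; the rates, dispersions and margin are assumed values.

        import numpy as np
        from scipy.stats import norm

        def var_log_rate(rate, disp, followups):
            # inverse Fisher information for log(rate) under NB counts
            info = np.sum(rate * followups / (1.0 + disp * rate * followups))
            return 1.0 / info

        def ni_power(follow0, follow1, r0, r1, k0, k1, margin, alpha=0.025):
            v = var_log_rate(r0, k0, follow0) + var_log_rate(r1, k1, follow1)
            shift = (np.log(margin) - np.log(r1 / r0)) / np.sqrt(v)
            return norm.cdf(shift - norm.ppf(1 - alpha))

        rng = np.random.default_rng(7)
        n = 150
        t_var = rng.exponential(scale=1.0, size=n)   # variable follow-up, mean 1
        t_fix = np.full(n, 1.0)                      # naive: mean follow-up for all
        args = dict(r0=2.0, r1=2.0, k0=0.6, k1=0.6, margin=1.25)
        print(ni_power(t_var, t_var, **args), ni_power(t_fix, t_fix, **args))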

  14. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed, specifically for a number of dusts and smokes. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT)

  15. A thermoelectric generator using loop heat pipe and design match for maximum-power generation

    KAUST Repository

    Huang, Bin-Juine

    2015-09-05

    The present study focuses on a thermoelectric generator (TEG) using a loop heat pipe (LHP) and on design match for maximum-power generation. The TEG uses a loop heat pipe, a passive cooling device, to dissipate heat without consuming power and free of noise. Experiments on a TEG with 4 W rated power show that the LHP performs very well, with an overall thermal resistance of 0.35 K W⁻¹ from the cold side of the TEG module to the ambient. The LHP is able to dissipate heat up to 110 W and is maintenance free. The TEG design match for maximum-power generation, called "near maximum-power point operation" (nMPPO), is studied to eliminate the MPPT (maximum-power point tracking) controller. nMPPO is simply a system design that properly matches the output voltage of the TEG with the battery. It is experimentally shown that a TEG using design match for maximum-power generation (nMPPO) performs better than a TEG with MPPT.
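
    The design-match idea admits a simple linear-model illustration: with open-circuit voltage V_oc = S·dT and internal resistance R_int, the power delivered into a fixed battery voltage V_b peaks near V_b = V_oc/2, so the series module count can be chosen to place V_oc/2 near the battery voltage at the design temperature difference. The module parameters below are assumptions, not the paper's hardware.

        S_MODULE = 0.05      # Seebeck voltage per module, V/K (assumed)
        R_MODULE = 2.0       # internal resistance per module, ohm (assumed)

        def power_into_battery(n_series, d_t, v_batt):
            v_oc = n_series * S_MODULE * d_t
            r_int = n_series * R_MODULE
            i = max((v_oc - v_batt) / r_int, 0.0)  # battery clamps the voltage
            return v_batt * i

        def match_series_count(d_t_design, v_batt, n_max=20):
            # pick the series count whose V_oc/2 sits closest to the battery voltage
            return min(range(1, n_max + 1),
                       key=lambda n: abs(n * S_MODULE * d_t_design / 2 - v_batt))

        n = match_series_count(d_t_design=100.0, v_batt=12.0)
        print(n, power_into_battery(n, 100.0, 12.0))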

  16. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR), and thereby enhance the utilization of the reactor, has been tested using the MCNP4C code. The modification consists of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water; the total height of the chain is 11.5 cm. Replacement of the actual cadmium absorber with a ¹⁰B absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρ_ex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Thus, a 139% increase in the maximum reactor operation time was observed for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    This section describes how to determine maximum engine power, displacement, power density, and maximum in-use engine speed. For example, for cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would...

  18. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  19. The maximum contraceptive prevalence 'demand curve': guiding discussions on programmatic investments.

    Science.gov (United States)

    Weinberger, Michelle; Sonneveldt, Emily; Stover, John

    2017-12-22

    Most frameworks for family planning include both access and demand interventions. Understanding how these two are linked and when each should be prioritized is difficult. The maximum contraceptive prevalence 'demand curve' was created based on a relationship between the modern contraceptive prevalence rate (mCPR) and mean ideal number of children to allow for a quantitative assessment of the balance between access and demand interventions. The curve represents the maximum mCPR that is likely to be seen given fertility intentions and related norms and constructs that influence contraceptive use. The gap between a country's mCPR and this maximum is referred to as the 'potential use gap.' This concept can be used by countries to prioritize access investments where the gap is large, and discuss implications for future contraceptive use where the gap is small. It is also used within the FP Goals model to ensure mCPR growth from access interventions does not exceed available demand.

  20. Alcohol Use, Age, and Self-Rated Mental and Physical Health in a Community Sample of Lesbian and Bisexual Women.

    Science.gov (United States)

    Veldhuis, Cindy B; Talley, Amelia E; Hancock, David W; Wilsnack, Sharon C; Hughes, Tonda L

    2017-12-01

    Given that self-perceptions of mental and physical health are important predictors of health outcomes and well-being, particularly among older adults, this study focuses on associations among age, alcohol consumption, and indicators of both self-rated mental health and self-rated physical health in a sample of sexual minority women (SMW). This study uses a community sample of SMW to examine the associations among age, drinking, and self-rated mental and physical health. Heavy drinking among older adult SMW (55+) was less prevalent than among young SMW, ages 18-25 and ages 26-39, but similar to rates reported among SMW ages 40-54. In addition, older SMW reported significantly higher levels of self-rated mental health, compared with SMW in the other age groups, but we found no significant associations between age and self-rated physical health. Across all age groups, moderate drinkers reported better self-rated physical health than alcohol abstainers. Overall, these results suggest that, among SMW, drinking does not decline as sharply with age as it does for heterosexual women in the general population. Given the current and projected increases in the aging population and the risks that heavy drinking presents for morbidity and mortality, interventions aimed at older SMW are needed.

  1. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  2. Natural radioactivity and dose rates for soil samples around Tiruchirapalli, South India using γ-ray spectrometry

    International Nuclear Information System (INIS)

    Senthilkumar, Bojarajan; Manikandan, Sabapathy; Musthafa, Mohamed Saiyad

    2012-01-01

    The activity concentrations and gamma-absorbed dose rates of the naturally occurring radionuclides ²²⁶Ra, ²³²Th and ⁴⁰K were determined for 40 soil samples collected from Tiruchirapalli, South India, using γ-ray spectrometry. The average activity concentrations of ²²⁶Ra, ²³²Th and ⁴⁰K in the soil samples were found to be 29.9, 39.0 and 369.7 Bq kg⁻¹, respectively. The measured activity concentrations of both ²²⁶Ra and ⁴⁰K in the soil were lower than the world averages, whereas the activity of ²³²Th was higher than the world average. The concentrations of these radionuclides were also compared with the average activity of Indian soil. The radiological hazard indices were calculated and compared with internationally approved values. The average external absorbed gamma dose rate was observed to be 79.9 nGy h⁻¹, with a corresponding average annual effective dose of 97.9 µSv y⁻¹, which is above the world average values. The values of Ra_eq and H_ex were found to be within the criterion limits, whereas the radioactivity level index (Iγ) and the total gamma dose rate were above the worldwide average values. (author)
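
    The dose quantities above follow from standard UNSCEAR-style conversions, sketched below. The coefficients are the commonly quoted ones and may differ slightly from those the authors applied; the last line reproduces the 97.9 µSv y⁻¹ annual effective dose from the reported 79.9 nGy h⁻¹ with an outdoor occupancy factor of 0.2.

        def absorbed_dose_rate(c_ra, c_th, c_k):
            """Outdoor absorbed gamma dose rate in air (nGy/h) from soil activity
            concentrations (Bq/kg) of Ra-226, Th-232 and K-40."""
            return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

        def annual_effective_dose_usv(d_ngy_h, occupancy=0.2, conversion=0.7):
            # 8760 h/y, outdoor occupancy factor, 0.7 Sv/Gy conversion coefficient
            return d_ngy_h * 8760 * occupancy * conversion / 1000.0

        print(absorbed_dose_rate(29.9, 39.0, 369.7))  # dose rate implied by the means
        print(annual_effective_dose_usv(79.9))        # ~97.9 uSv/y, as reported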

  3. The Maximum Flux of Star-Forming Galaxies

    Science.gov (United States)

    Crocker, Roland M.; Krumholz, Mark R.; Thompson, Todd A.; Clutterbuck, Julie

    2018-04-01

    The importance of radiation pressure feedback in galaxy formation has been extensively debated over the last decade. The regime of greatest uncertainty is in the most actively star-forming galaxies, where large dust columns can potentially produce a dust-reprocessed infrared radiation field with enough pressure to drive turbulence or eject material. Here we derive the conditions under which a self-gravitating, mixed gas-star disc can remain hydrostatic despite trapped radiation pressure. Consistently taking into account the self-gravity of the medium, the star- and dust-to-gas ratios, and the effects of turbulent motions not driven by radiation, we show that galaxies can achieve a maximum Eddington-limited star formation rate per unit area Σ̇*,crit ∼ 10³ M⊙ pc⁻² Myr⁻¹, corresponding to a critical flux of F*,crit ∼ 10¹³ L⊙ kpc⁻², similar to previous estimates; higher fluxes eject mass in bulk, halting further star formation. Conversely, we show that in galaxies below this limit, our one-dimensional models imply simple vertical hydrostatic equilibrium and that radiation pressure is ineffective at driving turbulence or ejecting matter. Because the vast majority of star-forming galaxies lie below the maximum limit for typical dust-to-gas ratios, we conclude that infrared radiation pressure is likely unimportant for all but the most extreme systems on galaxy-wide scales. Thus, while radiation pressure does not explain the Kennicutt-Schmidt relation, it does impose an upper truncation on it. Our predicted truncation is in good agreement with the highest observed gas and star formation rate surface densities found both locally and at high redshift.

  4. Estimate of respiration rate and physicochemical changes of fresh-cut apples stored under different temperatures

    Directory of Open Access Journals (Sweden)

    Cristiane Fagundes

    2013-03-01

    In this study, the influence of storage temperature and passive modified packaging (PMP) on the respiration rate and physicochemical properties of fresh-cut Gala apples (Malus domestica B.) was investigated. The samples were packed in flexible multilayer bags and stored at 2 °C, 5 °C and 7 °C for eleven days. The respiration rate, as a function of CO₂ and O₂ concentrations, was determined using gas chromatography, and the inhibition parameters were estimated using a mathematical model based on the Michaelis-Menten equation. The following physicochemical properties were evaluated: total soluble solids, pH, titratable acidity and reducing sugars. At 2 °C, the maximum respiration rate was observed after 150 hours; at 5 °C and 7 °C, the maximum respiration rates were observed after 100 and 50 hours of storage, respectively. The inhibition model results showed a clear effect of CO₂ on O₂ consumption. The soluble solids decreased, although not significantly, during storage at the three temperatures studied. Reducing sugars and titratable acidity decreased during storage and the pH increased. These results indicate that the respiration rate influenced the physicochemical properties.
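
    A Michaelis-Menten respiration model with CO₂ inhibition of the kind referred to can be fitted as sketched below. The uncompetitive-inhibition form and all data points are illustrative assumptions; the authors' exact model variant and parameters are not given here.

        import numpy as np
        from scipy.optimize import curve_fit

        def respiration_rate(gas, vm, km, ki):
            o2, co2 = gas
            return vm * o2 / (km + (1.0 + co2 / ki) * o2)  # e.g. mL O2 kg^-1 h^-1

        # illustrative headspace observations: (%O2, %CO2) -> measured O2 uptake
        o2 = np.array([18.0, 15.0, 12.0, 8.0, 5.0])
        co2 = np.array([2.0, 4.0, 7.0, 10.0, 14.0])
        r = np.array([9.5, 7.9, 6.3, 5.1, 3.9])

        (vm, km, ki), _ = curve_fit(respiration_rate, (o2, co2), r, p0=(12.0, 3.0, 10.0))
        print(vm, km, ki)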

  5. A high-pressure thermal gradient block for investigating microbial activity in multiple deep-sea samples

    DEFF Research Database (Denmark)

    Kallmeyer, J.; Ferdelman, TG; Jansen, KH

    2003-01-01

    Details of the construction and use of a high-pressure thermal gradient block for the simultaneous incubation of multiple samples are presented. Most parts used are moderately priced off-the-shelf components that are easily obtainable. In order to keep the pressure independent of thermal expansion…. Sulfate reduction rates increase with increasing pressure and show maximum values at pressures higher than in situ. (C) 2003 Elsevier Science B.V. All rights reserved.

  6. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  7. Measurement of solid flow rates and sampling

    International Nuclear Information System (INIS)

    Caillot, A.

    1984-01-01

    Because very fine sediments carry numerous pollutants, sediment transport must be taken into account to ensure realistic and vigilant monitoring of the aquatic environment. The movement of sediments may be due to natural events (currents, swell, winds) or to human intervention (dredging, emptying of dam reservoirs, release of wastes and so forth). Their circulation, at times highly complex, especially in estuaries, may alternate with periods of rest - and therefore periods of accumulation of pollutants - which may be fairly long. Despite the plethora of available methods and techniques, the amounts of sediment transported by drift or in suspension are very difficult to assess. The physico-chemical nature and the behaviour of these substances in water make it awkward to select samples, in space and time, for the purpose of analysis. Sampling should be carried out with mechanical means suited to the circumstances and to the aim in mind. However, by taking into consideration the hydrosedimentary mechanisms known to hydrologists and sedimentologists, it is possible to improve the selection of the sites to be monitored and to choose more carefully (and therefore to limit) the samples to be analysed. Environmental monitoring may thus be performed more efficiently and at lower cost. (author)

  8. Erich Regener and the ionisation maximum of the atmosphere

    Science.gov (United States)

    Carlson, P.; Watson, A. A.

    2014-12-01

    In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under water and in the atmosphere. Along with one of his students, Georg Pfotzer, he discovered the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students, and through his links with Rutherford's group in Cambridge, is discussed in an appendix. Regener was nominated for the Nobel Prize in Physics by Schrödinger in 1938. He died in 1955 at the age of 73.

  9. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charge for a service is the supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  10. Mothers' and fathers' ratings of family relationship quality: associations with preadolescent and adolescent anxiety and depressive symptoms in a clinical sample.

    Science.gov (United States)

    Queen, Alexander H; Stewart, Lindsay M; Ehrenreich-May, Jill; Pincus, Donna B

    2013-06-01

    This study examined the independent associations among three family relationship quality factors--cohesion, expressiveness, and conflict--with youth self-reported depressive and anxiety symptoms in a clinical sample of anxious and depressed youth. Ratings of family relationship quality were obtained through both mother and father report. The sample included families of 147 preadolescents and adolescents (56.6 % female; 89.8 % Caucasian), 11-18 years old (M = 13.64, SD = 1.98) assigned a principal diagnosis of an anxiety or depressive disorder. When controlling for age and concurrent anxiety symptoms, regression analyses revealed that for boys, both father- and mother-rated family cohesion predicted depressive symptoms. For girls, mother-rated family expressiveness and conflict predicted depressive symptoms. Youth anxiety symptoms were not significantly associated with any family relationship variables, controlling for concurrent depressive symptoms. Findings suggest that parent-rated family relationship factors may be more related to youth depressive than anxiety symptoms in this clinical sample. In addition, family cohesion, as perceived by parents, may be more related to boys' depression, whereas expressiveness and conflict (as rated by mothers) may be more related to girls' depression. Clinical implications and recommendations for future research are discussed.

  11. Anaerobic methane oxidation rates at the sulfate-methane transition in marine sediments from Kattegat and Skagerrak (Denmark)

    International Nuclear Information System (INIS)

    Iversen, N.; Jorgensen, B.B.

    1985-01-01

    Concomitant radiotracer measurements were made of in situ rates of sulfate reduction and anaerobic methane oxidation in 2-3-m-long sediment cores. Methane accumulated to high concentrations (> 1 mM CH₄) only below the sulfate zone, at 1 m or deeper in the sediment. Sulfate reduction showed a broad maximum below the sediment surface and a smaller, narrow maximum at the sulfate-methane transition. Methane oxidation was low (0.002-0.1 nmol CH₄ cm⁻³ d⁻¹) throughout the sulfate zone and showed a sharp maximum at the sulfate-methane transition, coinciding with the sulfate reduction maximum. Total anaerobic methane oxidation at two stations was 0.83 and 1.16 mmol CH₄ m⁻² d⁻¹, of which 96% was confined to the sulfate-methane transition. All the methane that was calculated to diffuse up into the sulfate-methane transition was oxidized in this zone. The methane oxidation was equivalent to 10% of the electron donor requirement for the total measured sulfate reduction. A third station showed high sulfate concentrations at all depths sampled, and the total methane oxidation was only 0.013 mmol m⁻² d⁻¹. From direct measurements of rates, concentration gradients, and diffusion coefficients, simple calculations were made of sulfate and methane fluxes and of methane production rates

  12. Colour Doppler and microbubble contrast agent ultrasonography do not improve cancer detection rate in transrectal systematic prostate biopsy sampling.

    Science.gov (United States)

    Taverna, Gianluigi; Morandi, Giovanni; Seveso, Mauro; Giusti, Guido; Benetti, Alessio; Colombo, Piergiuseppe; Minuti, Francesco; Grizzi, Fabio; Graziotti, Pierpaolo

    2011-12-01

    What's known on the subject? and What does the study add? Transrectal grey-scale ultrasonography-guided prostate biopsy sampling is the method for diagnosing prostate cancer (PC) in patients with an increased prostate-specific antigen level and/or abnormal digital rectal examination. Several imaging strategies have been proposed to optimize the diagnostic value of biopsy sampling, although at first biopsy nearly 10-30% of PC still remains undiagnosed. This study compares the PC detection rate when employing colour Doppler ultrasonography with or without the injection of SonoVue™ microbubble contrast agent versus transrectal ultrasonography-guided systematic biopsy sampling. The limited accuracy, sensitivity and specificity, and the additional cost of using the contrast agent, do not justify its routine application in PC detection. • To compare the PC detection rate employing colour Doppler ultrasonography with or without SonoVue™ contrast agent with transrectal ultrasonography-guided systematic biopsy sampling. • A total of 300 patients with negative digital rectal examination and transrectal grey-scale ultrasonography, with PSA values ranging between 2.5 and 9.9 ng/mL, were randomized into three groups: 100 patients (group A) underwent transrectal ultrasonography-guided systematic biopsy sampling; 100 patients (group B) underwent colour Doppler ultrasonography; and 100 patients (group C) underwent colour Doppler ultrasonography before and during the injection of SonoVue™. • Contrast-enhanced targeted biopsies were sampled in hypervascularized areas of the peripheral, transitional, apical or anterior prostate zones. • All the patients included in groups B and C underwent a further 13 systematic prostate biopsies. The cancer detection rate was calculated for each group. • In 88 (29.3%) patients a histological diagnosis of PC was made, whereas 22 (7.4%) patients were diagnosed with high-grade prostatic intraepithelial neoplasia

  13. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study of the temperature, current and aging dependencies of maximum available energy. • Study of the dependence of the relationship between SOE and SOC on these factors. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Maximum available energy is estimated by means of a moving-window energy integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn₂O₄ battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
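
    The moving-window energy-integral idea can be sketched as follows: accumulate v·i over time to track discharged energy, keep a recent window of power samples, and form SOE against a maximum available energy that itself depends on operating conditions. The interface and numbers are illustrative, not the authors' implementation.

        class SoeEstimator:
            def __init__(self, e_max_wh, window_s=600.0):
                self.e_max = e_max_wh        # maximum available energy, Wh
                self.e_used = 0.0            # discharged energy since full, Wh
                self.window = []             # (dt, power) pairs in the window
                self.window_s = window_s

            def step(self, v, i, dt):
                p = v * i                    # discharge power, W (i > 0 discharging)
                self.e_used += p * dt / 3600.0
                self.window.append((dt, p))
                while sum(d for d, _ in self.window) > self.window_s:
                    self.window.pop(0)       # drop samples outside the window

            def soe(self):
                return max(0.0, 1.0 - self.e_used / self.e_max)

            def window_energy_wh(self):
                # energy integral over the window, usable to re-scale e_max
                # against a reference SOE drop under current conditions
                return sum(d * p for d, p in self.window) / 3600.0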

  14. Psychometric properties of a German parent rating scale for oppositional defiant and conduct disorder (FBB-SSV) in clinical and community samples.

    Science.gov (United States)

    Görtz-Dorten, Anja; Ise, Elena; Hautmann, Christopher; Walter, Daniel; Döpfner, Manfred

    2014-08-01

    The Fremdbeurteilungsbogen für Störungen des Sozialverhaltens (FBB-SSV) is a commonly used DSM- and ICD-based rating scale for disruptive behaviour problems in Germany. This study examined the psychometric properties of the FBB-SSV rated by parents in both a clinical sample (N = 596) and a community sample (N = 720) of children aged 4-17 years. Results indicate that the FBB-SSV is internally consistent (α = .69-.90). Principal component analyses produced two-factor structures that are largely consistent with the distinction between oppositional defiant disorder (ODD) and conduct disorder (CD). Diagnostic accuracy was examined using receiver operating characteristic analyses, which showed that the FBB-SSV is excellent at discriminating children with ODD/CD from those in the community sample (AUC = .91). It has satisfactory diagnostic accuracy for detecting ODD/CD in the clinical sample (AUC = .76). Overall, the results show that the FBB-SSV is a reliable and valid instrument. This finding provides further support for the clinical utility of DSM- and ICD-based rating scales.

  15. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; the methodology is therefore very general, as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insight into the usefulness of the method for modelling, estimating and simulating loss distributions.
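
    The moment-constrained ME step (though not the paper's AIS sampler) can be sketched by minimizing the convex dual log Z(λ) − λ·m of the entropy problem on a bounded support; the support, stand-in data and number of matched moments are illustrative choices.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        losses = rng.gamma(2.0, 1.5, size=5000)        # stand-in loss sample
        x = np.linspace(0.0, losses.max() * 1.2, 2001) # bounded support grid
        dx = x[1] - x[0]
        K = 2                                          # moments to match
        powers = np.vstack([x ** k for k in range(1, K + 1)])
        m = np.array([np.mean(losses ** k) for k in range(1, K + 1)])

        def dual(lam):
            log_z = logsumexp(lam @ powers) + np.log(dx)  # log partition integral
            return log_z - lam @ m

        res = minimize(dual, x0=np.zeros(K), method="Nelder-Mead")
        density = np.exp(res.x @ powers)
        density /= density.sum() * dx                  # normalized ME density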

  16. Construction of radioelement and dose rate baseline maps by combining ground and airborne radiometric data

    International Nuclear Information System (INIS)

    Rybach, L.; Medici, F.; Schwarz, G.F.

    1997-01-01

    For emergency situations like nuclear accidents, lost isotopic sources, debris of reactor-powered satellites etc., well-documented baseline information is indispensable. Maps of cosmic, terrestrial natural and artificial radiation can be constructed by assembling different datasets such as ground and airborne gamma spectrometry, direct dose rate measurements, and soil/rock samples. The in situ measurements were calibrated using the soil samples taken at and around the field measurement sites, and the airborne measurements by a combination of in situ and soil/rock sample data. The radioelement concentrations (Bq/kg) were in turn converted to dose rate (nSv/h). First, the cosmic radiation map was constructed from a digital terrain model, averaging topographic heights within cells of 2 km × 2 km. For the terrestrial radiation, a total of 1615 ground data points were available in addition to the airborne data. The artificial radiation map (Chernobyl and earlier fallout) has the smallest data base (184 data points from airborne and ground measurements). The dose rate map was constructed by summing up the above-mentioned contributions; it relies on a data base corresponding to a density of about 1 point per 25 km². The cosmic radiation map shows elevated dose rates in the high parts of the Swiss Alps; the cosmic dose rate ranges from 40 to 190 nSv/h, depending on altitude. The terrestrial dose rate maps show general agreement with lithology: elevated dose rates (100 to 200 nSv/h) characterize the Central Massifs of the Alps, where crystalline rocks give a maximum of 370 nSv/h, whereas the sedimentary northern Alpine Foreland (Jura, Molasse basin) shows consistently lower dose rates (40-100 nSv/h). The artificial radiation map has its maximum value in the southern part of Switzerland (90 nSv/h). The map of total dose rate exhibits values from 55 to 570 nSv/h. These values are considerably higher than reported in the Radiation Atlas ("Natural Sources of Ionising

  17. Thermodynamic and structural models compared with the initial dissolution rates of "SON" glass samples

    International Nuclear Information System (INIS)

    Tovena, I.; Advocat, T.; Ghaleb, D.; Vernaz, E.; Larche, F.

    1994-01-01

    The experimentally determined initial dissolution rate R₀ of nuclear glass was correlated with thermodynamic and structural parameters. The initial corrosion rates of six "R7T7" glass samples measured at 100 °C in a Soxhlet device were correlated with the glass free hydration energy and the glass formation enthalpy. These correlations were then tested on a group of 26 SON glasses selected for their wide diversity of compositions. The thermodynamic models provided a satisfactory approximation of the initial dissolution rate determined under Soxhlet conditions for SON glass samples containing up to 15 wt% boron and some alumina. Conversely, these models are inaccurate if the boron concentration exceeds 15 wt% and the glass contains no alumina. Possible correlations between R₀ and structural parameters, such as the boron coordination number and the number of nonbridging oxygen atoms, were also investigated. The authors show that R₀ varies inversely with the number of 4-coordinate boron atoms; conversely, the results do not substantiate published reports of a correlation between R₀ and the number of nonbridging oxygen atoms

  18. The effect of sampling rate on interpretation of the temporal characteristics of radiative and convective heating in wildland flames

    Science.gov (United States)

    David Frankman; Brent W. Webb; Bret W. Butler; Daniel Jimenez; Michael Harrington

    2012-01-01

    Time-resolved radiative and convective heating measurements were collected on a prescribed burn in coniferous fuels at a sampling frequency of 500 Hz. Evaluation of the data in the time and frequency domain indicate that this sampling rate was sufficient to capture the temporal fluctuations of radiative and convective heating. The convective heating signal contained...

  19. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.

  20. How fast can we learn maximum entropy models of neural populations?

    Energy Technology Data Exchange (ETDEWEB)

    Ganmor, Elad; Schneidman, Elad [Department of Neuroscience, Weizmann Institute of Science, Rehovot 76100 (Israel); Segev, Ronen, E-mail: elad.ganmor@weizmann.ac.i, E-mail: elad.schneidman@weizmann.ac.i [Department of Life Sciences and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2009-12-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  1. How fast can we learn maximum entropy models of neural populations?

    International Nuclear Information System (INIS)

    Ganmor, Elad; Schneidman, Elad; Segev, Ronen

    2009-01-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  2. Polluted soil leaching: unsaturated conditions and flow rate effects

    Directory of Open Access Journals (Sweden)

    Chourouk Mathlouthi

    2017-04-01

    Full Text Available In this study, soil samples are extracted from a polluted site at different depths. Soil texture and pollutant presence differ with depth. Preliminary analyses showed pollution by heavy metals. To simulate the soil leaching operation under static conditions, a series of leaching tests is conducted in a laboratory column under unsaturated, up-flow conditions. Electrical conductivity and pH measurements are performed on the recovered leachate. Different flow rates are tested. Comparison of the different profiles shows that the dissolved pollutants are concentrated in the upper soil levels and disperse weakly into the lower parts, which confirms the anthropogenic nature of the heavy-metal pollution. Water mobilizes a high amount of dissolved ionic substances, up to 80% of the initial concentration. An increase in flow rate requires more injected pore volume to achieve the maximum clearance rate. The down-flow condition extracts only a small amount of dissolved substances.

  3. Reliability and validity of teacher-rated symptoms of oppositional defiant disorder and conduct disorder in a clinical sample.

    Science.gov (United States)

    Ise, Elena; Görtz-Dorten, Anja; Döpfner, Manfred

    2014-01-01

    It is recommended to use information from multiple informants when making diagnostic decisions concerning oppositional defiant disorder (ODD) and conduct disorder (CD). The purpose of this study was to investigate the reliability and validity of teacher-rated symptoms of ODD and CD in a clinical sample. The sample comprised 421 children (84% boys; 6-17 years) diagnosed with ODD, CD, and/or attention deficit hyperactivity disorder (ADHD). Teachers completed a standardized ODD/CD symptom rating scale and the Teacher Report Form (TRF). The reliability (internal consistency) of the symptom rating scale was high (α = 0.90). Convergent and divergent validity were demonstrated by substantial correlations with similar TRF syndrome scales and low-to-moderate correlations with dissimilar TRF scales. Discriminant validity was shown by the ability of the symptom rating scale to differentiate between children with ODD/CD and those with ADHD. Factorial validity was demonstrated by principal component analysis, which produced a two-factor solution that is largely consistent with the two-dimensional model of ODD and CD proposed by the Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV-TR, although some CD symptoms representing aggressive behavior loaded on the ODD dimension. These findings suggest that DSM-IV-TR-based teacher rating scales are useful instruments for assessing disruptive behavior problems in children and adolescents.

  4. Relationship between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with urinary tract symptoms

    Directory of Open Access Journals (Sweden)

    Mazhar A. Memon

    2016-04-01

    Full Text Available ABSTRACT Objective: To evaluate the correlation between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded from the uroflowmetry graph, and VPSS was assessed at the same time. The education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/s (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative Pearson correlation between VPSS and Qmax (r = -0.848, p<0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis for the independent variables showed no significant correlation between VPSS and the independent factors, including age (p=0.27), level of education (p=0.941) and V.V (p=0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with a limited educational background can complete the VPSS without assistance.

  5. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  6. Maximum Plant Uptakes for Water, Nutrients, and Oxygen Are Not Always Met by Irrigation Rate and Distribution in Water-based Cultivation Systems.

    Science.gov (United States)

    Blok, Chris; Jackson, Brian E; Guo, Xianfeng; de Visser, Pieter H B; Marcelis, Leo F M

    2017-01-01

    Growing on rooting media other than soils in situ -i.e., substrate-based growing- allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields, as transport rates of water and nutrients in water surpass those in substrate, even though the transport of oxygen may be more complex. Transport rates can only limit growth when they are below a rate corresponding to maximum plant uptake. Our first objective was to compare Chrysanthemum growth performance for three water-based growing systems with different irrigation. We compared multi-point irrigation into a pond (DeepFlow), one-point irrigation resulting in a thin film of running water (NutrientFlow), and multi-point irrigation as droplets through air (Aeroponic). The second objective was to compare press pots as propagation medium with nutrient solution as propagation medium. The comparison included DeepFlow water-rooted cuttings with either the stem 1 cm into the nutrient solution or with the stem 1 cm above the nutrient solution. Measurements included fresh weight, dry weight, length, water supply, nutrient supply, and oxygen levels. To account for differences in radiation sum received, crop performance was evaluated with Radiation Use Efficiency (RUE), expressed as dry weight over the sum of Photosynthetically Active Radiation. The reference, DeepFlow with substrate-based propagation, showed the highest RUE, even while the oxygen supply provided by irrigation was potentially growth limiting. DeepFlow with water-based propagation showed 15-17% lower RUEs than the reference. NutrientFlow showed 8% lower RUE than the reference, in combination with potentially limiting irrigation supply of nutrients and oxygen. Aeroponic showed RUE levels similar to the reference, and Aeroponic had non-limiting irrigation supply of water, nutrients, and oxygen. Water-based propagation affected the subsequent

  7. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entrance into water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result of the maximum acceptable grid size obtained with an infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
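
    The criterion underlying the analysis is the Shannon-Nyquist condition: if ν_max is the highest spatial frequency at which the dose spectrum still carries power above the 2% error budget, the grid spacing Δx must satisfy

```latex
\Delta x \le \frac{1}{2\,\nu_{\max}}
```

    so that the continuous dose can be recovered from its samples by band-limited interpolation. The paper's contribution is evaluating ν_max from the Fourier transforms of the Bragg curve and the depth-dependent lateral Gaussians; the inequality itself is the standard sampling theorem.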

  8. Heart rate profile during exercise in patients with early repolarization.

    Science.gov (United States)

    Cay, Serkan; Cagirci, Goksel; Atak, Ramazan; Balbay, Yucel; Demir, Ahmet Duran; Aydogdu, Sinan

    2010-09-01

    Both early repolarization and an altered heart rate profile are associated with sudden death. In this study, we aimed to demonstrate an association between early repolarization and the heart rate profile during exercise. A total of 84 subjects were included in the study. Comparable groups of 44 subjects with early repolarization and 40 subjects with a normal electrocardiogram underwent exercise stress testing. Resting heart rate, maximum heart rate, and heart rate increment and decrement were analyzed. Both groups were comparable for baseline characteristics including resting heart rate. Subjects in the early repolarization group had significantly decreased maximum heart rate, heart rate increment and heart rate decrement compared to the control group. The multiple-adjusted OR of the risk of presence of early repolarization was 2.98 (95% CI 1.21-7.34, P = 0.018) and 7.73 (95% CI 2.84-21.03) for lower levels of heart rate increment and heart rate decrement compared to higher levels, respectively. Subjects with early repolarization have an altered heart rate profile during exercise compared to control subjects. This can be related to sudden death.

  9. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    49 CFR Transportation, Part 230 (revised as of 2010-10-01), Allowable Stress. § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  10. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    International Nuclear Information System (INIS)

    Bittel, R.; Mancel, J.

    1968-10-01

    The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. That is the reason why, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels for the two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulae are considered in the same way, taking care to attach the greatest possible importance to local situations. (authors) [fr

  11. Temperature influence on the fast pyrolysis of manure samples: char, bio-oil and gases production

    Science.gov (United States)

    Fernandez-Lopez, Maria; Anastasakis, Kostas; De Jong, Wiebren; Valverde, Jose Luis; Sanchez-Silva, Luz

    2017-11-01

    Fast pyrolysis characterization of three dry manure samples was studied using a pyrolyzer. A heating rate of 600°C/s and a holding time of 10 s were selected to reproduce industrial conditions. The effect of the peak pyrolysis temperature (600, 800 and 1000°C) on the pyrolysis product yield and composition was evaluated. Char and bio-oil were gravimetrically quantified. Scanning electron microscopy (SEM) was used to analyse the char structure. H2, CH4, CO and CO2 were measured by means of gas chromatography (GC). A decrease in the char yield and an increase in the gas yield were observed when temperature increased. From 800°C on, the char yields of samples Dig R and SW were constant, which indicated that the primary devolatilization reactions had stopped. This fact was also corroborated by GC analysis. The bio-oil yield slightly increased with temperature, showing maxima of 20.7 and 27.8 wt.% for samples Pre and SW, respectively, whereas sample Dig R showed a maximum yield of 16.5 wt.% at 800°C. CO2 and CO were the main released gases, whereas H2 and CH4 production increased with temperature. Finally, an increase in char porosity was observed with temperature.

  12. THE ALFALFA H α SURVEY. I. PROJECT DESCRIPTION AND THE LOCAL STAR FORMATION RATE DENSITY FROM THE FALL SAMPLE

    International Nuclear Information System (INIS)

    Sistine, Angela Van; Salzer, John J.; Janowiecki, Steven; Sugden, Arthur; Giovanelli, Riccardo; Haynes, Martha P.; Jaskot, Anne E.; Wilcots, Eric M.

    2016-01-01

    The ALFALFA H α survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA H α contains 1555 galaxies with distances between ∼20 and ∼100 Mpc. We have obtained continuum-subtracted narrowband H α images and broadband R images for each galaxy, creating one of the largest homogeneous sets of H α images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD [M⊙ yr⁻¹ Mpc⁻³]) = −1.747 ± 0.018 (random) ± 0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.
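
    Hα-based SF rates of this kind are conventionally derived from the extinction-corrected Hα luminosity. A standard calibration of this form is Kennicutt's (1998) relation, quoted here as an illustration since the record does not state the exact calibration the survey adopts:

```latex
\mathrm{SFR}\,[M_\odot\,\mathrm{yr}^{-1}] = 7.9 \times 10^{-42}\, L(\mathrm{H\alpha})\,[\mathrm{erg\,s^{-1}}]
```

    The SFRD then follows by summing the volume- and completeness-weighted SF rates over the sample.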

  13. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86), consistent with a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
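
    The existence of E(max) can be made explicit with the standard supply-function formulation (a sketch; the record does not reproduce the paper's model). Steady-state transpiration supported by the xylem is

```latex
E(\psi_L) = \int_{\psi_L}^{\psi_s} k(\psi)\, d\psi
```

    where ψ_s and ψ_L are soil and leaf water potentials and k(ψ) is the cavitation-limited conductivity, declining along a vulnerability curve. Because k tends to zero at very negative potentials, E(ψ_L) rises and then saturates as ψ_L decreases; the plateau value is E(max), approached at the intermediate water potentials the abstract describes.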

  14. Prospective assessment of the false positive rate of the Australian snake venom detection kit in healthy human samples.

    Science.gov (United States)

    Nimorakiotakis, Vasilios Bill; Winkel, Kenneth D

    2016-03-01

    The Snake Venom Detection Kit (SVDK; bioCSL Pty Ltd, Australia) distinguishes venom from the five most medically significant snake immunotypes found in Australia. This study assesses the rate of false positives that, by definition, refers to a positive assay finding in a sample from someone who has not been bitten by a venomous snake. Control unbroken-skin swabs, simulated bite swabs and urine specimens were collected from 61 healthy adult volunteers [33 males and 28 females] for assessment. In all control, simulated bite-site and urine samples [a total of 183 tests], the positive control well reacted strongly within one minute and no test wells reacted during the ten-minute incubation period. However, in two urine tests, the negative control well gave a positive reaction (indicating an uninterpretable test). A 95% confidence interval for the false positive rate derived from the findings of this study would extend from 0% to 6% on a per-patient basis, and from 0% to 2% on a per-test basis. There appears to be a very low incidence (0-6%) of intrinsic true false positives for the SVDK. The clinical impression of a high SVDK false positive rate may be mostly related to operator error. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
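
    The quoted 0-6% (per patient) and 0-2% (per test) intervals are consistent with an exact (Clopper-Pearson) binomial confidence interval for zero positives in 61 and 183 trials; a minimal check (the helper name is ours):

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided binomial confidence interval for x successes in n trials."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(clopper_pearson(0, 61))    # per-patient basis: (0.0, ~0.059) -> 0-6%
print(clopper_pearson(0, 183))   # per-test basis:    (0.0, ~0.020) -> 0-2%
```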

  15. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
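
    A minimal sketch of the idea: perturb the converter duty factor and keep moving in the direction that increases the measured SPE-side current, which by the paper's argument coincides with the PV maximum power point. The current-versus-duty curve is a toy stand-in for the real PV-SPE characteristics, and the hill-climbing loop stands in for the PI/PWM controller described:

```python
# Perturb-and-observe tracking on the DC-DC converter duty factor,
# maximizing the measured electrolyser-side current. The current-vs-duty
# curve below is a toy stand-in for the real PV + SPE characteristics.

def spe_current(duty):
    return max(0.0, 4.0 * duty * (1.0 - duty))  # toy curve, peak at duty = 0.5

duty, step = 0.2, 0.02
i_prev = spe_current(duty)
for _ in range(100):
    duty = min(max(duty + step, 0.0), 1.0)
    i_now = spe_current(duty)
    if i_now < i_prev:          # overshot the maximum: reverse and shrink step
        step = -0.5 * step
    i_prev = i_now
print(f"duty ~ {duty:.3f}, current ~ {i_prev:.3f}")  # converges near 0.5, 1.0
```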

  16. Measurement of the incorporation rates of four amino acids into proteins for estimating bacterial production.

    Science.gov (United States)

    Servais, P

    1995-03-01

    In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into bacterial proteins were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of the incorporation into proteins and into the total macromolecular fraction showed that these different amino acids were incorporated at more than 90% into proteins. From incorporation measurements at four subsaturating concentrations (range, 2-77 nM), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The productions calculated from the incorporation rates of the four amino acids were in good concordance, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.

  17. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  18. Rapid surface sampling and archival record system (RSSAR)

    International Nuclear Information System (INIS)

    Barren, E.; Bracco, A.; Dorn, S.B.

    1997-01-01

    The purpose is to develop a rapid surface (concrete, steel) contamination measurement system that will provide a "quick-look" indication of contaminated areas, an archival record, and an automated analysis. A bulk sampling oven is also being developed. The sampling device consists of a sampling head, a quick-look detector, and an archiving system (sorbent tube). The head thermally desorbs semi-volatiles, such as PCBs, oils, etc., from concrete and steel surfaces; the volatilized materials are passed through a quick-look detector. The sensitivity of the detector can be attenuated for various contaminant levels. Volatilized materials are trapped in a tube filled with adsorbent. The tubes are housed in a magazine which also archives information about sampling conditions. Analysis of the tubes can be done at a later date. The concrete sampling head is fitted with a tungsten-halogen lamp; in laboratory experiments it has extracted model contaminants by heating the top 4 mm of the surface to 250 °C within 100-200 s. The steel sampling head has been tested on different types of steels and has extracted model contaminants within 30 s. A mathematical model of heat and mass transport in concrete has been developed. The rate of contaminant removal is at a maximum when the moisture content is about 100 kg/m³. The system will be useful during decontamination and decommissioning operations.

  19. Personality, self-rated health, and subjective age in a life-span sample: the moderating role of chronological age.

    Science.gov (United States)

    Stephan, Yannick; Demulier, Virginie; Terracciano, Antonio

    2012-12-01

    The present study tested whether chronological age moderates the association between subjective age and self-rated health and personality in a community-dwelling life-span sample (N = 1,016; age range: 18-91 years). Self-rated health, extraversion, and openness to experience were associated with a younger subjective age at older ages. Conscientious individuals felt more mature early in life. Conscientiousness, neuroticism, and agreeableness were not related to subjective age at older ages. These findings suggest that, with aging, self-rated health and personality traits are increasingly important for subjective age. © 2013 APA, all rights reserved.

  20. Investigation of (n, 2n) reaction and fission rates in iron-shielded uranium samples bombarded by 14.9 MeV neutrons

    International Nuclear Information System (INIS)

    Shani, G.

    1976-01-01

    The effect of the thickness of iron shielding on the (n, 2n) reaction rate in a fusion (hybrid) reactor blanket is investigated. The results are compared with the dependence of the fission rate. Samples of natural uranium are irradiated with 14 MeV neutrons, with iron slabs of various thicknesses between the neutron generator target and the samples. Both reactions are threshold reactions, but the fact that the 238U (n, 2n) reaction threshold is at 6 MeV and that of fission at 2 MeV makes the ratio between the two very much geometry-dependent. Two geometrical effects take place: the 1/r² attenuation and the build-up. While the build-up affects the (n, 2n) reaction rate, the fission rate is affected more by the 1/r² effect. The reason is that both elastic and inelastic scattering leave neutrons with energies above the fission threshold, while only elastic scattering brings high-energy neutrons to the sample and causes the (n, 2n) reaction. A comparison is made with calculated results in which the geometrical effects do not exist. (author)

  1. Examination of anonymous canine faecal samples provides data on endoparasite prevalence rates in dogs for comparative studies.

    Science.gov (United States)

    Hinney, Barbara; Gottwald, Michaela; Moser, Jasmine; Reicher, Bianca; Schäfer, Bhavapriya Jasmin; Schaper, Roland; Joachim, Anja; Künzel, Frank

    2017-10-15

    Several endoparasites of dogs can not only be detrimental to their primary host but might also represent a threat to human health because of their zoonotic potential. Due to their high dog population densities, metropolitan areas can be highly endemic for such parasites. We aimed to estimate the prevalence of endoparasites in dogs in the Austrian capital of Vienna by examining a representative number of canine faecal samples and to compare the prevalences with two neighbouring peri-urban and rural regions. In addition, we analysed whether the density of dog populations and the cleanliness of dog zones correlated with parasite occurrence. We collected 1001 anonymous faecal samples from 55 dog zones in all 23 districts of the federal state of Vienna, as well as 480 faecal samples from the Mödling district and Wolkersdorf, with a peri-urban and rural character, respectively. Faeces were examined by flotation and by the Baermann technique. Additionally, we evaluated 292 Viennese, 102 peri-urban and 50 rural samples for Giardia and Cryptosporidium by GiardiaFASTest® and CryptoFASTest®. Samples from "clean" dog zones were compared to samples from "dirty" zones. The infection rate of Toxocara was surprisingly low, ranging from 0.6% to 1.9%. Trichuris was the most frequent helminth (1.8-7.5%) and Giardia the most frequent protozoan (4.0-10.8%). Ancylostomatidae, Crenosoma, Capillaria, Taeniidae, Cystoisospora and Sarcocystis were found in 1.8-2.2%, 0-0.9%, 0-0.9%, 0-0.6%, 0.3-3.1% and 0-0.6% of the samples, respectively. Samples from "dirty" dog zones in Vienna showed a significantly higher rate of parasites overall (p=0.003) and of Trichuris (p=0.048) compared to samples from "clean" dog zones. There were no statistically significant differences in densely vs. less densely populated areas of Vienna. Samples from the rural region of Wolkersdorf had significantly higher overall parasite, Trichuris and Cystoisospora prevalences than the peri-urban Mödling district and Vienna (p

  2. Mathematical model applied to decomposition rate of RIA radiotracers: 125I-insulin used as sample model

    International Nuclear Information System (INIS)

    Mesquita, C.H. de; Hamada, M.M.

    1987-09-01

    A mathematical model is described to fit the decomposition rate of labelled RIA compounds. The model was formulated using four parameters: one parameter correlated with the radioactive decay constant; the chemical decomposition rate K* of the radiolabelled molecules; the natural chemical decomposition rate K; and the fraction f* of labelled molecules in the substrate. According to the particular values that these parameters can assume, ten cases were discussed. To determine which of these cases fits the experimental data, three types of samples were needed: radioactive; simulated radiotracer ('false radiolabelled'); and non-labelled common substrate. 125I-insulin was used as an example to illustrate the model application. The experimental data substantiate that insulin labelled according to substoichiometric procedures and kept at freezer temperature degraded with K = 0.45% per day. (Author) [pt
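
    The record does not give the model equation. One plausible four-parameter form consistent with the description (an assumption on our part, not the authors' published formula) lets the labelled fraction decay both radioactively and chemically while the unlabelled fraction decomposes at its natural rate:

```latex
N(t) = N_0 \left[ f^{*} e^{-(\lambda + K^{*}) t} + (1 - f^{*})\, e^{-K t} \right]
```

    with λ the radioactive decay constant, K* and K the chemical decomposition rates of labelled and unlabelled molecules, and f* the labelled fraction; the ten cases discussed would then correspond to particular values or equalities among (λ, K*, K, f*).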

  3. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  4. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calcuated from the lake polygon...

  5. Thermal modeling of core sampling in flammable gas waste tanks. Part 1: Push-mode sampling

    International Nuclear Information System (INIS)

    Unal, C.; Stroh, K.; Pasamehmetoglu, K.O.

    1997-01-01

    The radioactive waste stored in underground storage tanks at the Hanford site is routinely sampled for waste characterization purposes. Push- and rotary-mode core sampling are among the sampling methods employed. The waste includes mixtures of sodium nitrate and sodium nitrite with organic compounds that can produce violent exothermic reactions if heated above 160 °C during core sampling. A self-propagating waste reaction would produce very high temperatures that would eventually result in failure of the tank and release of radioactive material to the environment. A two-dimensional thermal model based on a lumped finite-volume analysis method is developed. The enthalpy of each node is calculated from the first law of thermodynamics. A flash temperature and an effective contact area concept were introduced to account for the interface temperature rise. No maximum temperature rise exceeding the critical value of 60 °C was found in the cases studied for normal operating conditions. Several accident conditions were also examined. In these cases it was found that the maximum drill bit temperature remained below the critical reaction temperature as long as a 30 scfm purge flow was provided to the drill bit during sampling in rotary mode. The failure to provide purge flow resulted in exceeding the limiting temperatures in a relatively short time
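
    A minimal sketch of a lumped finite-volume update of the kind described, reduced to 1-D for brevity; the node spacing, material properties, and frictional heat flux are illustrative values, not the report's:

```python
import numpy as np

# Explicit lumped finite-volume conduction (1-D): each node's enthalpy is
# updated from the first law, dH = dt * (net conductive flux + source).
# All values below are illustrative, not the report's.
n, dx, dt = 50, 0.005, 0.01          # nodes, node spacing [m], time step [s]
k, rho, cp = 1.0, 1600.0, 1000.0     # conductivity, density, heat capacity
T = np.full(n, 25.0)                 # initial waste temperature [deg C]
q_drill = 1.0e4                      # frictional heat flux into node 0 [W/m^2]

alpha = k / (rho * cp)
assert alpha * dt / dx**2 < 0.5      # explicit-scheme stability condition

for _ in range(20_000):              # 200 s of simulated drilling
    flux = k * np.diff(T) / dx                        # inter-node fluxes [W/m^2]
    dT = np.zeros(n)
    dT[0] = (flux[0] + q_drill) * dt / (rho * cp * dx)
    dT[1:-1] = (flux[1:] - flux[:-1]) * dt / (rho * cp * dx)
    T += dT                                           # far node held at 25 deg C
print(f"peak temperature after 200 s: {T.max():.0f} deg C (limit: 160 deg C)")
```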

  6. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking control (MPPT) technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/(s·m²) defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.

  7. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
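
    The core numerical step named here, solving the Toeplitz normal equations for the prediction-error filter by Levinson recursion, looks like this in outline (a generic Levinson-Durbin implementation, not the authors' code):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence r[0..order]. Returns the filter a (a[0] = 1)
    and the final prediction-error power e. The reflection coefficients stay
    below 1 in magnitude, which is what keeps the recursion stable.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i-1:0:-1]) / e   # reflection coefficient
        a[1:i+1] += k * a[i-1::-1][:i]           # order-update of the filter
        e *= 1.0 - k * k                         # updated prediction error
    return a, e

# Toy check: an AR(1) process with coefficient 0.7 has r[m] = 0.7**m.
r = 0.7 ** np.arange(4)
a, e = levinson_durbin(r, 2)
print(a, e)   # expect a close to [1, -0.7, 0] and e close to 0.51
```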

  8. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    Science.gov (United States)

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets
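
    The direction of the bias being corrected is easy to see in a simplified setting (our illustration, not the paper's full model): if each allelic copy fails to amplify independently with probability β, a true heterozygote is scored as a heterozygote only when neither copy drops out, and as a spurious homozygote when exactly one does. Conditioning on at least one copy amplifying,

```latex
P(\text{het scored as het}) = \frac{(1-\beta)^2}{1-\beta^2} = \frac{1-\beta}{1+\beta}
```

    so observed heterozygosity is deflated by a factor of (1-β)/(1+β), which is the downward bias (and the matching upward bias in inbreeding estimates) that the EM procedure is designed to undo.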

  9. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data, which are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimating the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
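
    One common variant of such a formula tests the log rate ratio and evaluates its asymptotic variance under the alternative. The sketch below follows that recipe; it illustrates the approach rather than reproducing the paper's exact equations:

```python
import math
from scipy.stats import norm

def nb_sample_size(rate0, rate1, k, t, alpha=0.05, power=0.9):
    """Illustrative per-arm n for comparing two negative binomial rates.

    rate0, rate1: event rates; k: dispersion (Var = mu + k * mu^2);
    t: exposure per subject. Uses Var(log rate estimate) ~ (1/(t*rate) + k)/n,
    with the variance evaluated under the alternative hypothesis.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    v = (1.0 / (t * rate0) + k) + (1.0 / (t * rate1) + k)
    return math.ceil(z**2 * v / math.log(rate1 / rate0) ** 2)

# e.g. 1.0 vs 0.7 events per year, k = 0.8, one year of exposure
print(nb_sample_size(1.0, 0.7, k=0.8, t=1.0))   # ~333 per arm
```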

  10. Impact of changing from staining to culture techniques on detection rates of Campylobacter spp. in routine stool samples in Chile.

    Science.gov (United States)

    Porte, Lorena; Varela, Carmen; Haecker, Thomas; Morales, Sara; Weitzel, Thomas

    2016-05-13

    Campylobacter is a leading cause of bacterial gastroenteritis, but sensitive diagnostic methods such as culture are expensive and often not available in resource-limited settings. Therefore, direct staining techniques have been developed as a practical and economical alternative. We analyzed the impact of replacing Campylobacter staining with culture for routine stool examinations in a private hospital in Chile. From January to April 2014, a total of 750 consecutive stool samples were examined in parallel by Hucker stain and Campylobacter culture. Isolation rates of Campylobacter were determined, and the performance of staining was evaluated against culture as the gold standard. In addition, isolation rates of Campylobacter and other enteric pathogens were compared to those of past years. Campylobacter was isolated by culture in 46 of 750 (6.1%) stool samples. Direct staining identified only three samples as Campylobacter positive, reaching sensitivity and specificity values of 6.5% and 100%, respectively. In comparison to staining-based detection rates of previous years, we observed a significant increase of Campylobacter cases in our patients. The direct staining technique for Campylobacter had a very low sensitivity compared to culture. Staining methods might lead to a high rate of false negative results and an underestimation of the importance of campylobacteriosis. With the inclusion of Campylobacter culture, this pathogen became a leading cause of intestinal infection in our patient population.

  11. Parent-child agreement on the Behavior Rating Inventory of Executive Functioning (BRIEF) in a community sample of adolescents.

    Science.gov (United States)

    Egan, Kaitlyn N; Cohen, L Adelyn; Limbers, Christine

    2018-03-06

    Despite its widespread use, little is known regarding the agreement between parent and youth ratings of youths' executive functioning on the Behavior Rating Inventory of Executive Functioning (BRIEF) in typically developing youth. The present study examined parent-child agreement on the BRIEF in a community sample of adolescents and their parents. Ninety-seven parent-child dyads (M age = 13.91 years; SD = 0.52) completed the BRIEF self- and parent-report forms and a demographic questionnaire. Intraclass Correlation Coefficients (ICCs) and paired-sample t-tests were used to evaluate agreement between self- and parent-reports on the BRIEF. Total-sample ICCs indicated moderate to good parent-child agreement (0.46-0.68). Parents from the total sample reported significantly higher mean T-scores for their adolescents on Inhibit, Working Memory, Planning/Organization, the Behavioral Regulation Index (BRI), the Metacognition Index, and the Global Executive Composite. Differences were found with regard to gender and race/ethnicity: ICCs were higher between parent-girl dyads on the scales that comprise the BRI than between parent-boy dyads. Parent-adolescent ICCs were also higher on Emotional Control for adolescents who self-identified as White in comparison to those who identified as Non-White/Mixed Race. These findings suggest gender and racial/ethnic differences should be considered when examining parent-child agreement on the BRIEF in typically developing adolescents.

  12. THE ALFALFA H α SURVEY. I. PROJECT DESCRIPTION AND THE LOCAL STAR FORMATION RATE DENSITY FROM THE FALL SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Sistine, Angela Van [Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53211 (United States); Salzer, John J.; Janowiecki, Steven [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States); Sugden, Arthur [Department of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02115 (United States); Giovanelli, Riccardo; Haynes, Martha P. [Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853 (United States); Jaskot, Anne E. [Department of Astronomy, Smith College, Northampton, MA 01063 (United States); Wilcots, Eric M. [Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706 (United States)

    2016-06-10

    The ALFALFA H α survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA H α contains 1555 galaxies with distances between ∼20 and ∼100 Mpc. We have obtained continuum-subtracted narrowband H α images and broadband R images for each galaxy, creating one of the largest homogeneous sets of H α images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD [M⊙ yr⁻¹ Mpc⁻³]) = −1.747 ± 0.018 (random) ± 0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.

  13. Data compilation of respiration, feeding, and growth rates of marine pelagic organisms

    DEFF Research Database (Denmark)

    2013-01-01

    ... adaptation to the environment, with consequently less universal mass-scaling properties. Data on body mass, maximum ingestion and clearance rates, respiration rates and maximum growth rates of animals living in the ocean epipelagic were compiled from the literature, mainly from original papers but also from...

  14. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 °C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit

  15. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  16. Gender-specific feeding rates in planktonic copepods with different feeding behavior

    DEFF Research Database (Denmark)

    van Someren Gréve, Hans; Almeda, Rodrigo; Lindegren, Martin

    2017-01-01

    Planktonic copepods have sexually dimorphic behaviors, which can cause differences in feeding efficiency between genders. Copepod feeding rates have been studied extensively, but most studies have focused only on females. In this study, we experimentally quantified feeding rates of males and females... copepods, particularly in ambush feeders, where the males must sacrifice feeding for mate searching. We conducted gender-specific functional feeding response experiments using prey of different size and motility. In most cases, gender-specific maximum ingestion and clearance rates were largely explained... in copepods with different feeding behavior: ambush feeding (Oithona nana), feeding-current feeding (Temora longicornis) and cruising feeding (Centropages hamatus). We hypothesize that carbon-specific maximum ingestion rates are similar between genders, but that maximum clearance rates are lower for male...

  17. Radioactivity Levels And Gamma Dose Rate In Soil Samples From Federation Of Bosnia And Herzegovina

    International Nuclear Information System (INIS)

    Deljkic, D.; Kadic, I.; Ilic, Z.; Vidic, A.

    2015-01-01

    Activity concentrations of 226Ra, 232Th, 40K and 137Cs in soil samples collected from different regions of the Federation of Bosnia and Herzegovina have been measured by gamma-ray spectrometry. The measured activity concentrations for these radionuclides are compared with reported data from other countries, and it is found that they are comparable with the worldwide average values reported by UNSCEAR. Experimental results were obtained using a high-purity germanium (HPGe) detector and a gamma-ray spectrometry analysis system at the Institute for Public Health FBiH (Radiation Protection Center). The measuring time for all soil samples was 86,000 seconds. It was found that the soil specific activity ranges from 24.59 to 161.20 Bq/kg for 226Ra, from 17.60 to 66.45 Bq/kg for 232Th, from 179.50 to 598.04 Bq/kg for 40K and from 11.13 to 108.69 Bq/kg for 137Cs, with mean values of 62.34, 46.97, 392.76 and 51.49 Bq/kg, respectively. The radium equivalent activity in all the soil samples is lower than the safe limit (370 Bq/kg), ranging from 63.58 to 287.03 Bq/kg with a mean value of 159.71 Bq/kg. The man-made radionuclide 137Cs is also present in detectable amounts in all soil samples. The presence of 137Cs indicates that the samples in this area also received some fallout from the 1986 Chernobyl nuclear accident. The value of the external radiation hazard index is found to be less than unity (mean value of 0.43). Absorbed dose rates and effective dose equivalents are also determined for the samples. The concentration of radionuclides found in the soil samples during the present study does not pose any potential health hazard to the general public. (author).
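
    The radium equivalent activity quoted here is the standard weighted index

```latex
Ra_{eq} = C_{Ra} + 1.43\, C_{Th} + 0.077\, C_{K} \quad (\mathrm{Bq/kg})
```

    which weights 232Th and 40K by their gamma dose contribution relative to 226Ra. Plugging in the reported mean concentrations gives 62.34 + 1.43 × 46.97 + 0.077 × 392.76 ≈ 159.7 Bq/kg, reproducing the stated mean.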

  18. Direct measurements of sample heating by a laser-induced air plasma in pre-ablation spark dual-pulse laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Register, Janna; Scaffidi, Jonathan; Angel, S Michael

    2012-08-01

    Direct measurements of temperature changes were made using small thermocouples (TC), placed near a laser-induced air plasma. Temperature changes up to ~500 °C were observed. From the measured temperature changes, estimates were made of the amount of heat absorbed per unit area. This allowed calculations to be made of the surface temperature, as a function of time, of a sample heated by the air plasma that is generated during orthogonal pre-ablation spark dual-pulse (DP) LIBS measurements. In separate experiments, single-pulse (SP) LIBS emission and sample ablation rate measurements were performed on nickel at sample temperatures ranging from room temperature to the maximum surface temperature that was calculated using the TC measurement results (500 °C). A small, but real sample temperature-dependent increase in both SP LIBS emission and the rate of sample ablation was found for nickel samples heated up to 500 °C. Comparison of DP LIBS emission enhancement values for bulk nickel samples at room temperature versus the enhanced SP LIBS emission and sample ablation rates observed as a function of increasing sample temperature suggests that sample heating by the laser-induced air plasma plays only a minor role in DP LIBS emission enhancement.

  19. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Full Text Available Abstract Background Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results A model including gene and treatment as fixed effects; sample (animal), gene-by-treatment, gene-by-sample and treatment-by-sample interactions as random effects; and heteroskedastic residual variance across gene-by-treatment levels was selected using goodness of fit and predictive ability criteria from among a variety of models. The Mean Square Error obtained under the selected model was used as the indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the normalized expression values of the targets under each statistical approach were also found. Conclusions The optimal statistical properties of Maximum Likelihood estimation, joined to the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
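
    In model notation, the selected specification reads (our transcription of the description above; subscripts g, t and s index gene, treatment and sample):

```latex
y_{gts} = \mu + G_g + T_t + s_s + (GT)_{gt} + (Gs)_{gs} + (Ts)_{ts} + \varepsilon_{gts},
\qquad \varepsilon_{gts} \sim N(0, \sigma^2_{gt})
```

    with gene and treatment fixed, the remaining terms random, and a separate residual variance for each gene-by-treatment level (the heteroskedasticity mentioned). The per-gene Mean Square Error under this fit is the stability indicator.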

  20. Sample intake position and loading rates from nonpoint source pollution

    Science.gov (United States)

    McGuire, P. E.; Daniel, T. C.; Stoffel, D.; Andraski, B.

    1980-01-01

Paired water samples were simultaneously activated from two different vertical positions within the approach section of a flow-control structure to determine the effect of sample intake position on nonpoint runoff parameter concentrations and subsequent event loads. Suspended solids (SS), total phosphorus (TP) and organic plus exchangeable nitrogen [(Or+Ex)-N] were consistently higher throughout each runoff event when sampled from the floor of the approach section as opposed to those samples taken at midstage. Dissolved molybdate reactive phosphorus (DMRP) and ammonium (NH4-N) concentrations did not appear to be significantly affected by the vertical difference in intake position. However, the nitrate plus nitrite nitrogen [(NO3+NO2)-N] concentrations were much higher when sampled from the midstage position. Although the concentration differences between the two methods were not appreciable, when evaluated in terms of event loads, discrepancies were evident for all parameters. Midstage sampling produced event loads for SS, TP, (Or+Ex)-N, DMRP, NH4-N, and (NO3+NO2)-N that were 44, 39, 35, 80, 71, and 181%, respectively, of the floor sampling loads. Differences in loads between the two methods are attributed to the midstage position sampling less of the bed load. The correct position will depend on the monitoring objective; however, such differences should be recognized during the design phase of the monitoring program.
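
    For orientation, the event loads compared above are time-integrals of concentration times discharge over the runoff event. A minimal sketch of that computation, with an invented hydrograph and concentrations standing in for the study's data:

        import numpy as np

        # Event load = sum of C_i * Q_i * dt over the hydrograph.
        # Units: C in mg/L, Q in L/s, dt in s  ->  load in mg.
        def event_load(conc_mg_l, q_l_s, dt_s):
            return np.sum(conc_mg_l * q_l_s) * dt_s

        t = np.arange(0, 3600, 60)                     # one-hour event, 60 s steps
        q = 50 * np.exp(-((t - 1200) / 600.0) ** 2)    # toy hydrograph, L/s
        c_floor = np.full(t.size, 120.0)               # SS at the floor intake, mg/L
        c_mid = np.full(t.size, 53.0)                  # SS at the midstage intake, mg/L
        ratio = event_load(c_mid, q, 60) / event_load(c_floor, q, 60)
        print(f"midstage / floor SS load = {ratio:.0%}")   # ~44%, as reported above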

  1. Maximum Throughput in a C-RAN Cluster with Limited Fronthaul Capacity

    OpenAIRE

    Duan , Jialong; Lagrange , Xavier; Guilloud , Frédéric

    2016-01-01

International audience; Centralized/Cloud Radio Access Network (C-RAN) is a promising future mobile network architecture which can ease the cooperation between different cells to manage interference. However, the feasibility of C-RAN is limited by the large bit rate requirement in the fronthaul. This paper studies the maximum throughput of different transmission strategies in a C-RAN cluster with transmission power constraints and fronthaul capacity constraints. Both transmission strategies wit...

  2. Essays on inference in economics, competition, and the rate of profit

    Science.gov (United States)

    Scharfenaker, Ellis S.

This dissertation comprises three papers that demonstrate the role of Bayesian methods of inference and Shannon's information theory in classical political economy. The first chapter explores the empirical distribution of profit rate data from North American firms from 1962-2012. This chapter addresses the fact that existing methods for sample selection from noisy profit rate data in the industrial organization field of economics tend to be conditional on a covariate's value, which risks discarding information. Conditioning sample selection instead on the profit rate data's structure, by means of a two-component (signal and noise) Bayesian mixture model, we find the profit rate sample to be time-stationary and Laplace distributed, corroborating earlier estimates of cross-section distributions. The second chapter compares alternative probabilistic approaches to discrete (quantal) choice analysis and examines the various ways in which they overlap. In particular, the work on individual choice behavior by Duncan Luce and the extension of this work to quantal response problems by game theoreticians is shown to be related both to the rational inattention work of Christopher Sims, through Shannon's information theory, and to the maximum entropy principle of inference proposed by the physicist Edwin T. Jaynes. In the third chapter I propose a model of "classically" competitive firms facing informational entropy constraints in their decisions to enter or exit markets based on profit rate differentials. The result is a three-parameter logit quantal response distribution for firm entry and exit decisions. Bayesian methods are used for inference into the distribution of entry and exit decisions conditional on profit rate deviations, and firm-level data from Compustat is used to test these predictions.
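
    The Laplace claim in the first chapter lends itself to a quick closed-form check: for a Laplace distribution, the maximum likelihood location is the sample median and the scale is the mean absolute deviation about it. A minimal sketch with simulated data standing in for the Compustat profit rates (plain MLE only, not the chapter's two-component Bayesian mixture):

        import numpy as np

        def fit_laplace(x):
            """MLE for Laplace(loc, scale): loc is the sample median,
            scale is the mean absolute deviation from that median."""
            loc = np.median(x)
            scale = np.mean(np.abs(x - loc))
            return loc, scale

        rng = np.random.default_rng(0)
        sample = rng.laplace(loc=0.12, scale=0.08, size=5000)  # hypothetical profit rates
        print(fit_laplace(sample))   # recovers roughly (0.12, 0.08)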

  3. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications which make it amenable to treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are the roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
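
    For orientation, a standard two-group diffusion model of the kind referred to above can be written as follows (generic notation, not copied from the report; the report's simplifications are not reproduced here):

        \begin{aligned}
        -\nabla\cdot D_1\nabla\phi_1 + \left(\Sigma_{a1} + \Sigma_{1\to 2}\right)\phi_1 &= \frac{1}{k}\left(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2\right), \\
        -\nabla\cdot D_2\nabla\phi_2 + \Sigma_{a2}\,\phi_2 &= \Sigma_{1\to 2}\,\phi_1,
        \end{aligned}

    where \phi_1 and \phi_2 are the fast and thermal fluxes, and the spatially varying fuel concentration enters through the cross sections; the optimization then chooses that spatial variation to maximize flux subject to thermal limits.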

  4. Excess Readmission vs Excess Penalties: Maximum Readmission Penalties as a Function of Socioeconomics and Geography.

    Science.gov (United States)

    Caracciolo, Chris; Parker, Devin; Marshall, Emily; Brown, Jeremiah

    2017-08-01

The Hospital Readmission Reduction Program (HRRP) penalizes hospitals with "excess" readmissions by up to 3% of Medicare reimbursement. Approximately 75% of eligible hospitals received penalties, worth an estimated $428 million, in fiscal year 2015. The objective was to identify demographic and socioeconomic disparities between matched and localized maximum-penalty and no-penalty hospitals. This was a case-control study in which cases were hospitals that received the maximum 3% penalty under the HRRP during the 2015 fiscal year. Controls were drawn from no-penalty hospitals and matched to cases by hospital characteristics (primary analysis) or geographic proximity (secondary analysis), from a selection of 3383 US hospitals eligible for HRRP. Thirty-nine case and 39 control hospitals were drawn from the HRRP cohort. Socioeconomic status (SES) variables were collected from the American Community Survey. Hospital and health system characteristics were drawn from the Centers for Medicare and Medicaid Services, the American Hospital Association, and the Dartmouth Atlas of Health Care. The statistical analysis was conducted using Student t tests. Thirty-nine hospitals received a maximum penalty. Relative to controls, maximum-penalty hospitals were in counties with lower SES profiles, characterized by higher poverty rates (19.1% vs 15.5%, P = 0.015) and lower rates of high school graduation (82.2% vs 87.5%, P = 0.001). County-level age, sex, and ethnicity distributions were similar between cohorts. Cases were more likely than controls to be in counties with low socioeconomic status, highlighting potential unintended consequences of national benchmarks for phenomena underpinned by environmental factors: specifically, whether maximum penalties under the HRRP are a consequence of underperforming hospitals or a manifestation of underserved communities. © 2017 Society of Hospital Medicine

  5. The effects of a pilates-aerobic program on maximum exercise capacity of adult women

    Directory of Open Access Journals (Sweden)

    Milena Mikalački

Full Text Available ABSTRACT Introduction: Physical exercise such as the Pilates method offers clinical benefits on the aging process. Likewise, physiologic parameters may be improved through aerobic exercise. Methods: In order to compare the effects of a Pilates-aerobic intervention program on physiologic parameters such as maximum heart rate (HRmax), relative and absolute maximal oxygen consumption (relative and absolute VO2max), maximum heart rate during maximal oxygen consumption (VO2max-HRmax), maximum minute volume (VE) and forced vital capacity (FVC), a total of 64 adult women (active group = 48.1 ± 6.7 years; control group = 47.2 ± 7.4 years) participated in the study. The physiological parameters, maximal speed and total duration of the test were measured by maximum exercise capacity testing using the Bruce protocol. The HRmax was calculated by cardio-ergometric software. Pulmonary function tests, maximal speed and total time during the physical test were performed on a treadmill (Medisoft, model 870c). Likewise, spirometry analyzed the impact on oxygen uptake parameters, including FVC and VE. Results: The VO2max (relative and absolute), VE (all P<0.001), VO2max-HRmax (P<0.05) and maximal speed of the treadmill test (P<0.001) showed significant differences in the active group after the physical exercise intervention program. Conclusion: The present study indicates that Pilates exercises through a continuous training program might significantly improve the cardiovascular system. Hence, mixing strength and aerobic exercises into a training program is considered the optimal mechanism for healthy aging.

  6. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  7. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of ...

  8. Task 08/41, Low temperature loop at the RA reactor, Review IV - Maximum temperature values in the samples without forced cooling; Zadatak 08/41, Niskotemperaturna petlja u reaktoru 'RA', Pregled IV - Maksimalne temperature u uzorcima bez prinudnog hladjenja

    Energy Technology Data Exchange (ETDEWEB)

    Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1961-12-15

The quantity of heat generated in the sample was calculated in Review III. In the stationary regime this heat is transferred through the air layer between the sample and the wall of the channel to the heavy water or graphite, and a certain maximum temperature t{sub 0} is reached in the sample. The objective of this review is the determination of this temperature.

  9. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  10. Admission rates and costs associated with emergency presentation of urolithiasis: analysis of the Nationwide Emergency Department Sample 2006-2009.

    Science.gov (United States)

    Eaton, Samuel H; Cashy, John; Pearl, Jeffrey A; Stein, Daniel M; Perry, Kent; Nadler, Robert B

    2013-12-01

We sought to examine a large nationwide (United States) sample of emergency department (ED) visits to determine data related to utilization and costs of care for urolithiasis in this setting. The Nationwide Emergency Department Sample was analyzed from 2006 to 2009. All patients presenting to the ED with a diagnosis of upper tract urolithiasis were analyzed. Admission rates and total cost were compared by region, hospital type, and payer type. Numbers are weighted estimates that are designed to approximate the total national rate. An average of 1.2 million patients per year were identified with the diagnosis of urolithiasis, out of 120 million visits to the ED annually. The overall average rate of admission was 19.21%. Admission rates were highest in the Northeast (24.88%), among teaching hospitals (22.27%), and among Medicare patients (42.04%). The lowest admission rates were noted for self-pay patients (9.76%) and nonmetropolitan hospitals (13.49%). The smallest increases in costs over time were noted in the Northeast. Total costs were least in nonmetropolitan hospitals; however, more patients were transferred to other hospitals. When assessing hospital ownership status, private for-profit hospitals had admission rates similar to private not-for-profit hospitals (16.6% vs 15.9%); however, their ED and inpatient admission costs were 64% and 48% higher, respectively. Presentation of urolithiasis to the ED is common, and is associated with significant costs to the medical system, which are increasing over time. Costs and rates of admission differ by region, payer type, and hospital type, which may allow us to identify the causes for cost discrepancies and areas to improve efficiency of care delivery.

  11. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500 ml sample volume

  12. 76 FR 53134 - Maximum Per Diem Rates for the Continental United States (CONUS)

    Science.gov (United States)

    2011-08-25

    ... in the per diem rate setting process enhances the Government's ability to obtain policy-compliant...-standard area (NSA): Alexandria/Leesville/Natchitoches, Louisiana (Allen, Jefferson Davis, Natchitoches... the standard CONUS designation in FY 2011. Of those locations, the following areas will once again...

  13. A Hubble Space Telescope survey for novae in M87 - III. Are novae good standard candles 15 d after maximum brightness?

    Science.gov (United States)

    Shara, Michael M.; Doyle, Trisha F.; Pagnotta, Ashley; Garland, James T.; Lauer, Tod R.; Zurek, David; Baltz, Edward A.; Goerl, Ariel; Kovetz, Attay; Machac, Tamara; Madrid, Juan P.; Mikołajewska, Joanna; Neill, J. D.; Prialnik, Dina; Welch, D. L.; Yaron, Ofer

    2018-02-01

    Ten weeks of daily imaging of the giant elliptical galaxy M87 with the Hubble Space Telescope (HST) has yielded 41 nova light curves of unprecedented quality for extragalactic cataclysmic variables. We have recently used these light curves to demonstrate that the observational scatter in the so-called maximum-magnitude rate of decline (MMRD) relation for classical novae is so large as to render the nova-MMRD useless as a standard candle. Here, we demonstrate that a modified Buscombe-de Vaucouleurs hypothesis, namely that novae with decline times t2 > 10 d converge to nearly the same absolute magnitude about two weeks after maximum light in a giant elliptical galaxy, is supported by our M87 nova data. For 13 novae with daily sampled light curves, well determined times of maximum light in both the F606W and F814W filters, and decline times t2 > 10 d we find that M87 novae display M606W,15 = -6.37 ± 0.46 and M814W,15 = -6.11 ± 0.43. If very fast novae with decline times t2 < 10 d are excluded, the distances to novae in elliptical galaxies with stellar binary populations similar to those of M87 should be determinable with 1σ accuracies of ± 20 per cent with the above calibrations.
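
    For orientation, such a calibration feeds the standard distance-modulus arithmetic:

        \mu = m_{606W,15} - M_{606W,15}, \qquad d = 10^{(\mu+5)/5}\ \text{pc}.

    With an assumed apparent magnitude m_{606W,15} = 25.0 (illustrative, not a survey value) and M_{606W,15} = -6.37, this gives \mu = 31.37 and d \approx 10^{7.27} pc, i.e. about 19 Mpc; the quoted ±0.46 mag scatter corresponds to a distance factor of 10^{0.46/5} \approx 1.24, consistent with the stated ~20 per cent (1σ) accuracy.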

  14. Temperature influence on the fast pyrolysis of manure samples: char, bio-oil and gases production

    Directory of Open Access Journals (Sweden)

    Fernandez-Lopez Maria

    2017-01-01

Full Text Available Fast pyrolysis characterization of three dry manure samples was studied using a pyrolyzer. A heating rate of 600°C/s and a holding time of 10 s were selected to reproduce industrial conditions. The effect of the peak pyrolysis temperature (600, 800 and 1000°C) on the pyrolysis product yield and composition was evaluated. Char and bio-oil were gravimetrically quantified. Scanning electron microscopy (SEM) was used to analyse the char structure. H2, CH4, CO and CO2 were measured by means of gas chromatography (GC). A decrease in the char yield and an increase in the gas yield were observed as temperature increased. From 800°C on, the char yields of samples Dig R and SW were constant, which indicated that the primary devolatilization reactions had stopped. This fact was also corroborated by GC analysis. The bio-oil yield slightly increased with temperature, showing maxima of 20.7 and 27.8 wt.% for samples Pre and SW, respectively, whereas sample Dig R showed a maximum yield of 16.5 wt.% at 800°C. CO2 and CO were the main released gases, whereas H2 and CH4 production increased with temperature. Finally, an increase in char porosity was observed with temperature.

  15. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions

    Science.gov (United States)

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L.; Alvarez, David A.; Patterson, Howard

    2012-01-01

The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (Rs) was evaluated in microcosms containing dissolved organic matter quantified as total organic carbon (TOC). The effect of hydrodynamics was studied by comparing Rs values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM.
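
    For reference, integrative samplers such as POCIS are normally evaluated with the standard time-weighted-average relation Cw = N / (Rs · t), which is why the Rs calibration above matters. A minimal sketch (illustrative numbers, not values from the study):

        def water_concentration(mass_ng, rs_l_per_day, days):
            """Time-weighted average water concentration from a POCIS:
            Cw = N / (Rs * t), with N the analyte mass on the sorbent (ng),
            Rs the sampling rate (L/day) and t the deployment time (days)."""
            return mass_ng / (rs_l_per_day * days)

        # e.g. 50 ng accumulated at Rs = 0.2 L/day over a 28-day deployment:
        print(water_concentration(50.0, 0.2, 28))   # ~8.9 ng/L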

  16. Self-rated health in relation to rape and mental health disorders in a national sample of college women.

    Science.gov (United States)

    Zinzow, Heidi M; Amstadter, Ananda B; McCauley, Jenna L; Ruggiero, Kenneth J; Resnick, Heidi S; Kilpatrick, Dean G

    2011-01-01

    The purpose of this study was to employ a multivariate approach to examine the correlates of self-rated health in a college sample of women, with particular emphasis on sexual assault history and related mental health outcomes. A national sample of 2,000 female college students participated in a structured phone interview between January and June 2006. Interview modules assessed demographics, posttraumatic stress disorder, major depressive episode, substance use, rape experiences, and physical health. Logistic regression analyses showed that poor self-rated health was associated with low income (odds ratio [OR] = 2.70), lifetime posttraumatic stress disorder (OR = 2.47), lifetime major depressive episode (OR = 2.56), past year illicit drug use (OR = 2.48), and multiple rape history (OR = 2.25). These findings highlight the need for university mental health and medical service providers to assess for rape history, and to diagnose and treat related psychiatric problems in order to reduce physical morbidity.

  17. Heart rate monitoring mobile applications

    OpenAIRE

    Chaudhry, Beenish M.

    2016-01-01

Total number of times a heart beats in a minute is known as the heart rate. Traditionally, heart rate was measured using clunky gadgets but these days it can be measured with a smartphone's camera. This can help you measure your heart rate anywhere and at anytime, especially during workouts so you can adjust your workout intensity to achieve maximum health benefits. With simple and easy to use mobile app, 'Unique Heart Rate Monitor', you can also maintain your heart rate history for personal ...

  18. Psychometric Properties of the Orgasm Rating Scale in Context of Sexual Relationship in a Spanish Sample.

    Science.gov (United States)

    Arcos-Romero, Ana Isabel; Moyano, Nieves; Sierra, Juan Carlos

    2018-05-01

The Orgasm Rating Scale (ORS) is one of the few self-report measures that evaluate the multidimensional subjective experience of orgasm. The objective of this study was to examine the psychometric properties of the ORS in the context of sex with a partner in a Spanish sample. We examined a sample of 842 adults from the general Spanish population (310 men, 532 women; mean age = 27.12 years, SD = 9.8). The sample was randomly divided into two sub-samples, with a balanced proportion of men and women in each. Sub-sample 1 consisted of 100 men and 200 women (33.3% and 66.6%) with a mean age of 27.77 years (SD = 10.05). Sub-sample 2 consisted of 210 men and 332 women (38.7% and 61.3%) with a mean age of 26.77 years (SD = 9.65). The ORS, together with the Sexual Opinion Survey-6 and the Massachusetts General Hospital-Sexual Functioning Questionnaire, was administered online. The survey included a consent form, in which confidentiality and anonymity were guaranteed. Based on exploratory factor analysis, we obtained a reduced 25-item version of the ORS, distributed along 4 dimensions (affective, sensory, intimacy, and rewards). We performed both exploratory factor analysis and confirmatory factor analysis. The Spanish version of the ORS had adequate values of reliability that ranged from .78 to .93. The 4 factors explained 59.78% of the variance. The factor structure was invariant across gender at a configural level. Scores from the ORS positively correlated with erotophilia and sexual satisfaction. The scale was useful to differentiate between individuals with orgasmic difficulties and individuals with no difficulties. We found that individuals with orgasmic difficulties showed a lower intensity in the affective, intimacy, and sensorial manifestations of orgasm. This version of the ORS could provide an optimum measure for the clinical assessment to identify individuals with difficulties in their orgasmic capacity; thus, it could be used as a screening device for orgasmic difficulties.

  19. Radon exhalation and its dependence on moisture content from samples of soil and building materials

    International Nuclear Information System (INIS)

    Faheem, Munazza; Matiullah

    2008-01-01

Indoor radon has long been recognized as a potential health hazard for mankind. Building materials are considered one of the major sources of radon in the indoor environment. To study the radon exhalation rate and its dependence on moisture content, samples of soil and some common types of building materials (sand, cement, bricks and marble) were collected from the Gujranwala, Gujrat, Hafizabad, Sialkot, Mandibahauddin and Narowal districts of the Punjab province (Pakistan). After processing, samples of 200 g each were placed in plastic vessels. CR-39 based NRPB detectors were placed at the top of these vessels, which were then hermetically sealed. After exposure to radon for 30 days within the closed vessels, the CR-39 detectors were processed. The radon exhalation rate was found to vary from 122±19 to 681±10 mBq m^-2 h^-1, with an average of 376±147 mBq m^-2 h^-1, in the soil samples, whereas averages of 212±34, 195±25, 231±30 and 292±35 mBq m^-2 h^-1 were observed in the bricks, sand, cement and marble samples, respectively. The dependence of exhalation on moisture content has also been studied. The radon exhalation rate was found to increase with an increase in moisture, reach its maximum value and then decrease with a further increase in the water content.
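
    In closed-vessel measurements of this kind, the exhalation rate per unit area is commonly derived from the radon growth in the sealed volume; one simple form uses the initial growth slope. A hedged sketch (generic method, not necessarily the computation used in this paper):

        def exhalation_rate(slope_bq_m3_per_h, volume_m3, area_m2):
            """Surface exhalation rate from the initial radon growth slope
            in a sealed vessel: E = (dC/dt) * V / A, in Bq m^-2 h^-1."""
            return slope_bq_m3_per_h * volume_m3 / area_m2

        # Illustrative numbers: 2 Bq m^-3 h^-1 growth in a 1 L vessel
        # over a 0.005 m^2 sample surface:
        print(exhalation_rate(2.0, 0.001, 0.005))   # 0.4 Bq m^-2 h^-1 = 400 mBq m^-2 h^-1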

  20. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the motive pressure that produces peak vacuum for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  1. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    Science.gov (United States)

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
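
    Since the criteria above hinge on the largest and second-smallest eigenvalues of the graph Laplacian, a quick way to inspect them for a given communication topology is (assumed 4-agent ring, illustrative only):

        import numpy as np

        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)   # adjacency of a 4-agent ring
        L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
        eig = np.sort(np.linalg.eigvalsh(L))
        lambda_2, lambda_max = eig[1], eig[-1]       # algebraic connectivity, largest
        print(lambda_2, lambda_max)                  # 2.0 and 4.0 for this ring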

  2. MaxEnt queries and sequential sampling

    International Nuclear Information System (INIS)

    Riegler, Peter; Caticha, Nestor

    2001-01-01

    In this paper we pose the question: After gathering N data points, at what value of the control parameter should the next measurement be done? We propose an on-line algorithm which samples optimally by maximizing the gain in information on the parameters to be measured. We show analytically that the information gain is maximum for those potential measurements whose outcome is most unpredictable, i.e. for which the predictive distribution has maximum entropy. The resulting algorithm is applied to exponential analysis
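
    A minimal sketch of this entropy-driven query rule for a straight-line model with Gaussian noise, where the predictive distribution is Gaussian and its entropy grows with the predictive variance (all numbers are illustrative assumptions, not from the paper):

        import numpy as np

        sigma2 = 0.1 ** 2                        # known noise variance
        S, m = np.eye(2), np.zeros(2)            # prior covariance/mean of (a, b)
        candidates = np.linspace(0.0, 1.0, 21)   # allowed control-parameter values

        def next_query(S):
            # Predictive variance at x for y = a + b*x; max variance = max entropy.
            var = [np.array([1.0, x]) @ S @ np.array([1.0, x]) + sigma2
                   for x in candidates]
            return candidates[int(np.argmax(var))]

        def posterior(S, m, x, y):
            phi = np.array([1.0, x])
            S_new = np.linalg.inv(np.linalg.inv(S) + np.outer(phi, phi) / sigma2)
            m_new = S_new @ (np.linalg.inv(S) @ m + phi * y / sigma2)
            return S_new, m_new

        x = next_query(S)                        # most unpredictable candidate
        S, m = posterior(S, m, x, 0.3)           # hypothetical measurement y = 0.3
        print("next measurement at x =", next_query(S))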

  3. Instantaneous global nitrous oxide photochemical rates

    International Nuclear Information System (INIS)

    Johnston, H.S.; Serang, O.; Podolske, J.

    1979-01-01

In recent years, vertical profiles of nitrous oxide have been measured by balloon up to midstratosphere at several latitudes between 63°N and 73°S, including one profile in the tropical zone at 9°N. Two rocket flights measured nitrous oxide mixing ratios at 44 and 49 km. From these experimental data plus a large amount of interpolation and extrapolation, we have estimated a global distribution of nitrous oxide up to an altitude of 50 km. With standard global distributions of oxygen and ozone we carried out instantaneous, three-dimensional, global photochemical calculations, using recently measured temperature-dependent cross sections for nitrous oxide. The altitude of maximum photolysis rate of N2O is about 30 km at all latitudes, and the rate of photolysis is a maximum in tropical latitudes. The altitude of maximum rate of formation of nitric oxide is latitude dependent: about 26 km at the equator, about 23 km over temperate zones, and 20 km at the summer pole. The global rate of N2O destruction is 6.2 x 10^27 molecules s^-1, and the global rate of formation of NO from N2O is 1.4 x 10^27 molecules s^-1. The global N2O inventory divided by the stratospheric loss rate gives a residence time of about 175 years with respect to this loss process. From the global average N2O profile a vertical eddy diffusion profile was derived, and this profile agrees very closely with that of Stewart and Hoffert
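
    The quoted residence time is just the inventory-to-loss ratio; a back-of-envelope check of the arithmetic (the inventory below is implied by the two numbers above, not independently measured):

        loss = 6.2e27                      # global N2O destruction, molecules/s
        seconds_per_year = 3.156e7
        inventory = loss * seconds_per_year * 175.0
        print(f"implied N2O inventory ~ {inventory:.1e} molecules")   # ~3.4e37
        print(inventory / loss / seconds_per_year)                    # recovers ~175 yr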

  4. Bayesian Analysis of the Survival Function and Failure Rate of Weibull Distribution with Censored Data

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

Full Text Available The survival function of the Weibull distribution determines the probability that a unit or an individual will survive beyond a certain specified time, while the failure rate is the rate at which a randomly selected individual known to be alive at time t will die in the interval (t, t + Δt). The classical approach for estimating the survival function and the failure rate is the maximum likelihood method. In this study, we strive to determine the best method by comparing the classical maximum likelihood estimator against the Bayesian estimators using an informative prior and a proposed data-dependent prior known as a generalised noninformative prior. The Bayesian estimation is considered under three loss functions. Due to the complexity in dealing with the integrals using the Bayesian estimator, Lindley's approximation procedure is employed to reduce the ratio of the integrals. For the purpose of comparison, the mean squared error (MSE) and the absolute bias are obtained. This study is conducted via simulation by utilising different sample sizes. We observed from the study that the generalised prior we assumed performed better than the others under the linear exponential loss function with respect to MSE and under the general entropy loss function with respect to absolute bias.
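
    For reference, the two Weibull quantities discussed above, with shape k and scale λ, are S(t) = exp(-(t/λ)^k) and h(t) = (k/λ)(t/λ)^(k-1). A minimal sketch of the classical maximum likelihood fit on simulated, uncensored data (the paper's censored-data and Bayesian estimators are more involved):

        import numpy as np
        from scipy.stats import weibull_min

        def survival(t, k, lam):
            return np.exp(-(t / lam) ** k)

        def hazard(t, k, lam):
            return (k / lam) * (t / lam) ** (k - 1)

        data = weibull_min.rvs(c=1.5, scale=2.0, size=500,
                               random_state=np.random.default_rng(1))
        k_hat, _, lam_hat = weibull_min.fit(data, floc=0)   # MLE with loc fixed at 0
        print(k_hat, lam_hat, survival(2.0, k_hat, lam_hat))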

  5. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  6. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergent rates. Finitely many maximum functions systems are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to be convergent Q-linearly. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.
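
    For orientation, the classical (smooth) Levenberg-Marquardt step for F(x) = 0 is x+ = x - (J^T J + mu*I)^(-1) J^T F(x); the paper modifies this to handle systems built from finitely many max-functions. A sketch of the basic iteration on a smooth toy system (not the authors' modified method):

        import numpy as np

        def levenberg_marquardt(F, J, x, mu=1e-3, iters=50, tol=1e-10):
            for _ in range(iters):
                Fx, Jx = F(x), J(x)
                if np.linalg.norm(Fx) < tol:
                    break
                # Damped Gauss-Newton step: (J^T J + mu I) d = J^T F
                d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), Jx.T @ Fx)
                x = x - d
            return x

        F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
        J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
        print(levenberg_marquardt(F, J, np.array([2.0, 2.0])))   # ~ (0.618, 0.618)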

  7. CHARACTERISTICS OF MAXIMUM PERFORMANCE OF PEDALING EXERCISE IN RECUMBENT AND SUPINE POSITIONS

    Directory of Open Access Journals (Sweden)

    Morimasa Kato

    2011-09-01

Full Text Available To determine the characteristics of maximum pedaling performance in the recumbent and supine positions, maximum isokinetic leg muscle strength was measured in eight healthy male subjects during pedaling at three velocities (300°/s, 480°/s, and 660°/s), and maximum incremental tests were performed for each position. The maximum isokinetic muscle strength in the recumbent position was 210.0 ± 29.2 Nm at 300°/s, 158.4 ± 19.8 Nm at 480°/s, and 110.6 ± 13.2 Nm at 660°/s. In contrast, the muscle strength in the supine position was 229.3 ± 36.7 Nm at 300°/s, 180.7 ± 20.3 Nm at 480°/s, and 129.6 ± 14.0 Nm at 660°/s. Thus, the maximum isokinetic muscle strength showed significantly higher values in the supine position than in the recumbent position at all angular velocities. The knee and hip joint angles were measured at peak torque using a goniometer; the knee joint angle was not significantly different between the positions, whereas the hip joint angle was greater in the supine position than in the recumbent position (supine position: 137.3 ± 9.33 degrees at 300°/s, 140.0 ± 11.13 degrees at 480°/s, and 141.0 ± 9.61 degrees at 660°/s; recumbent position: 99.5 ± 12.21 degrees at 300°/s, 101.6 ± 12.29 degrees at 480°/s, and 105.8 ± 14.28 degrees at 660°/s). Peak oxygen uptake was higher in the recumbent position (50.3 ± 4.43 ml·kg-1·min-1) than in the supine position (48.7 ± 5.10 ml·kg-1·min-1). At maximum exertion, the heart rate and whole-body rating of perceived exertion (RPE) were unaffected by position, but leg muscle RPE was higher in the supine position (19.5 ± 0.53) than in the recumbent position (18.8 ± 0.71). These results suggest that the supine position is more suitable for muscle strength exertion than the recumbent position, and this may be due to the different hip joint angles between the positions. On the contrary, the endurance capacity was higher in the recumbent position than in the supine position. Since leg muscle

  8. Pb-210 behaviour in environmental samples from the Cuban east in 1993

    International Nuclear Information System (INIS)

    Perez Tamayo, L.; Suarez Pina, W.

    1996-01-01

A method based on gross alpha and beta counting is applied to study the behaviour, during 1993, of Pb-210 in atmospheric deposition samples from six sites in eastern Cuba. At five of these points, located far from urban centers, the average contribution of Pb-210 to the measured beta activity is predominant (80 ± 20%) and its maximum (> 50% annually) is observed in coincidence with the first rainfall peak. Such regularity may be explained by assuming that the principal source of Pb-210 is the continental air masses penetrating the insular atmosphere in winter, and that the radionuclide is deposited later with the spring rainfalls. At the sixth point, located in an industrial zone, the alpha-to-beta activity ratios are abnormally high, with maxima of 1.5 and 2.4 coinciding with the peak rainy and wet months respectively, probably originating from the input to the atmosphere of particles rich in alpha emitters from industrial processes

  9. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables

  10. Does shoe heel design influence ground reaction forces and knee moments during maximum lunges in elite and intermediate badminton players?

    Directory of Open Access Journals (Sweden)

    Wing-Kai Lam

Full Text Available Lunge is one frequently executed movement in badminton and involves a unique sagittal footstrike angle of more than 40 degrees at initial ground contact compared with other manoeuvres. This study examined whether the shoe heel curvature design of a badminton shoe would influence shoe-ground kinematics, ground reaction forces, and knee moments during lunge. Eleven elite and fifteen intermediate players performed five left-forward maximum lunge trials with a Rounded Heel Shoe (RHS), a Flattened Heel Shoe (FHS), and a Standard Heel Shoe (SHS). Shoe-ground kinematics, ground reaction forces, and knee moments were measured using a synchronized force platform and motion analysis system. A 2 (Group) x 3 (Shoe) ANOVA with repeated measures was performed to determine the effects of different shoes and different playing levels, as well as the interaction of the two factors, on all variables. The shoe effect indicated that players demonstrated a lower maximum vertical loading rate in RHS than in the other two shoes (P < 0.05). The group effect revealed that elite players exhibited larger footstrike angles, faster approaching speeds, lower peak horizontal forces and horizontal loading rates, but higher vertical loading rates and larger peak knee flexion and extension moments (P < 0.05). Analysis of the Group x Shoe interactions for maximum and mean vertical loading rates (P < 0.05) indicated that elite players exhibited lower left maximum and mean vertical loading rates in RHS compared to FHS (P < 0.01), while the intermediate group did not show any shoe effect on vertical loading rates. These findings indicate that shoe heel curvature plays some role in altering ground reaction force impact during badminton lunge. The differences in impact loads and knee moments between elite and intermediate players may be useful in optimizing footwear design and training strategies to minimize the potential risks for impact-related injuries in badminton.
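
    Vertical loading rate is typically computed as the slope of the vertical ground reaction force over its rising portion; a minimal sketch with an assumed 1000 Hz force-plate signal (the study's exact definitions of maximum and mean loading rate may differ):

        import numpy as np

        def loading_rates(fz_n, hz=1000):
            """Maximum and mean slope of vertical GRF up to the impact peak, N/s."""
            dfdt = np.gradient(fz_n) * hz
            rise = slice(0, int(np.argmax(fz_n)))
            return dfdt[rise].max(), dfdt[rise].mean()

        t = np.linspace(0, 0.1, 100)
        fz = 1500 * np.sin(np.pi * t / 0.1)   # toy impact curve, N
        print(loading_rates(fz))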

  11. Bayesian modeling of the assimilative capacity component of nutrient total maximum daily loads

    Science.gov (United States)

    Faulkner, B. R.

    2008-08-01

    Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a total maximum daily load (TMDL) load capacity is developed and applied. The joint distribution of nutrient retention metrics from a literature review of 495 measurements was used for Monte Carlo sampling with a process transfer function for nutrient attenuation. Using the resulting histograms of nutrient retention, reference prior distributions were developed for sites in which some of the metrics contributing to the transfer function were measured. Contributing metrics for the prior include stream discharge, cross-sectional area, fraction of storage volume to free stream volume, denitrification rate constant, storage zone mass transfer rate, dispersion coefficient, and others. Confidence of compliance (CC) that any given level of nutrient retention has been achieved is also determined using this approach. The shape of the CC curve is dependent on the metrics measured and serves in part as a measure of the information provided by the metrics to predict nutrient retention. It is also a direct measurement, with a margin of safety, of the fraction of export load that can be reduced through changing retention metrics. For an impaired stream in western Oklahoma, a combination of prior information and measurement of nutrient attenuation was used to illustrate the proposed approach. This method may be considered for TMDL implementation.
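
    A minimal sketch of the Monte Carlo step behind such a confidence-of-compliance (CC) curve, using an assumed first-order attenuation transfer function R = 1 - exp(-k·τ) and invented lognormal priors (the paper's transfer function and literature-derived priors are richer):

        import numpy as np

        rng = np.random.default_rng(42)
        k = rng.lognormal(mean=-2.0, sigma=0.8, size=10_000)    # rate constant, 1/h
        tau = rng.lognormal(mean=1.5, sigma=0.5, size=10_000)   # travel time, h
        retention = 1.0 - np.exp(-k * tau)                      # fraction retained

        for level in (0.1, 0.3, 0.5):
            cc = np.mean(retention >= level)                    # P(retention >= level)
            print(f"CC(retention >= {level:.0%}) = {cc:.2f}")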

  12. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were three and five point time smoothing. Fourier filtering preserving one to four harmonics (H), truncated curve Fourier filtering, and third degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third degree polynomial curve fittings and truncated Fourier filters exhibited very high sensitivity to noise. Three and five point time smoothing had moderate sensitivity to noise, but were highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.

  13. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were three and five point time smoothing. Fourier filtering preserving one to four harmonics (H), truncated curve Fourier filtering, and third degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third degree polynomial curve fittings and truncated Fourier filters exhibited very high sensitivity to noise. Three and five point time smoothing had moderate sensitivity to noise, but were highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)
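
    The Fourier filtering described above amounts to keeping the DC term plus the first few harmonics of the cyclic time-activity curve; a minimal sketch on a synthetic curve (H = 2 or 3 gave the best compromise in the study):

        import numpy as np

        def fourier_filter(curve, H):
            spec = np.fft.rfft(curve)
            spec[H + 1:] = 0.0                   # keep DC plus the first H harmonics
            return np.fft.irfft(spec, n=len(curve))

        t = np.linspace(0, 1, 64, endpoint=False)
        clean = 100 - 30 * np.cos(2 * np.pi * t) + 8 * np.sin(4 * np.pi * t)
        noisy = clean + np.random.default_rng(0).normal(0, 5, t.size)
        print(np.abs(fourier_filter(noisy, H=3) - clean).max())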

  14. Measurement of Strain and Strain Rate during the Impact of Tennis Ball Cores

    Directory of Open Access Journals (Sweden)

    Ben Lane

    2018-03-01

Full Text Available The aim of this investigation was to establish the strains and strain rates experienced by tennis ball cores during impact to inform material characterisation testing and finite element modelling. Three-dimensional surface strains and strain rates were measured using two high-speed video cameras and corresponding digital image correlation software (GOM Correlate Professional). The results suggest that material characterisation testing to a maximum strain of 0.4 and a maximum rate of 500 s−1 in tension and to a maximum strain of −0.4 and a maximum rate of −800 s−1 in compression would encapsulate the demands placed on the material during impact and, in turn, define the range of properties required to encapsulate the behavior of the material during impact, enabling testing to be application-specific and strain-rate-dependent properties to be established and incorporated in finite element models.

  15. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production, through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested with only the most important reaction steps, each carried out by a single type of microorganism following Monod kinetics. The mathematical model was developed for a well mixed reactor (CSTR, Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with a μ_max of 8.64 day^-1 and a K_S of 250 mg/L; acetogenesis, with a μ_max of 2.64 day^-1 and a K_S of 32 mg/L; and methanogenesis, with a μ_max of 1.392 day^-1 and a K_S of 100 mg/L. The yield coefficients were 0.1 g-dry-cells/g-polymeric-compound for acidogenesis, 0.1 g-dry-cells/g-propionic-acid and 0.1 g-dry-cells/g-butyric-acid for acetogenesis, and 0.1 g-dry-cells/g-acetic-acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After experimental validation of the model, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and an optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of the methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and
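
    A minimal sketch of the Monod kinetics named above, using the acidogenesis constants as given, in a batch setting (the paper's model is a CSTR with three coupled steps; this shows only the growth law):

        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, Ks, Y = 8.64, 250.0, 0.1   # 1/day, mg/L, g cells per g substrate

        def rates(t, y):
            S, X = y
            mu = mu_max * S / (Ks + S)     # Monod specific growth rate
            return [-mu * X / Y, mu * X]   # substrate consumed, biomass formed

        sol = solve_ivp(rates, (0.0, 2.0), [5000.0, 10.0], max_step=0.01)
        print(sol.y[0][-1], sol.y[1][-1])  # final substrate (mg/L) and biomass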

  16. Heart rate monitoring mobile applications.

    Science.gov (United States)

    Chaudhry, Beenish M

    2016-01-01

    Total number of times a heart beats in a minute is known as the heart rate. Traditionally, heart rate was measured using clunky gadgets but these days it can be measured with a smartphone's camera. This can help you measure your heart rate anywhere and at anytime, especially during workouts so you can adjust your workout intensity to achieve maximum health benefits. With simple and easy to use mobile app, 'Unique Heart Rate Monitor', you can also maintain your heart rate history for personal reflection and sharing with a provider.

  17. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    Science.gov (United States)

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
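
    One plausible reading of the update step, sketched below: keep a running spectrum over N fixed analysis frequencies and, for each new sample x_n at (possibly nonuniform) time t_n, add one complex rotation per bin, giving O(N) work per sample with no interpolation. This is an illustration of the idea, not the authors' exact formulation:

        import numpy as np

        class RecursiveFourier:
            def __init__(self, freqs_hz):
                self.freqs = np.asarray(freqs_hz, dtype=float)
                self.spectrum = np.zeros(len(self.freqs), dtype=complex)
                self.count = 0

            def update(self, t_n, x_n):
                # O(N) incremental update: F_k += x_n * exp(-2*pi*1j*f_k*t_n)
                self.spectrum += x_n * np.exp(-2j * np.pi * self.freqs * t_n)
                self.count += 1
                return self.spectrum / self.count

        rft = RecursiveFourier(np.linspace(0.02, 0.5, 32))        # HRV band, Hz
        beats = np.cumsum(np.random.default_rng(0).normal(0.9, 0.05, 200))
        for t_n, rr in zip(beats[1:], np.diff(beats)):            # nonuniform RR series
            spec = rft.update(t_n, rr)
        print(np.abs(spec).max())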

  18. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  19. Determination of respiration rates in water with sub-micromolar oxygen concentrations

    Directory of Open Access Journals (Sweden)

    Emilio Garcia-Robledo

    2016-11-01

Full Text Available It is crucial for our study and understanding of element transformations in low-oxygen waters that we are able to reproduce the in situ conditions during laboratory incubations to an extent that does not result in unacceptable artefacts. In this study we have explored how experimental conditions affect measured rates of O2 consumption in low-O2 waters from the anoxic basin of Golfo Dulce (Costa Rica) and oceanic waters off Chile and Peru. High-sensitivity optode dots placed within all-glass incubation containers allowed for high-resolution O2 concentration measurements in the nanomolar and low micromolar range, and thus also for the determination of rates of oxygen consumption by microbial communities. Consumption rates increased dramatically (from 3 up to 60 times) during prolonged incubations, starting to increase after 4-5 hours in surface waters and after 10-15 h in water from below the upper mixed layer. Estimated maximum growth rates during the incubations suggest the growth of opportunistic microorganisms with doubling times as low as 2.8 and 4.6 h for the coastal waters of Golfo Dulce (Costa Rica) and the oceanic waters off Chile and Peru, respectively. Deoxygenation by inert gas bubbling led to increases in subsequently determined rates, possibly through liberation of organics from lysis of sensitive organisms, particle or aggregate alterations, or other processes mediated by the strong turbulence. Stirring of the water during the incubation led to an about 50% increase in samples previously deoxygenated by bubbling, but had no effect in untreated samples. Our data indicate that rates of microbial activity obtained by short incubations of minimally manipulated water are most reliable, but deoxygenation is a prerequisite for many laboratory experiments, such as determination of denitrification rates, as O2 contamination during sampling is practically impossible to avoid.
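
    The consumption rates discussed above are, in essence, slopes of the optode O2 traces; a minimal sketch of that rate estimate (assumed units and numbers, not data from the study):

        import numpy as np

        def o2_consumption_rate(t_h, o2_nmol_l):
            """Respiration rate as the negative slope of an O2 time series."""
            slope, _ = np.polyfit(t_h, o2_nmol_l, 1)
            return -slope                      # nmol O2 L^-1 h^-1

        t = np.linspace(0, 10, 60)             # hours
        o2 = 800 - 12 * t + np.random.default_rng(3).normal(0, 3, t.size)
        print(o2_consumption_rate(t, o2))      # ~12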

  20. How to take environmental samples for stable isotope analyses

    International Nuclear Information System (INIS)

    Rogers, K.M.

    2009-01-01

    It is possible to analyse a diverse range of samples for environmental investigations. The main types are soil/sediments, vegetation, fauna, shellfish, waste and water. Each type of samples requires different storage and collection methods. Outlined here are the preferred methods of collection to ensure maximum sample integrity and reliability. (author).

  4. Flow rate and source reservoir identification from airborne chemical sampling of the uncontrolled Elgin platform gas release

    Science.gov (United States)

    Lee, James D.; Mobbs, Stephen D.; Wellpott, Axel; Allen, Grant; Bauguitte, Stephane J.-B.; Burton, Ralph R.; Camilli, Richard; Coe, Hugh; Fisher, Rebecca E.; France, James L.; Gallagher, Martin; Hopkins, James R.; Lanoiselle, Mathias; Lewis, Alastair C.; Lowry, David; Nisbet, Euan G.; Purvis, Ruth M.; O'Shea, Sebastian; Pyle, John A.; Ryerson, Thomas B.

    2018-03-01

    An uncontrolled gas leak from 25 March to 16 May 2012 led to evacuation of the Total Elgin wellhead and neighbouring drilling and production platforms in the UK North Sea. Initially the atmospheric flow rate of leaking gas and condensate was very poorly known, hampering environmental assessment and well control efforts. Six flights by the UK FAAM chemically instrumented BAe-146 research aircraft were used to quantify the flow rate. The flow rate was calculated by assuming the plume may be modelled by a Gaussian distribution, with two different solution methods: Gaussian fitting in the vertical and fitting with a fully mixed layer. The two solution methods agreed to within 6% of each other, which was within combined errors. Data from the first flight on 30 March 2012 showed the flow rate to be 1.3 ± 0.2 kg CH4 s-1, decreasing to less than half that by the second flight on 17 April 2012. δ13C-CH4 in the gas was found to be -43 ‰, implying that the gas source was unlikely to be from the main high pressure, high temperature Elgin gas field at 5.5 km depth, but more probably from the overlying Hod Formation at 4.2 km depth. This was deemed to be smaller and more manageable than the high pressure Elgin field and hence the response strategy was considerably simpler. The first flight was conducted within 5 days of the blowout and allowed a flow rate estimate within 48 h of sampling, with δ13C-CH4 characterization soon thereafter, demonstrating the potential for a rapid-response capability that is widely applicable to future atmospheric emissions of environmental concern. Knowledge of the Elgin flow rate helped inform subsequent decision making. This study shows that leak assessment using appropriately designed airborne plume sampling strategies is well suited for circumstances where direct access is difficult or potentially dangerous. Measurements such as this also permit unbiased regulatory assessment of potential impact, independent of the emitting
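
    The record describes recovering a mass flow rate from cross-plume concentration measurements. A rough back-of-the-envelope sketch of that mass-balance idea follows: integrate the enhancement above background across the plume and multiply by wind speed and plume depth. The Gaussian enhancement, wind speed, mixed depth and ambient conditions are all hypothetical numbers, not the flight data.

```python
import numpy as np

y = np.linspace(-2000.0, 2000.0, 81)                # cross-wind distance, m
ch4 = 1.9e-6 + 0.5e-6 * np.exp(-(y / 400.0) ** 2)   # CH4 mole fraction along a transect
enh = ch4 - 1.9e-6                                  # enhancement above background

n_air = 101325.0 / (8.314 * 280.0)                  # moles of air per m3 (~1000 hPa, 280 K)
conc = enh * n_air * 0.016                          # CH4 mass concentration, kg/m3
wind, depth = 8.0, 300.0                            # wind speed (m/s), plume depth (m)
flux = wind * depth * np.sum(conc) * (y[1] - y[0])  # rectangle-rule integral across y
print(f"flow rate ~ {flux:.2f} kg CH4/s")
```

    With these made-up numbers the sketch returns a flow of order 1 kg CH4/s, the same order as the value reported in the record.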

  5. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10-5) of all compared methods.

  6. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10-5) of all compared methods. PMID:28182717
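
    A baseline error frequency of this kind is, at bottom, a per-position mismatch fraction against a known reference sequence. A minimal sketch with hypothetical base-count data (not the study's reads):

```python
import numpy as np

# counts[i, j]: number of reads at position i calling base j (order A, C, G, T)
counts = np.array([[9992, 3, 2, 3],
                   [4, 9990, 1, 5],
                   [2, 1, 9995, 2]])
reference = np.array([0, 1, 2])       # index of the true base at each position

total = counts.sum(axis=1)
correct = counts[np.arange(len(reference)), reference]
error_freq = 1.0 - correct / total    # fraction of non-reference calls per site
print("per-position error frequency:", error_freq)
print(f"mean error frequency: {error_freq.mean():.2e}")
```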

  7. Optimization of operating parameters and rate of uranium bioleaching from a low-grade ore

    International Nuclear Information System (INIS)

    Rashidi, A.; Roosta-Azad, R.; Safdari, S.J.

    2014-01-01

    In this study the bioleaching of a low-grade uranium ore containing 480 ppm uranium is reported. The studies involved extraction of uranium using Acidithiobacillus ferrooxidans derived from uranium mine samples. The maximum specific growth rate (μmax) and doubling time (td) were 0.08 h-1 and 8.66 h, respectively. Parameters such as Fe2+ concentration, particle size, temperature and pH were optimized. The effect of pulp density (PD) was also studied. Maximum uranium bio-dissolution of 100 ± 5% was achieved under the conditions of pH 2.0, 5% PD and 35 °C in 48 h with particles of d80 = 100 μm. The optimum concentration of supplementary Fe2+ depended on the PD: 0 and 10 g of FeSO4·7H2O/l at PDs of 5 and 15%, respectively. The effects of time, pH and PD on the bioleaching process were studied using a central composite design. A new rate equation was developed for the uranium leaching rate; the rate of leaching is controlled by the concentrations of ferric and ferrous ions in solution. This study shows that uranium bioleaching may be an important process for the Saghand U mine at Yazd (Iran). (author)
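
    The quoted growth numbers are internally consistent: for exponential growth the doubling time is ln 2 divided by the maximum specific growth rate. A one-line check:

```python
import math

mu_max = 0.08                         # maximum specific growth rate, 1/h (from the abstract)
t_d = math.log(2) / mu_max            # doubling time for exponential growth
print(f"doubling time = {t_d:.2f} h") # ~8.66 h, matching the reported value
```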

  8. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  9. Standard values of maximum tongue pressure taken using newly developed disposable tongue pressure measurement device.

    Science.gov (United States)

    Utanohara, Yuri; Hayashi, Ryo; Yoshikawa, Mineka; Yoshida, Mitsuyoshi; Tsuga, Kazuhiro; Akagawa, Yasumasa

    2008-09-01

    It is clinically important to evaluate tongue function in terms of rehabilitation of swallowing and eating ability. We have developed a disposable tongue pressure measurement device designed for clinical use. In this study we used this device to determine standard values of maximum tongue pressure in adult Japanese. Eight hundred fifty-three subjects (408 male, 445 female; 20-79 years) were selected for this study. All participants had no history of dysphagia and maintained occlusal contact in the premolar and molar regions with their own teeth. A balloon-type disposable oral probe was used to measure tongue pressure by asking subjects to compress it onto the palate for 7 s with maximum voluntary effort. Values were recorded three times for each subject, and the mean values were defined as maximum tongue pressure. Although maximum tongue pressure was higher for males than for females in the 20-49-year age groups, there was no significant difference between males and females in the 50-79-year age groups. The maximum tongue pressure of the seventies age group was significantly lower than that of the twenties to fifties age groups. It may be concluded that maximum tongue pressures were reduced with primary aging. Males may become weaker with age at a faster rate than females; however, further decreases in strength were in parallel for male and female subjects.

  10. The influence of stress, depression, and anxiety on PSA screening rates in a nationally representative sample.

    Science.gov (United States)

    Kotwal, Ashwin A; Schumm, Phil; Mohile, Supriya G; Dale, William

    2012-12-01

    Prostate-specific antigen (PSA) testing for prostate cancer is controversial, with concerning rates of both overscreening and underscreening. The reasons for the observed rates of screening are unknown, and few studies have examined the relationship of psychological health to PSA screening rates. Understanding this relationship can help guide interventions to improve informed decision-making for screening. A nationally representative sample of men 57-85 years old without prostate cancer (N = 1169) from the National Social Life, Health and Aging Project was analyzed. The independent relationship of validated psychological health scales measuring stress, anxiety, and depression to PSA testing rates was assessed using multivariable logistic regression analyses. PSA screening rates were significantly lower for men with higher perceived stress [odds ratio (OR) = 0.76, P = 0.006], but not for higher depressive symptoms (OR = 0.89, P = 0.22) when accounting for stress. Anxiety influences PSA screening through an interaction with number of doctor visits (P = 0.02). Among the men who visited the doctor once, those with higher anxiety were less likely to be screened (OR = 0.65, P = 0.04). Conversely, those who visited the doctor 10+ times with higher anxiety were more likely to be screened (OR = 1.71, P = 0.04). Perceived stress significantly lowers PSA screening likelihood, and it seems to partly mediate the negative relationship of depression with screening likelihood. Anxiety affects PSA screening rates differently for men with different numbers of doctor visits. Interventions to influence PSA screening rates should recognize the role of the patients' psychological state to improve their likelihood of making informed decisions and improve screening appropriateness.
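
    A hedged sketch of such a multivariable logistic model with an anxiety-by-visits interaction, fitted with statsmodels on simulated data; all column names and effect sizes here are hypothetical stand-ins, not the NSHAP variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "stress": rng.normal(size=n),
    "depression": rng.normal(size=n),
    "anxiety": rng.normal(size=n),
    "visits": rng.poisson(4, size=n),
})
# simulated outcome: stress lowers screening; anxiety's effect depends on visits
lin = -0.3 * df["stress"] + 0.1 * df["anxiety"] * (df["visits"] - 4)
df["psa_tested"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

fit = smf.logit("psa_tested ~ stress + depression + anxiety * visits", data=df).fit()
print(np.exp(fit.params))   # odds ratios, the scale used in the abstract
```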

  11. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
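
    The expected-frequency-spectrum machinery simplifies greatly for a constant-size population. As a minimal illustration (not the paper's piecewise-exponential, automatic-differentiation algorithm), the classic Watterson estimator recovers the mutation parameter from the spectrum, using the coalescent result that the expected number of sites with derived-allele count i is proportional to 1/i:

```python
import numpy as np

def watterson_theta(sfs):
    """MLE of theta under a constant-size coalescent, where the expected
    number of sites with derived-allele count i is theta / i."""
    n = len(sfs) + 1                          # haploid sample size
    S = np.sum(sfs)                           # total segregating sites
    harmonic = np.sum(1.0 / np.arange(1, n))  # 1 + 1/2 + ... + 1/(n-1)
    return S / harmonic

# toy unfolded SFS for n = 10 chromosomes (hypothetical counts)
sfs = np.array([120, 61, 38, 31, 24, 20, 18, 15, 13])
print(f"theta_W = {watterson_theta(sfs):.1f}")
```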

  12. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  13. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3 × 10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO42-) and cations (Na+, Mg2+, Ca2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO42-/Cl- and Mg2+/Na+, and 0.4% for Ca2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO32-. Apparent partial molar densities in seawater were
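
    A minimal sketch of the density-to-salinity inversion at the heart of this method, assuming the TEOS-10 gsw package is available and treating temperature and pressure as known; the input density is a hypothetical value, and no porewater composition correction is applied here.

```python
import gsw                      # TEOS-10 Gibbs SeaWater package
from scipy.optimize import brentq

def salinity_from_density(rho_measured, CT, p):
    """Invert the equation of state: find the Absolute Salinity SA (g/kg)
    for which gsw.rho(SA, CT, p) matches the measured density."""
    return brentq(lambda SA: gsw.rho(SA, CT, p) - rho_measured, 0.0, 50.0)

# hypothetical sample: density in kg/m3, Conservative Temperature in deg C,
# pressure in dbar
SA = salinity_from_density(rho_measured=1027.5, CT=10.0, p=0.0)
print(f"SA = {SA:.4f} g/kg")
```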

  14. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda + theta) + [theta/(lambda + theta)] exp[-((1/lambda) + (1/theta))t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda + theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
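
    A minimal sketch of the estimator, taking the abstract's parameterization at face value: lambda and theta act as the means of the exponential time-to-failure and time-to-repair models (consistent with A(infinity) = lambda/(lambda + theta)), so their MLEs are the sample means and the MLE of A(t) follows by the invariance property. The data below are simulated placeholders.

```python
import numpy as np

def availability_mle(x, y, t):
    """MLE of instantaneous availability A(t) from n failure-repair cycles
    (x: times to failure, y: times to repair), using the abstract's formula."""
    lam = np.mean(x)            # MLE of lambda (mean time to failure)
    th = np.mean(y)             # MLE of theta (mean time to repair)
    return lam / (lam + th) + (th / (lam + th)) * np.exp(-((1/lam) + (1/th)) * t)

rng = np.random.default_rng(0)
x = rng.exponential(scale=100.0, size=50)   # hypothetical operating times, h
y = rng.exponential(scale=8.0, size=50)     # hypothetical repair times, h
print(f"A(24 h) ~= {availability_mle(x, y, 24.0):.4f}")
print(f"A(inf)  ~= {np.mean(x) / (np.mean(x) + np.mean(y)):.4f}")
```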

  15. 12 CFR 619.9170 - Fixed interest rate.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Fixed interest rate. 619.9170 Section 619.9170 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM DEFINITIONS § 619.9170 Fixed interest rate. The rate of interest specified in the note or loan document which will prevail as the maximum...

  16. ICD-11 Prevalence Rates of Posttraumatic Stress Disorder and Complex Posttraumatic Stress Disorder in a German Nationwide Sample.

    Science.gov (United States)

    Maercker, Andreas; Hecker, Tobias; Augsburger, Mareike; Kliem, Sören

    2018-04-01

    Prevalence rates are still lacking for posttraumatic stress disorder (PTSD) and complex PTSD (CPTSD) diagnoses based on the new ICD-11 criteria. In a nationwide representative German sample (N = 2524; 14-99 years), exposure to traumatic events and symptoms of PTSD or CPTSD were assessed with the International Trauma Questionnaire. A clinical variant of CPTSD with a lower threshold for core PTSD symptoms was also calculated, in addition to conditional prevalence rates dependent on trauma type and differential predictors. One-month prevalence rates were as follows: PTSD, 1.5%; CPTSD, 0.5%; and CPTSD variant, 0.7%. For PTSD, the highest conditional prevalence was associated with kidnapping or rape, and the highest CPTSD rates were associated with sexual childhood abuse or rape. PTSD and CPTSD were best differentiated by sexual violence. Combined PTSD and CPTSD (ICD-11) rates were in the range of previously reported prevalences for unified PTSD (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition; ICD-10). Evidence on differential predictors of PTSD and CPTSD is still preliminary.

  17. The Influence of Creatine Monohydrate on Strength and Endurance After Doing Physical Exercise With Maximum Intensity

    Directory of Open Access Journals (Sweden)

    Asrofi Shicas Nabawi

    2017-11-01

    Full Text Available The purpose of this study was: (1) to analyze the effect of creatine monohydrate administration on strength and endurance after physical exercise of maximum intensity; (2) to analyze the effect of non-creatine administration on strength and endurance after physical exercise of maximum intensity; (3) to analyze the difference between creatine and non-creatine administration in strength and endurance after exercise of maximum intensity. The type of research used was quantitative with quasi-experimental methods. The design of the study was a pretest and posttest control group design, and the data were analysed with a paired-sample t-test. Data were collected with a leg muscle strength test using a back-and-leg dynamometer, a 1-minute sit-up test, a 30-second push-up test, and a VO2max test (Cosmed Quark CPET) during the pretest and posttest. The data were then analysed using SPSS 22.0. The results showed: (1) creatine administration influenced strength after exercise of maximum intensity; (2) creatine administration influenced endurance after exercise of maximum intensity; (3) non-creatine administration influenced strength after exercise of maximum intensity; (4) non-creatine administration influenced endurance after exercise of maximum intensity; (5) there was a significant difference between the creatine and non-creatine groups, with the creatine group showing a larger delta (increase) in strength and endurance after exercise of maximum intensity. Based on the above analysis, it can be concluded that strength and endurance increased in each group after the exercise intervention.
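
    The pretest/posttest comparison within a group reduces to a paired-sample t-test. A minimal sketch with hypothetical strength scores (not the study's data):

```python
import numpy as np
from scipy import stats

pre = np.array([112.0, 98.0, 105.0, 120.0, 101.0, 95.0, 110.0])    # e.g. leg strength, kg
post = np.array([118.0, 103.0, 109.0, 127.0, 104.0, 99.0, 117.0])  # after intervention

t_stat, p_value = stats.ttest_rel(post, pre)  # paired: same subjects pre and post
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```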

  18. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions

  19. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  20. Associations between labial and whole salivary flow rates, systemic diseases and medications in a sample of older people

    DEFF Research Database (Denmark)

    Smidt, Dorte; Torpet, Lis Andersen; Nauntofte, Birgitte

    2010-01-01

    Smidt D, Torpet LA, Nauntofte B, Heegaard KM, Pedersen AML. Associations between labial and whole salivary flow rates, systemic diseases and medications in a sample of older people. Community Dent Oral Epidemiol 2010; 38: 422-435. © 2010 John Wiley & Sons A/S Abstract - Objective: To investigate...... the associations between age, gender, systemic diseases, medications and labial and whole salivary flow rates in older people. Methods: Unstimulated labial (LS) and unstimulated (UWS) and chewing-stimulated (SWS) whole salivary flow rates were measured in 389 randomly selected community-dwelling Danish women...... and 279 men aged 65-97 years. Systemic diseases, medications (coded according to the Anatomical Therapeutic Chemical (ATC) Classification System), tobacco and alcohol consumption were registered. Results: The number of diseases and medications was higher and UWS lower in the older age groups. On average...

  1. Maximum Feedrate Interpolator for Multi-axis CNC Machining with Jerk Constraints

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2012-01-01

    A key role of the CNC is to perform the feedrate interpolation which means to generate the setpoints for each machine tool axis. The aim of the VPOp algorithm is to make maximum use of the machine tool respecting both tangential and axis jerk on rotary and linear axes. The developed algorithm uses an iterative constraints intersection approach. At each sampling period, all the constraints given by each axis are expressed and by intersecting all of them the allowable interval for the next poin...

  2. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  3. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
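
    As a toy illustration of moving along a simulated gradient toward a summary-statistic match, here is a minimal Kiefer-Wolfowitz-style stochastic approximation for a one-parameter normal model. The model, step sizes and summary statistic are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=3.0, size=200)
s_obs = observed.mean()                      # observed summary statistic

def simulate_summary(theta, n=200):
    """Simulate data under theta and return its summary statistic."""
    return rng.normal(loc=theta, size=n).mean()

theta = 0.0                                  # starting value
for k in range(1, 2001):
    a_k = 1.0 / k                            # decreasing step size
    c_k = 1.0 / k ** 0.25                    # finite-difference half-width
    # simulated central-difference ascent direction of -(s(theta) - s_obs)^2
    d_plus = (simulate_summary(theta + c_k) - s_obs) ** 2
    d_minus = (simulate_summary(theta - c_k) - s_obs) ** 2
    theta += a_k * (-(d_plus - d_minus) / (2 * c_k))
print(f"theta ~= {theta:.2f} (true mean 3.0)")
```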

  4. What if the Diatoms of the Deep Chlorophyll Maximum Can Ascend?

    Science.gov (United States)

    Villareal, T. A.

    2016-02-01

    Buoyancy regulation is an integral part of diatom ecology via its role in sinking rates and is fundamental to understanding their distribution and abundance. Numerous studies have documented the effects of size and nutrition on sinking rates. Many pelagic diatoms have low intrinsic sinking rates when healthy and nutrient-replete (deep chlorophyll maximum. The potential for ascending behavior adds an additional layer of complexity by allowing both active depth regulation similar to that observed in flagellated taxa and upward transport by some fraction of deep euphotic zone diatom blooms supported by nutrient injection. In this talk, I review the data documenting positive buoyancy in small diatoms, offer direct visual evidence of ascending behavior in common diatoms typical of both oceanic and coastal zones, and note the characteristics of sinking rate distributions within a single species. Buoyancy control leads to bidirectional movement at similar rates across a wide size spectrum of diatoms although the frequency of ascending behavior may be only a small portion of the individual species' abundance. While much remains to be learned, the paradigm of unidirectional downward movement by diatoms is both inaccurate and an oversimplification.

  5. Analysis of monazite samples

    International Nuclear Information System (INIS)

    Kartiwa Sumadi; Yayah Rohayati

    1996-01-01

    The 'monazit' analytical program has been set up for routine analysis of rare earth elements in monazite and xenotime mineral samples. The total relative error of the analysis is very low, less than 2.50%, and the reproducibility of the counting statistics and the stability of the instrument were excellent. The precision and accuracy of the analytical program are very good, with maximum relative percentages of 5.22% and 1.61%, respectively. The mineral compositions of the 30 monazite samples have also been calculated from their chemical constituents, and the results were compared to grain-counting microscopic analysis.

  6. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different; the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  7. Paddle River Dam : review of probable maximum flood

    Energy Technology Data Exchange (ETDEWEB)

    Clark, D. [UMA Engineering Ltd., Edmonton, AB (Canada); Neill, C.R. [Northwest Hydraulic Consultants Ltd., Edmonton, AB (Canada)

    2008-07-01

    The Paddle River Dam was built in northern Alberta in the mid 1980s for flood control. According to the 1999 Canadian Dam Association (CDA) guidelines, this 35 metre high, zoned earthfill dam with a spillway capacity sized to accommodate a probable maximum flood (PMF) is rated as a very high hazard. At the time of design, the PMF was estimated to have a peak flow rate of 858 m{sup 3}/s. A review of the PMF in 2002 increased the peak flow rate to 1,890 m{sup 3}/s. In light of a 2007 revision of the CDA safety guidelines, the PMF was reviewed and the inflow design flood (IDF) was re-evaluated. This paper discussed the levels of uncertainty inherent in PMF determinations and some difficulties encountered with the SSARR hydrologic model and the HEC-RAS hydraulic model in unsteady mode. The paper also presented and discussed the analysis used to determine incremental damages, upon which a new IDF of 840 m{sup 3}/s was recommended. The paper discussed the PMF review, modelling methodology, hydrograph inputs, and incremental damage of floods. It was concluded that the PMF review, involving hydraulic routing through the valley bottom together with reconsideration of the previous runoff modeling, provides evidence that the peak reservoir inflow could reasonably be reduced by approximately 20 per cent. 8 refs., 5 tabs., 8 figs.

  8. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  9. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
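
    Among the alternatives the record mentions, a permutation test needs no distributional assumptions about the test statistic. A small self-contained sketch with hypothetical two-group data:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=8)            # hypothetical group A
b = rng.normal(0.5, 1.0, size=8)            # hypothetical group B

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])
n_perm, count = 100_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                      # random relabeling under H0
    diff = pooled[:8].mean() - pooled[8:].mean()
    if abs(diff) >= abs(observed):
        count += 1
print(f"two-sided permutation p = {count / n_perm:.4f}")
```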

  10. Parametric characteristics of a solar thermophotovoltaic system at the maximum efficiency

    International Nuclear Information System (INIS)

    Liao, Tianjun; Chen, Xiaohang; Yang, Zhimin; Lin, Bihong; Chen, Jincan

    2016-01-01

    Graphical abstract: A model of the far-field TPVC driven by solar energy, which consists of an optical concentrator, an absorber, an emitter, and a PV cell, and is simply referred to as the far-field STPVS. - Highlights: • A model of the far-field solar thermophotovoltaic system (STPVS) is established. • External and internal irreversible losses are considered. • The maximum efficiency of the STPVS is calculated. • Optimal values of key parameters at the maximum efficiency are determined. • Effects of the concentrator factor on the performance of the system are discussed. - Abstract: A model of the solar thermophotovoltaic system (STPVS) consisting of an optical concentrator, a thermal absorber, an emitter, and a photovoltaic (PV) cell is proposed, where the far-field thermal emission between the emitter and the PV cell, the radiation losses from the absorber and emitter to the environment, the reflected loss from the absorber, and the finite-rate heat exchange between the PV cell and the environment are taken into account. Analytical expressions for the power output and overall efficiency of the STPVS are derived. By solving thermal equilibrium equations, the operating temperatures of the emitter and PV cell are determined and the maximum efficiency of the system is calculated numerically for given values of the output voltage of the PV cell and the ratio of the front surface area of the absorber to that of the emitter. For different bandgaps, the maximum efficiencies of the system are calculated and the corresponding optimum values of several operating parameters are obtained. The effects of the concentrator factor on the optimum performance of the system are also discussed.

  11. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
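
    A minimal simulation in the spirit of the study described above: both groups are drawn from the same skewed "sum score" distribution (so H0 is true), outliers beyond |Z| = 2 are removed before an independent-samples t test, and the resulting rejection rate is compared with a Mann-Whitney test on the intact data. All distributional settings are illustrative, not the article's exact designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def trim_z(x, threshold=2.0):
    """Drop values more than `threshold` SDs from the group mean."""
    s = x.std(ddof=1)
    if s == 0.0:
        return x
    z = (x - x.mean()) / s
    return x[np.abs(z) <= threshold]

n_sims, n = 10_000, 25
rej_t, rej_mw = 0, 0
for _ in range(n_sims):
    # skewed sum scores from a short, difficult test; H0 holds
    g1 = rng.binomial(10, 0.1, size=n).astype(float)
    g2 = rng.binomial(10, 0.1, size=n).astype(float)
    rej_t += stats.ttest_ind(trim_z(g1), trim_z(g2)).pvalue < 0.05
    rej_mw += stats.mannwhitneyu(g1, g2).pvalue < 0.05
print(f"t test after outlier removal: {rej_t / n_sims:.3f}")
print(f"Mann-Whitney, no removal:     {rej_mw / n_sims:.3f}")
```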

  12. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  13. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.
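
    A toy sketch of the control-trajectory idea: parameterize the feed profile as piecewise-constant segments and numerically optimize the end-of-batch product. The Monod kinetics and every constant below are illustrative assumptions, not the paper's metabolic model or optimization method.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# hypothetical kinetics: mu_max (1/h), Ks (g/L), yields, feed substrate (g/L)
mu_max, Ks, Yxs, Yps, Sf = 0.4, 0.5, 0.5, 0.3, 100.0

def rhs(t, y, feed):
    X, S, P, V = y                       # biomass, substrate, product, volume
    S = max(S, 0.0)                      # guard against tiny negative overshoot
    mu = mu_max * S / (Ks + S)           # Monod specific growth rate
    D = feed / V                         # dilution by the incoming feed
    return [mu * X - D * X,
            -mu * X / Yxs + D * (Sf - S),
            Yps * mu * X - D * P,
            feed]

def neg_final_product(feeds, t_end=24.0):
    y = [0.1, 10.0, 0.0, 1.0]            # initial X, S, P (g/L) and V (L)
    edges = np.linspace(0.0, t_end, len(feeds) + 1)
    for f, t0, t1 in zip(feeds, edges[:-1], edges[1:]):
        y = solve_ivp(rhs, (t0, t1), y, args=(f,), rtol=1e-6).y[:, -1]
    return -y[2] * y[3]                  # product mass at the end of the batch

res = minimize(neg_final_product, x0=np.full(6, 0.05),
               bounds=[(0.0, 0.2)] * 6, method="L-BFGS-B")
print("feed profile (L/h):", np.round(res.x, 3))
print("final product (g): ", round(-res.fun, 2))
```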

  14. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  15. Application of Tryptophan Fluorescence Bandwidth-Maximum Plot in Analysis of Monoclonal Antibody Structure.

    Science.gov (United States)

    Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei

    2017-04-01

    Monoclonal antibodies have become the fastest growing protein therapeutics in recent years. The stability and heterogeneity pertaining to its physical and chemical structures remain a big challenge. Tryptophan fluorescence has been proven to be a versatile tool to monitor protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, the quantitative measure can be exercised for the routine characterization of monoclonal antibody overall tertiary structure. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot with tryptophan emission bandwidth vs. emission maximum to enhance the resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which show the distinction on emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary structure comparability between different lots manufactured for one of the monoclonal antibodies (mAb2). In addition, in the unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.
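
    A minimal sketch of the approach described above: model an emission band with a log-normal-shaped curve, then read off the emission maximum and bandwidth (FWHM) for a bandwidth-vs-maximum plot. The synthetic "spectrum" and the generic log-normal parameterization are illustrative assumptions, not the authors' exact fitting function.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

def band(wl, amp, s, loc, scale):
    # log-normal-shaped emission band over wavelength wl (nm)
    return amp * lognorm.pdf(wl, s, loc=loc, scale=scale)

wl = np.linspace(300.0, 420.0, 241)
rng = np.random.default_rng(4)
spectrum = band(wl, 5000.0, 0.25, 290.0, 50.0) + rng.normal(0.0, 5.0, wl.size)

popt, _ = curve_fit(band, wl, spectrum, p0=[4000.0, 0.3, 280.0, 60.0])
fit = band(wl, *popt)
wl_max = wl[np.argmax(fit)]                  # emission maximum
above = wl[fit >= fit.max() / 2.0]
fwhm = above[-1] - above[0]                  # bandwidth (FWHM)
print(f"emission maximum ~ {wl_max:.1f} nm, bandwidth ~ {fwhm:.1f} nm")
```

    Each sample then contributes one (bandwidth, maximum) point, which is how the two-dimensional comparison plot described in the abstract is built.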

  16. Surface Uplift Rate Constrained by Multiple Terrestrial Cosmogenic Nuclides: Theory and Application from the Central Andean Plateau

    Science.gov (United States)

    McPhillips, D. F.; Hoke, G. D.; Niedermann, S.; Wittmann, H.

    2015-12-01

    There is widespread interest in quantifying the growth and decay of topography. However, prominent methods for quantitative determinations of paleoelevation rely on assumptions that are often difficult to test. For example, stable isotope paleoaltimetry relies on the knowledge of past lapse rates and moisture sources. Here, we demonstrate how cosmogenic 10Be - 21Ne and/or 10Be - 26Al sample pairs can be applied to provide independent estimates of surface uplift rate using both published data and new data from the Atacama Desert. Our approach requires a priori knowledge of the maximum age of exposure of the sampled surface. Ignimbrite surfaces provide practical sampling targets. When erosion is very slow (roughly, ≤1 m/Ma), it is often possible to constrain paleo surface uplift rate with precision comparable to that of stable isotopic methods (approximately ±50%). The likelihood of a successful measurement is increased by taking n samples from a landscape surface and solving for one regional paleo surface uplift rate and n local erosion rates. In northern Chile, we solve for surface uplift and erosion rates using three sample groups from the literature (Kober et al., 2007). In the two lower elevation groups, we calculate surface uplift rates of 110 (+60/-12) m/Myr and 160 (+120/-6) m/Myr and estimate uncertainties with a bootstrap approach. The rates agree with independent estimates derived from stream profile analyses nearby (Hoke et al., 2007). Our calculated uplift rates correspond to total uplift of 1200 and 850 m, respectively, when integrated over appropriate timescales. Erosion rates were too high to reliably calculate the uplift rate in the third, high elevation group. New cosmogenic nuclide analyses from the Atacama Desert are in progress, and preliminary results are encouraging. In particular, a replicate sample in the vicinity of the first Kober et al. (2007) group independently yields a surface uplift rate of 110 m/Myr. Compared to stable isotope

  17. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    International Nuclear Information System (INIS)

    Scogin, J. H.

    2016-01-01

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  18. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    Energy Technology Data Exchange (ETDEWEB)

    Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-24

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  19. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

    For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e2/h, of a one-dimensional ballistic conductor can be estimated

  20. Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years

    Science.gov (United States)

    Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.

    2013-05-01

    Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of trends over the last 31 years is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for this period, the maximum and minimum temperature trends calculated by least-squares linear regression, together with their statistical significance, are obtained and discussed.
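
    For the trend step, a least-squares linear fit per grid cell reduces to something like the following sketch on a synthetic monthly series; the data and trend size are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
months = np.arange(31 * 12)                          # 1979-2009, monthly index
temps = 25.0 + 0.002 * months + rng.normal(0.0, 1.5, months.size)

res = stats.linregress(months, temps)                # least-squares linear trend
print(f"trend = {res.slope * 120:.3f} °C/decade, p = {res.pvalue:.3g}")
```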

  1. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches the minimum by using the gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming in case of complex functions, with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed-up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...
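
    The computational pattern is easy to sketch: a vectorized negative log-likelihood whose evaluation over the full data sample dominates the cost, handed to a numerical minimizer. This Python/NumPy sketch stands in for the MINUIT/MIGRAD plus OpenMP/CUDA machinery the record describes; the Gaussian model is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
data = rng.normal(loc=1.2, scale=0.8, size=1_000_000)   # large data sample

def nll(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # parameterize log(sigma) to keep sigma > 0
    z = (data - mu) / sigma
    # the vectorized sum over the full sample is the hot loop that the
    # paper parallelizes with OpenMP/CUDA
    return 0.5 * np.sum(z * z) + data.size * (log_sigma + 0.5 * np.log(2 * np.pi))

res = minimize(nll, x0=[0.0, 0.0], method="BFGS")       # gradient-based search
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.4f}, sigma = {sigma_hat:.4f}")
```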

  2. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency.

    Science.gov (United States)

    Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes

    2012-03-01

    Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  3. Evaluation of radon related parameters in environmental samples from Jazan city, Saudi Arabia

    Directory of Open Access Journals (Sweden)

    M. Abo-Elmagd

    2018-01-01

    The overall weighted means of the areal exhalation rate (EA) and effective radium content (Raeff) for soil samples collected from 10 different districts in Jazan city are 17.02 ± 2.06 Bq m−2 d−1 and 3.01 ± 0.37 Bq kg−1, respectively. For 20 building material samples, EA = 1.989 ± 1.056 Bq m−2 d−1 and Raeff = 0.351 ± 0.186 Bq kg−1. Finally, for decorative materials (23 samples), EA = 1.225 ± 0.136 Bq m−2 d−1 and Raeff = 0.427 ± 0.031 Bq kg−1. The maximum values of the measured parameters are found in the soil of the Scheme 5 and 6 district, in red sand (building material) and in gypsum (decorative material). As the mass of the sample increases, more and more radon diffuses back into the sample and the measured effective radium content is reduced. After correcting the results for the back-diffusion effect, all masses give approximately the same value of effective radium content, which reduces the uncertainty in the weighted mean.
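
    The "overall weighted mean" quoted above is presumably an inverse-variance weighted mean; a tiny sketch with hypothetical per-district values:

```python
import numpy as np

values = np.array([15.8, 18.1, 16.9, 17.5])   # e.g. EA per district, Bq m-2 d-1
sigmas = np.array([2.0, 2.4, 1.9, 2.2])       # their uncertainties

w = 1.0 / sigmas ** 2                         # inverse-variance weights
mean = np.sum(w * values) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))                # uncertainty of the weighted mean
print(f"weighted mean = {mean:.2f} ± {err:.2f}")
```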

  4. Global view of F-region electron density and temperature at solar maximum

    International Nuclear Information System (INIS)

    Brace, L.H.; Theis, R.F.; Hoegy, W.R.

    1982-01-01

    Dynamics Explorer-2 is permitting the first measurements of the global structure of the F-region at very high levels of solar activity (S>200). Selected full orbits of Langmuir probe measurements of electron temperature, T_e, and density, N_e, are shown to illustrate this global structure and some of the ionospheric features that are the topic of other papers in this issue. The ionospheric thermal structure is of particular interest because T_e is a sensitive indicator of the coupling of magnetospheric energy into the upper atmosphere. A comparison of these heating effects with those observed at solar minimum shows that the magnetospheric sources are more important at solar maximum, as might have been expected. Heating at the cusp, the auroral oval and the plasmapause is generally both greater and more variable. Electron cooling rate calculations employing low latitude measurements indicate that solar extreme ultraviolet heating of the F region at solar maximum is enhanced by a factor that is greater than the increase in solar flux. Some of this enhanced electron heating arises from the increase in electron heating efficiency at the higher N_e of solar maximum, but this appears insufficient to completely resolve the discrepancy.

  5. Does runoff or temperature control chemical weathering rates?

    International Nuclear Information System (INIS)

    Eiriksdottir, Eydis Salome; Gislason, Sigurdur Reynir; Oelkers, Eric H.

    2011-01-01

    Highlights: → The rate of chemical weathering is affected by both temperature and runoff; separating these two factors is challenging because runoff tends to increase with increasing temperature. → In this study, natural river water samples collected from basaltic catchments over a five year period are used together with an experimentally derived dissolution rate model for basaltic glass to separate the effects of runoff and temperature. → This study shows that the rate of chemical denudation is controlled by both temperature and runoff, but is dominated by runoff. - Abstract: The rate of chemical denudation is controlled by both temperature and runoff. The relative role of these two factors in the rivers of NE Iceland is determined through rigorous analysis of their water chemistry over a 5-year period. River catchments are taken to be analogous to laboratory flow reactors; like the fluid in flow reactors, the loss of each dissolved element in river water is the sum of that of the original rainwater plus that added by kinetically controlled dissolution and precipitation reactions. Consideration of the laboratory-determined dissolution rate behaviour of basalts and the measured water chemistry indicates that the maximum effect of changing temperature on chemical denudation in the NE Icelandic rivers was 5-25% of the total change, whereas that of runoff was 75-95%. The bulk of the increase in denudation rates with runoff appears to stem from an increase in reactive surface area available for chemical weathering of catchment solids.
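
    The flow-reactor reasoning can be made concrete with a toy attribution sketch: write the denudation flux as F = C(T) × Q, give the concentration an Arrhenius temperature dependence, and compare the flux change due to warming alone with that due to a runoff increase alone. All numbers below (activation energy, warming, runoff change) are assumptions for illustration, not the study's values.

```python
import numpy as np

R, Ea = 8.314, 50e3          # gas constant (J/mol/K); assumed activation energy

def conc(T, c_ref=1.0, T_ref=278.15):
    # Arrhenius-type temperature dependence of the dissolved concentration
    return c_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

T0, T1 = 278.15, 280.15      # +2 K warming (assumed)
Q0, Q1 = 1.0, 1.3            # +30% runoff (assumed)

dF_T = (conc(T1) - conc(T0)) * Q0     # flux change from temperature alone
dF_Q = conc(T0) * (Q1 - Q0)           # flux change from runoff alone
total = dF_T + dF_Q
print(f"temperature share: {dF_T/total:.0%}, runoff share: {dF_Q/total:.0%}")
```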

  6. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  7. A microcomputer controlled sample changer system for γ-ray spectroscopy

    International Nuclear Information System (INIS)

    Jost, D.T.; Kraehenbuehl, U.; Gunten, H.R. von

    1982-01-01

    A Z-80 based microcomputer is used to control a sample changer system connected to two γ-ray spectrometers. Samples are changed according to preselected counting criteria (maximum time and/or desired precision). Special precautions were taken to avoid the loss of information and of samples in case of a power failure. (orig.)

  8. A discussion about maximum uranium concentration in digestion solution of U3O8 type uranium ore concentrate

    International Nuclear Information System (INIS)

    Xia Dechang; Liu Chao

    2012-01-01

    On the basis of a discussion of the influence of single factors on the maximum uranium concentration in digestion solution, the degrees of influence of factors such as U content, H2O content and the P/U mass ratio were compared and analyzed. The results indicate that the maximum uranium concentration in digestion solution is directly proportional to the U content: when the U content increases by 1%, the maximum uranium concentration increases by 4.8%-5.7%. It is inversely proportional to the H2O content: the maximum uranium concentration decreases by 46.1-55.2 g/L when the H2O content increases by 1%. It is likewise inversely proportional to the P/U mass ratio: the maximum uranium concentration decreases by 116.0-181.0 g/L when the P/U mass ratio increases by 0.1%. When the U content equals 62.5% and the influence of the P/U mass ratio is not considered, the maximum uranium concentration in digestion solution equals 1 578 g/L; when the P/U mass ratio equals 0.35%, the maximum uranium concentration decreases to 716 g/L, a decrease of 54.6%. The P/U mass ratio in U3O8 type uranium ore concentrate is therefore the main controlling factor. (authors)
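
    A hedged numeric sketch of the dominant P/U effect, interpolating linearly between the two anchor values quoted above (the true dependence need not be linear):

```python
def max_u_conc(p_u_pct):
    # Linear interpolation between the abstract's two anchors at U = 62.5%:
    # P/U = 0% -> 1578 g/L, and P/U = 0.35% -> 716 g/L.
    c0, c1 = 1578.0, 716.0
    return c0 + (c1 - c0) * (p_u_pct / 0.35)

for p in (0.0, 0.1, 0.2, 0.35):
    print(f"P/U = {p:.2f}% -> {max_u_conc(p):.0f} g/L")
```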

  9. Synchronizing data from irregularly sampled sensors

    Science.gov (United States)

    Uluyol, Onder

    2017-07-11

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
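
    A minimal sketch of the claim's idea, assuming linear interpolation onto a common uniform grid whose rate exceeds the fastest native rate (the patent does not prescribe a particular interpolation method):

```python
import numpy as np

t1 = np.array([0.00, 0.13, 0.31, 0.52, 0.70, 0.97])   # sensor 1 timestamps (s)
x1 = np.sin(2 * np.pi * t1)                           # sensor 1 readings
t2 = np.array([0.05, 0.40, 0.66, 0.90])               # sensor 2: slower, irregular
x2 = np.cos(2 * np.pi * t2)

t_common = np.arange(0.05, 0.90, 0.01)    # uniform 100 Hz grid covering both
x1_sync = np.interp(t_common, t1, x1)     # re-sample sensor 1
x2_sync = np.interp(t_common, t2, x2)     # re-sample sensor 2: now synchronized
print(np.column_stack([t_common, x1_sync, x2_sync])[:3])
```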

  10. Temperature effects on lithium-nitrogen reaction rates

    International Nuclear Information System (INIS)

    Ijams, W.J.; Kazimi, M.S.

    1985-08-01

    A series of experiments has been run with the aim of measuring the reaction rate of lithium and nitrogen over a wide spectrum of lithium pool temperatures. In these experiments, pure nitrogen was blown at a controlled flow rate over a preheated lithium pool. The pool had a surface area of approximately 4 cm² and a total volume of approximately 6 cm³. The system pressure varied from 0 to 4 psig. The reaction rate was very small, approximately 0.002 to 0.003 g Li/(min·cm²), for lithium temperatures below 500 °C. Above 500 °C the reaction rate began to increase sharply, and reached a maximum of approximately 0.80 g Li/(min·cm²) above 700 °C. It dropped off beyond 1000 °C and seemed to approach zero at 1150 °C. The maximum reaction rate observed in these forced convection experiments was higher by 60% than those previously observed in experiments where the nitrogen flowed to the reaction site by means of natural convection. During a reaction, a hard nitride layer built up on the surface of the lithium pool; its effect on the reaction rate was observed. The effect of the nitrogen flow rate on the reaction rate was also observed.

  11. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10⁻¹⁵ s⁻¹, and temperature measurements from four drill sites within the area indicate average, near-surface geothermal gradients of up to 150 °C km⁻¹ throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km⁻¹ within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km⁻¹ below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes and therefore high average geothermal gradients tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.
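
    A quick worked check of the isotherm reasoning, taking seismic failure to cease near 650 °C and neglecting the surface temperature (these whole-column averages are cruder than the study's gradients below drilling depths):

```python
# Average gradient implied by a 650 C brittle-ductile cutoff at the
# maximum earthquake depth.
for depth_km in (4.0, 6.5, 7.0):
    print(f"{depth_km:4.1f} km -> {650.0 / depth_km:5.0f} C/km")
```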

  12. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    Title 49 Transportation, volume 3 (2010-10-01): Other Regulations Relating to Transportation (Continued), PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE, Operation and Maintenance, § 195.406 Maximum operating pressure. (a) Except for...

  13. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
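
    For illustration, a minimal method-of-moments (Matheron) variogram estimator on synthetic data is sketched below; the robust estimators and residual maximum likelihood fits compared in the study are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(150, 2))   # 150 sampling points on a 50 m plot
z = rng.normal(size=150)                 # stand-in throughfall values

# All pairwise distances and squared differences, each pair counted once.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
dz2 = (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)
d, dz2 = d[iu], dz2[iu]

bins = np.arange(0, 30, 3)               # lag bins up to ~half the extent
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (d >= lo) & (d < hi)
    if m.any():
        gamma = 0.5 * dz2[m].mean()      # Matheron semivariance estimate
        print(f"lag {lo:4.1f}-{hi:4.1f} m: gamma = {gamma:.3f} (n = {m.sum()})")
```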

  14. Improving the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states using a noiseless amplifier

    International Nuclear Information System (INIS)

    Wang, Tianyi; Yu, Song; Zhang, Yi-Chen; Gu, Wanyi; Guo, Hong

    2014-01-01

    By employing a nondeterministic noiseless linear amplifier, we propose to increase the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states. With the covariance matrix transformation, the expression of secret key rate under reverse reconciliation is derived against collective entangling cloner attacks. We show that the noiseless linear amplifier can compensate the detrimental effect of the preparation noise with an enhancement of the maximum transmission distance and the noise resistance. - Highlights: • Noiseless amplifier is applied in noisy coherent state quantum key distribution. • Negative effect of preparation noise is compensated by noiseless amplification. • Maximum transmission distance and noise resistance are both enhanced

  15. An automatic sample changer for gamma spectrometry

    International Nuclear Information System (INIS)

    Andrews, D.J.

    1984-01-01

    An automatic sample changer for gamma spectrometry is described which is designed for large-volume, low radioactivity environmental samples of various sizes up to maximum dimensions 100 mm diameter x 60 mm high. The sample changer is suitable for use with most existing gamma spectrometry systems which utilize GeLi or NaI detectors in vertical mode, in conjunction with a pulse height analyzer having auto-cycle and suitable data output facilities; it is linked to a Nuclear Data ND 6620 computer-based analysis system. (U.K.)

  16. Increasing preferred step rate during running reduces plantar pressures.

    Science.gov (United States)

    Gerrard, James M; Bonanno, Daniel R

    2018-01-01

    Increasing preferred step rate during running is a commonly used strategy in the management of running-related injuries. This study investigated the effect of different step rates on plantar pressures during running. Thirty-two healthy runners ran at a comfortable speed on a treadmill at five step rates (preferred, ±5%, and ±10%). For each step rate, plantar pressure data were collected using the pedar-X in-shoe system. Compared to running with a preferred step rate, a 10% increase in step rate significantly reduced peak pressure (144.5±46.5 vs 129.3±51 kPa; P=.033) and maximum force (382.3±157.6 vs 334.0±159.8 N; P=.021) at the rearfoot, and reduced maximum force (426.4±130.4 vs 400.0±116.6 N; P=.001) at the midfoot. In contrast, a 10% decrease in step rate significantly increased peak pressure (144.5±46.5 vs 161.5±49.3 kPa; P=.011) and maximum force (382.3±157.6 vs 425.4±155.3 N; P=.032) at the rearfoot. Changing step rate by 5% provided no effect on plantar pressures, and no differences in plantar pressures were observed at the medial forefoot, lateral forefoot or hallux between the step rates. This study's findings indicate that increasing preferred step rate by 10% during running will reduce plantar pressures at the rearfoot and midfoot, while decreasing step rate by 10% will increase plantar pressures at the rearfoot. However, changing preferred step rate by 5% will provide no effect on plantar pressures, and forefoot pressures are unaffected by changes in step rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Correlation of resistance and thermogravimetric measurements of the Er1Ba2Cu3O9-δ superconductor to sample preparation techniques

    International Nuclear Information System (INIS)

    Lee, S.I.; Golben, J.P.; Song, Y.; Chen, X.D.; McMichael, R.D.; Gaines, J.R.

    1987-01-01

    The resistance and thermogravimetric analysis (TGA) behaviour of Er1Ba2Cu3O9-δ has been measured in the temperature range 27 °C to 920 °C. The heat treatments and oxygen flow rates simulated the actual sintering and annealing processes used in sample preparation. Evidence of a phase transition in Er1Ba2Cu3O9-δ near 680 °C is discussed, as well as the implications of the maximum oxygen uptake near 400 °C. The impact of sample preparation procedures on sample features is also discussed.

  18. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
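
    The rotation logic behind such measures can be sketched for a linear oscillator: the response to a rotated input is the same rotation of the component responses, so orientation-dependent demands follow from two response histories. The sketch below is a RotD-style computation with random stand-in histories, not the NGA definition of GMRotI50 (which rotates geometric means with a period-independent angle).

```python
import numpy as np

rng = np.random.default_rng(2)
rx = rng.normal(size=4000)   # stand-in oscillator response history, component x
ry = rng.normal(size=4000)   # stand-in oscillator response history, component y

thetas = np.radians(np.arange(0, 180))
peaks = np.array([np.max(np.abs(rx * np.cos(t) + ry * np.sin(t)))
                  for t in thetas])      # peak demand at each orientation

sa_max   = peaks.max()       # maximum demand over all orientations
sa_rot50 = np.median(peaks)  # median over orientations (a RotD50-like value)
print(sa_max / sa_rot50)     # ratio of the kind tabulated in the paper
```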

  19. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    Science.gov (United States)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth well-sampled symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for
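
    The pixellation model being simulated can be sketched directly: integrate a Gaussian Line Spread Function over finite pixels and inspect the result at 2 pixels per Full Width at Half Maximum for a few pixel phases (window size and phases below are arbitrary choices, not the paper's exact cases).

```python
import numpy as np
from scipy.special import erf

fwhm = 2.0                                    # sampling: 2 pixels per FWHM
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def pixel_values(center, npix=9):
    # Flux integrated over each unit-width pixel via the Gaussian CDF.
    edges = np.arange(npix + 1) - npix / 2.0
    cdf = 0.5 * (1.0 + erf((edges - center) / (sigma * np.sqrt(2.0))))
    return np.diff(cdf)

for phase in (0.0, 0.25, 0.5):                # source position within a pixel
    v = pixel_values(phase)
    centroid = np.sum((np.arange(9) - 4) * v) / np.sum(v)
    print(f"phase {phase:4.2f}: centroid estimate {centroid:+.4f} px")
```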

  20. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

    techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
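
    One standard way to attach such a binomial confidence interval to a sampled pass rate is the Wilson score interval, sketched below (the paper's exact construction may differ):

```python
import math

def wilson_ci(passed, total, z=1.96):
    # 95% Wilson score interval for a binomial proportion.
    p = passed / total
    denom = 1.0 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total)) / denom
    return center - half, center + half

lo, hi = wilson_ci(passed=92, total=100)   # e.g. 92% pass rate on 100 detectors
print(f"95% CI: {lo:.3f} - {hi:.3f}")
```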

  1. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    Science.gov (United States)

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
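
    A hedged sketch of one feature family described above: threshold an RSSI trace to segment bursts and use burst-duration statistics (streaming technologies such as DVB-T produce long quasi-continuous bursts, packetized ones short bursts). The threshold, margin and sampling rate are assumed values.

```python
import numpy as np

def burst_durations(rssi, noise_floor_db=-90.0, margin_db=6.0, fs_hz=10_000):
    # Mark samples above the noise floor plus a margin, then pair the
    # rising/falling edges to obtain burst durations in seconds.
    active = rssi > noise_floor_db + margin_db
    edges = np.diff(active.astype(int))
    starts, stops = np.where(edges == 1)[0], np.where(edges == -1)[0]
    n = min(len(starts), len(stops))
    return (stops[:n] - starts[:n]) / fs_hz

rng = np.random.default_rng(3)
trace = np.full(5000, -95.0) + rng.normal(0, 1, 5000)   # noise floor
trace[1000:1015] = -60    # short Wi-Fi-like packet
trace[3000:3800] = -55    # long DVB-T-like streaming burst
print(burst_durations(trace))   # [0.0015 0.08] seconds
```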

  2. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency

    Directory of Open Access Journals (Sweden)

    Sérgio Luiz Gomes Antunes

    2012-03-01

    Full Text Available Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  3. Rate of degradation of lambda-cyhalothrin and methomyl in grapes (Vitis vinifera L.).

    Science.gov (United States)

    Banerjee, Kaushik; Upadhyay, Ajay Kumar; Adsule, Pandurang G; Patil, Sangram H; Oulkar, Dasharath P; Jadhav, Deepak R

    2006-10-01

    Rates of degradation of lambda-cyhalothrin and methomyl residues in grape are reported. The dissipation behavior of both insecticides followed first-order rate kinetics with similar patterns at standard and double-dose applications. Residues of lambda-cyhalothrin were lost with pre-harvest intervals (PHI) of 12.0-12.5 and 15.0-15.5 days, corresponding to the applications at 25 and 50 g a.i. ha⁻¹, respectively. In the case of methomyl, residues were lost with PHI of 55.0 and 61.0 days, following applications at 1 and 2 kg a.i. ha⁻¹, respectively. The PHI, recommended on the basis of the experimental results, was shown to be effective in minimizing residue load of these insecticides below their maximum residue limits (MRLs) in vineyard samples.
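
    The reported PHIs follow directly from first-order kinetics: with C(t) = C0·exp(-kt), the interval needed to fall below an MRL is t = ln(C0/MRL)/k. The numbers below are illustrative, not the study's measured residues.

```python
import math

def phi_days(c0, mrl, half_life_days):
    # Time for a first-order residue decay to reach the MRL.
    k = math.log(2.0) / half_life_days
    return math.log(c0 / mrl) / k

# Assumed: initial residue 0.5 mg/kg, MRL 0.05 mg/kg, half-life 3.7 days.
print(phi_days(c0=0.5, mrl=0.05, half_life_days=3.7))   # ~12.3 days
```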

  4. Impact of dissolution on the sedimentary record of the Paleocene-Eocene thermal maximum

    Science.gov (United States)

    Bralower, Timothy J.; Kelly, D. Clay; Gibbs, Samantha; Farley, Kenneth; Eccles, Laurie; Lindemann, T. Logan; Smith, Gregory J.

    2014-09-01

    The input of massive amounts of carbon to the atmosphere and ocean at the Paleocene-Eocene Thermal Maximum (PETM; ~55.53 Ma) resulted in pervasive carbonate dissolution at the seafloor. At many sites this dissolution also penetrated into the underlying sediment column. The magnitude of dissolution at and below the seafloor, a process known as chemical erosion, and its effect on the stratigraphy of the PETM, are notoriously difficult to constrain. Here, we illuminate the impact of dissolution by analyzing the complete spectrum of sedimentological grain sizes across the PETM at three deep-sea sites characterized by a range of bottom water dissolution intensity. We show that the grain size spectrum provides a measure of the sediment fraction lost during dissolution. We compare these data with dissolution and other proxy records, electron micrograph observations of samples and lithology. The complete data set indicates that the two sites with slower carbonate accumulation, and less active bioturbation, are characterized by significant chemical erosion. At the third site, higher carbonate accumulation rates, more active bioturbation, and possibly winnowing have limited the impacts of dissolution. However, grain size data suggest that bioturbation and winnowing were not sufficiently intense to diminish the fidelity of isotopic and microfossil assemblage records.

  5. Sample preparation composite and replicate strategy for assay of solid oral drug products.

    Science.gov (United States)

    Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry

    2014-12-16

    In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample, of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay consists of several sources, such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or the number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and the number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
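
    A small sketch of the k/r trade-off under the usual variance decomposition, SE = sqrt(sigma_prep²/r + sigma_unit²/(k·r)); the standard deviations and SE target below are assumed, not the paper's derived limits.

```python
import math

sigma_prep, sigma_unit, se_max = 0.6, 2.0, 0.9   # % of label claim (assumed)

def se(k, r):
    # k dosage units composited per preparation, r replicate preparations.
    return math.sqrt(sigma_prep**2 / r + sigma_unit**2 / (k * r))

schemes = [(k, r) for k in range(1, 11) for r in range(1, 4)
           if se(k, r) <= se_max]
for k, r in schemes[:3]:
    print(f"k={k} units/composite, r={r} preparations -> SE = {se(k, r):.2f}%")
```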

  6. A rotation-symmetric, position-sensitive annular detector for maximum counting rates

    International Nuclear Information System (INIS)

    Igel, S.

    1993-12-01

    The Germanium Wall is a semiconductor detector system containing up to four annular position sensitive ΔE-detectors from high purity germanium (HPGe) planned to complement the BIG KARL spectrometer in COSY experiments. The first diode of the system, the Quirl-detector, has a two dimensional position sensitive structure defined by 200 Archimedes' spirals on each side with opposite orientation. In this way about 40000 pixels are defined. Since each spiral element detects almost the same number of events in an experiment the whole system can be optimized for maximal counting rates. This paper describes a test setup for a first prototype of the Quirl-detector and the results of test measurements with an α-source. The detector current and the electrical separation of the spiral elements were measured. The splitting of signals due to the spread of charge carriers produced by an incident ionizing particle on several adjacent elements was investigated in detail and found to be twice as high as expected from calculations. Its influence on energy and position resolution is discussed. Electronic crosstalk via signal wires and the influence of noise from the magnetic spectrometer has been tested under experimental conditions. Additionally, vacuum feedthroughs based on printed Kapton foils pressed between Viton seals were fabricated and tested successfully concerning their vacuum and thermal properties. (orig.)

  7. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions which are applied to resolve the motion redundancy

  8. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α'X and β'Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  9. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  10. Biased representation of disturbance rates in the roadside sampling frame in boreal forests: implications for monitoring design

    Directory of Open Access Journals (Sweden)

    Steven L. Van Wilgenburg

    2015-12-01

    Full Text Available The North American Breeding Bird Survey (BBS) is the principal source of data to inform researchers about the status of and trend for boreal forest birds. Unfortunately, little BBS coverage is available in the boreal forest, where increasing concern over the status of species breeding there has increased interest in northward expansion of the BBS. However, high disturbance rates in the boreal forest may complicate roadside monitoring. If the roadside sampling frame does not capture variation in disturbance rates because of either road placement or the use of roads for resource extraction, biased trend estimates might result. In this study, we examined roadside bias in the proportional representation of habitat disturbance via spatial data on forest "loss," forest fires, and anthropogenic disturbance. In each of 455 BBS routes, the area disturbed within multiple buffers away from the road was calculated and compared against the area disturbed in degree blocks and BBS strata. We found a nonlinear relationship between bias and distance from the road, suggesting forest loss and forest fires were underrepresented below 75 and 100 m, respectively. In contrast, anthropogenic disturbance was overrepresented at distances below 500 m and underrepresented thereafter. After accounting for distance from road, BBS routes were reasonably representative of the degree blocks they were within, with only a few strata showing biased representation. In general, anthropogenic disturbance is overrepresented in southern strata, and forest fires are underrepresented in almost all strata. Similar biases exist when comparing the entire road network and the subset sampled by BBS routes against the amount of disturbance within BBS strata; however, the magnitude of biases differed. Based on our results, we recommend that spatial stratification and rotating panel designs be used to spread limited BBS and off-road sampling effort in an unbiased fashion and that new BBS routes

  11. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
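
    The flavor of such a computation can be shown with a textbook-style approximation based on the variance of the log rate ratio; this is not the paper's sharp bounds, and all parameter values are assumptions for illustration.

```python
import math
from scipy.stats import norm

def nb_sample_size(r0, r1, kappa, t, alpha=0.05, power=0.9):
    # Per-group size for comparing event rates r0 and r1 with negative
    # binomial dispersion kappa and mean follow-up t per subject.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = (1.0 / (r0 * t) + kappa) + (1.0 / (r1 * t) + kappa)
    return math.ceil(z**2 * var / math.log(r1 / r0) ** 2)

# Rate reduced from 1.0 to 0.7 events/year, kappa = 0.5, 2 years follow-up.
print(nb_sample_size(r0=1.0, r1=0.7, kappa=0.5, t=2.0))   # subjects per arm
```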

  12. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  13. The Parent Version of the Preschool Social Skills Rating System: Psychometric Analysis and Adaptation with a German Preschool Sample

    Science.gov (United States)

    Hess, Markus; Scheithauer, Herbert; Kleiber, Dieter; Wille, Nora; Erhart, Michael; Ravens-Sieberer, Ulrike

    2014-01-01

    The Social Skills Rating System (SSRS) developed by Gresham and Elliott (1990) is a multirater, norm-referenced instrument measuring social skills and adaptive behavior in preschool children. The aims of the present study were (a) to test the factorial structure of the Parent Form of the SSRS for the first time with a German preschool sample (391…

  14. Quantitative Compton suppression spectrometry at elevated counting rates

    International Nuclear Information System (INIS)

    Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.

    1999-01-01

    For quantitative Compton suppression spectrometry the decrease of coincidence efficiency with counting rate should be made negligible, to avoid a virtual increase of the relative peak areas of coincident isomeric transitions with counting rate. To that aim, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55), which is combined with the spectrometer's LFC counting loss correction system. The normally unobservable chance coincidence rate is reconstructed from the rates of the germanium detector and the scintillation detector in an auxiliary coincidence unit, after destroying true coincidence by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of ⁶⁰Co at 14 kc/s and various samples of ¹³⁷Cs, up to aggregate counting rates of 180 kc/s for the well-type detector, and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of ⁶⁰Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%
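
    For scale, the classic accidental-coincidence estimate shows why an instrumental chance-coincidence correction matters at these rates; the resolving time below is an assumed value.

```python
# Two uncorrelated detectors with resolving time tau counting at rates
# r1 and r2 produce chance coincidences at roughly R = 2 * tau * r1 * r2.
tau = 1e-6                  # 1 us resolving time (assumed)
r_ge, r_bgo = 14e3, 180e3   # germanium and shield rates from the test (c/s)
print(2 * tau * r_ge * r_bgo)   # ~5e3 chance coincidences per second
```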

  15. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    Nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS), which is a substitute for oil fuel, based on the physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of the parent coal are three indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction than the traditional polynomial regression equation. The BP neural network model with the three input factors HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error, 0.40%, which is much lower than the 1.15% given by the traditional polynomial regression equation. (author)
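
    A schematic stand-in for the three-input model, using scikit-learn's MLPRegressor on synthetic data (scikit-learn does not implement Levenberg-Marquardt training, and the study's 37 measured coals are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = np.column_stack([
    rng.uniform(40, 110, 200),      # HGI
    rng.uniform(1, 30, 200),        # moisture, %
    rng.uniform(0.05, 0.30, 200),   # O/C atomic ratio
])
# Synthetic target: maximum solid concentration (wt%) with mild noise.
y = 75 + 0.08 * X[:, 0] - 0.5 * X[:, 1] - 40 * X[:, 2] + rng.normal(0, 0.4, 200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[85, 8, 0.12]]))   # predicted max solid concentration, wt%
```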

  16. Maximum total organic carbon limits at different DWPF melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1996-01-01

    Maximum total organic carbon (TOC) limits that are allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system were determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes

  17. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  18. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries (addendum - Serb. Astron. J. No. 183 (2011), 63)

    Directory of Open Access Journals (Sweden)

    Arbutina B.

    2012-01-01

    Full Text Available We recalculated the maximum white dwarf mass in ultra-compact X-ray binaries obtained in an earlier paper (Arbutina 2011), by taking into account the effects of a super-Eddington accretion rate on the stability of mass transfer. It is found that, although the value formally remains the same (under the assumed approximations), for white dwarf masses M2 ≳ 0.1 MCh the mass ratios are extremely low, implying that the result for Mmax is likely to have little if any practical relevance.

  19. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
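
    The Mean Energy Model mentioned above has a compact computational form: over states with energies E_i, the maximum-entropy distribution with a prescribed mean energy is the Gibbs family p_i ∝ exp(-β·E_i), with β fixed by the moment constraint. A minimal sketch with toy energies:

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # energies of four states (toy values)
target = 1.2                         # prescribed mean energy

def mean_energy(beta):
    w = np.exp(-beta * E)
    return float(np.dot(E, w) / w.sum())

# Solve the moment constraint for beta, then form the Gibbs weights.
beta = brentq(lambda b: mean_energy(b) - target, -10.0, 10.0)
p = np.exp(-beta * E)
p /= p.sum()
print(beta, p, np.dot(E, p))         # the weights reproduce the target mean
```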

  20. Multi-rate Poisson tree processes for single-locus species delimitation under maximum likelihood and Markov chain Monte Carlo.

    Science.gov (United States)

    Kapli, P; Lutteropp, S; Zhang, J; Kobert, K; Pavlidis, P; Stamatakis, A; Flouri, T

    2017-06-01

    In recent years, molecular species delimitation has become a routine approach for quantifying and classifying biodiversity. Barcoding methods are of particular importance in large-scale surveys as they promote fast species discovery and biodiversity estimates. Among those, distance-based methods are the most common choice as they scale well with large datasets; however, they are sensitive to similarity threshold parameters and they ignore evolutionary relationships. The recently introduced "Poisson Tree Processes" (PTP) method is a phylogeny-aware approach that does not rely on such thresholds. Yet, two weaknesses of PTP impact its accuracy and practicality when applied to large datasets; it does not account for divergent intraspecific variation and is slow for a large number of sequences. We introduce the multi-rate PTP (mPTP), an improved method that alleviates the theoretical and technical shortcomings of PTP. It incorporates different levels of intraspecific genetic diversity deriving from differences in either the evolutionary history or sampling of each species. Results on empirical data suggest that mPTP is superior to PTP and popular distance-based methods as it consistently yields more accurate delimitations with respect to the taxonomy (i.e., it identifies more taxonomic species and infers species numbers closer to the taxonomy). Moreover, mPTP does not require any similarity threshold as input. The novel dynamic programming algorithm attains a speedup of at least five orders of magnitude compared to PTP, allowing it to delimit species in large (meta-)barcoding data. In addition, Markov chain Monte Carlo sampling provides a comprehensive evaluation of the inferred delimitation in just a few seconds for millions of steps, independently of tree size. mPTP is implemented in C and is available for download at http://github.com/Pas-Kapli/mptp under the GNU Affero 3 license. A web-service is available at http://mptp.h-its.org.