WorldWideScience

Sample records for maximum variation sample

  1. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest annual flow volume of any river in Texas. With its headwaters located at the confluence of the Double Mountain and Salt forks in Stonewall County, the Brazos River, which has the third-longest flowline in the state, traverses narrow valleys in the rolling topography of west Texas and rugged terrain in the mainly featureless plains of central Texas before discharging into the Gulf of Mexico. Along its major flow network, the river basin spans six climate regions defined by the National Oceanic and Atmospheric Administration (NOAA) on the basis of shared attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes. Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required to analyze precipitation regimes along the geographically diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield method of Lan et al. (2017). The method uses a standardized variable that describes the maximum deviation of a sample from its average, scaled by the sample standard deviation. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. By calculating the stable data size required for statistically reliable results, this study also quantified the uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
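    A minimal numerical sketch of the frequency-factor calculation this family of methods rests on: the classical Hershfield form PMP = mean + K_m x standard deviation, where K_m is the standardized maximum deviation. The revision of Lan et al. (2017) refines this factor in ways not reproduced here; the function name and the toy series below are illustrative only.

    ```python
    import numpy as np

    def hershfield_pmp(annual_max_24h):
        """Classical Hershfield-type PMP estimate for a series of annual
        24-hour precipitation maxima (mm). The frequency factor k_m is the
        standardized maximum deviation: the largest observation expressed in
        standard deviations above the mean of the remaining sample."""
        x = np.asarray(annual_max_24h, dtype=float)
        i_max = np.argmax(x)
        rest = np.delete(x, i_max)            # series with the maximum removed
        k_m = (x[i_max] - rest.mean()) / rest.std(ddof=1)
        return x.mean() + k_m * x.std(ddof=1)

    # toy series of annual 24-h maxima (mm), for illustration only
    series = np.array([95., 110., 87., 132., 101., 145., 99., 120., 108., 180.])
    print(f"PMP estimate: {hershfield_pmp(series):.1f} mm")
    ```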

  2. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
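    The modelling idea lends itself to a compact illustration: an individual's sparse, irregular measurements are projected onto a mean curve plus a few principal components, and the component scores fall out of a least-squares fit (the maximum likelihood solution under i.i.d. Gaussian errors). Everything below (the mean curve, the components, the measurements) is synthetic, not the Carlsschule reference data.

    ```python
    import numpy as np

    ages = np.linspace(6, 23, 200)                      # age grid (years)
    mean_curve = 800 + 60 * ages - 1.1 * ages**2        # synthetic mean height (mm)
    pcs = np.vstack([np.ones_like(ages),                # PC1: overall size shift
                     (ages - 14.5) / 8.5]).T            # PC2: tempo/tilt component

    def fit_scores(obs_ages, obs_heights):
        """Least-squares estimate of PC scores from incomplete measurements."""
        idx = np.searchsorted(ages, obs_ages)           # nearest grid points
        A, r = pcs[idx], obs_heights - mean_curve[idx]
        scores, *_ = np.linalg.lstsq(A, r, rcond=None)
        return mean_curve + pcs @ scores                # modelled full curve

    # a boy measured at irregular intervals
    curve = fit_scores(np.array([7.2, 9.8, 13.1, 17.4]),
                       np.array([1120., 1275., 1430., 1640.]))
    print(f"modelled height at 18 y: {np.interp(18, ages, curve):.0f} mm")
    ```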

  3. Maximum weight of greenhouse effect to global temperature variation

    International Nuclear Information System (INIS)

    Sun, Xian; Jiang, Chuangye

    2007-01-01

    Full text: The global average temperature has risen by 0.74 °C since the late 19th century. Many studies have concluded that the observed warming of the last 50 years may be attributed to increasing concentrations of anthropogenic greenhouse gases, but some scientists hold a different point of view: global climate change is affected not only by anthropogenic activities but also by natural factors within the climate system. How much weight does CO2's greenhouse effect carry in global temperature variation? Will the global climate continue warming, or will it cool, over the next 20 years? These are two hot topics in global climate change research. A multi-timescale analysis method, empirical mode decomposition (EMD), is used to diagnose the global annual mean land-surface air temperature dataset provided by the IPCC and the atmospheric CO2 content provided by the Carbon Dioxide Information Analysis Center (CDIAC) for 1881-2002. The results show that global temperature variation contains quasi-periodic oscillations on four timescales (3 yr, 6 yr, 20 yr and 60 yr, respectively) plus a century-scale warming trend. The variance contributions of IMF1-IMF4 and the trend are 17.55%, 11.34%, 6.77%, 24.15% and 40.19%, respectively. The trend and the quasi-60 yr oscillation of temperature variation are the most prominent; CO2's greenhouse effect on global temperature variation is mainly the century-scale trend. The contribution of CO2 concentration to global temperature variability is not more than 40.19%, whereas the remaining 59.81% of global temperature variation is due to non-greenhouse effects. Therefore, it is necessary to re-examine the dominant factors that induce global climate change. It is also noted that, based on the 20 yr and 60 yr oscillations, the global temperature is expected to begin decreasing in the next 20 years. If the present CO2 concentration is maintained, the greenhouse effect will be too small to counteract the natural variation in global climate cooling in the next 20
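    A sketch of the variance-contribution bookkeeping described above, assuming the IMFs and residual trend have already been extracted (for example with the third-party PyEMD package via EMD().emd(signal)). The normalization by the component-variance sum is a simplification, and the series below are toy data.

    ```python
    import numpy as np

    def variance_contributions(imfs, trend):
        """Percent of total component variance carried by each IMF and the trend."""
        components = np.vstack([imfs, trend])
        var = components.var(axis=1)
        return 100.0 * var / var.sum()

    # toy example: two oscillatory modes plus a warming trend
    t = np.arange(122)                                   # e.g. years 1881-2002
    imfs = np.vstack([0.1 * np.sin(2 * np.pi * t / 3),   # quasi-3-yr mode
                      0.2 * np.sin(2 * np.pi * t / 60)]) # quasi-60-yr mode
    trend = 0.005 * t                                    # century-scale trend
    print(variance_contributions(imfs, trend).round(1))
    ```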

  4. A general maximum entropy framework for thermodynamic variational principles

    International Nuclear Information System (INIS)

    Dewar, Roderick C.

    2014-01-01

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law

  5. A general maximum entropy framework for thermodynamic variational principles

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.
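    The central identity is easy to verify numerically. The sketch below (not Dewar's full formalism) uses a three-state canonical distribution as the MaxEnt reference p and checks that the Kullback-Leibler divergence D(p-hat||p), which is what minimizing Ψ amounts to, is non-negative and vanishes exactly at p-hat = p.

    ```python
    import numpy as np

    def kl(p_hat, p):
        """Kullback-Leibler divergence D(p_hat || p) for discrete distributions."""
        return np.sum(p_hat * np.log(p_hat / p))

    E = np.array([0.0, 1.0, 2.0])                 # energies of three states
    beta = 1.2
    p = np.exp(-beta * E); p /= p.sum()           # MaxEnt (canonical) distribution

    rng = np.random.default_rng(0)
    for _ in range(3):
        q = rng.dirichlet(np.ones(3))             # arbitrary trial distribution
        print(f"D(q||p) = {kl(q, p):.4f}  (>= 0)")
    print(f"D(p||p) = {kl(p, p):.4f}  (minimum, = 0)")
    ```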

  6. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

    The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by ground Neutron Monitors (NM) shows a sinusoidal pattern with an amplitude of 1~2% of the daily mean. We carried out a statistical study on tendencies of the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and of location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come about 2~3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum later by 2~3 hours than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR depends on the interplanetary magnetic field, which varies with solar activity, and on the cut-off rigidity, which varies with geographic latitude.

  7. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  8. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.
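    The variogram half of the analysis reduces to familiar bookkeeping, sketched below: squared half-differences of an observed property are averaged within separation-distance bins. The REML variance-component fit itself would need a mixed-model package and is not reproduced here; the coordinates and the response are simulated stand-ins.

    ```python
    import numpy as np

    def empirical_variogram(xy, y, bin_edges):
        """First estimate of the variogram from 2-D locations xy and values y."""
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        g = 0.5 * (y[:, None] - y[None, :]) ** 2
        iu = np.triu_indices(len(y), k=1)          # each pair counted once
        d, g = d[iu], g[iu]
        idx = np.digitize(d, bin_edges)
        return np.array([g[idx == k].mean() for k in range(1, len(bin_edges))])

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 300, size=(108, 2))        # 108 locations in a 9 ha field
    y = np.sin(xy[:, 0] / 40) + 0.3 * rng.standard_normal(108)  # e.g. sorption
    print(empirical_variogram(xy, y, np.array([0, 20, 40, 60, 120, 300])))
    ```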

  9. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  10. Intraspecific Variation in Maximum Ingested Food Size and Body Mass in Varecia rubra and Propithecus coquereli

    Directory of Open Access Journals (Sweden)

    Adam Hartstone-Rose

    2011-01-01

    In a recent study, we quantified the scaling of ingested food size (Vb)—the maximum size at which an animal consistently ingests food whole—and found that Vb scaled isometrically between species of captive strepsirrhines. The current study examines the relationship between Vb and body size within species, with a focus on the frugivorous Varecia rubra and the folivorous Propithecus coquereli. We found no overlap in Vb between the species (all V. rubra ingested larger pieces of food relative to those eaten by P. coquereli), and least-squares regression of Vb and three different measures of body mass showed no scaling relationship within each species. We believe that this lack of relationship results from the relatively narrow intraspecific body size variation and the seemingly patternless individual variation in Vb within species, and we take this study as further evidence that general scaling questions are best examined interspecifically rather than intraspecifically.

  11. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, limited data, and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  12. On the maximum and minimum of two modified Gamma-Gamma variates with applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2014-04-01

    In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.

  13. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    Science.gov (United States)

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics, such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production, are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  14. Perspective: Maximum caliber is a general variational principle for dynamical systems

    Science.gov (United States)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  15. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, limited data, and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, because it does not require any additional information or assumptions. Finally, two optimization models are presented that can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
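    The maximum entropy step admits a compact numerical sketch: if only a mean and standard deviation of an uncertain parameter are known, the maximum entropy density on a bounded support has the form p(x) proportional to exp(l1*x + l2*x^2), and the multipliers are solved for numerically. The support and moments below are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import root

    mu, sd = 0.9, 0.2                             # known moments (illustrative)
    x = np.linspace(0.0, 2.0, 400); dx = x[1] - x[0]

    def density(lam):
        w = np.exp(lam[0] * x + lam[1] * x ** 2)
        return w / (w.sum() * dx)                 # normalized on the grid

    def moment_gap(lam):
        p = density(lam)
        return [(x * p).sum() * dx - mu,
                (x ** 2 * p).sum() * dx - (mu ** 2 + sd ** 2)]

    lam0 = [mu / sd ** 2, -0.5 / sd ** 2]         # exact for an untruncated Gaussian
    lam = root(moment_gap, x0=lam0).x             # polish multipliers numerically
    p = density(lam)
    mean = (x * p).sum() * dx
    print(f"mean = {mean:.3f}, sd = {np.sqrt((x**2*p).sum()*dx - mean**2):.3f}")
    ```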

  16. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  17. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.

  18. Impacts of Land Cover and Seasonal Variation on Maximum Air Temperature Estimation Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Yulin Cai

    2017-03-01

    Daily maximum surface air temperature (Tamax) is a crucial factor for understanding complex land surface processes under rapid climate change. Remote detection of Tamax has widely relied on the empirical relationship between air temperature and land surface temperature (LST), a product derived from remote sensing. However, little is known about how such a relationship is affected by the high heterogeneity in landscapes and the dynamics of seasonality. This study aims to advance our understanding of the roles of land cover and seasonal variation in the estimation of Tamax using the MODIS (Moderate Resolution Imaging Spectroradiometer) LST product. We developed statistical models to link Tamax and LST in the middle and lower reaches of the Yangtze River in China for six major land-cover types (i.e., forest, shrub, water, impervious surface, cropland, and grassland) and two seasons (i.e., growing season and non-growing season). Results show that the performance of modeling the Tamax-LST relationship was highly dependent on land cover and seasonal variation. Estimating Tamax over grasslands and water bodies achieved superior performance, while uncertainties were high over forested lands that contained extensive heterogeneity in species types, plant structure, and topography. We further found that all the land-cover-specific models developed for the plant non-growing season outperformed the corresponding models developed for the growing season. Discrepancies in model performance mainly occurred in the vegetated areas (forest, cropland, and shrub), suggesting an important role of plant phenology in defining the statistical relationship between Tamax and LST. For impervious surfaces, the challenge of capturing the high spatial heterogeneity in urban settings using the low-resolution MODIS data made Tamax estimation a difficult task, which was especially true in the growing season.

  19. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
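    The estimator being described is the generalized-least-squares (minimum-variance) weighted mean of correlated estimates, which is easy to state in a few lines; the covariance matrix and estimates below are made-up stand-ins for SAM-CE/VIM output.

    ```python
    import numpy as np

    def ml_combine(x, S):
        """ML / minimum-variance combination of correlated estimates x of a
        common quantity, with covariance matrix S (GLS weighted mean)."""
        w = np.linalg.solve(S, np.ones_like(x))   # S^{-1} 1
        est = w @ x / w.sum()                     # (1' S^-1 x) / (1' S^-1 1)
        return est, 1.0 / w.sum()                 # estimate and its variance

    # toy numbers: three correlated Monte Carlo eigenvalue estimates
    x = np.array([1.002, 0.998, 1.004])
    S = np.array([[4.0, 1.5, 1.0],
                  [1.5, 3.0, 1.2],
                  [1.0, 1.2, 5.0]]) * 1e-6
    est, var = ml_combine(x, S)
    print(f"combined eigenvalue = {est:.5f} +/- {np.sqrt(var):.5f}")
    ```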

  20. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
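    The iterative selection loop can be sketched compactly. The simplification below replaces the MaxEnt model output with plain Euclidean dissimilarity in standardized environmental space, which conveys the greedy pick-the-most-dissimilar-site structure; the candidate grid is random toy data.

    ```python
    import numpy as np

    def select_sites(env, n_sites):
        """Greedily pick the candidate cell farthest (in standardized
        environmental space) from the sites chosen so far."""
        z = (env - env.mean(0)) / env.std(0)
        chosen = [0]                               # seed with an arbitrary cell
        for _ in range(n_sites - 1):
            d = np.linalg.norm(z[:, None, :] - z[None, chosen, :], axis=-1)
            chosen.append(int(d.min(axis=1).argmax()))  # most dissimilar cell
        return chosen

    rng = np.random.default_rng(2)
    # columns: mean temperature, precipitation, elevation, vegetation class
    env = np.column_stack([rng.normal(10, 3, 500), rng.gamma(2, 200, 500),
                           rng.uniform(300, 2000, 500), rng.integers(0, 5, 500)])
    print(select_sites(env, 8))
    ```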

  1. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm due to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Errors in projection matrix parameters of up to 1° in projection angle remain within the tolerance level. Single defect pixels produce ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. (paper)
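    A heavily reduced sketch of the two ingredients of such a reconstruction, a data-fidelity gradient step and a total-variation penalty: for brevity the forward model is the identity (pure denoising), and a smoothed TV with a quadratic fidelity term stands in for the full Poisson ML objective of the paper.

    ```python
    import numpy as np

    def tv_grad(u, eps=1e-3):
        """Gradient of a smoothed isotropic TV penalty, -div(grad u / |grad u|)."""
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        div_x = np.diff(gx / mag, axis=0, prepend=np.zeros((1, u.shape[1])))
        div_y = np.diff(gy / mag, axis=1, prepend=np.zeros((u.shape[0], 1)))
        return -(div_x + div_y)

    def reconstruct(b, lam=0.15, step=0.2, iters=200):
        u = b.copy()
        for _ in range(iters):
            u -= step * ((u - b) + lam * tv_grad(u))   # fidelity + TV descent
        return u

    rng = np.random.default_rng(3)
    phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
    noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
    print(f"RMS error before/after: {np.sqrt(((noisy-phantom)**2).mean()):.3f} / "
          f"{np.sqrt(((reconstruct(noisy)-phantom)**2).mean()):.3f}")
    ```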

  2. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
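    The worst-case search has a simple Monte Carlo analogue for a single comparison (a simplification of the paper's multi-armed setting): under H0, pick the second-stage size from a permitted set so as to maximize the conditional rejection probability of the naive pooled z-test, then average. Numbers below are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    alpha, n1, n2_options = 0.025, 50, np.array([10, 50, 200])
    zc = norm.ppf(1 - alpha)

    def conditional_error(z1, n2):
        """P(naive pooled z-test rejects | stage-1 statistic z1), under H0,
        when stage 2 has size n2 and the final test ignores the adaptation."""
        u = (zc * np.sqrt(n1 + n2) - np.sqrt(n1) * z1) / np.sqrt(n2)
        return norm.sf(u)

    rng = np.random.default_rng(4)
    z1 = rng.standard_normal(200_000)              # stage-1 statistic under H0
    # adversarial rule: choose the n2 that maximizes the conditional error
    ce = np.max([conditional_error(z1, n2) for n2 in n2_options], axis=0)
    print(f"nominal alpha = {alpha}, worst-case type 1 error = {ce.mean():.4f}")
    ```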

  3. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return

  4. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  5. Intrapopulational body size variation and cranial capacity variation in Middle Pleistocene humans: the Sima de los Huesos sample (Sierra de Atapuerca, Spain).

    Science.gov (United States)

    Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I

    1998-05-01

    A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shaft; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample only the humeral midshaft perimeter shows an unusually high variation (only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the entire European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either cranial or postcranial evidence from Sima de los Huesos.
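    The resampling logic is easy to sketch: draw groups of the fossil sample's size from a modern reference distribution and ask how often the resampled maximum ratio reaches the observed one. All numbers below are illustrative placeholders, not the Sima de los Huesos data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    modern = rng.normal(1350, 120, size=500)       # modern cranial capacities (cc)
    observed_ratio = 1.35                          # max/min ratio in fossil sample
    n_fossils = 15

    # bootstrap: resample fossil-sized groups from the modern reference
    boot = modern[rng.integers(0, modern.size, size=(20_000, n_fossils))]
    ratios = boot.max(axis=1) / boot.min(axis=1)
    print(f"P(ratio >= observed) = {(ratios >= observed_ratio).mean():.3f}")
    ```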

  6. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the nations in the world) and two international bodies, the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack MRLs for common pesticides in many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data sets indicated that the power-transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs across nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
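    Both computational steps have one-line cores, sketched below with made-up numbers: a TMDI as the MRL-weighted sum of intake rates divided by body weight, and a Box-Cox transformation with the optimal lambda chosen by maximum likelihood (scipy's boxcox does this directly).

    ```python
    import numpy as np
    from scipy.stats import boxcox

    mrl = np.array([0.05, 0.5, 1.0])          # MRLs (mg/kg) in three commodities
    intake = np.array([0.30, 0.15, 0.05])     # intake rates (kg/day)
    body_weight = 60.0                        # kg
    tmdi = (mrl * intake).sum() / body_weight # mg/kg bw/day
    print(f"TMDI = {tmdi:.5f} mg/kg bw/day")

    rng = np.random.default_rng(6)
    tmdi_nations = rng.lognormal(-4, 1, 114)  # right-skewed TMDIs for 114 nations
    transformed, lam = boxcox(tmdi_nations)   # ML choice of the optimal lambda
    print(f"optimal Box-Cox lambda = {lam:.2f}")
    ```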

  7. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10^-15 s^-1, and temperature measurements from four drill sites within the area indicate average, near-surface geothermal gradients of up to 150 °C km^-1 throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km^-1 within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km^-1 below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes and therefore high average geothermal gradients tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.

  8. The Hengill geothermal area, Iceland: variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G.R.

    1995-01-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. -from Author

  9. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps in overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  10. Brief communication: Is variation in the cranial capacity of the Dmanisi sample too high to be from a single species?

    Science.gov (United States)

    Lee, Sang-Hee

    2005-07-01

    This study uses data resampling to test the null hypothesis that the degree of variation in the cranial capacity of the Dmanisi hominid sample is within the range variation of a single species. The statistical significance of the variation in the Dmanisi sample is examined using simulated distributions based on comparative samples of modern humans, chimpanzees, and gorillas. Results show that it is unlikely to find the maximum difference observed in the Dmanisi sample in distributions of female-female pairs from comparative single-species samples. Given that two sexes are represented, the difference in the Dmanisi sample is not enough to reject the null hypothesis of a single species. Results of this study suggest no compelling reason to invoke multiple taxa to explain variation in the cranial capacity of the Dmanisi hominids. (c) 2004 Wiley-Liss, Inc

  11. Electron density variations in the F2 layer maximum during solar activity cycle

    International Nuclear Information System (INIS)

    Besprozvannaya, A.S.; Kozina, P.E.; AN Kazakhskoj SSR, Alma-Ata. Sektor Ionosfery)

    1988-01-01

    The ratio R, characterizing for the F2 layer the relation of hourly median values at solar activity maximum to those at solar activity minimum, is calculated from monthly average values of F2-layer critical frequencies for June, October and December of 1958 and 1964. Latitudinal-temporal distributions of R are plotted for different seasons from the data of northern-hemisphere western and eastern stations located within the latitude interval Φ' = 35-70°. The following peculiarities of the relation between F2-layer ionization and solar activity are pointed out. In daytime hours, the winter behaviour is characterized by an increase of the ionization gain rate with increasing latitude, while the summer behaviour shows the opposite regularity. In night-time hours, R takes abnormally low values (~1.2) at latitudes south of the ionospheric trough and poleward of it. For all three seasons, periods of maximal ionization gain rate are observed during the day: at night in summer, and in the hours after sunset in winter and the equinoctial months. A quantitative explanation of the detected peculiarities is given in terms of present-day concepts of F2-layer formation mechanisms.

  12. The Sidereal Time Variations of the Lorentz Force and Maximum Attainable Speed of Electrons

    Science.gov (United States)

    Nowak, Gabriel; Wojtsekhowski, Bogdan; Roblin, Yves; Schmookler, Barak

    2016-09-01

    The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab produces electrons that orbit through a known magnetic system. The electron beam's momentum can be determined through the radius of the beam's orbit. This project compares the beam orbit's radius while travelling in a transverse magnetic field with theoretical predictions from special relativity, which predict a constant beam orbit radius. Variations in the beam orbit's radius are found by comparing the beam's momentum entering and exiting a magnetic arc. Beam position monitors (BPMs) provide the information needed to calculate the beam momentum. Multiple BPMs are included in the analysis and fitted using the method of least squares to decrease statistical uncertainty. Preliminary results from data collected over a 24 hour period show that the relative momentum change was less than 10^-4. Further study will be conducted including larger time spans and stricter cuts applied to the BPM data. The data from this analysis will be used in a larger experiment attempting to verify special relativity. While the project is not traditionally nuclear physics, it involves the same technology (the CEBAF accelerator) and the same methods (ROOT) as a nuclear physics experiment. DOE SULI Program.

  13. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
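    Under length-biased sampling the observed density is x·f(x)/E[X], so the Weibull log-likelihood gains a log(x) term and loses the log of the mean; a direct numerical MLE under that weighting is sketched below with simulated data (the moment estimators discussed in the paper are a natural alternative and are not shown).

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gamma as gamma_fn

    def neg_loglik(theta, x):
        """Negative log-likelihood of a length-biased Weibull sample;
        theta holds log(shape), log(scale) to keep both positive."""
        k, lam = np.exp(theta)
        logf = np.log(k / lam) + (k - 1) * np.log(x / lam) - (x / lam) ** k
        log_mean = np.log(lam * gamma_fn(1 + 1 / k))      # E[X] for Weibull
        return -np.sum(np.log(x) + logf - log_mean)

    rng = np.random.default_rng(7)
    k_true, lam_true = 2.0, 10.0
    pool = lam_true * rng.weibull(k_true, 200_000)
    x = rng.choice(pool, size=500, p=pool / pool.sum())   # length-biased draw
    theta = minimize(neg_loglik, x0=np.log([1.5, 8.0]), args=(x,)).x
    print(f"shape = {np.exp(theta[0]):.2f}, scale = {np.exp(theta[1]):.2f}")
    ```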

  14. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time
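    The effect is easy to demonstrate on synthetic data: annual maxima of a full 10-min series are compared with maxima of the same series disjunctly sampled every 3 hours. The i.i.d. toy series below ignores the autocorrelation of real wind, so it only illustrates the direction of the bias.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_years, per_year = 20, 52_560                 # 10-min records per year
    wind = rng.weibull(2.0, (n_years, per_year)) * 8.0
    full_maxima = wind.max(axis=1)
    disjunct_maxima = wind[:, ::18].max(axis=1)    # every 18th record = 3-hourly
    print(f"mean annual max: full = {full_maxima.mean():.2f} m/s, "
          f"3-hourly = {disjunct_maxima.mean():.2f} m/s")
    ```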

  15. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are coarser than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  16. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Climate, one of the major controlling factors for the well-being of the world's inhabitants, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, such as the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variations in rainy days were also analysed to check for any significant changes.
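    A sketch of the variability statistics named above, using common textbook definitions (an assumption on our part, since the abstract does not spell the formulas out): CV as standard deviation over mean, RV as mean absolute deviation over mean, and PIV as mean year-to-year change over mean, all in percent.

    ```python
    import numpy as np

    def variability_stats(annual_rain):
        """CV, RV and PIV (in %) under the assumed definitions above."""
        x = np.asarray(annual_rain, dtype=float)
        cv = 100.0 * x.std(ddof=1) / x.mean()
        rv = 100.0 * np.mean(np.abs(x - x.mean())) / x.mean()
        piv = 100.0 * np.mean(np.abs(np.diff(x))) / x.mean()
        return cv, rv, piv

    rain = np.array([3334., 4189., 3650., 5120., 4422., 3980., 4750.])  # mm/yr
    cv, rv, piv = variability_stats(rain)
    print(f"CV = {cv:.1f}%  RV = {rv:.1f}%  PIV = {piv:.1f}%")
    ```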

  17. Variation in rank abundance replicate samples and impact of clustering

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank

  18. Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions

    Science.gov (United States)

    Noll, Jennifer; Shaughnessy, J. Michael

    2012-01-01

    Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…

  19. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  20. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, PG(b, z), is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the PG(1, z) distribution. We build two new samplers that offer improved performance when sampling from the PG(b, z) distribution and b is not unity.
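    For b > 0 there is also a simple, if slow, truncated infinite-sum sampler based on the representation PG(b, z) = (1/(2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2/(4*pi^2)) with g_k ~ Gamma(b, 1); the sketch below uses it as an approximate baseline (the samplers in the paper are exact and faster). The function name is ours.

    ```python
    import numpy as np

    def pg_approx(b, z, n_draws, n_terms=200, rng=None):
        """Approximate PG(b, z) draws via a truncated infinite-sum representation."""
        rng = rng or np.random.default_rng()
        k = np.arange(1, n_terms + 1)
        denom = (k - 0.5) ** 2 + z ** 2 / (4.0 * np.pi ** 2)
        g = rng.gamma(b, 1.0, size=(n_draws, n_terms))    # g_k ~ Gamma(b, 1)
        return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

    draws = pg_approx(b=2.0, z=1.5, n_draws=100_000, rng=np.random.default_rng(9))
    # check against the known mean E[PG(b, z)] = (b/(2z)) * tanh(z/2)
    print(f"sample mean = {draws.mean():.4f}, "
          f"theory = {2.0 / (2 * 1.5) * np.tanh(1.5 / 2):.4f}")
    ```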

  1. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One of the efforts to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression models under the MLE method and under Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both comparisons are performed by simulation, under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSEs than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
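    Firth's approach amounts to penalizing the log-likelihood with half the log-determinant of the Fisher information (a Jeffreys prior), which keeps the estimates finite even under separation, where plain MLE diverges. A minimal sketch for the probit model follows, on a deliberately separated toy dataset; function names and data are ours.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def firth_probit_negloglik(beta, X, y):
        """Negative Firth-penalized probit log-likelihood."""
        eta = X @ beta
        p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        w = norm.pdf(eta) ** 2 / (p * (1 - p))            # probit Fisher weights
        _, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
        return -(loglik + 0.5 * logdet)                   # Firth penalty

    # completely separated toy data: x < 0 gives y = 0, x > 0 gives y = 1
    x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
    X = np.column_stack([np.ones_like(x), x])
    y = (x > 0).astype(float)
    fit = minimize(firth_probit_negloglik, x0=np.zeros(2), args=(X, y))
    print(f"Firth estimates (finite despite separation): {fit.x.round(3)}")
    ```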

  2. Particulate organic nitrates: Sampling and night/day variation

    DEFF Research Database (Denmark)

    Nielsen, T.; Platz, J.; Granby, K.

    1998-01-01

    Atmospheric day and night concentrations of particulate organic nitrates (PON) and several other air pollutants were measured in the summer of 1995 over an open-land area in Denmark. The sampling of PON was evaluated by comparing 24 h samples with two sets of 12 h samples. These results indicate that the observed low contribution of PON to NOy is real and not the result of an extensive loss during the sampling. Empirical relationships between the vapour pressure and chemical formula of organic compounds were established in order to evaluate the gas/particle distribution of organic nitrates. A positive

  3. Releasable activity and maximum permissible leakage rate within a transport cask of Tehran Research Reactor fuel samples

    Directory of Open Access Journals (Sweden)

    Rezaeian Mahdi

    2015-01-01

    Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.

  4. Temporal and spatial variation of maximum wind speed days during the past 20 years in major cities of Xinjiang

    Science.gov (United States)

    Baidourela, Aliya; Jing, Zhen; Zhayimu, Kahaer; Abulaiti, Adili; Ubuli, Hakezi

    2018-04-01

    Wind erosion and sandstorms occur in the neighborhood of exposed dust sources. Wind erosion and desertification increase the frequency of dust storms, deteriorate air quality, and damage the ecological environment and agricultural production. The Xinjiang region has a relatively fragile ecological environment. Therefore, the study of the characteristics of maximum wind speed and wind direction in this region is of great significance to disaster prevention and mitigation, the management of activated dunes, and the sustainable development of the region. Based on the latest data of 71 sites in Xinjiang, this study explores the temporal evolution and spatial distribution of maximum wind speed in Xinjiang from 1993 to 2013, and highlights the distribution of annual and monthly maximum wind speed and the characteristics of wind direction in Xinjiang. Between 1993 and 2013, Ulugchat County exhibited the highest number of days with the maximum wind speed (> 17 m/s), while Wutian exhibited the lowest number. In Xinjiang, 1999 showed the highest number of maximum wind speed days (257 days), while 2013 showed the lowest number (69 days). Spring and summer wind speeds were greater than those in autumn and winter. There were obvious differences in the direction of maximum wind speed in major cities and counties of Xinjiang. East of the Tianshan Mountains, maximum wind speeds are mainly directed southeast and northeast. North and south of the Tianshan Mountains, they are mainly directed northwest and northeast, while west of the Tianshan Mountains, they are mainly directed southeast and northwest.

  5. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when ...

  6. Study of the variation of maximum beam size with quadrupole gradient in the FMIT drift tube linac

    International Nuclear Information System (INIS)

    Boicourt, G.P.; Jameson, R.A.

    1981-01-01

    The sensitivity of maximum beam size to input mismatch is studied as a function of quadrupole gradient in a short, high-current, drift-tube linac (DTL), for two prescriptions: constant phase advance with constant filling factor; and constant strength with constant-length quads. Numerical study using PARMILA shows that the choice of quadrupole strength that minimizes the maximum transverse size of the matched beam through subsequent cells of the linac tends to be most sensitive to input mismatch. However, gradients exist nearby that result in almost-as-small beams over a suitably broad range of mismatch. The study was used to choose the initial gradient for the DTL portion of the Fusion Material Irradiation Test (FMIT) linac. The matching required across quad groups is also discussed.

  7. Periodic variations of Auger energy maximum distribution following He2+ + H2 collisions: A complete analogy with photon interferences

    International Nuclear Information System (INIS)

    Cholet, M.; Minerbe, F.; Oliviero, G.; Pestel, V.; Frémont, F.

    2014-01-01

    Highlights: • Young-type interferences with electrons are revisited. • Oscillations in the angular distribution of the energy maximum of Auger spectra are evidenced. • Model calculations are in good agreement with the experimental result. • The position of the Auger spectra oscillates in counterphase with the total intensity. - Abstract: In this article, we present experimental evidence of a particular electron-interference phenomenon. The electrons are provided by autoionization of 2l2l′ doubly excited He atoms following the capture of H2 electrons by a slow He2+ incoming ion. We observe that the position of the energy maximum of the Auger structures oscillates with the detection angle. Calculation based on a simple model that includes interferences clearly shows that the present oscillations are due to Young-type interferences caused by electron scattering on both H+ centers.

  8. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation, and in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening ...) ... (p = ...9×10-7) and sodium (p = 8.7×10-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant ...

  9. High Levels of Sample-to-Sample Variation Confound Data Analysis for Non-Invasive Prenatal Screening of Fetal Microdeletions.

    Directory of Open Access Journals (Sweden)

    Tianjiao Chu

    Our goal was to test the hypothesis that inter-individual genomic copy number variation in control samples is a confounding factor in the non-invasive prenatal detection of fetal microdeletions via the sequence-based analysis of maternal plasma DNA. The Database of Genomic Variants (DGV) was used to determine the "Genomic Variants Frequency" (GVF) for each 50 kb region in the human genome. Whole genome sequencing of fifteen karyotypically normal maternal plasma and six CVS DNA control samples was performed. The coefficient of variation of relative read counts (cv.RTC) for these samples was determined for each 50 kb region. Maternal plasma from two pregnancies affected with a chromosome 5p microdeletion was also sequenced and analyzed using the GCREM algorithm. We found a strong correlation between high variance in read counts and GVF amongst controls. Consequently, we were unable to confirm the presence of the microdeletion via sequencing of maternal plasma samples obtained from two sequential affected pregnancies. Caution should be exercised when performing NIPT for microdeletions. It is vital to develop our understanding of the factors that impact the sensitivity and specificity of these approaches. In particular, benign copy number variation amongst controls is a major confounder, and its effects should be corrected bioinformatically.
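
    A cv.RTC-style computation is a few lines of NumPy (schematic; the counts, bin count and flagging threshold are invented):

        import numpy as np

        counts = np.random.default_rng(2).poisson(100, size=(15, 1000))  # 15 controls x 1000 50-kb bins
        rel = counts / counts.sum(axis=1, keepdims=True)        # relative read counts per sample
        cv_rtc = rel.std(axis=0, ddof=1) / rel.mean(axis=0)     # coefficient of variation per bin
        noisy_bins = np.flatnonzero(cv_rtc > np.quantile(cv_rtc, 0.95))  # candidate CNV-rich bins
        print(len(noisy_bins))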

  10. Detection of the Thickness Variation of a Stainless Steel sample using Pulsed Eddy Current

    International Nuclear Information System (INIS)

    Cheong, Y. M.; Angani, C. S.; Park, D. G.; Jhong, H. K.; Kim, G. D.; Kim, C. G.

    2008-01-01

    A pulsed eddy current (PEC) system has been developed for detecting thickness variation in stainless steel. The sample was machined in a step configuration so that its thickness varied from 1 mm to 5 mm in steps. A LabView program was developed to display the variation in the amplitude of the detected pulse while scanning the PEC probe across the flat side of the sample. The pickup sensor measures the effective magnetic field on the sample, which is the sum of the incident field and the field reflected by the specimen due to the eddy currents induced in the sample. We use a Hall sensor for detection; using a Hall sensor instead of a coil as the field detector improves detectability and spatial resolution. This technology can be used to detect local wall thinning in the pipelines of nuclear power plants.

  11. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type I error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type I error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type I error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type I error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Synchronized Pulsed dc - dc Converter as Maximum Power Position Tracker with Wide Load and Insolation Variation for Stand Alone PV System

    International Nuclear Information System (INIS)

    Hardik, P. Desai; Ranjan Maheshwari

    2011-01-01

    This paper investigates a parallel-connected dc-dc converter with high tracking effectiveness under wide variations in environmental conditions (insolation) and wide load variations. The dc-dc converter is an essential part of a stand-alone PV system. The paper also presents an approach for adjusting the duty cycle for the maximum power position (MPP) that accounts for varying load conditions and requires no iterative steps. Synchronized PWM pulses are employed for the converter. High tracking efficiency is achieved with continuous input and inductor current. In this approach, the converter can be utilized in buck as well as in boost mode. The PV system simulation was verified and the experimental results were in agreement with the presented scheme. (authors)
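
    As a generic, non-iterative illustration (not necessarily the authors' scheme): an ideal boost converter in continuous conduction reflects its load to the input as R_in = (1 − D)²·R_load, so equating R_in with the array's optimal load R_mpp = V_mpp/I_mpp gives the duty cycle in closed form. A Python sketch with hypothetical panel and load values:

        import math

        def boost_duty_for_mpp(v_mpp, i_mpp, r_load):
            """Duty cycle that matches an ideal CCM boost converter's input
            resistance to the PV maximum-power-point resistance."""
            r_mpp = v_mpp / i_mpp
            if r_load <= r_mpp:
                raise ValueError("boost mode needs r_load > r_mpp")
            return 1.0 - math.sqrt(r_mpp / r_load)

        print(boost_duty_for_mpp(v_mpp=17.0, i_mpp=3.5, r_load=20.0))  # hypothetical values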

  13. Weekday variation in triglyceride concentrations in 1.8 million blood samples

    DEFF Research Database (Denmark)

    Jaskolowski, Jörn; Ritz, Christian; Sjödin, Anders Mikael

    2017-01-01

    BACKGROUND: Triglyceride (TG) concentration is used as a marker of cardio-metabolic risk. However, diurnal and possibly weekday variation exists in TG concentrations. OBJECTIVE: To investigate weekday variation in TG concentrations among 1.8 million blood samples drawn between 2008 and 2015 from ... Variations in TG concentrations were recorded for out-patients between the ages of 9 and 26 years, with up to 20% higher values on Mondays compared to Fridays (all P ...). Triglyceride concentrations were highest after the weekend and gradually declined during the week. We suggest that unhealthy ...

  14. A novel quantitative approach for eliminating sample-to-sample variation using a hue saturation value analysis program.

    Science.gov (United States)

    Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present novel image analysis software, based on the hue-saturation-value color space, that can be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red-green-blue color space and a hue-saturation-value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
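
    A minimal Python sketch of the HSV idea (not the authors' program): convert RGB to HSV, gate on a hue window plus saturation/value floors to exclude background, and report the positive-pixel fraction. The image and thresholds below are made up.

        import numpy as np
        from matplotlib.colors import rgb_to_hsv

        rgb = np.random.default_rng(3).random((512, 512, 3))  # stand-in for an RGB histology image in [0, 1]
        hsv = rgb_to_hsv(rgb)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

        # keep pixels whose hue falls in the stain's window and that are
        # saturated/bright enough to exclude white background and dark artifacts
        stain = (h > 0.9) | (h < 0.05)       # reddish hues (hue wraps around 1.0)
        mask = stain & (s > 0.2) & (v > 0.2)
        print("positive area fraction:", mask.mean())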

  15. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-02-02

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.

  16. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  17. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

    The magnitude of 14 C/ 12 C variations in the period A.D. 5O0 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14 C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  18. Variation in the diversity and richness of parasitoid wasps based on sampling effort.

    Science.gov (United States)

    Saunders, Thomas E; Ward, Darren F

    2018-01-01

    Parasitoid wasps are a mega-diverse, ecologically dominant, but poorly studied component of global biodiversity. In order to maximise the efficiency and reduce the cost of their collection, the application of optimal sampling techniques is necessary. Two sites in Auckland, New Zealand were sampled intensively to determine the relationship between sampling effort and observed species richness of parasitoid wasps from the family Ichneumonidae. Twenty traps were deployed at each site at three different times over the austral summer period, resulting in a total sampling effort of 840 Malaise-trap-days. Rarefaction techniques and non-parametric estimators were used to predict species richness and to evaluate the variation and completeness of sampling. Despite an intensive Malaise-trapping regime over the summer period, no asymptote of species richness was reached. At best, sampling captured two-thirds of parasitoid wasp species present. The estimated total number of species present depended on the month of sampling and the statistical estimator used. Consequently, the use of fewer traps would have caught only a small proportion of all species (one trap 7-21%; two traps 13-32%), and many traps contributed little to the overall number of individuals caught. However, variation in the catch of individual Malaise traps was not explained by seasonal turnover of species, vegetation or environmental conditions surrounding the trap, or distance of traps to one another. Overall the results demonstrate that even with an intense sampling effort the community is incompletely sampled. The use of only a few traps and/or for very short periods severely limits the estimates of richness because (i) fewer individuals are caught leading to a greater number of singletons; and (ii) the considerable variation of individual traps means some traps will contribute few or no individuals. Understanding how sampling effort affects the richness and diversity of parasitoid wasps is a useful
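
    As one example of the nonparametric estimators mentioned above, the Chao1 richness estimate uses the counts of singleton and doubleton species; a minimal Python sketch (whether Chao1 was among the exact estimators used in this study is not stated here):

        import numpy as np

        def chao1(counts):
            """Chao1 nonparametric richness estimate from species abundance counts."""
            counts = np.asarray(counts)
            s_obs = (counts > 0).sum()
            f1 = (counts == 1).sum()  # singletons
            f2 = (counts == 2).sum()  # doubletons
            if f2 == 0:
                return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
            return s_obs + f1 ** 2 / (2.0 * f2)

        print(chao1([5, 1, 1, 2, 9, 1, 3, 2, 1, 7]))  # 10 observed -> estimate 14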

  19. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency

    Directory of Open Access Journals (Sweden)

    Sérgio Luiz Gomes Antunes

    2012-03-01

    Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that, independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  20. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency.

    Science.gov (United States)

    Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes

    2012-03-01

    Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  1. On the Maximum and Minimum of Double Generalized Gamma Variates with Applications to the Performance of Free-space Optical Communication Systems

    KAUST Repository

    Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim

    2016-01-01

    In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity and of (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte Carlo simulations verify our new analytical results.
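
    While the paper's results are analytical, the elementary identities F_max = F1·F2 and S_min = S1·S2 for independent variates are easy to verify by Monte Carlo with SciPy's generalized gamma (parameters below are arbitrary; pointing-error effects are not modeled):

        import numpy as np
        from scipy.stats import gengamma

        rng = np.random.default_rng(4)
        g1, g2 = gengamma(a=2.0, c=1.5), gengamma(a=3.0, c=0.8)  # two non-identical GG variates
        x1 = g1.rvs(200000, random_state=rng)
        x2 = g2.rvs(200000, random_state=rng)

        t = 2.0
        print(np.mean(np.maximum(x1, x2) <= t), g1.cdf(t) * g2.cdf(t))  # F_max = F1*F2
        print(np.mean(np.minimum(x1, x2) > t), g1.sf(t) * g2.sf(t))     # S_min = S1*S2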

  2. On the Maximum and Minimum of Double Generalized Gamma Variates with Applications to the Performance of Free-space Optical Communication Systems

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-01-07

    In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity and of (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte Carlo simulations verify our new analytical results.

  3. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N0 approaches infinity (regardless of the relative sizes of N0 and Ni, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  4. Bespoke Bias for Obtaining Free Energy Differences within Variationally Enhanced Sampling.

    Science.gov (United States)

    McCarty, James; Valsson, Omar; Parrinello, Michele

    2016-05-10

    Obtaining efficient sampling of multiple metastable states through molecular dynamics and hence determining free energy differences is central for understanding many important phenomena. Here we present a new biasing strategy, which employs the recent variationally enhanced sampling approach (Valsson and Parrinello Phys. Rev. Lett. 2014, 113, 090601). The bias is constructed from an intuitive model of the local free energy surface describing fluctuations around metastable minima and depends on only a few parameters which are determined variationally such that efficient sampling between states is obtained. The bias constructed in this manner largely reduces the need of finding a set of collective variables that completely spans the conformational space of interest, as they only need to be a locally valid descriptor of the system about its local minimum. We introduce the method and demonstrate its power on two representative examples.

  5. Contributions from the data samples in NOC technique on the extracting of the Sq variation

    Science.gov (United States)

    Wu, Yingyan; Xu, Wenyao

    2015-04-01

    The solar quiet daily variation, Sq, a rather regular variation, is usually observed at mid-low latitudes on magnetically quiet or less-disturbed days. It results mainly from the dynamo currents in the ionospheric E region, which are driven by the atmospheric tidal wind and flow as two current whorls in each of the northern and southern hemispheres [1]. Sq exhibits a conspicuous day-to-day (DTD) variability in daily range (or strength), shape (or phase) and the location of its current focus. This variability is mainly attributed to changes in the ionospheric conductivity and tidal winds, which vary with solar radiation and ionospheric conditions. Furthermore, it presents a seasonal variation and a solar-cycle variation [2-4]. In general, Sq is expressed as the average value over the five international magnetically quiet days. Using data from global magnetic stations, equivalent current systems of the daily variation can be constructed to reveal characteristics of the currents [5]. In addition, using the differences of the H component at two stations on the north and south sides of the Sq current focus, Sq can be extracted much better [6]. Recently, the method of natural orthogonal components (NOC) has been used to decompose the magnetic daily variation and express it as a sum of eigenmodes, with the first NOC eigenmode identified as the solar quiet daily variation and the second as the disturbance daily variation [7-9]. The NOC technique can help reveal simpler patterns within a complex set of variables, without designed basis functions such as those of the FFT technique. But the physical interpretation of the NOC eigenmodes depends greatly on the number of data samples and their regularity. Using the NOC method, we focus our present study on the hourly means of the H component at the BMT observatory in China from 2001 to 2008. The contributions of the number and the regularity of the data samples to determining which eigenmode corresponds to Sq are analyzed ...
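
    Since the NOC decomposition is essentially a principal-component (Karhunen-Loève) analysis of the day-by-hour data matrix, its mechanics can be sketched in a few lines of Python (synthetic data stand in for the BMT hourly H means; the shapes and amplitudes are invented):

        import numpy as np

        rng = np.random.default_rng(5)
        hours = np.arange(24)
        sq_shape = -np.sin(2 * np.pi * (hours - 6) / 24)  # idealized Sq daily curve
        days = 2000
        X = (rng.gamma(2.0, 10.0, (days, 1)) * sq_shape   # day-to-day Sq amplitude
             + rng.normal(0, 5, (days, 24)))              # disturbance + noise
        X = X - X.mean(axis=0)                            # remove the mean daily level

        U, s, Vt = np.linalg.svd(X, full_matrices=False)  # NOC decomposition
        first_mode = Vt[0]                                # eigenmode ~ Sq shape
        amplitudes = U[:, 0] * s[0]                       # its day-to-day amplitude
        print(np.corrcoef(first_mode, sq_shape)[0, 1])    # close to +/-1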

  6. Variation of Maximum Tree Height and Annual Shoot Growth of Smith Fir at Various Elevations in the Sygera Mountains, Southeastern Tibetan Plateau

    Science.gov (United States)

    Wang, Yafeng; Čufar, Katarina; Eckstein, Dieter; Liang, Eryuan

    2012-01-01

    Little is known about tree height and height growth (as annual shoot elongation of the apical part of vertical stems) of coniferous trees growing at various altitudes on the Tibetan Plateau, which provides a high-elevation natural platform for assessing tree growth performance in relation to future climate change. We here investigated the variation of maximum tree height and annual height increment of Smith fir (Abies georgei var. smithii) in seven forest plots (30 m×40 m) along two altitudinal transects between 3,800 m and 4,200/4,390 m above sea level (a.s.l.) in the Sygera Mountains, southeastern Tibetan Plateau. Four plots were located on north-facing slopes and three plots on southeast-facing slopes. At each site, annual shoot growth was obtained by measuring the distance between successive terminal bud scars along the main stem of 25 trees that were between 2 and 4 m high. Maximum/mean tree height and mean annual height increment of Smith fir decreased with increasing altitude up to the tree line, indicative of a stress gradient (the dominant temperature gradient) along the altitudinal transect. Above-average mean minimum summer (particularly July) temperatures affected height increment positively, whereas precipitation had no significant effect on shoot growth. The time series of annual height increments of Smith fir can be used for the reconstruction of past climate on the southeastern Tibetan Plateau. In addition, it can be expected that the rising summer temperatures observed in the recent past and anticipated for the future will enhance Smith fir's growth throughout its altitudinal distribution range. PMID:22396738

  7. Phenotypic variation in California populations of valley oak (Quercus lobata Née) sampled along elevational gradients

    Science.gov (United States)

    Ana L. Albarrán-Lara; Jessica W. Wright; Paul F. Gugger; Annette Delfino-Mix; Juan Manuel Peñaloza-Ramírez; Victoria L. Sork

    2015-01-01

    California oaks exhibit tremendous phenotypic variation throughout their range. This variation reflects phenotypic plasticity in tree response to local environmental conditions as well as genetic differences underlying those phenotypes. In this study, we analyze phenotypic variation in leaf traits for valley oak adults sampled along three elevational transects and in...

  8. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer 1985; Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... MAF outperforms functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
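
    For readers who want the mechanics: on discretely sampled data, MAF reduces to a generalized eigenproblem, because the autocorrelation of a linear combination w'X under a unit shift equals 1 − (w'Σ_Δ w)/(2 w'Σ w), so maximizing autocorrelation means taking the smallest generalized eigenvalues of (Σ_Δ, Σ). A Python sketch with synthetic data (the paper's smoothing-spline functional machinery is omitted):

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(6)
        n, p = 400, 30                      # n ordered observations of p-dimensional samples
        t = np.linspace(0, 1, p)
        signal = np.sin(np.linspace(0, 6 * np.pi, n))  # slowly varying factor
        X = np.outer(signal, np.sin(2 * np.pi * t)) + rng.normal(0, 0.5, (n, p))
        Xc = X - X.mean(axis=0)

        S = np.cov(Xc, rowvar=False)        # total covariance
        D = Xc[1:] - Xc[:-1]                # unit-shift differences along the layout
        Sd = np.cov(D, rowvar=False)        # difference covariance

        # smallest generalized eigenvalues <-> highest autocorrelation
        vals, vecs = eigh(Sd, S)
        maf_scores = Xc @ vecs              # columns ordered by decreasing autocorrelation
        print(1 - vals[:3] / 2)             # autocorrelations of the leading MAFs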

  9. Virtual sampling in variational processing of Monte Carlo simulation in a deep neutron penetration problem

    International Nuclear Information System (INIS)

    Allagi, Mabruk O.; Lewins, Jeffery D.

    1999-01-01

    In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in the phase space depends on the presence of neutrons from the real source at that point. But in deep penetration problems, not many neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons can reach, then fictitious sampling is introduced for the remaining regions, distributed in all the regions, or (2) only one fictitious source is placed where the real neutrons almost terminate and then virtual sampling is used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, being best when using one fictitious source in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, some optimization has to be performed for the proportion of fictitious to real sources, weighted against accuracy and computational costs. It has been found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source fall below a specified limit for good statistics

  10. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  11. X-ray speckle contrast variation at a sample-specific absorption edges

    International Nuclear Information System (INIS)

    Retsch, C. C.; Wang, Y.; Frigo, S. P.; Stephenson, G. B.; McNulty, I.

    2000-01-01

    The authors measured static x-ray speckle contrast variation with the incident photon energy across sample-specific absorption edges. They propose that the variation depends strongly on the spectral response function of the monochromator. Speckle techniques have been introduced to the x-ray regime during recent years. Most of these experiments, however, were done at photon energies above 5 keV. They are working on this technique in the 1 to 4 keV range, an energy range that includes many important x-ray absorption edges, e.g., in Al, Si, P, S, the rare-earths, and others. To their knowledge, the effect of absorption edges on speckle contrast has not yet been studied. In this paper, they present their initial measurements and understanding of the observed phenomena

  12. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    Science.gov (United States)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%; and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both a high-resistance setting (Ukraine) and a low-resistance setting (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
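
    The mechanics of a two-way static LQAS rule are simple: with sample size n and decision threshold d, a site is labeled high-MDR when more than d of the n sampled cases are resistant, and the operating characteristics at the design thresholds follow from the binomial distribution. A Python sketch using the 2%/10% thresholds mentioned above (the candidate n and d values are illustrative):

        from scipy.stats import binom

        def lqas_errors(n, d, p_low=0.02, p_high=0.10):
            """Two-way static LQAS rule: classify 'high' if > d resistant cases in n.
            Returns (alpha, beta): P(classify high | p_low), P(classify low | p_high)."""
            alpha = binom.sf(d, n, p_low)   # false 'high' at the low threshold
            beta = binom.cdf(d, n, p_high)  # false 'low' at the high threshold
            return alpha, beta

        for n, d in [(50, 2), (75, 3), (100, 4)]:  # candidate designs
            print(n, d, lqas_errors(n, d))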

  13. Experimental investigation on variation of physical properties of coal samples subjected to microwave irradiation

    Science.gov (United States)

    Hu, Guozhong; Yang, Nan; Xu, Guang; Xu, Jialin

    2018-03-01

    The gas drainage rate of low-permeability coal seams is generally less than satisfactory. This leads to gas disasters in coal mines, largely restricts the extraction of coalbed methane (CBM), and increases greenhouse gas emissions in mining areas. Consequently, enhancing the gas drainage rate is an urgent challenge. To address this problem, a new approach using microwave irradiation (MWR) as a non-contact physical excitation method to enhance gas drainage has been attempted. In order to evaluate the feasibility of this method, the methane adsorption, diffusion and penetrability of coal subjected to MWR were experimentally investigated. The variations of the adsorbed methane amount, the methane diffusion speed and the adsorption loop for coal samples before and after MWR were obtained. The findings show that MWR can change the adsorption properties and reduce the methane adsorption capacity of coal. Moreover, the methane diffusion characteristic curves for both the irradiated and the original coal samples present the same trend. The irradiated coal samples have better methane diffusion ability than the original ones. As the adsorbed methane decreases, the methane diffusion speed increases or remains the same for samples subjected to MWR. Furthermore, compared to the original coal samples, the area of the adsorption loop for irradiated samples increases, especially in the micro-pore and medium-pore stages. This leads to an increase of open pores in the coal, thus improving its gas penetrability. This study provides support for positive MWR effects on changing the methane adsorption and improving the methane diffusion and gas penetrability properties of coal samples.

  14. Seasonal Variation, Chemical Composition and Antioxidant Activity of Brazilian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Érica Weinstein Teixeira

    2010-01-01

    Total phenolic contents, antioxidant activity and chemical composition of propolis samples from three localities of Minas Gerais state (southeast Brazil) were determined. Total phenolic contents were determined by the Folin-Ciocalteu method, antioxidant activity was evaluated by DPPH, using BHT as reference, and chemical composition was analyzed by GC/MS. Propolis from the Itapecerica and Paula Cândido municipalities was found to have high phenolic contents and pronounced antioxidant activity. From these extracts, 40 substances were identified, among them simple phenylpropanoids, prenylated phenylpropanoids, and sesqui- and diterpenoids. Quantitatively, the main constituent of both samples was allyl-3-prenylcinnamic acid. A sample from the Virginópolis municipality had no detectable phenolic substances and contained mainly triterpenoids, the main constituents being α- and β-amyrins. Methanolic extracts from Itapecerica and Paula Cândido exhibited pronounced scavenging activity towards DPPH, indistinguishable from BHT activity. However, extracts from the Virginópolis sample exhibited no antioxidant activity. Total phenolic substances, GC/MS analyses and antioxidant activity of samples from Itapecerica collected monthly over a period of one year revealed considerable variation. No correlation was observed between antioxidant activity and either total phenolic contents or contents of artepillin C and other phenolic substances, as assayed by GC/MS analysis.

  15. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Variation in marital quality in a national sample of divorced women.

    Science.gov (United States)

    James, Spencer L

    2015-06-01

    Previous work has compared marital quality between stably married and divorced individuals. Less work has examined the possibility of variation among divorcés in trajectories of marital quality as divorce approaches. This study addressed that gap by first examining whether distinct trajectories of marital quality can be discerned among women whose marriages ended in divorce and, second, the profile of women who experienced each trajectory. Latent class growth analyses with longitudinal data from a nationally representative sample were used to "look backward" from the time of divorce. Although demographic and socioeconomic variables from this national sample did not predict the trajectories well, nearly 66% of divorced women reported relatively high levels of both happiness and communication and either low or moderate levels of conflict. Future research including personality or interactional patterns may lead to theoretical insights about patterns of marital quality in the years leading to divorce. (c) 2015 APA, all rights reserved.

  17. Measurement of the natural variation of 13C/12C isotope ratio in organic samples

    International Nuclear Information System (INIS)

    Ducatti, C.

    1977-01-01

    The isotopic ratio analysis for 13C/12C by mass spectrometry using a 'working standard' allows the study of 13C natural variation in organic material, with a total analytical error of less than 0.2%. Equations were derived in order to determine 13C/12C and 18O/16O ratios relative to the 'working standard' CENA-std and to the international standard PDB. Isotope ratio values obtained with samples prepared in two different combustion apparatus were compared; also, the values obtained by preparing samples through acid decomposition of carbonaceous materials were compared with the values obtained in different international laboratories. Utilizing the proposed methodology, several leaves collected at different heights from different vegetal species, found 'inside' and 'outside' the Ducke Forest Reserve, located in the Amazon region, were analysed. It is found that the 13C natural variation depends upon metabolic processes and environmental factors, both of which may be qualified as partial influences on the CO2 cycle in the forest. (author) [pt

  18. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
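
    A minimal Python sketch of the 'shrinkage' idea for small-sample units: shrink each unit's mean toward the overall mean with a weight equal to its estimated reliability (the variance components here are assumed known or separately estimated, and all numbers are invented):

        import numpy as np

        def shrunken_means(unit_means, unit_ns, sigma2_between, sigma2_within):
            """Empirical-Bayes style shrinkage: reliability-weighted compromise
            between each unit's own mean and the grand mean."""
            unit_means = np.asarray(unit_means, float)
            unit_ns = np.asarray(unit_ns, float)
            grand = np.average(unit_means, weights=unit_ns)
            reliability = sigma2_between / (sigma2_between + sigma2_within / unit_ns)
            return reliability * unit_means + (1 - reliability) * grand

        print(shrunken_means([0.90, 0.60, 0.75], [200, 10, 50],
                             sigma2_between=0.004, sigma2_within=0.2))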

  19. Accounting for medical variation: the case of prescribing activity in a New Zealand general practice sample.

    Science.gov (United States)

    Davis, P B; Yee, R L; Millar, J

    1994-08-01

    Medical practice variation is extensive and well documented, particularly for surgical interventions, and raises important questions for health policy. To date, however, little work has been carried out on interpractitioner variation in prescribing activity in the primary care setting. An analytical model of medical variation is derived from the literature and relevant indicators are identified from a study of New Zealand general practice. The data are based on nearly 9,500 completed patient encounter records drawn from over a hundred practitioners in the Waikato region of the North Island, New Zealand. The data set represents a 1% sample of all weekday general practice office encounters in the Hamilton Health District recorded over a 12-month period. Overall levels of prescribing, and the distribution of drug mentions across diagnostic groupings, are broadly comparable to results drawn from international benchmark data. A multivariate analysis is carried out on seven measures of activity in the areas of prescribing volume, script detail, and therapeutic choice. The analysis indicates that patient, practitioner and practice attributes exert little systematic influence on the prescribing task. The principal influences are diagnosis, followed by practitioner identity. The pattern of findings suggests also that the prescribing task cannot be viewed as an undifferentiated activity. It is more usefully considered as a process of decision-making in which 'core' judgements--such as the decision to prescribe and the choice of drug--are highly predictable and strongly influenced by diagnosis, while 'peripheral' features of the task--such as choosing a combination drug or prescribing generically--are less determinate and more subject to the exercise of clinical discretion.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Characterization of PDMS samples with variation of its synthesis parameters for tunable optics applications

    Science.gov (United States)

    Marquez-Garcia, Josimar; Cruz-Félix, Angel S.; Santiago-Alvarado, Agustin; González-García, Jorge

    2017-09-01

    Nowadays the elastomer known as polydimethylsiloxane (PDMS, Sylgard 184), due to its physical properties, low cost and easy handling, has become a frequently used material for the elaboration of optical components such as variable focal length liquid lenses, optical waveguides, and solid elastic lenses. In recent years, we have been working on the characterization of this material for applications in visual sciences. In this work, we describe the elaboration of PDMS-made samples, and we present physical and optical properties of the samples obtained by varying synthesis parameters such as the base:curing agent ratio and both curing time and temperature. Regarding mechanical properties, tensile and compression tests were carried out with a universal testing machine to obtain the respective stress-strain curves, and to obtain information on the optical properties, UV-Vis spectroscopy was applied to the samples to obtain transmittance and absorbance curves. Index of refraction variation was measured with an Abbe refractometer. Results from the characterization will determine the proper synthesis parameters for the elaboration of tunable refractive surfaces for potential applications in robotics.

  1. Coarse graining from variationally enhanced sampling applied to the Ginzburg–Landau model

    Science.gov (United States)

    Invernizzi, Michele; Valsson, Omar; Parrinello, Michele

    2017-01-01

    A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, providing thus a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg–Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard–Jones fluid in the region close to the liquid–vapor critical point. The procedure is general and can be adapted to other coarse-grained models. PMID:28292890

  2. Coarse graining from variationally enhanced sampling applied to the Ginzburg-Landau model

    Science.gov (United States)

    Invernizzi, Michele; Valsson, Omar; Parrinello, Michele

    2017-03-01

    A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, providing thus a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg-Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard-Jones fluid in the region close to the liquid-vapor critical point. The procedure is general and can be adapted to other coarse-grained models.

  3. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
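
    For reference, the CHS estimator named above is itself a one-liner; a small Python sketch with hypothetical tallies:

        def chs_volume_per_ha(critical_heights_m, baf_m2_per_ha):
            """Critical height sampling: volume/ha = BAF x sum of the critical
            heights of trees tallied at a horizontal point sample point."""
            return baf_m2_per_ha * sum(critical_heights_m)

        print(chs_volume_per_ha([12.3, 8.7, 15.1], baf_m2_per_ha=4.0), "m^3/ha")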

  4. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
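
    To make the learning dynamics concrete, here is a toy Python sketch of maximum-entropy (pairwise Ising) fitting by plain gradient ascent on the log-likelihood; exact enumeration of states replaces Gibbs sampling so the example stays short and deterministic (feasible only for a few spins; the paper's rectified dynamics and posterior sampling are not implemented):

        import itertools
        import numpy as np

        rng = np.random.default_rng(7)
        n = 5
        states = np.array(list(itertools.product([-1, 1], repeat=n)), float)  # all 2^n states

        # "data" moments generated from a hypothetical true model
        J_true = np.triu(rng.normal(0, 0.3, (n, n)), 1)
        h_true = rng.normal(0, 0.2, n)

        def moments(h, J):
            E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(E - E.max()); p /= p.sum()                 # Boltzmann weights
            m = p @ states                                        # <s_i>
            C = np.einsum('s,si,sj->ij', p, states, states)       # <s_i s_j>
            return m, np.triu(C, 1)

        m_data, C_data = moments(h_true, J_true)

        h, J = np.zeros(n), np.zeros((n, n))
        for step in range(2000):             # gradient ascent on the log-likelihood
            m, C = moments(h, J)
            h += 0.1 * (m_data - m)          # gradient = data moments - model moments
            J += 0.1 * (C_data - C)
        print(np.abs(h - h_true).max(), np.abs(J - J_true).max())  # approximately recovered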

  5. The contribution of simple random sampling to observed variations in faecal egg counts.

    Science.gov (United States)

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
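
    To make the confidence-interval point concrete, a short sketch follows. It computes the exact (Garwood) Poisson interval for a slide count; the factor of 50 eggs per gram per egg counted is a common McMaster convention, used here purely for illustration and not a value taken from the paper.

```python
# Illustrative sketch: exact (Garwood) 95% Poisson confidence interval for a
# McMaster slide count, scaled by an assumed multiplication factor.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact confidence interval for the mean of a Poisson count k."""
    lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lo, hi

eggs_counted = 8                       # eggs seen on the slide
factor = 50                            # assumed epg represented by each egg counted
lo, hi = poisson_ci(eggs_counted)
print(f"{eggs_counted * factor} epg, 95% CI {lo * factor:.0f}-{hi * factor:.0f} epg")
```

    Even a seemingly respectable count of 8 eggs yields an interval spanning roughly a factor of four, which is the paper's core argument about inherent sampling variation.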

  6. Variations among animals when estimating the undegradable fraction of fiber in forage samples

    Directory of Open Access Journals (Sweden)

    Cláudia Batista Sampaio

    2014-10-01

    The objective of this study was to assess the variability among animals regarding the critical time to estimate the undegradable fraction of fiber (ct) using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were put in F57 Ankom® bags and were incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, significant variability among animals was found for the fresh elephant grass. The results suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.

  7. Variation of the ¹⁸O/¹⁶O ratio in water samples from branches

    International Nuclear Information System (INIS)

    Foerstel, H.; Huetzen, H.

    1979-06-01

    Studies of the water turnover of plants may use the labelling of water by the natural variation of its ¹⁸O/¹⁶O ratio. The basic value for such a study is the isotope ratio in soil water, which is also represented by the ¹⁸O/¹⁶O ratio in water samples from stem and branches. During water transport from the soil water reservoir to the leaves of trees, no fractionation of the oxygen isotopes occurs. The oxygen isotope ratio within a single twig varies by about ± ‰ (variation given as the standard deviation of the δ-values), within the stem of a large tree by about ±2‰. The results for water from stems of different trees at the site of the Nuclear Research Center Juelich scatter by about ±1‰. The δ-values from a larger area (Rur valley, Eifel hills, Mosel valley), collected in October 1978 at the end of the vegetation period, showed a standard deviation between ±2.2‰ (Rur valley) and ±3.6‰ (Eifel hills). The ¹⁸O/¹⁶O δ-values of a beech wood from the Juelich site are in the range of -7.3 to -10.1‰ (mean local precipitation 1974-1977: -7.4‰). At the hill site near Cologne (Bergisches Land, late September 1978) we observed an oxygen isotope ratio of -9.1‰ (groundwater in the neighbourhood between -7.6 and -8.7‰). In October 1978, over an area from the Netherlands to the Mosel valley, we found δ-values of branch water between -13.9‰ (lower Ruhr valley) and -13.1‰ (Eifel hills to Mosel valley), in comparison to groundwater samples from the same region: -7.55‰ and -8.39‰. There was no significant difference between δ-values from various species or locations within this area. Groundwater samples should normally represent the ¹⁸O/¹⁶O ratio of local precipitation. The low δ-values of branch water could be due to the rapid uptake of precipitation water of low ¹⁸O content in autumn into the water transport system of plants. (orig.)

  8. 222Rn in water: A comparison of two sample collection methods and two sample transport methods, and the determination of temporal variation in North Carolina ground water

    International Nuclear Information System (INIS)

    Hightower, J.H. III

    1994-01-01

    Objectives of this field experiment were to: (1) determine whether there was a statistically significant difference between the radon concentrations of samples collected by EPA's standard method, using a syringe, and an alternative, slow-flow method; (2) determine whether there was a statistically significant difference between the measured radon concentrations of samples mailed vs samples not mailed; and (3) determine whether there was a temporal variation of water radon concentration over a 7-month period. The field experiment was conducted at 9 sites (5 private wells and 4 public wells) at various locations in North Carolina. Results showed that a syringe is not necessary for sample collection, that there was generally no significant radon loss due to mailing samples, and that there was statistically significant evidence of temporal variation in water radon concentrations.

  9. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic 'variance' of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas…
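
    For readers unfamiliar with these measures, a minimal numeric sketch follows. The definitions (realised variance, bipower variation with its π/2 correction, and a truncated variant) are standard; the simulated return series and the truncation threshold are illustrative choices, not the paper's design.

```python
# Numeric sketch of the realised variation measures discussed above, computed
# on a simulated intraday return series containing one injected jump.
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.001, 390)                 # e.g. one day of 1-min returns
r[200] += 0.01                                  # inject a single price jump

rv = np.sum(r**2)                               # realised variance (jump-sensitive)
bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
u = 4 * np.std(r)                               # ad hoc truncation level
trv = np.sum(r[np.abs(r) <= u] ** 2)            # truncated realised variance
print(f"RV={rv:.3e}  BV={bv:.3e}  TRV={trv:.3e}")  # BV, TRV robust to the jump
```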

  10. Analytical and between-subject variation of thrombin generation measured by calibrated automated thrombography on plasma samples.

    Science.gov (United States)

    Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona

    2018-05-01

    The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore investigated the analytical and between-subject variation of CAT systematically. Moreover, we assessed the application of an internal standard for normalization to diminish variation. 20 healthy volunteers donated one blood sample which was subsequently centrifuged, aliquoted and stored at -80 °C prior to analysis. The analytical variation was determined on eight runs, where plasma from the same seven volunteers was processed in triplicate, and for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma, drawn from a single donor, was applied to all plates as an internal standard for each TG analysis, which subsequently was used for normalization. The total analytical variation for TG analysis performed with PPPlow reagent is 3-14%, and 9-13% for PPP reagent. This variation can be slightly reduced by using an internal standard, but mainly for ETP (endogenous thrombin potential). The between-subject variation is higher when using PPPlow than PPP, and this variation is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, which is nevertheless considerably lower than the between-subject variation when using PPPlow as reagent.

  11. Periodic variations of Auger energy maximum distribution following He²⁺ + H₂ collisions: A complete analogy with photon interferences

    Energy Technology Data Exchange (ETDEWEB)

    Cholet, M.; Minerbe, F.; Oliviero, G.; Pestel, V. [Université de Caen, 6 bd du Mal Juin, 14050 Caen Cedex (France); Frémont, F., E-mail: francois.fremont@ensicaen.fr [Centre de Recherche sur les Ions, les Matériaux et la Photonique, Unité Mixte Université de Caen-CEA-CNRS-EnsiCaen, 6 bd du Mal Juin, 14050 Caen Cedex 4 (France)

    2014-08-15

    Highlights: • Young-type interferences with electrons are revisited. • Oscillations in the angular distribution of the energy maximum of Auger spectra are evidenced. • Model calculations are in good agreement with the experimental result. • The position of the Auger spectra oscillates in counterphase with the total intensity. - Abstract: In this article, we present experimental evidence of a particular electron-interference phenomenon. The electrons are provided by autoionization of 2l2l′ doubly excited He atoms following the capture of H₂ electrons by a slow He²⁺ incoming ion. We observe that the position of the energy maximum of the Auger structures oscillates with the detection angle. Calculation based on a simple model that includes interferences clearly shows that the present oscillations are due to Young-type interferences caused by electron scattering on both H⁺ centers.

  12. Variation in orgasm occurrence by sexual orientation in a sample of U.S. singles.

    Science.gov (United States)

    Garcia, Justin R; Lloyd, Elisabeth A; Wallen, Kim; Fisher, Helen E

    2014-11-01

    Despite recent advances in understanding orgasm variation, little is known about ways in which sexual orientation is associated with men's and women's orgasm occurrence. To assess orgasm occurrence during sexual activity across sexual orientation categories. Data were collected by Internet questionnaire from 6,151 men and women (ages 21-65+ years) as part of a nationally representative sample of single individuals in the United States. Analyses were restricted to a subsample of 2,850 singles (1,497 men, 1,353 women) who had experienced sexual activity in the past 12 months. Participants reported their sex/gender, self-identified sexual orientation (heterosexual, gay/lesbian, bisexual), and what percentage of the time they experience orgasm when having sex with a familiar partner. Mean occurrence rate for experiencing orgasm during sexual activity with a familiar partner was 62.9% among single women and 85.1% among single men, which was significantly different (F1,2848 = 370.6, P < 0.001). For men, mean occurrence rate of orgasm did not vary significantly by sexual orientation: heterosexual men 85.5%, gay men 84.7%, bisexual men 77.6% (F2,1494 = 2.67, P = 0.07, η² = 0.004). For women, however, mean occurrence rate of orgasm varied significantly by sexual orientation: heterosexual women 61.6%, lesbian women 74.7%, bisexual women 58.0% (F2,1350 = 10.95, P < 0.001). These findings suggest that women, regardless of sexual orientation, have less predictable, more varied orgasm experiences than do men and that for women, but not men, the likelihood of orgasm varies with sexual orientation. These findings demonstrate the need for further investigations into the comparative sexual experiences and sexual health outcomes of sexual minorities. © 2014 International Society for Sexual Medicine.

  13. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    International Nuclear Information System (INIS)

    Scogin, J. H.

    2016-01-01

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  14. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    Energy Technology Data Exchange (ETDEWEB)

    Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-24

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  15. Short communication: Influence of the sampling device on somatic cell count variation in cow milk samples (by official recording)

    International Nuclear Information System (INIS)

    Fouz, R.; Vilar, M.J.; Yus, E.; Sanjuán, M.L.; Diéguez, F.J.

    2016-01-01

    The objective of this study was to investigate the variability in cow's milk somatic cell counts (SCC) depending on the type of milk meter used by dairy farms for official milk recording. The study was performed in 2011 and 2012 in the major cattle area of Spain. In total, 137,846 lactations of Holstein-Friesian cows were analysed at 1,912 farms. A generalised least squares regression model was used for data analysis. The model showed that the milk meter had a substantial effect on the SCC for individual milk samples obtained for official milk recording. The results suggested an overestimation of the SCC in milk samples from farms that had electronic devices in comparison with farms that used portable devices, and an underestimation when volumetric meters were used. A weak positive correlation was observed between the SCC and the percentage of fat in individual milk samples. The results underline the importance of considering this variable when using SCC data from milk recording in the dairy herd improvement program or in quality milk programs. (Author)

  16. Short communication: Influence of the sampling device on somatic cell count variation in cow milk samples (by official recording)

    Energy Technology Data Exchange (ETDEWEB)

    Fouz, R.; Vilar, M.J.; Yus, E.; Sanjuán, M.L.; Diéguez, F.J.

    2016-11-01

    The objective of this study was to investigate the variability in cow's milk somatic cell counts (SCC) depending on the type of milk meter used by dairy farms for official milk recording. The study was performed in 2011 and 2012 in the major cattle area of Spain. In total, 137,846 lactations of Holstein-Friesian cows were analysed at 1,912 farms. A generalised least squares regression model was used for data analysis. The model showed that the milk meter had a substantial effect on the SCC for individual milk samples obtained for official milk recording. The results suggested an overestimation of the SCC in milk samples from farms that had electronic devices in comparison with farms that used portable devices, and an underestimation when volumetric meters were used. A weak positive correlation was observed between the SCC and the percentage of fat in individual milk samples. The results underline the importance of considering this variable when using SCC data from milk recording in the dairy herd improvement program or in quality milk programs. (Author)

  17. Latitudinal and radial variation of >2 GeV/n protons and alpha-particles at solar maximum: ULYSSES COSPIN/KET and neutron monitor network observations

    Directory of Open Access Journals (Sweden)

    A. V. Belov

    2003-06-01

    Ulysses, launched in October 1990, began its second out-of-ecliptic orbit in September 1997. In 2000/2001 the spacecraft passed from the south to the north polar regions of the Sun in the inner heliosphere. In contrast to the first rapid pole-to-pole passage in 1994/1995 close to solar minimum, Ulysses now experiences solar maximum conditions. The Kiel Electron Telescope (KET) also measures protons and alpha-particles in the energy range from 5 MeV/n to >2 GeV/n. To derive radial and latitudinal gradients for >2 GeV/n protons and alpha-particles, data from the Chicago instrument on board IMP-8 and the neutron monitor network have been used to determine the corresponding time profiles at Earth. We obtain a spatial distribution at solar maximum which differs greatly from the solar minimum distribution. A steady-state approximation, which was characterized by a small radial and significant latitudinal gradient at solar minimum, was interchanged with a highly variable one with a large radial and a small (consistent with zero) latitudinal gradient. A significant deviation from a spherically symmetric cosmic ray distribution following the reversal of the solar magnetic field in 2000/2001 has not been observed yet. A small deviation has only been observed at northern polar regions, showing an excess of particles instead of the expected depression. This indicates that the reconfiguration of the heliospheric magnetic field, caused by the reappearance of the northern polar coronal hole, starts dominating the modulation of galactic cosmic rays already at solar maximum. Key words: Interplanetary physics (cosmic rays; energetic particles); Space plasma physics (charged particle motion and acceleration)

  18. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a sample of 20 per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  19. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    International Nuclear Information System (INIS)

    Bittel, R.; Mancel, J.

    1968-10-01

    The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. That is why, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to find a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulas are considered in the same way, taking care to attach the greatest possible importance to local situations. (authors)

  20. Gardner's Two Children Problems and Variations: Puzzles with Conditional Probability and Sample Spaces

    Science.gov (United States)

    Taylor, Wendy; Stacey, Kaye

    2014-01-01

    This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…

  1. Multidrug Resistance Among New Tuberculosis Cases: Detecting Local Variation Through Lot Quality-assurance Sampling

    NARCIS (Netherlands)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background: Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper…

  2. Intraspecific variation in aerobic and anaerobic locomotion: gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata) do not exhibit a trade-off between maximum sustained swimming speed and minimum cost of transport

    Science.gov (United States)

    Svendsen, Jon C.; Tirsgaard, Bjørn; Cordero, Gerardo A.; Steffensen, John F.

    2015-01-01

    Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post-exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; Ucrit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (Usus) and minimum cost of transport (COTmin); and (4) variation in Usus correlates positively with optimum swimming speed (Uopt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O₂ kg⁻¹. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata, indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and…

  3. Stratified Sampling to Define Levels of Petrographic Variation in Coal Beds: Examples from Indonesia and New Zealand

    Directory of Open Access Journals (Sweden)

    Tim A. Moore

    2016-01-01

    DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is both time consuming and does not allow variation within sampling units to be assessed - an important measure in any study, whether it be for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both 'fine' and 'coarse' grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm) a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point counting can be conducted using something as simple as a string on a coal face, marked at intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.
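
    A small sketch of the statistics behind the point-count recommendation: the estimated vitrain fraction from points counted along a marked string is a binomial proportion, so its precision follows directly from the number of points. The counts below are invented for illustration.

```python
# Invented-count sketch: binomial precision of a macroscopic point count.
import numpy as np

hits, points = 34, 200                 # points landing on vitrain bands >1 mm
p = hits / points                      # estimated vitrain volume fraction
se = np.sqrt(p * (1 - p) / points)     # binomial standard error
print(f"vitrain fraction: {p:.2f} +/- {1.96 * se:.2f} (95% CI)")
```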

  4. Sources of pre-analytical variations in yield of DNA extracted from blood samples: analysis of 50,000 DNA samples in EPIC.

    Directory of Open Access Journals (Sweden)

    Elodie Caboux

    Full Text Available The European Prospective Investigation into Cancer and nutrition (EPIC is a long-term, multi-centric prospective study in Europe investigating the relationships between cancer and nutrition. This study has served as a basis for a number of Genome-Wide Association Studies (GWAS and other types of genetic analyses. Over a period of 5 years, 52,256 EPIC DNA samples have been extracted using an automated DNA extraction platform. Here we have evaluated the pre-analytical factors affecting DNA yield, including anthropometric, epidemiological and technical factors such as center of subject recruitment, age, gender, body-mass index, disease case or control status, tobacco consumption, number of aliquots of buffy coat used for DNA extraction, extraction machine or procedure, DNA quantification method, degree of haemolysis and variations in the timing of sample processing. We show that the largest significant variations in DNA yield were observed with degree of haemolysis and with center of subject recruitment. Age, gender, body-mass index, cancer case or control status and tobacco consumption also significantly impacted DNA yield. Feedback from laboratories which have analyzed DNA with different SNP genotyping technologies demonstrate that the vast majority of samples (approximately 88% performed adequately in different types of assays. To our knowledge this study is the largest to date to evaluate the sources of pre-analytical variations in DNA extracted from peripheral leucocytes. The results provide a strong evidence-based rationale for standardized recommendations on blood collection and processing protocols for large-scale genetic studies.

  5. Variation in elemental quantification by X-ray fluorescence analysis in crystalline materials when applying pressure in sample preparation

    International Nuclear Information System (INIS)

    Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A.

    2000-01-01

    In this work, using the X-ray diffraction and fluorescence techniques, the presence of elements in the known compound ZrSiO₄ was determined under different pressure conditions. In preparing the samples, different pressures from 1600 down to 350 kN/m² were applied, and apparent variations in the concentration of the Zr and Si elements were detected. (Author)

  6. Influence of common preanalytical variations on the metabolic profile of serum samples in biobanks

    International Nuclear Information System (INIS)

    Fliniaux, Ophélie; Gaillard, Gwenaelle; Lion, Antoine; Cailleu, Dominique; Mesnard, François; Betsou, Fotini

    2011-01-01

    A blood pre-centrifugation delay of 24 h at room temperature influenced the proton NMR spectroscopic profiles of human serum. A blood pre-centrifugation delay of 24 h at 4°C did not influence the spectroscopic profile as compared with 4 h delays at either room temperature or 4°C. Five or ten serum freeze–thaw cycles also influenced the proton NMR spectroscopic profiles. Certain common in vitro preanalytical variations occurring in biobanks may impact the metabolic profile of human serum.

  7. Influence of common preanalytical variations on the metabolic profile of serum samples in biobanks

    Energy Technology Data Exchange (ETDEWEB)

    Fliniaux, Ophelie [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Gaillard, Gwenaelle [Biobanque de Picardie (France); Lion, Antoine [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Cailleu, Dominique [Batiment Serres-Transfert, rue de Mai/rue Dallery, Plateforme Analytique (France); Mesnard, Francois, E-mail: francois.mesnard@u-picardie.fr [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Betsou, Fotini [Integrated Biobank of Luxembourg (Luxembourg)

    2011-12-15

    A blood pre-centrifugation delay of 24 h at room temperature influenced the proton NMR spectroscopic profiles of human serum. A blood pre-centrifugation delay of 24 h at 4 °C did not influence the spectroscopic profile as compared with 4 h delays at either room temperature or 4 °C. Five or ten serum freeze-thaw cycles also influenced the proton NMR spectroscopic profiles. Certain common in vitro preanalytical variations occurring in biobanks may impact the metabolic profile of human serum.

  8. Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.

    Science.gov (United States)

    Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P

    2014-01-01

    Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
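
    A hedged sketch of the kind of Monte Carlo evaluation described above, run on a synthetic field rather than the Detroit data: judging individual samples against the 400 ppm action level one by one maximizes hot-spot detection at the cost of false alarms, while compositing averages the samples first and so trades type I error for type II error.

```python
# Hedged sketch: individual vs composite sampling decisions on a synthetic
# lead field containing one hot spot above a 400 ppm action level.
import numpy as np

rng = np.random.default_rng(42)
field = rng.lognormal(mean=np.log(150.0), sigma=0.5, size=(30, 60))  # ppm background
field[2:6, 3:8] = 600.0            # embedded hot spot above the 400 ppm limit

def flag_probability(n_samples, composite, trials=5000, limit=400.0):
    """Monte Carlo probability that the plot is flagged as exceeding the limit."""
    flags = 0
    for _ in range(trials):
        x = field.ravel()[rng.integers(0, field.size, n_samples)]
        stat = x.mean() if composite else x.max()   # composite = average first
        flags += stat >= limit
    return flags / trials

print("individual:", flag_probability(5, composite=False))  # detects more, more false alarms
print("composite: ", flag_probability(5, composite=True))   # fewer false alarms, misses hot spot
```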

  9. The international Genome sample resource (IGSR): A worldwide collection of genome variation incorporating the 1000 Genomes Project data.

    Science.gov (United States)

    Clarke, Laura; Fairley, Susan; Zheng-Bradley, Xiangqun; Streeter, Ian; Perry, Emily; Lowy, Ernesto; Tassé, Anne-Marie; Flicek, Paul

    2017-01-04

    The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases the discoverability of our data, previously only browseable through our FTP site, by focusing on particular samples, populations or data sets of interest. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Some regional variations in dietary patterns in a random sample of British adults.

    Science.gov (United States)

    Whichelow, M J; Erzinclioglu, S W; Cox, B D

    1991-05-01

    Comparison was made of the reported frequency of consumption or choice of 30 food items by 8860 adults in the 11 standard regions of Great Britain, using log-linear analysis to allow for the age, sex, social class and smoking-habit variations between the regions. The South-East was taken as the base region against which the others were compared. The number of food items for which there were significant differences from the South-East was: Scotland 23, North 25, North-West and Yorkshire/Humberside 20, Wales 19, West Midlands 15, East Midlands 10, East Anglia 8, South-West 7 and Greater London 9. Overall the findings confirm a North/South trend in eating habits, even when demographic and smoking-habit variations are taken into account, with frequent consumption of many fruit and vegetable products being much less common, and of several high-fat foods (chips, processed meats and fried food) more common, in Scotland, Wales and the northern part of England. In most regions there was a significantly lower frequency of consumption of fresh fruit, fruit juice, 'brown' bread, pasta/rice, poultry, skimmed/semi-skimmed milk, light desserts and nuts, and a higher consumption of red meat, fish and fried food than in the South-East.

  11. Depression and Racial/Ethnic Variations within a Diverse Nontraditional College Sample

    Science.gov (United States)

    Hudson, Richard; Towey, James; Shinar, Ori

    2008-01-01

    The study's objective was to ascertain whether rates of depression were significantly higher for Dominican, Puerto Rican, South and Central American and Jamaican/Haitian students than for African American and White students. The sample consisted of 987 predominantly nontraditional college students. The depression rate for Dominican students was…

  12. Minimizing technical variation during sample preparation prior to label-free quantitative mass spectrometry.

    Science.gov (United States)

    Scheerlinck, E; Dhaenens, M; Van Soom, A; Peelman, L; De Sutter, P; Van Steendam, K; Deforce, D

    2015-12-01

    Sample preparation is the crucial starting point to obtain high-quality mass spectrometry data and can be divided into two main steps in a bottom-up proteomics approach: cell/tissue lysis with or without detergents and a(n) (in-solution) digest comprising denaturation, reduction, alkylation, and digesting of the proteins. Here, some important considerations, among others, are that the reagents used for sample preparation can inhibit the digestion enzyme (e.g., 0.1% sodium dodecyl sulfate [SDS] and 0.5 M guanidine HCl), give rise to ion suppression (e.g., polyethylene glycol [PEG]), be incompatible with liquid chromatography-tandem mass spectrometry (LC-MS/MS) (e.g., SDS), and can induce additional modifications (e.g., urea). Taken together, all of these irreproducible effects are gradually becoming a problem when label-free quantitation of the samples is envisioned such as during the increasingly popular high-definition mass spectrometry (HDMS(E)) and sequential window acquisition of all theoretical fragment ion spectra (SWATH) data-independent acquisition strategies. Here, we describe the detailed validation of a reproducible method with sufficient protein yield for sample preparation without any known LC-MS/MS interfering substances by using 1% sodium deoxycholate (SDC) during both cell lysis and in-solution digest. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Ethnic Variations of Pathways Linking Socioeconomic Status, Parenting, and Preacademic Skills in a Nationally Representative Sample

    Science.gov (United States)

    Iruka, Iheoma U.; Dotterer, Aryn M.; Pungello, Elizabeth P.

    2014-01-01

    Research Findings: Grounded in the investment model and informed by the integrative theory of the study of minority children, this study used the Early Childhood Longitudinal Study-Birth Cohort data set, a nationally representative sample of young children, to investigate whether the association between socioeconomic status (family income and…

  14. Seasonal and temporal variation in release of antibiotics in hospital wastewater: estimation using continuous and grab sampling.

    Science.gov (United States)

    Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J

    2013-01-01

    The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter, followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice, as grab sampling gives erroneous results, being indicative only of the quantities of antibiotics present in wastewater at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole, respectively, might be getting released into the environment.

  15. Crystallite size variation of TiO₂ samples depending on heat-treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO₂) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO₂ powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C, with a heating ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onward, two distinct phases coexisted: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)

  16. Attitudes to Gun Control in an American Twin Sample: Sex Differences in the Causes of Variation.

    Science.gov (United States)

    Eaves, Lindon J; Silberg, Judy L

    2017-10-01

    The genetic and social causes of individual differences in attitudes to gun control are estimated in a sample of senior male and female twin pairs in the United States. Genetic and environmental parameters were estimated by weighted least squares applied to polychoric correlations for monozygotic (MZ) and dizygotic (DZ) twins of both sexes. The analysis suggests twin similarity for attitudes to gun control in men is entirely genetic while that in women is purely social. Although the volunteer sample is small, the analysis illustrates how the well-tested concepts and methods of genetic epidemiology may be a fertile resource for deepening our scientific understanding of biological and social pathways that affect individual risk to gun violence.

  17. Seasonal variation in physical activity, sedentary behaviour and sleep in a sample of UK adults.

    Science.gov (United States)

    O'Connell, Sophie E; Griffiths, Paula L; Clemes, Stacy A

    2014-01-01

    Physical activity (PA), sedentary behaviour (SB), sleep and diet have all been associated with increased risk for chronic disease. Seasonality is often overlooked as a determinant of these behaviours in adults. Currently, no study has simultaneously monitored these behaviours in UK adults to assess seasonal variation. The present study investigated whether PA, SB, sleep and diet differed over season in UK adults. Forty-six adults (72% female; age = 41.7 ± 14.4 years, BMI = 24.9 ± 4.4 kg/m²) completed four 7-day monitoring periods, one during each season of the year. The ActiGraph GT1M was used to monitor PA and SB. Daily sleep diaries monitored time spent in bed (TIB) and total sleep time (TST). The European Prospective Investigation of Cancer (EPIC) food frequency questionnaire (FFQ) assessed diet. Repeated measures ANOVAs were used to identify seasonal differences in behaviours. Light-intensity PA was significantly higher in summer and spring (p < 0.05), whereas no significant seasonal differences were found for the other behaviours, including diet (p > 0.05). Findings support the concept that health promotion campaigns need to encourage year-round participation in light-intensity PA, whilst limiting SB, particularly during the winter months.

  18. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, have higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing algorithm implemented on an inexpensive microprocessor. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rated Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-rated systems. Other advantages include optimal sizing and system monitoring and control.
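
    A minimal perturb-and-observe (hill-climbing) sketch of the tracking loop described above. The toy photovoltaic curve, step size and loop count are illustrative stand-ins, not the converter or algorithm of the paper.

```python
# Minimal hill-climbing MPPT sketch: perturb the operating voltage, keep going
# while power rises, reverse direction when it falls.
import numpy as np

def panel_power(v):
    """Toy PV curve: current collapses as voltage nears open circuit (~21 V)."""
    i = 3.0 * (1.0 - np.exp((v - 21.0) / 1.5))
    return max(v * i, 0.0)

v, step, direction = 12.0, 0.2, +1
p_prev = panel_power(v)
for _ in range(200):                 # control-loop iterations
    v += direction * step            # perturb the operating point
    p = panel_power(v)
    if p < p_prev:                   # power dropped: reverse the climb
        direction = -direction
    p_prev = p
print(f"operating near MPP: V = {v:.2f} V, P = {p_prev:.2f} W")
```

    In steady state the operating point oscillates in a small band around the maximum power point, which is the usual behaviour (and known limitation) of perturb-and-observe trackers.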

  19. Seasonal atmospheric deposition variations of polychlorinated biphenyls (PCBs) and comparison of some deposition sampling techniques.

    Science.gov (United States)

    Birgül, Askın; Tasdemir, Yücel

    2011-03-01

    Ambient air and bulk deposition samples were collected between June 2008 and June 2009. Eighty-three polychlorinated biphenyl (PCB) congeners were targeted in the samples. The average gas- and particle-phase PCB concentrations were found to be 393 ± 278 and 70 ± 102 pg/m³, respectively, and 85% of the atmospheric PCBs were in the gas phase. Bulk deposition samples were collected using a sampler made of stainless steel. The average PCB bulk deposition flux was determined as 6,020 ± 4,350 pg/m² day. The seasonal bulk deposition fluxes were not statistically different from each other, but the summer flux had higher values. Flux values differed depending on the precipitation levels. The average flux value in the rainy periods was 7,480 ± 4,080 pg/m² day, while the average flux value in dry periods was 5,550 ± 4,420 pg/m² day. The obtained deposition values were lower than the values reported for urban and industrialized areas, yet close to those for rural sites. The reported deposition values were also influenced by the type of instruments used. The average dry deposition and total deposition velocity values calculated from the deposition and concentration values were found to be 0.23 ± 0.21 and 0.13 ± 0.13 cm/s, respectively.

  20. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  1. Control Charts for Processes with an Inherent Between-Sample Variation

    Directory of Open Access Journals (Sweden)

    Eva Jarošová

    2018-06-01

    A number of processes to which statistical control is applied are subject to various effects that cause random changes in the mean value. The removal of these fluctuations is either technologically impossible or economically disadvantageous under current conditions. The frequent occurrence of signals in the Shewhart chart due to these fluctuations is then undesirable, and therefore the conventional control limits need to be extended. Several approaches to the design of control charts with extended limits are presented in the paper and applied to data from a real production process. The methods assume samples of size greater than 1. The performance of the charts is examined using the operating characteristic and average run length. The study reveals that in many cases, reducing the risk of false alarms is insufficient.
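
    As a hedged sketch of the extended-limits idea on simulated data: estimate the between-sample variance component from the usual one-way random-effects decomposition and widen the x-bar limits accordingly, so that routine sample-to-sample drift does not trigger signals. The paper's specific designs may differ; this is only the textbook variance-components variant.

```python
# Sketch: x-bar chart limits extended by the between-sample variance component.
import numpy as np

rng = np.random.default_rng(3)
k, n = 40, 5                                  # 40 samples of size 5
mu_j = rng.normal(10.0, 0.3, k)               # random between-sample mean shifts
x = rng.normal(mu_j[:, None], 0.5, (k, n))    # within-sample noise around each shift

xbar = x.mean(axis=1)
ms_within = x.var(axis=1, ddof=1).mean()              # within-sample mean square
ms_between = n * xbar.var(ddof=1)                     # between-sample mean square
var_between = max((ms_between - ms_within) / n, 0.0)  # variance component estimate

grand = xbar.mean()
sigma_chart = np.sqrt(var_between + ms_within / n)    # width for extended limits
ucl, lcl = grand + 3 * sigma_chart, grand - 3 * sigma_chart
print(f"conventional 3-sigma half-width: {3 * np.sqrt(ms_within / n):.3f}")
print(f"extended limits: [{lcl:.3f}, {ucl:.3f}]")
```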

  2. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
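
    To illustrate the kind of computation involved, in its simplest possible case: under a constant-size coalescent the expected frequency spectrum is E[ξᵢ] = θ/i, and maximizing the Poisson likelihood over θ recovers a Watterson-type estimator in closed form. The sketch below covers only this toy case; the paper's method handles piecewise-exponential histories with exact gradients via automatic differentiation.

```python
# Toy sketch: Poisson MLE of theta from a simulated sample frequency spectrum.
import numpy as np

rng = np.random.default_rng(7)
n = 100                                   # haploid sample size
theta_true = 40.0
i = np.arange(1, n)                       # derived-allele counts 1..n-1
sfs = rng.poisson(theta_true / i)         # simulated frequency spectrum, E[xi_i] = theta/i

# log-likelihood: sum_i [ xi_i * log(theta/i) - theta/i ]
# setting d/dtheta = sum_i [ xi_i/theta - 1/i ] = 0 gives theta_hat = S / H_{n-1}
S = sfs.sum()                             # total segregating sites
harmonic = (1.0 / i).sum()
theta_hat = S / harmonic                  # Watterson-type estimator
print(f"theta_hat = {theta_hat:.1f} (true {theta_true})")
```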

  3. Unwanted sexual advances at work: variations by employment arrangement in a sample of working Australians.

    Science.gov (United States)

    Lamontagne, Anthony D; Smith, Peter M; Louie, Amber M; Quinlan, Michael; Shoveller, Jean; Ostry, Aleck S

    2009-04-01

    We tested the hypothesis that the risk of experiencing unwanted sexual advances at work (UWSA) is greater for precariously-employed workers in comparison to those in permanent or continuing employment. A cross-sectional population-based telephone survey was conducted in Victoria (66% response rate, N=1,101). Employment arrangements were analysed using eight differentiated categories, as well as a four-category collapsed measure to address small cell sizes. Self-report of unwanted sexual advances at work was modelled using multiple logistic regression in relation to employment arrangement, controlling for gender, age, and occupational skill level. Forty-seven respondents reported UWSA in our sample (4.3%), mainly among women (37 of 47). Risk of UWSA was higher for younger respondents, but did not vary significantly by occupational skill level or education. In comparison to Permanent Full-Time, three employment arrangements were strongly associated with UWSA after adjustment for age, gender, and occupational skill level: Casual Full-Time OR = 7.2 (95% Confidence Interval 1.7-30.2); Fixed-Term Contract OR = 11.4 (95% CI 3.4-38.8); and Own-Account Self-Employed OR = 3.8 (95% CI 1.2-11.7). In analyses of females only, the magnitude of these associations was further increased. Respondents employed in precarious arrangements were more likely to report being exposed to UWSA, even after adjustment for age and gender. Greater protections from UWSA are likely needed for precariously employed workers.

  4. Method of estimating maximum VOC concentration in void volume of vented waste drums using limited sampling data: Application in transuranic waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Connolly, M.J.

    1995-01-01

    A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model effectiveness in estimating VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste
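
    A minimal sketch of the steady-state idea: since the same VOC flow must pass every confinement layer in series, the layer resistances add, and the innermost concentration follows from the headspace concentration plus the generation rate times the summed resistances. This is an illustrative simplification, not the report's full material-balance model; all values are hypothetical:

    ```python
    # At steady state the same molar flow N passes every confinement layer, so a
    # layer with transport conductance k_i (lumped diffusion + permeation) drops
    # the concentration by N / k_i. Summing the drops from the innermost void
    # volume out to the drum headspace gives
    #   C_inner = C_headspace + N * sum(1 / k_i).
    def inner_concentration(c_headspace, source_rate, conductances):
        resistance = sum(1.0 / k for k in conductances)   # series resistances add
        return c_headspace + source_rate * resistance

    # Hypothetical values: headspace concentration, VOC generation rate, and the
    # conductances of three nested confinement layers are illustrative only.
    print(inner_concentration(c_headspace=50.0, source_rate=1e-3,
                              conductances=[2e-4, 5e-4, 1e-3]))
    ```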

  5. Comparative study of microfacies variation in two samples from the Chittenango member, Marcellus shale subgroup, western New York state, USA

    Energy Technology Data Exchange (ETDEWEB)

    Balulla, Shama, E-mail: shamamohammed77@outlook.com; Padmanabhan, E., E-mail: eswaran-padmanabhan@petronas.com.my [Department of Geoscience, Faculty of Geosciences and Petroleum Engineering, Universiti Teknologi PETRONAS, Tronoh (Malaysia); Over, Jeffrey, E-mail: over@geneseo.edu [Department of Geological Sciences, Geneseo, NY (United States)

    2015-07-22

    This study documents the significant lithologic variations that occur between two shale samples from the Chittenango member of the Marcellus shale formation from western New York State, in terms of mineralogical composition, type of lamination, pyrite occurrence and fossil content, using detailed thin-section description and field-emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDX). The samples are classified as laminated clayshale and fossiliferous carbonaceous shale. The most important detrital constituents of these shales are the clay minerals illite and chlorite, quartz, organic matter, carbonate minerals, and pyrite. The laminated clayshale has a lower amount of quartz and carbonate minerals than the fossiliferous carbonaceous shale, while it has a higher amount of clay minerals (chlorite and illite) and organic matter. FESEM analysis confirms the presence of chlorite and illite. The fossil content in the laminated clayshale is much lower than in the fossiliferous carbonaceous shale. This provides greater insight into variations in the depositional and environmental factors that influenced deposition. These results, combined with sufficient additional data, can help in designing horizontal wells and placing hydraulic fractures in shale gas exploration and production.

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
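
    A minimal sketch of the correntropy idea under a Gaussian kernel: maximizing correntropy exponentially down-weights samples with large residuals, which can be implemented as half-quadratic, iteratively reweighted ridge regression. This is an illustrative stand-in for the paper's alternating optimization, with hypothetical data:

    ```python
    import numpy as np

    def correntropy_fit(X, y, sigma=1.0, lam=0.1, n_iter=50):
        """Linear predictor trained under a correntropy-style objective.

        Maximizing Gaussian correntropy between predictions and labels is
        equivalent to iteratively reweighted least squares in which samples with
        large residuals are exponentially down-weighted, so noisy labels
        contribute little. lam is an L2 (ridge) regularizer on w.
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            r = X @ w - y
            weights = np.exp(-r**2 / (2 * sigma**2))     # half-quadratic weights
            WX = X * weights[:, None]
            w = np.linalg.solve(X.T @ WX + lam * np.eye(d), WX.T @ y)
        return w

    # Toy data with label noise: flip some targets and check robustness.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5]))
    y[rng.choice(200, 20, replace=False)] *= -1          # 10% noisy labels
    w = correntropy_fit(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
    ```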

  7. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  8. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  9. Effect of electrical stimulation and cooking temperature on the within-sample variation of cooking loss and shear force of lamb.

    Science.gov (United States)

    Lewis, P K; Babiker, S A

    1983-01-01

    Electrical stimulation decreased the shear force and increased the cooking loss in seven paired lamb Longissimus dorsi (LD) muscles. This treatment did not have any effect on the within-sample variation. Cooking in 55°, 65° and 75°C water baths for 90 min caused a linear increase in the cooking loss and shear force. There was no stimulation-cooking temperature interaction observed. Cooking temperature also had no effect on the within-sample variation. A possible explanation as to why electrical stimulation did not affect the within-sample variation is given. Copyright © 1983. Published by Elsevier Ltd.

  10. Predicting location-specific extreme coastal floods in the future climate by introducing a probabilistic method to calculate maximum elevation of the continuous water mass caused by a combination of water level variations and wind waves

    Science.gov (United States)

    Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu

    2017-04-01

    Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the behaviour of the joint effect of sea level variations and wind waves is one way to make flooding hazard analysis more comprehensive, and may at first seem like a straightforward task to solve. Nevertheless, challenges and limitations such as the availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, as well as assumptions to be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for the waves in coastal flooding hazard analysis more accurately than the common approach of adding a separate fixed wave action height on top of sea-level-based flood risk estimates. As a result of our new method, we obtain maximum elevation heights, with different return periods, of the continuous water mass caused by a combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we get information on how the different wave height conditions and the shape of the wave height distribution influence the joint results. Our method presented here can be used as an advanced tool to minimize over- and
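
    A minimal sketch of the distribution-combination step, assuming (for illustration only) that sea level and wave run-up are independent, so the density of their sum is a discrete convolution; the toy densities below stand in for the observation-based distributions:

    ```python
    import numpy as np

    # Discretized densities on a common height grid (toy shapes standing in for
    # the observation-based distributions): p_sea for sea level, p_wave for run-up.
    dh = 0.01                                   # grid step [m]
    h = np.arange(0, 3, dh)
    p_sea = np.exp(-((h - 0.8) / 0.3) ** 2)
    p_sea /= p_sea.sum() * dh                   # normalize to a density
    p_wave = np.exp(-h / 0.4)
    p_wave /= p_wave.sum() * dh

    # Assuming independence, the density of sea level + run-up is a convolution.
    p_total = np.convolve(p_sea, p_wave) * dh
    h_total = np.arange(len(p_total)) * dh

    exceedance = 1 - np.cumsum(p_total) * dh    # P(maximum elevation > h)
    idx = np.searchsorted(-exceedance, -0.01)   # first height with P <= 0.01
    print(f"level exceeded with p = 0.01: {h_total[idx]:.2f} m")
    ```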

  11. EARLY HEAD START FAMILIES' EXPERIENCES WITH STRESS: UNDERSTANDING VARIATIONS WITHIN A HIGH-RISK, LOW-INCOME SAMPLE.

    Science.gov (United States)

    Hustedt, Jason T; Vu, Jennifer A; Bargreen, Kaitlin N; Hallam, Rena A; Han, Myae

    2017-09-01

    The federal Early Head Start program provides a relevant context to examine families' experiences with stress since participants qualify on the basis of poverty and risk. Building on previous research that has shown variations in demographic and economic risks even among qualifying families, we examined possible variations in families' perceptions of stress. Family, parent, and child data were collected to measure stressors and risk across a variety of domains in families' everyday lives, primarily from self-report measures, but also including assay results from child cortisol samples. A cluster analysis was employed to examine potential differences among groups of Early Head Start families. Results showed that there were three distinct subgroups of families, with some families perceiving that they experienced very high levels of stress while others perceived much lower levels of stress despite also experiencing poverty and heightened risk. These findings have important implications in that they provide an initial step toward distinguishing differences in low-income families' experiences with stress, thereby informing interventions focused on promoting responsive caregiving as a possible mechanism to buffer the effects of family and social stressors on young children. © 2017 Michigan Association for Infant Mental Health.

  12. Task 08/41, Low temperature loop at the RA reactor, Review IV - Maximum temperature values in the samples without forced cooling

    Energy Technology Data Exchange (ETDEWEB)

    Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1961-12-15

    The quantity of heat generated in the sample was calculated in Review III. In the stationary regime, this heat is transferred through the air layer between the sample and the wall of the channel to the heavy water or graphite, and a certain maximum temperature t{sub 0} is reached in the sample. The objective of this review is the determination of this temperature.
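
    A minimal illustrative estimate in the spirit of the review, treating the air gap as a conducting annulus and neglecting radiation; all numbers are hypothetical, not the review's values:

    ```python
    import math

    # Steady-state conduction through the annular air gap: heat q generated in a
    # cylindrical sample of length L is conducted to the channel wall, so the
    # sample surface settles at
    #   t0 = t_wall + q * ln(r_wall / r_sample) / (2 * pi * k_air * L).
    q = 15.0          # heat generated in the sample [W] (illustrative)
    k_air = 0.026     # thermal conductivity of air [W/m K]
    L = 0.20          # sample length [m]
    r_sample, r_wall = 0.010, 0.014   # radii [m]
    t_wall = 60.0     # channel wall temperature [deg C]

    t0 = t_wall + q * math.log(r_wall / r_sample) / (2 * math.pi * k_air * L)
    print(f"maximum sample temperature ~ {t0:.0f} deg C")
    ```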

  13. Daily variations of delta 18O and delta D in daily samplings of air water vapour and rain water in the Amazon Basin

    International Nuclear Information System (INIS)

    Matsui, E.; Salati, E.; Ribeiro, M.N.G.; Tancredi, A.C.F.N.S.; Reis, C.M. dos

    1984-01-01

    The movement of rain water in the soil from 0 to 120 cm depth is studied using weekly variations of delta 18 O. A study of the delta D variability in water vapour and rain water samples during precipitation was also done, the samples being collected at 3-minute intervals from the beginning to the end of precipitation. (M.A.C.) [pt]

  14. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
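
    For the MP side, the classical small-parsimony scoring step (for a fixed tree) is Fitch's algorithm; a minimal sketch of that scoring step, not the paper's Steiner-tree machinery:

    ```python
    # Fitch's algorithm for small parsimony on a fixed binary tree: counts the
    # minimum number of character changes needed to explain the leaf states.
    def fitch(tree, leaf_states):
        """tree: nested tuples, e.g. (('A','B'),('C','D')); leaf_states: leaf -> state."""
        changes = 0
        def post(node):
            nonlocal changes
            if isinstance(node, str):                 # leaf
                return {leaf_states[node]}
            left, right = (post(child) for child in node)
            if left & right:
                return left & right                   # intersection: no change
            changes += 1                              # union: one substitution
            return left | right
        post(tree)
        return changes

    print(fitch((('A', 'B'), ('C', 'D')),
                {'A': 'G', 'B': 'G', 'C': 'T', 'D': 'G'}))   # -> 1
    ```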

  15. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
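
    A minimal sketch of the density-to-salinity step, assuming the TEOS-10 equation of state as implemented in the Python gsw package (gsw.SA_from_rho); the numbers are illustrative, and porewater composition anomalies would still require the ion-based corrections described above:

    ```python
    import gsw  # TEOS-10 Gibbs SeaWater toolbox (pip install gsw)

    # Back out Absolute Salinity from a high-precision density measurement via
    # the TEOS-10 equation of state. Values are illustrative only.
    rho = 1027.45   # measured density [kg/m3]
    CT = 2.0        # Conservative Temperature [deg C]
    p = 0.0         # sea pressure [dbar]

    SA = gsw.SA_from_rho(rho, CT, p)
    print(f"Absolute Salinity: {SA:.3f} g/kg")
    ```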

  16. Spatial and seasonal variations of pesticide contamination in agricultural soils and crops sample from an intensive horticulture area of Hohhot, North-West China.

    Science.gov (United States)

    Zhang, Fujin; He, Jiang; Yao, Yiping; Hou, Dekun; Jiang, Cai; Zhang, Xinxin; Di, Caixia; Otgonbayar, Khureldavaa

    2013-08-01

    The spatial variability and temporal trend in concentrations of the organochlorine pesticides (OCPs), hexachlorocyclohexane (HCH) and dichlorodiphenyltrichloroethane (DDT), in soils and agricultural crops were investigated in an intensive horticulture area in Hohhot, North-West China, from 2008 to 2011. The most frequently found and abundant pesticides were DDT and its metabolites (p,p'-DDE, p,p'-DDT, o,p'-DDT and p,p'-DDD). Total DDT concentrations ranged from ND (not detectable) to 507.41 ng/g and were higher than the total HCH concentrations, measured in the range of 4.84-281.44 ng/g. There were significantly positive correlations between the ∑DDT and ∑HCH concentrations (r2>0.74) in soils, but no significant correlation was found between the concentrations of OCPs in soils and clay content, while a relatively strong correlation was found between total OCP concentrations and total organic carbon (TOC). β-HCH was the main isomer of HCHs and was detected in all samples; the maximum proportion of β-HCH relative to ∑HCHs (mean value 54%) was found, suggesting its persistence. The α/γ-HCH ratio was between 0.89 and 5.39, which signified the combined influence of technical HCHs and lindane. Low p,p'-DDE/p,p'-DDT ratios in N1, N3 and N9 were found, reflecting fresh input of DDTs, while the relatively high o,p'-DDT/p,p'-DDT ratios indicated the agricultural application of dicofol. Ratios of DDT/(DDE+DDD) in soils do not indicate recent inputs of DDT into the Hohhot farmland soil environment. Seasonal variations of OCPs featured higher concentrations in autumn and lower concentrations in spring, likely associated with their temperature-driven re-volatilization and the application of dicofol in late spring.
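
    A minimal sketch of the diagnostic-ratio reasoning used above; the thresholds are commonly quoted literature ranges, given here for illustration only, and the inputs are hypothetical concentrations:

    ```python
    # Source-diagnostic ratios of the kind used above. Thresholds are common
    # literature guides (technical HCH has alpha/gamma ~ 4-7; lindane is mostly
    # gamma-HCH), quoted for illustration only.
    def hch_source(alpha, gamma):
        r = alpha / gamma
        if r < 1:
            return "lindane-dominated input"
        if r <= 7:
            return "mixed technical HCH / lindane"
        return "aged technical HCH"

    def ddt_input(dde, ddd, ddt):
        # (DDE + DDD) / DDT > 1 suggests degraded, historical DDT.
        return "historical DDT" if (dde + ddd) / ddt > 1 else "possible fresh DDT"

    print(hch_source(alpha=12.0, gamma=4.5))          # hypothetical values, ng/g
    print(ddt_input(dde=40.0, ddd=12.0, ddt=18.0))
    ```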

  17. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    2011-01-01

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the jump size of the jumps in the price and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence of jumps.
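
    A minimal sketch of two of the realised variation measures discussed: realised variance (jump-sensitive) and bipower variation (jump-robust), computed on simulated intraday returns:

    ```python
    import numpy as np

    def realised_measures(returns):
        """Realised variance and bipower variation from intraday returns.

        RV converges to integrated variance plus the sum of squared jumps,
        whereas bipower variation (scaled products of adjacent absolute returns)
        is robust to jumps, so RV - BV is a rough jump proxy.
        """
        rv = np.sum(returns ** 2)
        bv = (np.pi / 2) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))
        return rv, bv

    rng = np.random.default_rng(0)
    r = rng.normal(0, 0.01, 390)        # one trading day of 1-minute returns
    r[200] += 0.05                      # add a single price jump
    rv, bv = realised_measures(r)
    print(f"RV = {rv:.5f}, BV = {bv:.5f}, jump part ~ {rv - bv:.5f}")
    ```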

  18. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  19. A simple method of correcting for variation of sample thickness in the determination of the activity of environmental samples by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a well established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
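
    One standard way to implement such a thickness correction (a sketch of the general idea, not necessarily the author's exact procedure) scales the calibration efficiency by the ratio of slab self-absorption factors; the attenuation coefficient and efficiencies below are illustrative:

    ```python
    import numpy as np

    # For a homogeneous slab-like sample counted face-on, a first-order
    # self-absorption factor at a given gamma energy is
    #   f(mu, t) = (1 - exp(-mu * t)) / (mu * t),
    # so the efficiency for thickness t relative to calibration thickness t_cal
    # can be corrected as eff(t) = eff(t_cal) * f(mu, t) / f(mu, t_cal).
    def self_absorption(mu, t):
        x = mu * t
        return (1.0 - np.exp(-x)) / x

    mu = 0.12        # linear attenuation coefficient [1/cm] (illustrative)
    eff_cal = 0.031  # full-energy-peak efficiency for the 4 cm standard
    t_cal, t = 4.0, 1.5
    eff = eff_cal * self_absorption(mu, t) / self_absorption(mu, t_cal)
    print(f"corrected efficiency for a {t} cm sample: {eff:.4f}")
    ```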

  20. 14C measurement: effect of variations in sample preparation and storage on the counting efficiency for 14C using a carbo-sorb/permafluor E+ liquid scintillation cocktail

    International Nuclear Information System (INIS)

    Kramer, S.J.; Milton, G.M.; Repta, C.J.W.

    1995-06-01

    The effect of variations in sample preparation and storage on the counting efficiency for 14 C using a Carbo-Sorb/PermafluorE+ liquid scintillation cocktail has been studied, and optimum conditions are recommended. (author). 2 refs., 2 tabs., 4 figs

  1. Association of variation in Fc gamma receptor 3B gene copy number with rheumatoid arthritis in Caucasian samples

    NARCIS (Netherlands)

    McKinney, Cushla; Fanciulli, Manuela; Merriman, Marilyn E.; Phipps-Green, Amanda; Alizadeh, Behrooz Z.; Koeleman, Bobby P. C.; Dalbeth, Nicola; Gow, Peter J.; Harrison, Andrew A.; Highton, John; Jones, Peter B.; Stamp, Lisa K.; Steer, Sophia; Barrera, Pilar; Coenen, Marieke J. H.; Franke, Barbara; van Riel, Piet L. C. M.; Vyse, Tim J.; Aitman, Tim J.; Radstake, Timothy R. D. J.; Merriman, Tony R.

    2010-01-01

    Objective There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fc gamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of

  2. Association of variation in Fcgamma receptor 3B gene copy number with rheumatoid arthritis in Caucasian samples.

    NARCIS (Netherlands)

    McKinney, C.; Fanciulli, M.; Merriman, M.E.; Phipps-Green, A.; Alizadeh, B.Z.; Koeleman, B.P.; Dalbeth, N.; Gow, P.J.; Harrison, A.A.; Highton, J.; Jones, P.B.; Stamp, L.K.; Steer, S.; Barrera, P.; Coenen, M.J.H.; Franke, B.; Riel, P.L.C.M. van; Vyse, T.J.; Aitman, T.J.; Radstake, T.R.D.J.; Merriman, T.R.

    2010-01-01

    OBJECTIVE: There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fcgamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of

  3. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, circuit is simpler, less bulky, consumes less power, and avoids the cost and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  4. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  5. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]

  6. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  7. Variation in the human lymphocyte sister chromatid exchange frequency as a function of time: results of daily and twice-weekly sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, J.D.; Christensen, M.L.; Strout, C.L.; McGee, K.A.; Carrano, A.V.

    1987-01-01

    The variation in lymphocyte sister chromatid exchange (SCE) frequency was investigated in healthy nonsmokers who were not taking any medication. Two separate studies were undertaken. In the first, blood was drawn from four women twice a week for 8 weeks. These donors recorded the onset and termination of menstruation and times of illness. In the second study, blood was obtained from two women and two men for 5 consecutive days on two separate occasions initiated 14 days apart. Analysis of the mean SCE frequencies in each study indicated that significant temporal variation occurred in each donor, and that more variation occurred in the longer study. Some of the variation was found to be associated with the menstrual cycle. In the daily study, most of the variation appeared to be random, but occasional day-to-day changes occurred that were greater than those expected by chance. To determine how well a single SCE sample estimated the pooled mean for each donor in each study, the authors calculated the number of samples that encompassed that donor's pooled mean within 1 or more standard errors. For both studies, about 75% of the samples encompassed the pooled mean within 2 standard errors. An analysis of high-frequency cells (HFCs) was also undertaken. The results for each study indicate that the proportion of HFCs, compared using Fisher's Exact test, is significantly more constant than the means, which were compared using the t-test. These results coupled with our previous work suggest that HFC analysis may be the method of choice when analyzing data from human population studies.
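
    A minimal sketch of the HFC comparison described above, using Fisher's Exact test on hypothetical (HFC, non-HFC) counts from two samples:

    ```python
    from scipy.stats import fisher_exact

    # Comparing the proportion of high-frequency cells (HFCs) between two samples
    # from the same donor, as in the HFC analysis above. Counts are hypothetical:
    # (HFC, non-HFC) cells scored among 50 metaphases per sample.
    table = [[6, 44],    # sample 1
             [4, 46]]    # sample 2
    odds_ratio, p = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")  # p > 0.05 -> proportions consistent
    ```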

  8. Sample collections from healthy volunteers for biological variation estimates' update: a new project undertaken by the Working Group on Biological Variation established by the European Federation of Clinical Chemistry and Laboratory Medicine.

    Science.gov (United States)

    Carobene, Anna; Strollo, Marta; Jonker, Niels; Barla, Gerhard; Bartlett, William A; Sandberg, Sverre; Sylte, Marit Sverresdotter; Røraas, Thomas; Sølvik, Una Ørvim; Fernandez-Calle, Pilar; Díaz-Garzón, Jorge; Tosato, Francesca; Plebani, Mario; Coşkun, Abdurrahman; Serteser, Mustafa; Unsal, Ibrahim; Ceriotti, Ferruccio

    2016-10-01

    Biological variation (BV) data have many fundamental applications in laboratory medicine. At the 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) the reliability and limitations of current BV data were discussed. The EFLM Working Group on Biological Variation is working to increase the quality of BV data by developing a European project to establish a biobank of samples from healthy subjects to be used to produce high quality BV data. The project involved six European laboratories (Milan, Italy; Bergen, Norway; Madrid, Spain; Padua, Italy; Istanbul, Turkey; Assen, The Netherlands). Blood samples were collected from 97 volunteers (44 men, aged 20-60 years; 43 women, aged 20-50 years; 10 women, aged 55-69 years). Initial subject inclusion required that participants completed an enrolment questionnaire to verify their health status. The volunteers provided blood specimens once per week for 10 weeks. A short questionnaire was completed and some laboratory tests were performed at each sampling, which consisted of blood collected under controlled conditions to provide serum, K2EDTA-plasma and citrated-plasma samples. Samples from six of the 97 enrolled subjects were discarded as a consequence of abnormal laboratory measurements. A biobank of 18,000 aliquots was established, consisting of 120 aliquots of serum, 40 of EDTA-plasma, and 40 of citrated-plasma from each subject. The samples were stored at -80 °C. A biobank of well-characterised samples collected under controlled conditions has been established, delivering a European resource to enable production of contemporary BV data.

  9. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder. Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80.

  10. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying size was investigated. Square samples, with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, taking into account impact energy and sample size.
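
    A minimal sketch of the Fickian model invoked above: the standard plane-sheet series solution of Fick's second law for relative moisture uptake, with illustrative parameter values:

    ```python
    import numpy as np

    def fickian_uptake(t, D, h, n_terms=50):
        """Relative moisture uptake M(t)/M_inf for a plate of thickness h.

        Standard series solution of Fick's second law for one-dimensional
        diffusion into a plane sheet; D is the apparent diffusion coefficient.
        """
        n = np.arange(n_terms)
        terms = np.exp(-D * (2 * n + 1) ** 2 * np.pi ** 2 * t[:, None] / h ** 2) \
                / (2 * n + 1) ** 2
        return 1.0 - (8.0 / np.pi ** 2) * terms.sum(axis=1)

    t = np.linspace(0, 2e7, 5)                  # exposure time [s]
    print(fickian_uptake(t, D=1e-13, h=2e-3))   # illustrative D [m2/s], 2 mm plate
    ```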

  11. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
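
    A minimal sketch of the sample size logic, using the commonly quoted CRXO design effect 1 + (m - 1)*WPC - m*BPC, which is consistent with the special cases described above (BPC = 0 recovers the parallel cluster trial inflation); readers should verify against the tutorial's formulae for their own design. Inputs are illustrative:

    ```python
    import math
    from scipy.stats import norm

    def n_crxo(delta, sd, m, wpc, bpc, alpha=0.05, power=0.8):
        """Total subjects per intervention for a 2-period cross-sectional CRXO.

        Takes the usual individually randomised sample size and inflates it by
        the CRXO design effect 1 + (m - 1) * WPC - m * BPC, where m is the
        number of subjects per cluster-period.
        """
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_ind = 2 * (z * sd / delta) ** 2        # per arm, individual randomisation
        de = 1 + (m - 1) * wpc - m * bpc         # design effect
        return math.ceil(n_ind * de)

    # Illustrative ICU example: detect a 2-day difference in length of stay.
    print(n_crxo(delta=2.0, sd=10.0, m=100, wpc=0.05, bpc=0.02))
    ```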

  12. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    The 137 Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many 137 Cs measurements are required and hence estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of 137 Cs-derived net soil redistribution to apply their often limited resources across scales of variation (field, catchment, region etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954–2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, show that for the same resources, estimates from many fields could have been provided and would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate 137 Cs-derived net soil redistribution across scales of variation. - Highlights: • The 137 Cs technique estimates net time-integrated soil redistribution by all processes. • It is time-consuming and dominated by studies of individual fields. • We use limited resources to estimate soil
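
    A minimal sketch of the design-based stratified estimator at the core of such a hybrid design (strata weights and values are hypothetical, and the paper's composite-sample bookkeeping is omitted):

    ```python
    import numpy as np

    def stratified_mean(strata):
        """Design-based estimate of the spatial mean and its sampling variance.

        strata: list of (weight, samples) pairs, where weight is the stratum's
        areal proportion and samples are the 137Cs-derived redistribution values
        measured within it. Standard stratified estimator:
          mean = sum W_h * ybar_h,   var = sum W_h^2 * s_h^2 / n_h.
        """
        mean = sum(w * np.mean(y) for w, y in strata)
        var = sum(w**2 * np.var(y, ddof=1) / len(y) for w, y in strata)
        return mean, var

    # Hypothetical redistribution rates (t/ha/yr) in three landscape strata.
    strata = [(0.5, [2.1, 1.8, 2.5]),
              (0.3, [-0.6, -1.1, -0.2]),
              (0.2, [0.3, 0.1, 0.5])]
    m, v = stratified_mean(strata)
    print(f"mean = {m:.2f} +/- {np.sqrt(v):.2f}")
    ```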

  13. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  14. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α'X and β'Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation coefficient.

  15. Diurnal Variation and Spatial Distribution Effects on Sulfur Speciation in Aerosol Samples as Assessed by X-Ray Absorption Near-Edge Structure (XANES)

    Directory of Open Access Journals (Sweden)

    Siwatt Pongpiachan

    2012-01-01

    This paper focuses on providing new results relating to the impacts of diurnal variation, vertical distribution, and emission source on the sulfur K-edge XANES spectrum of aerosol samples. All aerosol samples used in the diurnal variation experiment were preserved using anoxic preservation stainless cylinders (APSCs) and pressure-controlled glove boxes (PCGBs), which were specially designed to prevent oxidation of the sulfur states in PM10. Further investigation of sulfur K-edge XANES spectra revealed that PM10 samples were dominated by S(VI), even when preserved in anoxic conditions. The "Emission source effect" on the sulfur oxidation state of PM10 was examined by comparing sulfur K-edge XANES spectra collected from various emission sources in southern Thailand, while "Vertical distribution effects" on the sulfur oxidation state of PM10 were examined with samples collected at three different altitudes from rooftops of the highest buildings in three major cities in Thailand. The analytical results demonstrate that neither "Emission source" nor "Vertical distribution" appreciably contributes to the characteristic fingerprint of the sulfur K-edge XANES spectrum in PM10.

  16. Long-term sampling of CO2 from waste-to-energy plants: 14C determination methodology, data variation and uncertainty

    DEFF Research Database (Denmark)

    Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg

    2014-01-01

    A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO2 from thermal waste-to-energy processes. Based on long-term sampling of CO2 and 14C determination, plant-specific emission factors can be determined more accurately, and the annual...... emission of fossil CO2 from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO2 emitted were observed...... was ± 4.0 pmC (95 % confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO2 emissions with relatively low uncertainty....

  17. Seasonal variation of 222Rn in seawater samples from Ubatuba embayments, SP, Brazil, for the assessment of submarine groundwater discharge

    International Nuclear Information System (INIS)

    Lopes, Patricia da Costa

    2005-01-01

    We describe here an application of excess 222 Rn to estimate SGD in a series of small embayments of Ubatuba, Sao Paulo State, Brazil, covering latitudes between 23 deg 26'S and 23 deg 46'S and longitudes between 45 deg 02'W and 45 deg 11'W. Excess 222 Rn inventories obtained in 24 vertical profiles established from March/03 to July/05 varied from 345 ± 24 to 18,700 ± 4,900 dpm/m2. The highest inventories of excess 222 Rn were observed in both the Flamengo and Fortaleza embayments during summer campaigns (rainy season). The estimated total fluxes required to support the measured inventories varied from 62 ± 4 to 3,385 ± 880 dpm/m2/d. Considering these results, the SGD advective rates necessary to balance the fluxes calculated in the Ubatuba embayments ranged from 0.1 x 10-1 to 1.9 cm/d. Taking into account all SGD fluxes obtained, the percentage variability was 89% (seasonal variation over the 3-year period, n = 24 measurements). However, if we consider each year of study separately, the respective percentage variabilities are 72% in 2003 (n = 10 measurements), 127% in 2004 (n = 6 measurements) and 97% in 2005 (n = 8 measurements). (author)

  18. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  19. Genesis Solar Wind Sample 61422: Experiment in Variation of Sequence of Cleaning Solvent for Removing Carbon-Bearing Contamination

    Science.gov (United States)

    Allton, J. H.; Kuhlman, K. R.; Allums, K. K.; Gonzalez, C. P.; Jurewicz, A. J. G.; Burnett, D. S.; Woolum, D. S.

    2015-01-01

    The recovered Genesis collector fragments are heavily contaminated with crash-derived particulate debris. However, megasonic treatment with ultra-pure water (UPW; resistivity > 18 megohm-cm) removes essentially all particulate contamination greater than 5 microns in size [e.g. 1] and is thus of considerable importance. Optical imaging of Si sample 60336 revealed the presence of a large C-rich particle after UPW treatment that was not present prior to UPW. Such handling contamination is occasionally observed, but such contaminants are normally easily removed by UPW cleaning. The 60336 particle was exceptional in that, surprisingly, it was not removed by additional UPW or by hot xylene or by aqua regia treatment. It was eventually removed by treatment with NH3-H2O2. Our best interpretation of the origin of the 60336 particle was that it was adhesive from the Post-It notes used to stabilize samples for transport from Utah after the hard landing. It is possible that the insoluble nature of the 60336 particle comes from interaction of the Post-It adhesive with UPW. An occasional bit of Post-It adhesive is not a major concern, but C particulate contamination also occurs from the heat shield of the Sample Return Capsule (SRC) and this is mixed with inorganic contamination from the SRC and the Utah landing site. If UPW exposure also produced an insoluble residue from SRC C, this would be a major problem in chemical treatments to produce clean surfaces for analysis. This paper reports experiments to test whether particulate contamination was removed more easily if UPW treatment was not used.

  20. Brief Communication: Intertooth and Intrafacet Dental Microwear Variation in an Archaeological Sample of Modern Humans From the Jordan Valley

    OpenAIRE

    Mahoney, Patrick

    2006-01-01

    Dental microwear was recorded in a Bronze-Iron Age (3570–3000 BP) sample of modern humans recovered from Tell es-Sa'idiyeh in the Jordan Valley. Microwear patterns were compared between mandibular molars, and between the upper and lower part of facet 9. The comparison revealed a greater frequency of pits and shorter scratches on the second and third molars, compared to the first. Pit frequency also increased on the lower part of the facet on the first molar, compared to the upper part. These ...

  1. Regional, geographic, and racial/ethnic variation in glycemic control in a national sample of veterans with diabetes.

    Science.gov (United States)

    Egede, Leonard E; Gebregziabher, Mulugeta; Hunt, Kelly J; Axon, Robert N; Echols, Carrae; Gilbert, Gregory E; Mauldin, Patrick D

    2011-04-01

    We performed a retrospective analysis of a national cohort of veterans with diabetes to better understand regional, geographic, and racial/ethnic variation in diabetes control as measured by HbA(1c). A retrospective cohort study was conducted in a national cohort of 690,968 veterans with diabetes receiving prescriptions for insulin or oral hypoglycemic agents in 2002 that were followed over a 5-year period. The main outcome measures were HbA(1c) levels (as continuous and dichotomized at ≥8.0%). Relative to non-Hispanic whites (NHWs), HbA(1c) levels remained 0.25% higher in non-Hispanic blacks (NHBs), 0.31% higher in Hispanics, and 0.14% higher in individuals with other/unknown/missing racial/ethnic group after controlling for demographics, type of medication used, medication adherence, and comorbidities. Small but statistically significant geographic differences were also noted with HbA(1c) being lowest in the South and highest in the Mid-Atlantic. Rural/urban location of residence was not associated with HbA(1c) levels. For the dichotomous outcome poor control, results were similar with race/ethnic group being strongly associated with poor control (i.e., odds ratios of 1.33 [95% CI 1.31-1.35] and 1.57 [1.54-1.61] for NHBs and Hispanics vs. NHWs, respectively), geographic region being weakly associated with poor control, and rural/urban residence being negligibly associated with poor control. In a national longitudinal cohort of veterans with diabetes, we found racial/ethnic disparities in HbA(1c) levels and HbA(1c) control; however, these disparities were largely, but not completely, explained by adjustment for demographic characteristics, medication adherence, type of medication used to treat diabetes, and comorbidities.

  2. Effects of the variation of samples geometry on radionuclide calibrator response for radiopharmaceuticals used in nuclear medicine

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, Antonio Morais de Sa; Fragoso, Maria Conceicao de Farias; Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-07-01

    In nuclear medicine practice, accurate knowledge of the activity of the radiopharmaceuticals to be administered to subjects is an important factor in ensuring the success of diagnosis or therapy. The instrument used for this purpose is the radionuclide calibrator. The radiopharmaceuticals are usually contained in glass vials or syringes. However, the radionuclide calibrator's response is sensitive to the measurement geometry. In addition, the calibration factors supplied by manufacturers are valid only for a single sample geometry. To minimize the uncertainty associated with activity measurements, it is important to use the appropriate correction factors for each radionuclide in the specific geometry in which the measurement is to be made. The aims of this work were to evaluate the behavior of radionuclide calibrators when varying the geometry of radioactive sources and to determine experimentally the correction factors for different volumes and container types commonly used in nuclear medicine practice. The measurements were made in two ionization chambers of different manufacturers (Capintec and Biodex), using four radionuclides with different photon energies: {sup 18}F, {sup 99m}Tc, {sup 131}I and {sup 201}Tl. The results confirm the significant dependence of the radionuclide calibrator reading on the sample geometry, showing the need to use correction factors in order to minimize the errors which affect the activity measurements. (author)

  3. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  4. Inter-laboratory variation in the chemical analysis of acidic forest soil reference samples from eastern North America

    Science.gov (United States)

    Ross, Donald S.; Bailey, Scott W; Briggs, Russell D; Curry, Johanna; Fernandez, Ivan J.; Fredriksen, Guinevere; Goodale, Christine L.; Hazlett, Paul W.; Heine, Paul R; Johnson, Chris E.; Larson, John T; Lawrence, Gregory B.; Kolka, Randy K; Ouimet, Rock; Pare, D; Richter, Daniel D.; Shirmer, Charles D; Warby, Richard A.F.

    2015-01-01

    Long-term forest soil monitoring and research often requires a comparison of laboratory data generated at different times and in different laboratories. Quantifying the uncertainty associated with these analyses is necessary to assess temporal changes in soil properties. Forest soil chemical properties, and methods to measure these properties, often differ from agronomic and horticultural soils. Soil proficiency programs do not generally include forest soil samples that are highly acidic, high in extractable Al, low in extractable Ca and often high in carbon. To determine the uncertainty associated with specific analytical methods for forest soils, we collected and distributed samples from two soil horizons (Oa and Bs) to 15 laboratories in the eastern United States and Canada. Soil properties measured included total organic carbon and nitrogen, pH and exchangeable cations. Overall, results were consistent despite some differences in methodology. We calculated the median absolute deviation (MAD) for each measurement and considered the acceptable range to be the median ± 2.5 × MAD. Variability among laboratories was usually as low as the typical variability within a laboratory. A few areas of concern include a lack of consistency in the measurement and expression of results on a dry weight basis, relatively high variability in the C/N ratio in the Bs horizon, challenges associated with determining exchangeable cations at concentrations near the lower reporting range of some laboratories and the operationally defined nature of aluminum extractability. Recommendations include a continuation of reference forest soil exchange programs to quantify the uncertainty associated with these analyses in conjunction with ongoing efforts to review and standardize laboratory methods.
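
    A minimal sketch of the screening rule described above (median ± 2.5 × MAD), with hypothetical inter-laboratory results:

    ```python
    import numpy as np

    # A lab's result is flagged if it falls outside median +/- 2.5 * MAD, where
    # MAD is the median absolute deviation from the median.
    def acceptable_range(values, k=2.5):
        med = np.median(values)
        mad = np.median(np.abs(values - med))
        return med - k * mad, med + k * mad

    exch_ca = np.array([0.42, 0.45, 0.40, 0.47, 0.44, 0.61, 0.43])  # cmolc/kg, 7 labs
    lo, hi = acceptable_range(exch_ca)
    print(f"acceptable: {lo:.3f} - {hi:.3f}; outliers:",
          [v for v in exch_ca if not lo <= v <= hi])
    ```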

  5. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  6. Sources of variation on the mini-mental state examination in a population-based sample of centenarians.

    Science.gov (United States)

    Dai, Ting; Davey, Adam; Woodard, John L; Miller, Lloyd Stephen; Gondo, Yasuyuki; Kim, Seock-Ho; Poon, Leonard W

    2013-08-01

    Centenarians represent a rare but rapidly growing segment of the oldest-old. This study presents item-level data from the Mini-Mental State Examination (MMSE) in a cross-sectional, population-based sample of 244 centenarians and near-centenarians (aged 98-108, 16% men, 21% African-American, 38% community dwelling) from the Georgia Centenarian Study (2001-2008) according to age, education, sex, race, and residential status. Multiple-Indicator Multiple-Cause (MIMIC) models were used to identify systematic domain-level differences in MMSE scores according to demographic characteristics in this age group. Indirect effects of age, educational attainment, race, and residential status were found on MMSE scores. Direct effects were limited to concentration for education and race and orientation for residential status. Mean levels of cognitive functioning in centenarians were low, with mean values below most commonly-used cutoffs. Overall scores on the MMSE differed as a function of age, education, race, and residential status, with differences in scale performance limited primarily to concentration and orientation and no evidence of interactions between centenarian characteristics. Adjusting for education was not sufficient to account for differences according to race, and adjusting for residential status was not sufficient to account for differences according to age. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.

  7. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
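
    A minimal sketch of the regularizer's target quantity: plug-in mutual information between true and predicted labels, estimated from the joint label histogram (illustrative, not the paper's entropy-estimation scheme):

    ```python
    import numpy as np

    def mutual_information(y_true, y_pred):
        """Empirical mutual information (nats) between predicted and true labels,
        estimated from the joint label histogram."""
        classes = np.unique(np.concatenate([y_true, y_pred]))
        joint = np.array([[np.mean((y_true == a) & (y_pred == b)) for b in classes]
                          for a in classes])
        px = joint.sum(1, keepdims=True)
        py = joint.sum(0, keepdims=True)
        nz = joint > 0
        return np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))

    y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    print(mutual_information(y, y))                      # maximal for a perfect match
    print(mutual_information(y, np.array([0, 1, 0, 1, 0, 1, 0, 1])))  # much lower
    ```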

  8. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  9. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    Science.gov (United States)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctively different photochemical activity and different emission sources during the day and the night affect the chemical composition of the PM size ranges, and subsequently how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to take images of the PM samples. Chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size dependent differences as well as day to night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM

  10. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
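
    For the Mean Energy Model mentioned above, the maximum-entropy solution takes the familiar Gibbs form. A small numerical sketch (finite alphabet, illustrative energies and constraint; not taken from the paper) solves for the multiplier that matches the moment constraint:

        # Maximum entropy under a mean-energy constraint: p_i proportional to exp(-beta*E_i).
        import numpy as np
        from scipy.optimize import brentq

        E = np.array([0.0, 1.0, 2.0, 3.0])   # "energy" of each letter of a 4-letter alphabet
        target = 1.2                         # prescribed mean energy

        def gibbs(beta):
            w = np.exp(-beta * E)
            return w / w.sum()

        beta = brentq(lambda b: gibbs(b) @ E - target, -50, 50)  # mean energy is monotone in beta
        p = gibbs(beta)
        print(p, -(p @ np.log(p)))  # the entropy-maximizing distribution and its entropy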

  11. Genetic variation in the CYP1A1 gene is related to circulating PCB118 levels in a population-based sample

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Lars [Department of Medical Sciences, Cardiovascular Epidemiology, Uppsala University, Uppsala (Sweden); Penell, Johanna [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden); Syvänen, Anne-Christine; Axelsson, Tomas [Department of Medical Sciences, Molecular Medicine and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Ingelsson, Erik [Department of Medical Sciences, Molecular Epidemiology and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Morris, Andrew P.; Lindgren, Cecilia [Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Salihovic, Samira; Bavel, Bert van [MTM Research Centre, School of Science and Technology, Örebro University, Örebro (Sweden); Lind, P. Monica, E-mail: monica.lind@medsci.uu.se [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden)

    2014-08-15

    Several of the polychlorinated biphenyls (PCBs), i.e. the dioxin-like PCBs, are known to induce the P450 enzymes CYP1A1, CYP1A2 and CYP1B1 by activating the aryl hydrocarbon (Ah) receptor. We evaluated whether circulating levels of PCBs in a population sample were related to genetic variation in the genes encoding these CYPs. In the population-based Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS) study (1016 subjects, all aged 70), 21 SNPs in the CYP1A1, CYP1A2 and CYP1B1 genes were genotyped. Sixteen PCB congeners were analysed by high-resolution gas chromatography coupled to high-resolution mass spectrometry (HRGC/HRMS). Of the investigated relationships between SNPs in CYP1A1, CYP1A2 and CYP1B1 and six PCBs (congeners 118, 126, 156, 169, 170 and 206) that capture >80% of the variation of all PCBs measured, only the association between CYP1A1 rs2470893 and PCB118 levels remained significant following strict adjustment for multiple testing (p=0.00011). However, several additional SNPs in CYP1A2 and CYP1B1 showed nominally significant associations with PCB118 levels (p-values in the 0.003–0.05 range). Further, several SNPs in the CYP1B1 gene were related to both PCB156 and PCB206 with p-values in the 0.005–0.05 range. Very few associations with p<0.05 were seen for PCB126, PCB169 or PCB170. Genetic variation in CYP1A1 was related to circulating PCB118 levels in the general elderly population. Genetic variation in CYP1A2 and CYP1B1 might also be associated with other PCBs. - Highlights: • We studied the relationship between PCBs and the genetic variation in the CYP genes. • Cross sectional data from a cohort of elderly were analysed. • The PCB levels were evaluated versus 21 SNPs in three CYP genes. • PCB 118 was related to variation in the CYP1A1 gene.
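
    One plausible reading of "strict adjustment for multiple testing" (the paper may have used a different correction scheme) is a Bonferroni correction across the 21 SNPs and 6 congeners tested:

        # Hypothetical Bonferroni threshold for 21 SNPs x 6 PCB congeners.
        n_tests = 21 * 6              # 126 SNP-congener pairs
        threshold = 0.05 / n_tests    # ~0.000397
        print(threshold, 0.00011 < threshold)  # the reported p = 0.00011 survives the correction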

  12. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  13. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  14. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  15. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  16. Simulated variations of eolian dust from inner Asian deserts at the mid-Pliocene, last glacial maximum, and present day: contributions from the regional tectonic uplift and global climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Zhengguo; Liu, Xiaodong; An, Zhisheng [Chinese Academy of Sciences, State Key Laboratory of Loess Quaternary Geology (SKLLQG), Institute of Earth Environment, Xi' an (China); Yi, Bingqi; Yang, Ping [Texas A and M University, College Station, TX (United States); Mahowald, Natalie [Cornell University, Ithaca, NY (United States)

    2011-12-15

    Northern Tibetan Plateau uplift and global climate change are regarded as two important factors responsible for a remarkable increase in dust concentration originating from inner Asian deserts during the Pliocene-Pleistocene period. Dust cycles during the mid-Pliocene, last glacial maximum (LGM), and present day are simulated with a global climate model, based on reconstructed dust source scenarios, to evaluate the relative contributions of the two factors to the increment of dust sedimentation fluxes. In the focused downwind regions of the Chinese Loess Plateau/North Pacific, the model generally produces a light eolian dust mass accumulation rate (MAR) of 7.1/0.28 g/cm{sup 2}/kyr during the mid-Pliocene, a heavier MAR of 11.6/0.87 g/cm{sup 2}/kyr at present, and the heaviest MAR of 24.5/1.15 g/cm{sup 2}/kyr during the LGM. Our results are in good agreement with marine and terrestrial observations. These MAR increases can be attributed to both regional tectonic uplift and global climate change. Comparatively, the climatic factors, including the ice sheet and sea surface temperature changes, have modulated the regional surface wind field and controlled the intensity of sedimentation flux over the Loess Plateau. The impact of the Tibetan Plateau uplift, which increased the areas of inland deserts, is more important over the North Pacific. The dust MAR has been widely used in previous studies as an indicator of inland Asian aridity; based on the present results, however, this interpretation requires caution, because the MAR is controlled not only by the source areas but also by the surface wind velocity. (orig.)

  17. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
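
    In schematic form (our notation, not the paper's), the optimal-control problem has the structure below: the fuel concentration c(r) is the control, the two-group diffusion equations are the state constraints, and a thermal limit bounds the local power density.

        \begin{aligned}
          \max_{c(r)} \quad & \phi_2(r_0) \\
          \text{s.t.} \quad & D_1 \nabla^2 \phi_1 - \Sigma_1(c)\,\phi_1 + \nu\Sigma_f(c)\,\phi_2 = 0, \\
                            & D_2 \nabla^2 \phi_2 - \Sigma_2(c)\,\phi_2 + \Sigma_{1\to 2}(c)\,\phi_1 = 0, \\
                            & \Sigma_f(c)\,\phi_2 \le q_{\max} \quad \text{(thermal limitation)} .
        \end{aligned}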

  18. Spatial variation of contaminant elements of roadside dust samples from Budapest (Hungary) and Seoul (Republic of Korea), including Pt, Pd and Ir.

    Science.gov (United States)

    Sager, Manfred; Chon, Hyo-Taek; Marton, Laszlo

    2015-02-01

    Roadside dusts were studied to explain the spatial variation and present levels of contaminant elements including Pt, Pd and Ir in the urban environment in and around Budapest (Hungary) and Seoul (Republic of Korea). The samples were collected from six sites of high traffic volume in Seoul metropolitan city and, for comparison, from two control sites within the suburbs of Seoul. Similarly, road dust samples were obtained twice from traffic focal points in Budapest, from the large bridges across the River Danube, from Margitsziget (an island in the Danube in the northern part of Budapest, used for recreation) as well as from main roads (no highways) outside Budapest. The samples were analysed for contaminant elements by ICP-AES and for Pt, Pd and Ir by ICP-MS. The highest Pt, Pd and Ir levels in road dusts were found on major roads with high traffic volume, but correlations with other contaminant elements were low. This indicates that automobile catalytic converters are an important source. To summarize the multi-element results, the pollution index, contamination index and geo-accumulation index were calculated. Finally, the obtained data were compared with total concentrations encountered in dust samples from Madrid, Oslo, Tokyo and Muscat (Oman). Dust samples from Seoul reached top-level concentrations for Cd-Zn-As-Co-Cr-Cu-Mo-Ni-Sn. Only Pb was rather low, because unleaded gasoline became compulsory in 1993. Concentrations in Budapest dust samples were lower than in Seoul, except for Pb and Mg. Compared with Madrid as another continental site, Budapest was higher in Co-V-Zn. Dust from Oslo, a smaller city, contained more Mn-Na-Sr than dust from the other towns, but less of the other metals.
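
    Of the indices named, the geo-accumulation index has a standard closed form (Muller's Igeo); the sketch below uses hypothetical concentrations and background values, not the paper's data.

        # Geo-accumulation index Igeo = log2(C / (1.5 * B)) and contamination factor C/B.
        import numpy as np

        def igeo(c, b):
            return np.log2(c / (1.5 * b))

        c_pb, b_pb = 180.0, 20.0   # hypothetical Pb in dust and crustal background, mg/kg
        print(igeo(c_pb, b_pb))    # 2.58: Igeo class 3, "moderately to heavily contaminated"
        print(c_pb / b_pb)         # contamination factor of 9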

  19. On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations.

    Directory of Open Access Journals (Sweden)

    Emma Lightfoot

    Full Text Available Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, and differential access to water sources) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants depends on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific
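
    The two statistics recommended above are easy to apply in practice. A minimal sketch with illustrative δ18O values follows; the cutoffs k = 1.5 and k = 3 are conventional choices, not prescriptions from the paper.

        # Outlier detection by the IQR rule and by the median absolute deviation (MAD).
        import numpy as np

        def iqr_outliers(x, k=1.5):
            q1, q3 = np.percentile(x, [25, 75])
            iqr = q3 - q1
            return (x < q1 - k * iqr) | (x > q3 + k * iqr)

        def mad_outliers(x, k=3.0):
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            # 1.4826 scales the MAD to the standard deviation under normality
            return np.abs(x - med) > k * 1.4826 * mad

        d18o = np.array([26.1, 26.4, 25.9, 26.2, 26.0, 26.3, 28.9])  # last value: putative migrant
        print(iqr_outliers(d18o), mad_outliers(d18o))  # both rules flag only the 28.9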

  20. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  1. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for the samples, the whole length of CC used in the design of an SFCL can be determined.

  2. Contributions of food categories to absolute nutrient intake and between-person variation within a representative sample of 2677 Norwegian men and women

    Directory of Open Access Journals (Sweden)

    Annhild Mosdøl

    2009-11-01

    Full Text Available ABSTRACT: Semi-quantitative food frequency data from a nation-wide, representative sample of 2677 Norwegian men and women were analysed to identify the food categories contributing most to absolute intake and between-person variation in intake of energy and nine nutrients. The 149 food categories in the questionnaire were ranked according to their contribution to absolute nutrient intake, and categories contributing at least 0.5% to the average absolute intake were included in a stepwise regression model. The number of food categories explaining 90% of the between-person variation varied from 2 categories for β-carotene to 33 for α-tocopherol. The models accounted for 53–76% of the estimated absolute nutrient intakes. These analyses present a meaningful way of restricting the number of food categories in questionnaires aimed at capturing the between-person variation in energy or specific nutrient intakes.
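
    A sketch of the ranking-plus-stepwise idea (not the authors' code): greedily add food-category intake columns until the regression explains 90% of the between-person variation in a simulated nutrient intake. All data below are synthetic.

        # Forward stepwise selection of food categories against a nutrient intake.
        import numpy as np

        def forward_select(X, y, target_r2=0.90):
            n, p = X.shape
            chosen, best_r2 = [], 0.0
            tss = ((y - y.mean()) ** 2).sum()
            while best_r2 < target_r2 and len(chosen) < p:
                best_j, best_new = None, best_r2
                for j in range(p):
                    if j in chosen:
                        continue
                    A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
                    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
                    r2 = 1 - ((y - A @ beta) ** 2).sum() / tss
                    if r2 > best_new:
                        best_j, best_new = j, r2
                if best_j is None:
                    break
                chosen.append(best_j)
                best_r2 = best_new
            return chosen, best_r2

        rng = np.random.default_rng(1)
        X = rng.gamma(2.0, 1.0, size=(200, 20))                    # 20 hypothetical food categories
        y = 3 * X[:, 0] + 1.5 * X[:, 4] + rng.normal(0, 0.5, 200)  # nutrient intake driven by two of them
        print(forward_select(X, y))   # picks columns 0 and 4 and stops once R^2 exceeds 0.90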

  3. A drink is a drink? Variation in the amount of alcohol contained in beer, wine and spirits drinks in a US methodological sample.

    Science.gov (United States)

    Kerr, William C; Greenfield, Thomas K; Tujague, Jennifer; Brown, Stephan E

    2005-11-01

    Empirically based estimates of the mean alcohol content of beer, wine and spirits drinks from a national sample of US drinkers are not currently available. A sample of 310 drinkers from the 2000 National Alcohol Survey was re-contacted to participate in a telephone survey with specific questions about the drinks they consume. Subjects were instructed to prepare their usual drink of each beverage at home and to measure each alcoholic beverage and other ingredients with a provided beaker. Information on the brand or type of each beverage was used to specify the percentage of alcohol. The weighted mean alcohol content of respondents' drinks was 0.67 ounces overall, 0.56 ounces for beer, 0.66 ounces for wine and 0.89 ounces for spirits. Spirits and wine drink contents were particularly variable, with many high-alcohol drinks observed. While the 0.6-ounce alcohol drink standard appears to be a reasonable single standard, it cannot capture the substantial variation evident in this sample, and it underestimates average wine and spirits ethanol content. Direct measurement or beverage-specific mean ethanol content estimates would improve the precision of survey alcohol assessment.

  4. Seasonal variation in bacterial heavy metal biosorption in water samples from Eziama River near soap and brewery industries and environmental health implications

    International Nuclear Information System (INIS)

    Kanu, I.; Achi, O. K.; Ezeronye, O. U.; Anyanwu, E. C.

    2006-01-01

    Seasonal variation in bacterial heavy metal biosorption in soap and brewery industrial effluent samples from Eziama River in Abia State was analyzed for Pb, Hg, Fe, Zn, As and Mn, using atomic absorption spectrophotometry. Bioaccumulation of the metals by bacteria showed the following trends: Hg > Fe > Zn > As > Pb > Mn (rainy season) and Zn > Fe > Mn > As > Hg > Pb (dry season). Statistical analysis using analysis of variance (ANOVA) showed significant differences in the concentrations of Pb, Hg, Fe, Zn, As and Mn between the sampling zones at Eziama River. Seasonal changes in heavy metal concentrations showed increases in Pb, Fe and As: Pb rose from 1.32 × 10⁵ mg/L in the rainy season to 1.42 × 10⁵ mg/L in the dry season, Fe increased from 40.35 × 10⁵ mg/L to 42.1 × 10⁵ mg/L, and As increased from 2.32 to 2.48 × 10⁵ mg/L, with net increases of +56 and +69 × 10⁵ mg/L reported respectively. However, Hg, Zn and Mn concentrations decreased from the rainy season values, from 40.54 × 10⁵ mg/L to 39.24 × 10⁵ mg/L and from 1.65 to 0.62 × 10⁵ mg/L respectively

  5. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Determining the maximum neutron flux thus means solving a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the results being innovative, this approach is interesting because of the optimization procedure itself

  6. Significant performance variation among PCR systems in diagnosing congenital toxoplasmosis in São Paulo, Brazil: analysis of 467 amniotic fluid samples

    Directory of Open Access Journals (Sweden)

    Thelma Suely Okay

    2009-03-01

    Full Text Available INTRODUCTION: Performance variation among PCR systems in detecting Toxoplasma gondii has been extensively reported and associated with target genes, primer composition, amplification parameters, treatment during pregnancy, host genetic susceptibility and genotypes of different parasites according to geographical characteristics. PATIENTS: A total of 467 amniotic fluid samples from T. gondii IgM- and IgG-positive Brazilian pregnant women being treated for 1 to 6 weeks at the time of amniocentesis (gestational ages of 14 to 25 weeks. METHODS: One nested-B1-PCR and three one-round amplification systems targeted to rDNA, AF146527 and the B1 gene were employed. RESULTS: Of the 467 samples, 189 (40.47% were positive for one-round amplifications: 120 (63.49% for the B1 gene, 24 (12.69% for AF146527, 45 (23.80% for both AF146527 and the B1 gene, and none for rDNA. Fifty previously negative one-round PCR samples were chosen by computer-assisted randomization analysis and re-tested (nested-B1-PCR, during which nine additional cases were detected (9/50 or 18%. DISCUSSION: The B1 gene PCR was far more sensitive than the AF146527 PCR, and the rDNA PCR was the least effective even though the rDNA had the most repetitive sequence. Considering that the four amplification systems were equally affected by treatment, that the amplification conditions were optimized for the target genes and that most of the primers have already been reported, it is plausible that the striking differences found among PCR performances could be associated with genetic diversity in patients and/or with different Toxoplasma gondii genotypes occurring in Brazil. CONCLUSION: The use of PCR for the diagnosis of fetal Toxoplasma infections in Brazil should be targeted to the B1 gene when only one gene can be amplified, preferably by nested amplification with primers B22/B23.

  7. Variation in levels of serum inhibin B, testosterone, estradiol, luteinizing hormone, follicle-stimulating hormone, and sex hormone-binding globulin in monthly samples from healthy men during a 17-month period

    DEFF Research Database (Denmark)

    Andersson, Anna-Maria; Carlsen, Elisabeth; Petersen, Jørgen Holm

    2003-01-01

    To obtain information on the scale of the intraindividual variation in testicular hormones, blood samples for inhibin B determination were collected monthly in 27 healthy male volunteers during a 17-month period. In addition, the traditional reproductive hormones FSH, LH, testosterone, estradiol ... A seasonal variation was observed in LH and testosterone levels, but not in the levels of the other hormones. The seasonal variation in testosterone levels could be explained by the variation in LH levels. The seasonal variation in LH levels seemed to be related to the mean air temperature during the month ... levels in men. The peak levels of both LH and testosterone were observed during June-July, with minimum levels present during winter-early spring. Air temperature, rather than light exposure, seems to be a possible climatic variable explaining the seasonal variation in LH levels.

  8. Ethnic variations in the relationship between multiple stress domains and use of several types of tobacco/nicotine products among a diverse sample of adults

    Directory of Open Access Journals (Sweden)

    Christopher J. Rogers

    2018-06-01

    Full Text Available Introduction: Financial strain and discrimination are consistent predictors of negative health outcomes and maladaptive coping behaviors, including tobacco use. Although there is considerable information exploring stress and smoking, limited research has examined the relationship between patterns of stress domains and specific tobacco/nicotine product use. Even fewer studies have assessed ethnic variations in these relationships. Methods: This study investigated the relationship between discrimination and financial strain and current tobacco/nicotine product use, and explored the ethnic variation in these relationships among a diverse sample of US adults (N = 1068). Separate logistic regression models assessed associations between stress domains and tobacco/nicotine product use, adjusting for covariates (e.g., age, gender, race/ethnicity, and household income). Due to statistically significant differences, the final set of models was stratified by race/ethnicity. Results: Higher levels of discrimination were associated with higher odds of all three tobacco/nicotine product categories. Financial strain was positively associated with combustible tobacco and combined tobacco/nicotine product use. Financial strain was especially risky for Non-Hispanic Whites (AOR:1.191, 95%CI:1.083–1.309) and Blacks/African Americans (AOR:1.542, 95%CI:1.106–2.148), as compared to other groups, whereas discrimination was most detrimental for Asians/Pacific Islanders (AOR:3.827, 95%CI:1.832–7.997) and Hispanics/Latinas/Latinos (AOR:2.517, 95%CI:1.603–3.952). Conclusions: Findings suggest discrimination and financial stressors are risk factors for use of multiple tobacco/nicotine products, highlighting the importance of prevention research that accounts for these stressors. Because ethnic groups may respond differently to stress/strain, prevention research needs to identify cultural values, beliefs, and coping strategies that can buffer the negative consequences of

  9. Predicted mineral melt formation by BCURA Coal Sample Bank coals: Variation with atmosphere and comparison with reported ash fusion test data

    Energy Technology Data Exchange (ETDEWEB)

    D. Thompson [University of Sheffield (United Kingdom). Department of Engineering Materials

    2010-08-15

    The thermodynamic equilibrium phases formed under ash fusion test and excess air combustion conditions by 30 coals of the BCURA Coal Sample Bank have been predicted from 1100 to 2000 K using the MTDATA computational suite and the MTOX database for silicate melts and associated phases. Predicted speciation and degree of melting varied widely from coal to coal. Melting under an ash fusion test atmosphere of CO{sub 2}:H{sub 2} 1:1 was essentially the same as under excess air combustion conditions for some coals, and markedly different for others. For those ashes which flowed below the fusion test maximum temperature of 1773 K, flow coincided with 75-100% melting in most cases. Flow at low predicted melt formation (46%) for one coal cannot be attributed to any one cause. The difference between predicted fusion behaviours under excess air and fusion test atmospheres becomes greater with decreasing silica and alumina, and increasing iron, calcium and alkali metal content in the coal mineral matter. 22 refs., 7 figs., 3 tabs.

  10. Outdoor radon variation in Romania

    International Nuclear Information System (INIS)

    Simion, Elena; Simion, Florin

    2008-01-01

    Full text: The results of a long-term survey (1992 - 2006) of the variations of outdoor radon concentrations in semi-natural locations in Romania are reported in the present paper. Measurements, covering between two and four sessions of the day (morning, afternoon, evening and night), were performed on a daily basis by 37 Environmental Radioactivity Monitoring Stations of the National Environmental Radioactivity Survey Network. The method used was based on indirect determination of outdoor radon from aerosol samples collected on glass micro-fibre filters by drawing the air through the filters. The sampling was performed in a fixed place at a height of 2 m above the ground surface. Total beta counting of the collected aerosol samples was performed immediately and after 20 hours. Values recorded during the years of continuous measurement indicated the presence of several patterns in the long-term variation of outdoor radon concentration: diurnal, seasonal and annual variation. For the diurnal variation, outdoor radon concentration shows maximum values in the night (early hours) and minimum values by day (in the afternoon). On average, this maximum is a factor of 2 higher than the minimum. A late autumn - beginning of winter maximum and an early spring minimum are characteristic of the seasonal pattern. In the long term a seasonal pattern was observed in the diurnal variation, with an average diurnal maximum-to-minimum ratio of 1.33 in winter compared with 3.0 in the summer months. The variations of outdoor radon levels showed little correlation with the uranium concentration of the ground and were attributed to changes in soil moisture content. In dry seasons, because of the low precipitation, the soil dried out in the summer, allowing fractures to develop and radon to migrate easily through the ground. Depending on micro-climatic and geological conditions, outdoor radon average concentrations in different regions of Romania range from 1200 mBq/m³ to 13065 mBq/m³. The smallest

  11. High Throughput qPCR Expression Profiling of Circulating MicroRNAs Reveals Minimal Sex- and Sample Timing-Related Variation in Plasma of Healthy Volunteers.

    Directory of Open Access Journals (Sweden)

    Catherine Mooney

    Full Text Available MicroRNAs are a class of small non-coding RNA that regulate gene expression at a post-transcriptional level. MicroRNAs have been identified in various body fluids under normal conditions and their stability as well as their dysregulation in disease opens up a new field for biomarker study. However, diurnal and day-to-day variation in plasma microRNA levels, and differential regulation between males and females, may affect biomarker stability. A QuantStudio 12K Flex Real-Time PCR System was used to profile plasma microRNA levels using OpenArray in male and female healthy volunteers, in the morning and afternoon, and at four time points over a one month period. Using this system we were able to run four OpenArray plates in a single run, the equivalent of 32 traditional 384-well qPCR plates or 12,000 data points. Up to 754 microRNAs can be identified in a single plasma sample in under two hours. 108 individual microRNAs were identified in at least 80% of all our samples which compares favourably with other reports of microRNA profiles in serum or plasma in healthy adults. Many of these microRNAs, including miR-16-5p, miR-17-5p, miR-19a-3p, miR-24-3p, miR-30c-5p, miR-191-5p, miR-223-3p and miR-451a are highly expressed and consistent with previous studies using other platforms. Overall, microRNA levels were very consistent between individuals, males and females, and time points and we did not detect significant differences in levels of microRNAs. These results suggest the suitability of this platform for microRNA profiling and biomarker discovery and suggest minimal confounding influence of sex or sample timing. However, the platform has not been subjected to rigorous validation which must be demonstrated in future biomarker studies where large differences may exist between disease and control samples.

  12. Investigation of the variation of the specific heat capacity of local soil samples from the Niger delta, Nigeria with moisture content

    International Nuclear Information System (INIS)

    Ofoegbu, C.O.; Adjepong, S.K.

    1987-11-01

    Results of an investigation of the variation, with moisture content, of the specific heat capacity of samples of three texturally different types of soil (clayey, sandy and sandy loam) obtained from the Niger delta area of Nigeria, are presented. The results show that the specific heat capacities of the soils studied, increase with moisture content. This increase is found to be linear for the entire range of moisture contents considered (0-25%), in the case of the sandy loam soil while for the clayey and sandy soils the specific heat capacity is found to increase linearly with moisture content up to about 15% after which the increase becomes parabolic. The rate of increase of specific heat capacity with moisture content appears to be highest in the clayey soil and lowest in the sandy soil. It is thought that the differences in the rates of increase of specific heat capacity with moisture content, observed for the soils, reflect the soils' water-retention capacities. (author) 3 refs, 5 figs

  13. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    Science.gov (United States)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  14. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  15. Pressure Stimulated Currents (PSC) in marble samples

    Directory of Open Access Journals (Sweden)

    F. Vallianatos

    2004-06-01

    Full Text Available The electrical behaviour of marble samples from Penteli Mountain was studied while they were subjected to uniaxial stress. The application of consecutive impulsive variations of uniaxial stress to thirty connatural samples produced Pressure Stimulated Currents (PSC). The linear relationship between the recorded PSC and the applied variation rate was investigated. The main results are the following: as long as the samples were under pressure corresponding to their elastic region, the maximum PSC value obeyed a linear law with respect to the pressure variation. In the plastic region deviations were observed, which were due to variations of Young's modulus. Furthermore, a special burst form of PSC recordings during failure is presented. The latter is emitted when irregular longitudinal splitting is observed during failure.

  16. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in individual susceptibility

    International Nuclear Information System (INIS)

    Lutz, W.K.; Gaylor, D.W.; Conolly, R.B.; Lutz, R.W.

    2005-01-01

    Nonlinear and threshold-like shapes of dose-response curves are often observed in tests for carcinogenicity. Here, we present three examples where an apparent threshold is spurious and can be misleading for low dose extrapolation and human cancer risk assessment. Case 1: For experiments that are not replicated, such as rodent bioassays for carcinogenicity, random variation can lead to misinterpretation of the result. This situation was simulated by 20 random binomial samplings of 50 animals per group, assuming a true linear dose response from 5% to 25% tumor incidence at arbitrary dose levels 0, 0.5, 1, 2, and 4. Linearity was suggested only by 8 of the 20 simulations. Four simulations did not reveal the carcinogenicity at all. Three exhibited thresholds, two showed a nonmonotonic behavior with a decrease at low dose, followed by a significant increase at high dose ('hormesis'). Case 2: Logarithmic representation of the dose axis transforms a straight line into a sublinear (up-bent) curve, which can be misinterpreted to indicate a threshold. This is most pronounced if the dose scale includes a wide low dose range. Linear regression of net tumor incidences and intersection with the dose axis results in an apparent threshold, even with an underlying true linear dose-incidence relationship. Case 3: Nonlinear shapes of dose-cancer incidence curves are rarely seen with epidemiological data in humans. The discrepancy to data in rodents may in part be explained by a wider span of individual susceptibilities for tumor induction in humans due to more diverse genetic background and modulation by co-carcinogenic lifestyle factors. Linear extrapolation of a human cancer risk could therefore be appropriate even if animal bioassays show nonlinearity
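
    Case 1 is directly reproducible. The sketch below repeats the described simulation (20 random binomial samplings, 50 animals per dose group, true incidence rising linearly from 5% to 25% over the stated dose levels); the random seed is arbitrary.

        # Spurious thresholds from sampling variation in an unreplicated bioassay.
        import numpy as np

        rng = np.random.default_rng(42)
        doses = np.array([0, 0.5, 1, 2, 4])
        p_true = 0.05 + 0.05 * doses      # 5%, 7.5%, 10%, 15%, 25%: truly linear
        for sim in range(20):
            tumors = rng.binomial(50, p_true)   # tumor-bearing animals per group
            print(sim, tumors / 50)  # some runs look flat at low dose ("threshold"),
                                     # some dip below control before rising ("hormesis")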

  17. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
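
    In rough schematic form (our notation; see the review for the precise conditions), the projection step combines the known feature density with a reference distribution q having known q(x) and induced feature density q_z:

        p(x) = \frac{q(x)}{q_z\big(T(x)\big)}\, p_z\big(T(x)\big)

    This p(x) is consistent with p_z(z) under z = T(x); choosing q by maximum entropy makes p(x) the highest-entropy member of the consistent family, and sampling proceeds by drawing z from p_z and then x from q conditioned on T(x) = z.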

  18. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
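
    For background (standard general relativity, not specific to this paper), the surface gravitational redshift of a star of mass M and radius R, quoted as an equivalent velocity cz, follows from

        z = \left(1 - \frac{2GM}{Rc^{2}}\right)^{-1/2} - 1 \;\approx\; \frac{GM}{Rc^{2}}, \qquad v \equiv cz,

    so the quoted 571 km s⁻¹ corresponds to z = v/c ≈ 1.9 × 10⁻³.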

  19. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
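
    A toy version of the optimization form described above (our smoothed stand-in, not the TFOCS solver used in the paper): least-squares dose fitting plus a total-variation penalty on a 1-D fluence map, with the TV term Huber-smoothed so that plain gradient descent applies. The dose-influence matrix and prescription are synthetic.

        # Fluence-map optimization: minimize 0.5*||A x - b||^2 + lam * TV_eps(x).
        import numpy as np

        def smoothed_tv_grad(x, eps=1e-3):
            d = np.diff(x)                       # first-order differences of the fluence map
            w = d / np.sqrt(d * d + eps * eps)   # derivative of the smoothed |d|
            g = np.zeros_like(x)
            g[:-1] -= w
            g[1:] += w
            return np.sqrt(d * d + eps * eps).sum(), g

        rng = np.random.default_rng(0)
        A = rng.random((80, 40))                  # toy dose-influence matrix (voxels x beamlets)
        b = A @ np.repeat([1.0, 3.0], 20)         # dose prescribed by a piecewise-flat fluence map
        lam, x = 0.5, np.zeros(40)
        for _ in range(2000):
            tv, gtv = smoothed_tv_grad(x)
            grad = A.T @ (A @ x - b) + lam * gtv
            x -= 1e-4 * grad                      # small fixed step for simplicity
        print(np.round(x, 2))   # recovers an approximately two-level (sparse-gradient) fluence map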

  20. Securing maximum diversity of Non Pollen Palynomorphs in palynological samples

    DEFF Research Database (Denmark)

    Enevold, Renée; Odgaard, Bent Vad

    2015-01-01

    Palynology is no longer synonymous with analysis of pollen with the addition of a few fern spores. A wide range of Non Pollen Palynomorphs (NPPs) are now described and are potential palaeoenvironmental proxies in palynological surveys. The contribution of NPPs has proven important to the interpreta...

  1. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  2. Analysis of the variation of the activity of a 99mTc sample after dilution with saline solution

    International Nuclear Information System (INIS)

    Kuahara, L.T.; Correa, E.L.; Potiens, M.P.A.

    2016-01-01

    The activity meter is an essential piece of equipment in nuclear medicine services. Ensuring its good operation and knowing the factors which may influence its readings are vital for the activity administered to the patient to be correct. Many factors may influence the activity meter accuracy, such as the type of container, the geometry, and the radioactive material volume. The aim of this study was to analyze the variations in the measurements of 0.5 ml and 1.0 ml of 99mTc, pure and diluted in 2.5 ml of saline solution, in containers used in nuclear medicine. Variations of up to 4% in the measured values were found. (author)

  3. School-Level Genetic Variation Predicts School-Level Verbal IQ Scores: Results from a Sample of American Middle and High Schools

    Science.gov (United States)

    Beaver, Kevin M.; Wright, John Paul

    2011-01-01

    Research has consistently revealed that average IQ scores vary significantly across macro-level units, such as states and nations. The reason for this variation in IQ, however, has remained at the center of much controversy. One of the more provocative explanations is that IQ across macro-level units is the result of genetic differences, but…

  4. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Pramana - Journal of Physics, Vol. 60, No. 3, March 2003, pp. 415-422. F. W. Giacobbe, Chicago Research Center/American Air Liquide. [Only abstract fragments survive: "... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ..." and "... thermal equilibrium velocities will tend to be non-relativistic."]

  5. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  6. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  7. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp going out. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system

  8. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
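
    In the spirit of the abstract (our toy formulation, not the authors' algorithm), an entropy-regularized Poisson likelihood can be minimized directly; parametrizing by log-flux keeps the solution positive over the whole range, and the underdetermined case (more energy bins than detectors) is handled by the entropy term. Response matrix and spectrum below are synthetic.

        # Spectrum unfolding: Poisson negative log-likelihood (up to a constant) minus entropy.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        R = rng.random((6, 10))                 # 6 detector channels, 10 energy bins (toy response)
        phi_true = np.linspace(1.0, 5.0, 10)
        n = rng.poisson(R @ phi_true)           # measured counts

        alpha, m = 1.0, np.ones(10)             # entropy weight and default model

        def objective(u):                       # u = log(phi) enforces positivity
            phi = np.exp(u)
            mu = R @ phi
            neg_loglik = (mu - n * np.log(mu)).sum()
            entropy = (phi - m - phi * np.log(phi / m)).sum()   # Skilling-style entropy
            return neg_loglik - alpha * entropy

        res = minimize(objective, np.zeros(10), method="L-BFGS-B")
        print(np.exp(res.x).round(2))           # strictly positive estimate of the spectrum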

  9. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  10. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  11. Dependence of critical current on sample length analyzed by the variation of local critical current of bent BSCCO superconducting composite tape

    International Nuclear Information System (INIS)

    Matsubayashi, H.; Mukai, Y.; Shin, J.K.; Ochiai, S.; Okuda, H.; Osamura, K.; Otto, A.; Malozemoff, A.

    2008-01-01

    Using the high critical current type BSCCO composite tape fabricated at American Superconductor Corporation, the relation of overall critical current to the distribution of local critical current, and the dependence of overall critical current on sample length, were studied for bent samples both experimentally and analytically. The measured overall critical current was described well from the distribution of the local critical current and n-value of the constituent short elements, by regarding the overall sample as being composed of local series circuits and applying the voltage summation model. The dependence of overall critical current on sample length could also be reproduced satisfactorily in computer simulations by the proposed method

  12. Standardization and optimization of core sampling procedure for carbon isotope analysis in eucalyptus and variation in carbon isotope ratios across species and growth conditions

    CSIR Research Space (South Africa)

    Raju, M

    2011-11-01

    Full Text Available [Figure/table residue: variation in Δ13C (approximate range 16.0-20.5) compared across E. camaldulensis, E. urophylla, E. grandis, E. pellita and E. globulus, by species and aspect, with a significant species effect (P < 0...).]

  13. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  14. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  15. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is considered one of the most important technical and economical aspects that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  16. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  17. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  18. Effect of the grain size of the soil on the measured activity and variation in activity in surface and subsurface soil samples

    International Nuclear Information System (INIS)

    Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.

    2014-01-01

    Correlation between grain size and activity concentrations of soils, and concentrations of various radionuclides in surface and subsurface soils, have been measured for samples taken in the State of Qatar by gamma-spectroscopy using a high-purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and 232Th (228Ac) natural decay series, the long-lived naturally occurring radionuclide 40K and the fission product radionuclide 137Cs have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used, i.e., smaller than 0.5 mm; between 0.5 and 1 mm; and between 1 and 2 mm. The weighted activity concentrations of the 238U series nuclides in the 0.5-2 mm grain sizes were found to vary from 2.5±0.2 to 28.5±0.5 Bq/kg, whereas the weighted activity concentration of 40K varied from 21±4 to 188±10 Bq/kg. The weighted activity concentrations of the 238U series and 40K have been found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of the 238U series have been found to range from 15.9±0.3 to 24.1±0.9 Bq/kg in the surface soil samples (0-5 cm) and 14.5±0.3 to 23.6±0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of the 232Th series have been found to lie in the range 5.7±0.2 to 13.7±0.5 Bq/kg in the surface soil samples (0-5 cm) and 4.1±0.2 to 15.6±0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 40K were in the range 150±8 to 290±17 Bq/kg in the surface
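
    The radiological indices quoted in this record are commonly computed from the three activity concentrations with standard linear formulas. A minimal sketch follows; the UNSCEAR-style coefficients are an assumption on our part, since the record does not state which formulation the authors applied.

```python
def radiological_indices(c_ra, c_th, c_k):
    """Common indices from activity concentrations (Bq/kg) of the 238U(226Ra)
    series, the 232Th series and 40K. Coefficients follow the widely used
    UNSCEAR-style formulations (an assumption; the record does not state them).
    """
    ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k           # radium equivalent, Bq/kg
    dose = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k  # absorbed dose rate, nGy/h
    h_ex = c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0  # external hazard index
    # outdoor annual effective dose, mSv/y: 8760 h, 20% occupancy, 0.7 Sv/Gy
    aed = dose * 8760 * 0.2 * 0.7 * 1e-6
    return ra_eq, dose, h_ex, aed

# illustrative values from the upper end of the ranges reported above
print(radiological_indices(c_ra=24.1, c_th=13.7, c_k=290.0))
```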

  19. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
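
    A greatly simplified sketch of the MAP decision rule at the heart of such a decoder is shown below, assuming additive Gaussian noise and known signal templates; the invention's phase-perturbation estimation stage is omitted, and all names are illustrative.

```python
import numpy as np

def map_decode(received, templates, priors, noise_var):
    """Return the index of the hypothesized signal with the highest
    a posteriori probability given the received samples (Gaussian noise).
    Simplified: the invention's phase-perturbation estimator is omitted."""
    scores = []
    for s, p in zip(templates, priors):
        # log posterior (up to a constant): log prior - ||r - s||^2 / (2 sigma^2)
        scores.append(np.log(p) - np.sum(np.abs(received - s) ** 2) / (2 * noise_var))
    return int(np.argmax(scores))
```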

  20. Sample preparation techniques for the determination of natural 15N/14N variations in amino acids by gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS).

    Science.gov (United States)

    Hofmann, D; Gehre, M; Jung, K

    2003-09-01

    In order to identify natural nitrogen isotope variations of biologically important amino acids, four derivatization reactions (t-butylmethylsilylation, esterification with subsequent trifluoroacetylation, acetylation and pivaloylation) were tested with standard mixtures of 17 proteinogenic amino acids and plant (moss) samples using GC-C-IRMS. The possible fractionation of the nitrogen isotopes, caused for instance by the formation of multiple reaction products, was investigated. For biological samples, the esterification of the amino acids with subsequent trifluoroacetylation is recommended for nitrogen isotope ratio analysis. A sample preparation technique is described for the isotope ratio mass spectrometric analysis of amino acids from the non-protein nitrogen (NPN) fraction of terrestrial moss. 14N/15N ratios from moss (Scleropodium spec.) samples from different anthropogenically polluted areas were studied with respect to ecotoxicological bioindication.
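
    Nitrogen isotope ratios measured this way are conventionally reported in delta notation relative to atmospheric N2 (AIR). A one-line sketch of the conversion; the AIR reference ratio is a standard literature value, not taken from this record.

```python
def delta15N_permil(r_sample, r_air=0.0036765):
    """delta-15N in per mil: relative deviation of the sample's 15N/14N
    ratio from the AIR standard (15N/14N ~ 0.0036765)."""
    return (r_sample / r_air - 1.0) * 1000.0

print(delta15N_permil(0.0036912))  # ~ +4.0 per mil
```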

  1. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
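
    Meyer's penalties are applied inside a (RE)ML fit, which is not reproduced here; purely as an illustration of the "shrink the genetic towards the phenotypic correlation matrix" idea, a simple linear shrinkage can be sketched as follows.

```python
import numpy as np

def shrink_genetic_correlation(r_g, r_p, w):
    """Linear shrinkage of the genetic correlation matrix r_g towards the
    phenotypic correlation matrix r_p; w in [0, 1] plays the role of the
    tuning factor (w = 0 returns the unpenalized estimate)."""
    return (1.0 - w) * r_g + w * r_p

r_g = np.array([[1.0, 0.9], [0.9, 1.0]])   # noisy genetic estimate
r_p = np.array([[1.0, 0.4], [0.4, 1.0]])   # well-estimated phenotypic matrix
print(shrink_genetic_correlation(r_g, r_p, w=0.3))
```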

  2. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  3. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
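
    As an illustration of how such a simplified balance is solved, the sketch below equates absorbed radiative flux to emitted longwave plus sensible heat and finds the equilibrium surface temperature numerically. It neglects ground heat flux, evaporation and downwelling longwave, so it is not the paper's exact model, and all parameter values are assumptions.

```python
from scipy.optimize import brentq

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(s_abs, t_air, h=10.0, eps=0.95):
    """Solve s_abs = eps*SIGMA*T^4 + h*(T - t_air) for T (kelvin).
    Ground heat flux, evaporation and downwelling longwave are neglected,
    so this only illustrates the balance, not the paper's full model."""
    f = lambda t: eps * SIGMA * t**4 + h * (t - t_air) - s_abs
    return brentq(f, t_air - 50.0, t_air + 200.0)

# absorbed flux 1000 W/m2 and screen air temperature 55 °C, as in the text
print(surface_temperature(1000.0, 55.0 + 273.15) - 273.15)  # surface temp, °C
```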

  4. Interspecific variation in prey capture behavior by co-occurring Nepenthes pitcher plants: evidence for resource partitioning or sampling-scheme artifacts?

    Science.gov (United States)

    Chin, Lijin; Chung, Arthur Y C; Clarke, Charles

    2014-01-01

    Pitcher plants of the genus Nepenthes capture a wide range of arthropod prey for nutritional benefit, using complex combinations of visual and olfactory signals and gravity-driven pitfall trapping mechanisms. In many localities throughout Southeast Asia, several different Nepenthes species occur in mixed populations. Often, the species present at any given location have strongly divergent trap structures, and preliminary surveys indicate that different species trap different combinations of arthropod prey, even when growing at the same locality. On this basis, it has been proposed that co-existing Nepenthes species may be engaged in niche segregation with regards to arthropod prey, avoiding direct competition with congeners by deploying traps that have modifications that enable them to target specific prey types. We examined prey capture among 3 multi-species Nepenthes populations in Borneo, finding that co-existing Nepenthes species do capture different combinations of prey, but that significant interspecific variations in arthropod prey combinations can often be detected only at sub-ordinal taxonomic ranks. In all lowland Nepenthes species examined, the dominant prey taxon is Formicidae, but montane Nepenthes trap few (or no) ants, and 2 of the 3 species studied have evolved to target alternative sources of nutrition, such as tree shrew feces. Using similarity and null model analyses, we detected evidence for niche segregation with regards to formicid prey among 5 lowland, sympatric Nepenthes species in Sarawak. However, we were unable to determine whether these results provide support for the niche segregation hypothesis, or whether they simply reflect unquantified variation in heterogeneous habitats and/or ant communities in the study sites. These findings are used to propose improvements to the design of field experiments that seek to test hypotheses about targeted prey capture patterns in Nepenthes.

  5. Biological investigations of Indian phaeophyceae: 17. Seasonal variation of antibacterial activity of total sterols obtained from frozen samples of Sargassum johnstonii Setchell et Gardner

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, P.P.S.

    The extracts from the June to October samples showed antibacterial activity, while the samples from November to January and May did not show any activity against the test bacterium Proteus vulgaris, a gram-negative organism which has shown high sensitivity towards...

  6. Spatial and temporal variations in cadmium concentrations and burdens in the Pacific oyster (Crassostrea gigas) sampled from the Pacific north-west.

    Science.gov (United States)

    Bendell, Leah I; Feng, Cindy

    2009-08-01

    Oysters from the north-west coast of Canada contain high levels of cadmium, a toxic metal, in amounts that exceed food safety guidelines for international markets. A first required step to determine the sources of cadmium is to identify possible spatial and temporal trends in the accumulation of cadmium by the oyster. To meet this objective, rather than sample wild and cultured oysters of unknown age and origin, an oyster "grow-out" experiment was initiated. Cultured oyster seed was suspended in the water column up to a depth of 7 m and the oyster seed allowed to mature for a period of 3 years until market size. Oysters were sampled bimonthly and at time of sampling, temperature, chlorophyll-a, turbidity and salinity were measured. Oyster total shell length, dry tissue weights, cadmium concentrations (μg g⁻¹) and burdens (μg of cadmium oyster⁻¹) were determined. Oyster cadmium concentrations and burdens were then interpreted with respect to the spatial and temporal sampling design as well as to the measured physio-chemical and biotic variables. When expressed as a concentration, there was a marked seasonality, with concentrations being greater in winter as compared with summer; however, no spatial trend was evident. When expressed as a burden, which corrects for differences in tissue mass, there was no seasonality; however, cadmium oyster burdens increased from south to north. Comparison of cadmium accumulation rates oyster⁻¹ among sites indicated three locations, Webster Island, on the west side of Vancouver Island, and two within Desolation Sound, Teakerne Arm and Redonda Bay, where point sources of cadmium which are not present at all other sampling locations may be contributing to overall oyster cadmium burdens. Of the four physio-chemical factors measured, only temperature and turbidity weakly correlated with tissue cadmium concentrations (r² = −0.13; p < 0.05). By expressing oyster cadmium both as concentration and burden, regional and temporal patterns were

  7. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  8. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  9. Work Element B: 157. Sampling in Fish-Bearing Reaches [Variation in Productivity in Headwater Reaches of the Wenatchee Subbasin], Final Report for PNW Research Station.

    Energy Technology Data Exchange (ETDEWEB)

    Polivka, Karl; Bennett, Rita L. [USDA Forest Service, Pacific Northwest Research Station, Wenatchee, WA

    2009-03-31

    We studied variation in productivity in headwater reaches of the Wenatchee subbasin over multiple field seasons with the objective of developing methods for monitoring headwater stream conditions at the subcatchment and stream levels, assigning a landscape-scale context via the effects of geoclimatic parameters on biological productivity (macroinvertebrates and fish), and using this information to identify how variability in productivity measured in fishless headwaters is transmitted to fish communities in downstream habitats. In 2008, we addressed this final objective. In collaboration with the University of Alaska Fairbanks, we found some broad differences in the production of aquatic macroinvertebrates and in fish abundance across categories that combine the effects of climate and management intensity within the subbasin (ecoregions). From a monitoring standpoint, production of benthic macroinvertebrates was not a good predictor of drifting macroinvertebrates and therefore might be a poor predictor of food resources available to fish. Indeed, there is occasionally a correlation between drifting macroinvertebrate abundance and fish abundance, which suggests that headwater-derived resources are important. However, fish in the headwaters appeared to be strongly food-limited, and there was no evidence that fishless headwaters provided a consistent subsidy to fish in reaches downstream. Fish abundance and population dynamics in first-order headwaters may be linked with similar metrics further down the watershed. The relative strength of local dynamics and inputs into productivity may be constrained or augmented by large-scale biogeoclimatic control. Headwater streams are nested within watersheds, which are in turn nested within ecological subregions; thus, we hypothesized that local effects would not necessarily be mutually exclusive of large-scale influence. To test this we examined the density of primarily salmonid fishes at several spatial and temporal scales.

  10. Spatial and temporal variations in cadmium concentrations and burdens in the Pacific oyster (Crassostrea gigas) sampled from the Pacific north-west

    International Nuclear Information System (INIS)

    Bendell, Leah I.; Feng, Cindy

    2009-01-01

    Oysters from the north-west coast of Canada contain high levels of cadmium, a toxic metal, in amounts that exceed food safety guidelines for international markets. A first required step to determine the sources of cadmium is to identify possible spatial and temporal trends in the accumulation of cadmium by the oyster. To meet this objective, rather than sample wild and cultured oysters of unknown age and origin, an oyster 'grow-out' experiment was initiated. Cultured oyster seed was suspended in the water column up to a depth of 7 m and the oyster seed allowed to mature for a period of 3 years until market size. Oysters were sampled bimonthly and at time of sampling, temperature, chlorophyll-a, turbidity and salinity were measured. Oyster total shell length, dry tissue weights, cadmium concentrations (μg g⁻¹) and burdens (μg of cadmium oyster⁻¹) were determined. Oyster cadmium concentrations and burdens were then interpreted with respect to the spatial and temporal sampling design as well as to the measured physio-chemical and biotic variables. When expressed as a concentration, there was a marked seasonality, with concentrations being greater in winter as compared with summer; however, no spatial trend was evident. When expressed as a burden, which corrects for differences in tissue mass, there was no seasonality; however, cadmium oyster burdens increased from south to north. Comparison of cadmium accumulation rates oyster⁻¹ among sites indicated three locations, Webster Island, on the west side of Vancouver Island, and two within Desolation Sound, Teakerne Arm and Redonda Bay, where point sources of cadmium which are not present at all other sampling locations may be contributing to overall oyster cadmium burdens. Of the four physio-chemical factors measured, only temperature and turbidity weakly correlated with tissue cadmium concentrations (r² = −0.13; p < 0.05). By expressing oyster cadmium both as concentration and burden, regional and temporal patterns were

  11. Variation of calcium, copper and iron levels in serum, bile and stone samples of patients having different types of gallstone: A comparative study.

    Science.gov (United States)

    Khan, Mustafa; Kazi, Tasneem Gul; Afridi, Hassan Imran; Sirajuddin; Bilal, Muhammad; Akhtar, Asma; Khan, Sabir; Kadar, Salma

    2017-08-01

    Epidemiological data among the human population have shown a significantly increased incidence of gallstone (GS) disease worldwide. Studies indicate that some essential (calcium) and transition elements (iron and copper) in bile play an important role in the development of GS. The estimation of calcium, copper and iron was carried out in the serum, gall bladder bile and different types of GS (cholesterol, mixed and pigmented) of 172 patients, aged 20-55 years. For comparison, age-matched referents not suffering from GS disease were also selected. Biliary concentrations of calcium (Ca), iron (Fe) and copper (Cu) were correlated with their concentrations in serum and in the different types of GS samples. The ratio of Ca, Fe and Cu in bile to that in serum was also calculated. The metals under study were determined by flame atomic absorption spectroscopy after acid decomposition of the matrices of the selected samples. The Ca concentrations in serum samples were significantly higher in patients with pigmented GS as compared to controls (p < 0.001). The contents of Cu and Fe in serum and bile of all patients (except female cholesterol GS patients, who had low serum iron concentrations) were found to be higher than in controls, but the difference was significant only in those patients who had pigmented GS. The concentrations of Ca, Fe and Cu in the different types of GS were found in the order pigmented > mixed > cholesterol. The bile/serum ratio for Ca, Cu and Fe was found to be significantly higher in pigmented GS patients. Gall bladder bile was slightly alkaline in patients as compared to referents. The density of bile was found to be higher in patients as compared to referents. The various functional groups present in the different types of GS samples were confirmed by Fourier transform infrared spectroscopy. The higher density and pH of bile and the elevated concentrations of transition elements in all types of biological samples (serum, bile and GS) could be important factors in the formation of different types of GS.

  12. Hourly elemental concentrations in PM2.5 aerosols sampled simultaneously at urban background and road site during SAPUSS – diurnal variations and PMF receptor modelling

    Directory of Open Access Journals (Sweden)

    M. Dall'Osto

    2013-04-01

    Full Text Available Hourly-resolved aerosol chemical speciation data can be a highly powerful tool to determine the source origin of atmospheric pollutants in urban environments. Aerosol mass concentrations of seventeen elements (Na, Mg, Al, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Sr and Pb) were obtained by time- (1 h) and size- (PM2.5, particulate matter < 2.5 μm) resolved sampling of the mass fraction simultaneously measured at the UB and RS sites: (1) the regional aerosol sources impact both monitoring sites at similar concentrations regardless of their different ventilation conditions; (2) by contrast, local industrial aerosol plumes associated with shipping oil combustion and smelter activities have a higher impact on the more ventilated UB site; (3) a unique source of Pb-Cl (associated with combustion emissions) is found to be the major (82%) source of fine Cl in the urban agglomerate; (4) the mean diurnal variation of PM2.5 primary traffic non-exhaust brake dust (Fe-Cu) suggests that this source is mainly emitted and not resuspended, whereas PM2.5 urban dust (Ca) is found mainly resuspended by both traffic vortex and sea breeze; (5) urban dust (Ca) is found to be the aerosol source most affected by land wetness, reduced by a factor of eight during rainy days and suggesting that wet roads may be a solution for reducing urban dust concentrations.

  13. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap...

  14. Study of a sample of faint Be stars in the exofield of CoRoT. II. Pulsation and outburst events: Time series analysis of photometric variations

    Science.gov (United States)

    Semaan, T.; Hubert, A. M.; Zorec, J.; Gutiérrez-Soto, J.; Frémat, Y.; Martayan, C.; Fabregat, J.; Eggenberger, P.

    2018-06-01

    Context. The class of Be stars is the epitome of rapid rotators on the main sequence. These stars are privileged candidates for studying the incidence of rotation on the stellar internal structure and on non-radial pulsations. Pulsations are considered possible mechanisms to trigger the mass-ejection phenomena required to build up the circumstellar disks of Be stars. Aims: Time series analyses of the light curves of 15 faint Be stars observed with the CoRoT satellite were performed to obtain the distribution of non-radial pulsation (NRP) frequencies in their power spectra at epochs with and without light outbursts, and to discriminate pulsations from rotation-related photometric variations. Methods: Standard Fourier techniques were employed to analyze the CoRoT light curves. Fundamental parameters corrected for rapid-rotation effects were used to study the power spectrum as a function of the stellar location in the instability domains of the Hertzsprung-Russell (H-R) diagram. Results: Frequencies are concentrated in separate groups as predicted for g-modes in rapid B-type rotators, except for the two stars that are outside the H-R instability domain. In five objects the variations in the power spectrum are correlated with the time-dependent outburst characteristics. Time-frequency analysis showed that during the outbursts the amplitudes of stable main frequencies within 0.03 c d⁻¹ intervals strongly change, while transients and/or frequencies of low amplitude appear, separated or not from the stellar frequencies. The frequency patterns and activities depend on evolution phases: (i) the average separations between groups of frequencies are larger at the zero-age main sequence (ZAMS) than at the terminal-age main sequence (TAMS) and are largest in the middle of the MS phase; (ii) a poor frequency spectrum with f ≲ 1 c d⁻¹ of low amplitude characterizes the stars beyond the TAMS; and (iii) outbursts are seen in stars hotter than B4 spectral type and in the

  15. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  16. Hourly elemental concentrations in PM2.5 aerosols sampled simultaneously at urban background and road site during SAPUSS - diurnal variations and PMF receptor modelling

    Science.gov (United States)

    Dall'Osto, M.; Querol, X.; Amato, F.; Karanasiou, A.; Lucarelli, F.; Nava, S.; Calzolai, G.; Chiari, M.

    2013-04-01

    combustion emissions) is found to be the major (82%) source of fine Cl in the urban agglomerate; (4) the mean diurnal variation of PM2.5 primary traffic non-exhaust brake dust (Fe-Cu) suggests that this source is mainly emitted and not resuspended, whereas PM2.5 urban dust (Ca) is found mainly resuspended by both traffic vortex and sea breeze; (5) urban dust (Ca) is found the aerosol source most affected by land wetness, reduced by a factor of eight during rainy days and suggesting that wet roads may be a solution for reducing urban dust concentrations.

  17. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  18. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO2max) and one-repetition maximum (1RM)) to determine... ...in performing maximum physical capacity tests, as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity testing open the possibility of introducing physical testing early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy.

  19. Variation in Bluetongue virus real-time reverse transcription polymerase chain reaction assay results in blood samples of sheep, cattle, and alpaca.

    Science.gov (United States)

    Brito, Barbara P; Gardner, Ian A; Hietala, Sharon K; Crossley, Beate M

    2011-07-01

    Bluetongue is a vector-borne viral disease that affects domestic and wild ruminants. The epidemiology of this disease has recently changed, with occurrence in new geographic areas. Various real-time quantitative reverse transcription polymerase chain reaction (real-time qRT-PCR) assays are used to detect Bluetongue virus (BTV); however, the impact on PCR efficiency of biologic differences between New World camelid samples and the domestic ruminant samples for which the BTV real-time qRT-PCR was initially validated is unknown. New World camelids are known to have important biologic differences in whole blood composition, including hemoglobin concentration, which can alter PCR performance. In the present study, sheep, cattle, and alpaca blood were spiked with BTV serotypes 10, 11, 13, and 17 and analyzed in 10-fold dilutions by real-time qRT-PCR to determine if species affected nucleic acid recovery and assay performance. A separate experiment was performed using spiked alpaca blood subsequently diluted in 10-fold series in sheep blood to assess the influence of alpaca blood on the performance efficiency of the BTV real-time qRT-PCR assay. Results showed that BTV-specific nucleic acid detection from alpaca blood was consistently 1-2 logs lower than from sheep and cattle blood, and results were similar for each of the 4 BTV serotypes analyzed.

  20. Fluctuations of Attentional Networks and Default Mode Network during the Resting State Reflect Variations in Cognitive States: Evidence from a Novel Resting-state Experience Sampling Method.

    Science.gov (United States)

    Van Calster, Laurens; D'Argembeau, Arnaud; Salmon, Eric; Peters, Frédéric; Majerus, Steve

    2017-01-01

    Neuroimaging studies have revealed the recruitment of a range of neural networks during the resting state, which might reflect a variety of cognitive experiences and processes occurring in an individual's mind. In this study, we focused on the default mode network (DMN) and attentional networks and investigated their association with distinct mental states when participants are not performing an explicit task. To investigate the range of possible cognitive experiences more directly, this study proposes a novel method of resting-state fMRI experience sampling, informed by a phenomenological investigation of the fluctuation of mental states during the resting state. We hypothesized that DMN activity would increase as a function of internal mentation and that the activity of dorsal and ventral networks would indicate states of top-down versus bottom-up attention at rest. Results showed that dorsal attention network activity fluctuated as a function of subjective reports of attentional control, providing evidence that activity of this network reflects the perceived recruitment of controlled attentional processes during spontaneous cognition. Activity of the DMN increased when participants reported to be in a subjective state of internal mentation, but not when they reported to be in a state of perception. This study provides direct evidence for a link between fluctuations of resting-state neural activity and fluctuations in specific cognitive processes.

  1. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
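
    A minimal EM sketch for the two-component univariate normal mixture used in the paper; initialization and convergence checks are simplified, and all names and the synthetic data are illustrative.

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """Fit a two-component normal mixture by maximum likelihood via EM."""
    pi = 0.5                                       # weight of component 1
    mu = np.quantile(x, [0.25, 0.75])              # crude initial means
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d0 = np.exp(-(x - mu[0])**2 / (2*var[0])) / np.sqrt(2*np.pi*var[0])
        d1 = np.exp(-(x - mu[1])**2 / (2*var[1])) / np.sqrt(2*np.pi*var[1])
        r = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: re-estimate weight, means and variances
        pi = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        var = np.array([np.sum((1 - r) * (x - mu[0])**2) / np.sum(1 - r),
                        np.sum(r * (x - mu[1])**2) / np.sum(r)])
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_two_normal(x))
```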

  2. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
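
    For reference, the tree version of the Fitch algorithm that the paper generalizes can be sketched as follows (rooted binary tree, unit substitution costs; the handling of reticulate vertices is the paper's extension and is not shown).

```python
def fitch(tree, leaf_states):
    """Fitch's small-parsimony count for one character on a rooted binary
    tree. `tree` is nested (left, right) tuples with leaf-name strings;
    `leaf_states` maps leaf name -> character state.
    Returns (state set at the root, number of mutations)."""
    if isinstance(tree, str):                      # leaf node
        return {leaf_states[tree]}, 0
    l_set, l_cost = fitch(tree[0], leaf_states)
    r_set, r_cost = fitch(tree[1], leaf_states)
    inter = l_set & r_set
    if inter:                                      # agreement: no extra step
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1      # disagreement: one mutation

tree = (("A", "B"), ("C", "D"))
states = {"A": "0", "B": "0", "C": "1", "D": "0"}
print(fitch(tree, states))  # ({'0'}, 1)
```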

  3. Using ecological momentary assessment to investigate short-term variations in sexual functioning in a sample of peri-menopausal women from Iran.

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    Full Text Available The investigation of short-term changes in female sexual functioning has received little attention so far. The aims of the study were to gain empirical knowledge on within-subject and within- and across-variable fluctuations in women's sexual functioning over time. More specifically, to investigate the stability of women´s self-reported sexual functioning and the moderating effects of contextual and interpersonal factors. A convenience sample of 206 women, recruited across eight Health care Clinics in Rasht, Iran. Ecological momentary assessment was used to examine fluctuations of sexual functioning over a six week period. A shortened version of the Female Sexual Function Index (FSFI was applied to assess sexual functioning. Self-constructed questions were included to assess relationship satisfaction, partner's sexual performance and stress levels. Mixed linear two-level model analyses revealed a link between orgasm and relationship satisfaction (Beta = 0.125, P = 0.074 with this link varying significantly between women. Analyses further revealed a significant negative association between stress and all six domains of women's sexual functioning. Women not only reported differing levels of stress over the course of the assessment period, but further differed from each other in how much stress they experienced and how much this influenced their sexual response. Orgasm and sexual satisfaction were both significantly associated with all other domains of sexual function (P<0.001. And finally, a link between partner performance and all domains of women`s sexual functioning (P<0.001 could be detected. Except for lubrication (P = 0.717, relationship satisfaction had a significant effect on all domains of the sexual response (P<0.001. Overall, our findings support the new group of criteria introduced in the DSM-5, called "associated features" such as partner factors and relationship factors. Consideration of these criteria is important and necessary for

  4. Process variation in electron beam sterilization

    International Nuclear Information System (INIS)

    Beck, Jeffrey A.

    2012-01-01

    The qualification and control of electron beam sterilization can be improved by the application of proven statistical analysis techniques such as Analysis of Variance (ANOVA) and statistical tolerance limits. These statistical techniques can be useful tools in: • locating and quantifying the minimum and maximum absorbed dose in a product; • estimating the expected process maximum dose, given a minimum sterilizing dose; • setting a process minimum dose target, based on an allowance for random measurement and process variation; • determining the dose relationship between a reference dosimeter and process minimum and maximum doses. This study investigates and demonstrates the application of these tools in qualifying electron beam sterilization, and compares the conclusions obtained with those obtained using practices recommended in the Guide for Process Control in Radiation Sterilization. The study supports the following conclusions for electron beam processes: (1) ANOVA is a more effective tool for evaluating the equivalency of absorbed doses than the methods suggested in the Guide. (2) Process limits computed using statistical tolerance limits more accurately reflect actual process variability than the AAMI method, which applies +/−2 sample standard deviations (s) regardless of sample size. (3) The use of reference dose ratios lends itself to qualification using statistical tolerance limits. The current AAMI recommended approach may result in an overly optimistic estimate of the reference dose adjustment factor, as it is based on application of +/−2(s) tolerances regardless of sample size.
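
    A sketch of the sample-size-aware alternative to the fixed +/−2s rule: a two-sided normal tolerance factor computed with Howe's widely used approximation. The record does not state which tolerance-factor formula the study used, so this particular choice is an assumption.

```python
import numpy as np
from scipy.stats import norm, chi2

def tolerance_k(n, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance factor k (Howe's approximation):
    mean ± k*s covers `coverage` of the population with `confidence`,
    unlike the fixed ±2s rule, which ignores sample size."""
    z = norm.ppf((1 + coverage) / 2)
    nu = n - 1
    chi2_q = chi2.ppf(1 - confidence, nu)          # lower chi-square quantile
    return np.sqrt(nu * (1 + 1.0 / n) * z**2 / chi2_q)

for n in (5, 10, 30, 100):
    print(n, round(tolerance_k(n), 2))   # k shrinks toward ~2 as n grows
```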

  5. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  6. Collateral variations between the concentrations of mercury and other water soluble ions in volcanic ash samples and volcanic activity during the 2014-2016 eruptive episodes at Aso volcano, Japan

    Science.gov (United States)

    Marumoto, Kohji; Sudo, Yasuaki; Nagamatsu, Yoshizumi

    2017-07-01

    During 2014-2016, the Aso volcano, located in the center of the Kyushu Islands, Japan, erupted and emitted large amounts of volcanic gases and ash. Two eruptive episodes were observed: first, Strombolian magmatic eruptive episodes from 25 November 2014 to the middle of May 2015, and second, phreatomagmatic and phreatic eruptive episodes from September 2015 to February 2016. Bulk chemical analyses of total mercury (Hg) and major ions in the water soluble fraction of volcanic ash fall samples were conducted. During the Strombolian magmatic eruptive episodes, total Hg concentrations averaged 1.69 ± 0.87 ng g⁻¹ (N = 33), with a range from 0.47 to 3.8 ng g⁻¹. In addition, the temporal variation of total Hg concentrations in volcanic ash varied with the amplitude change of seismic signals. In the Aso volcano, volcanic tremors are always observed during eruptive stages and quiet interludes, and the amplitudes of tremors increase at eruptive stages. The temporal variation of total Hg concentrations could therefore provide an indication of the level of volcanic activity. During the phreatomagmatic and phreatic eruptive episodes, on the other hand, total Hg concentrations in the volcanic ash fall samples averaged 220 ± 88 ng g⁻¹ (N = 5), corresponding to 100 times higher than those during the Strombolian eruptive episode. It is therefore possible that total Hg concentrations in volcanic ash samples vary widely depending on the eruptive type. In addition, the ash fall amounts were also largely different between the two eruptive episodes, which can also be one of the factors controlling Hg concentrations in volcanic ash.

  7. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as a power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. As the sun illumination changes due to variation in the angle of incidence of solar radiation and in the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source; in the case of a solar panel power source, the maximum power point varies as a result of changes in the panel's electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; no mathematical model is required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the Microchip microcontroller unit control card and
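
    For contrast with the fuzzy controller, the perturbation and observation (hill climbing) loop discussed above can be sketched in a few lines; `measure` and `set_voltage` are hypothetical hardware callbacks, not part of the record.

```python
def perturb_and_observe(measure, set_voltage, v0, dv=0.1, steps=100):
    """Hill-climbing (perturb & observe) MPPT loop.

    measure     : callback returning (voltage, current) at the panel
    set_voltage : callback commanding the panel operating voltage
    """
    v_ref = v0
    set_voltage(v_ref)
    v, i = measure()
    p_prev = v * i
    for _ in range(steps):
        v_ref += dv
        set_voltage(v_ref)
        v, i = measure()
        p = v * i
        if p < p_prev:       # power dropped: reverse the perturbation
            dv = -dv
        p_prev = p
    return v_ref
```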

  8. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]·exp{-[(1/lambda)+(1/theta)]t} for t > 0. Also, the steady-state availability is A(∞) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
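
    Since the maximum likelihood estimates of the exponential means are simply the sample means, the plug-in estimator of A(t) follows directly from the formula above; a minimal sketch with illustrative data:

```python
import numpy as np

def availability_mle(x, y, t):
    """Plug-in MLE of instantaneous availability A(t) from n failure-repair
    cycles, with exponential time-to-failure (mean lambda) and time-to-repair
    (mean theta), as in the abstract's parametrization."""
    lam = np.mean(x)    # MLE of the mean time to failure
    the = np.mean(y)    # MLE of the mean time to repair
    a_inf = lam / (lam + the)                                # steady state A(inf)
    a_t = a_inf + (the / (lam + the)) * np.exp(-(1/lam + 1/the) * t)
    return a_t, a_inf

x = np.array([120.0, 95.0, 210.0, 160.0])   # hours to failure (illustrative)
y = np.array([4.0, 6.5, 3.0, 5.5])          # hours to repair (illustrative)
print(availability_mle(x, y, t=24.0))
```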

  9. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.

  10. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (the voltage at maximum power, the current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
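
    As a concrete illustration of locating the point where dP/dV = 0, the sketch below uses a single-diode panel model with made-up parameters (not taken from the article) and finds the voltage, current, and power at the maximum numerically, standing in for the analytic differentiation used in the paper.

```python
import numpy as np

# Illustrative single-diode panel model: I(V) = I_ph - I_0*(exp(V/V_t) - 1)
I_ph, I_0, V_t = 5.0, 5e-9, 1.2   # photocurrent [A], saturation current [A],
                                  # module-level thermal voltage [V]

def power(v):
    return v * (I_ph - I_0 * (np.exp(v / V_t) - 1.0))

# locate dP/dV = 0 with a dense scan over the operating range
v = np.linspace(0.0, 25.0, 200001)
p = power(v)
k = np.argmax(p)
print(f"V_mp ~ {v[k]:.3f} V, I_mp ~ {p[k]/v[k]:.3f} A, P_max ~ {p[k]:.3f} W")
```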

  11. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimating maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of the shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude is, statistically speaking, the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches to risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon the observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
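
    The McGarr (2014) bound lends itself to a quick worked example. With illustrative values (a typical crustal shear modulus and a hypothetical injected volume, neither taken from the abstract), the ceiling on moment magnitude follows from the standard moment-magnitude relation:

```python
import math

# McGarr (2014) deterministic bound: seismic moment <= shear modulus times
# net injected volume. Input values are illustrative, not from the abstract.
G = 3.0e10          # shear modulus [Pa], typical crustal value
dV = 1.0e4          # hypothetical net injected volume [m^3]
M0_max = G * dV     # upper bound on seismic moment [N m]

# standard moment-magnitude conversion: Mw = (2/3) * (log10(M0) - 9.1)
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)
print(f"M0_max = {M0_max:.2e} N m  ->  Mw_max ~ {Mw_max:.2f}")  # ~3.6
```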

  12. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is supported by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
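
    Higuchi's method estimates the fractal dimension from how the average curve length of a coarse-grained series scales with the sampling interval k. Below is a generic sketch of the estimator (a textbook form, not the authors' implementation); for white noise the estimate should come out close to 2.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a time series by Higuchi's method:
    the mean normalized curve length L(k) scales as k**(-D)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, lengths = [], []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):                     # one sub-series per offset m
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # curve length for offset m, with Higuchi's normalization
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        ks.append(k)
        lengths.append(np.mean(lk))
    # slope of log L(k) versus log k is -D
    return -np.polyfit(np.log(ks), np.log(lengths), 1)[0]

# sanity check: white noise should give D close to 2
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(5000)))
```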

  13. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  14. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  15. Variation in commercial smoking mixtures containing third-generation synthetic cannabinoids.

    Science.gov (United States)

    Frinculescu, Anca; Lyall, Catherine L; Ramsey, John; Miserez, Bram

    2017-02-01

    Variation in ingredients (qualitative variation) and in quantity of active compounds (quantitative variation) in herbal smoking mixtures containing synthetic cannabinoids has been shown for older products. This can be dangerous to the user, as accurate and reproducible dosing is impossible. In this study, 69 packages containing third-generation cannabinoids of seven brands on the UK market in 2014 were analyzed both qualitatively and quantitatively for variation. When comparing the labels to actual active ingredients identified in the sample, only one brand was shown to be correctly labelled. The other six brands contained less, more, or ingredients other than those listed on the label. Only two brands were inconsistent, containing different active ingredients in different samples. Quantitative variation was assessed both within one package and between several packages. Within-package variation was within a 10% range for five of the seven brands, but two brands showed larger variation, up to 25% (Relative Standard Deviation). Variation between packages was significantly higher, with variation up to 38% and maximum concentration up to 2.7 times higher than the minimum concentration. Both qualitative and quantitative variation are common in smoking mixtures and endanger the user, as it is impossible to estimate the dose or to know the compound consumed when smoking commercial mixtures. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  17. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  18. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  19. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  20. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the profit to be maximized, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraint is the limit on polymer concentration. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  1. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.
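
    The maximum entropy copula itself is derived by solving for the Lagrange multipliers that match the imposed dependence constraints, which is beyond a short sketch. As a simplified stand-in for the final step the abstract describes (generating flows at one site by sampling from a conditional distribution given another site), the following uses a Gaussian copula with an assumed correlation; all values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_second_site(u1, rho):
    """Given uniform marginal samples u1 at site 1, draw dependent uniforms
    for site 2 from the conditional Gaussian copula with correlation rho."""
    z1 = stats.norm.ppf(u1)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(z1.shape)
    return stats.norm.cdf(z2)

u1 = rng.uniform(size=1000)            # site-1 flows on the uniform scale
u2 = simulate_second_site(u1, rho=0.8)
# map u1, u2 back through each site's fitted marginal CDF to obtain flows
```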

  2. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  3. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
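
    The core idea (compute a likelihood from noisy measurements, then maximize it over the parameters) can be sketched in a few lines. This is a generic stand-in for the approach, not MXLKID's LRLTRAN implementation: a decay rate and a noise level are recovered from noisy observations of an exponential response by minimizing the Gaussian negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
true_k, sigma = 0.8, 0.05
y = np.exp(-true_k * t) + sigma * rng.standard_normal(t.size)  # noisy data

def neg_log_likelihood(params):
    """Gaussian negative log-likelihood for model y(t) = exp(-k*t)."""
    k, s = params
    resid = y - np.exp(-k * t)
    return 0.5 * np.sum(resid**2) / s**2 + t.size * np.log(s)

fit = minimize(neg_log_likelihood, x0=[0.5, 0.1],
               bounds=[(1e-6, None), (1e-6, None)])
print(fit.x)   # recovered decay rate and noise level
```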

  4. Attitude sensor alignment calibration for the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Shuster, Malcolm D.

    1990-01-01

    An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.

  5. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value but depends directly on the additional constraint functions applied to resolve the motion redundancy.

  6. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  7. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  8. Evolutionary History Underlies Plant Physiological Responses to Global Change Since the Last Glacial Maximum

    Science.gov (United States)

    Becklin, K. M.; Medeiros, J. S.; Sale, K. R.; Ward, J. K.

    2014-12-01

    Assessing family and species-level variation in physiological responses to global change across geologic time is critical for understanding factors that underlie changes in species distributions and community composition. Ancient plant specimens preserved within packrat middens are invaluable in this context since they allow for comparisons between co-occurring plant lineages. Here we used modern and ancient plant specimens preserved within packrat middens from the Snake Range, NV to investigate the physiological responses of a mixed montane conifer community to global change since the last glacial maximum. We used a conceptual model to infer relative changes in stomatal conductance and maximum photosynthetic capacity from measures of leaf carbon isotopes, stomatal characteristics, and leaf nitrogen content. Our results indicate that most of the sampled taxa decreased stomatal conductance and/or photosynthetic capacity from glacial to modern times. However, plant families differed in the timing and magnitude of these physiological responses. Additionally, leaf-level responses were more similar within plant families than within co-occurring species assemblages. This suggests that adaptation at the level of leaf physiology may not be the main determinant of shifts in community composition, and that plant evolutionary history may drive physiological adaptation to global change over recent geologic time.

  9. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha

  10. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  11. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    Energy Technology Data Exchange (ETDEWEB)

    Bittel, R; Mancel, J [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire

    1968-10-01

    The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. For that reason, and in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical diet, critical radioelement and waste disposal formulae are considered in the same spirit, attaching the greatest possible importance to local situations. (authors)

  12. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
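
    The multiple-testing step mentioned above relies on the Benjamini-Yekutieli correction, which controls the false discovery rate under arbitrary dependence between tests. A generic sketch of that step in Python (not the DSD toolbox code, which is in MATLAB/Octave):

```python
import numpy as np

def benjamini_yekutieli(pvals, alpha=0.05):
    """Boolean mask of discoveries under Benjamini-Yekutieli FDR control,
    which is valid under arbitrary dependence between tests."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))      # harmonic correction factor
    thresh = alpha * np.arange(1, m + 1) / (m * c_m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest i with p_(i) <= t_i
        reject[order[:k + 1]] = True              # step-up: reject all smaller
    return reject

print(benjamini_yekutieli([0.0001, 0.002, 0.03, 0.2, 0.7]))
```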

  13. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  14. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...

  15. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  16. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  17. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  18. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
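
    In one dimension, sampling from a truncated normal is straightforward; the sketch below (illustrative parameters, using SciPy's truncnorm) also illustrates the paper's caveat that the location and scale parameters are not the post-truncation mean and standard deviation:

```python
import numpy as np
from scipy.stats import truncnorm

# Sampling an inherently positive quantity from a normal distribution
# truncated at zero. mu and sigma are illustrative values.
mu, sigma = 2.0, 1.5
a = (0.0 - mu) / sigma                 # truncation point, standardized
tn = truncnorm(a, np.inf, loc=mu, scale=sigma)

samples = tn.rvs(size=100_000, random_state=0)
print(samples.min() >= 0.0)            # all samples are positive
# note: loc/scale are location/scale parameters of the parent normal,
# not the mean and standard deviation after truncation:
print(samples.mean(), samples.std())
```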

  19. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  20. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions, as well as settings where z is completely or only partially known.

  1. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  2. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, for deriving power laws.

  3. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  4. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  5. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  6. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    In the reported relation, the dependent variable represents the maximum dry density, while the other terms signify the plastic limit and the liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known.

  7. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
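
    A quick order-of-magnitude check of the quoted relation is possible with rough inputs. All O(1) normalization factors are ignored here, and the numbers below are standard textbook estimates rather than values taken from the paper, so only the overall scale is meaningful:

```python
import math

# Rough check of v_h ~ T_BBN^2 / (M_pl * y_e^5); O(1) factors ignored.
T_bbn = 1e-3                             # ~1 MeV, in GeV
M_pl = 1.22e19                           # Planck mass [GeV]
y_e = math.sqrt(2) * 0.511e-3 / 246.0    # electron Yukawa, ~2.9e-6

print(T_bbn**2 / (M_pl * y_e**5))        # ~4e2 GeV, i.e. O(300 GeV)
```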

  8. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  9. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  10. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  11. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  12. Seasonal variation of technetium-99 in Fucus vesiculosus and its application as an oceanographic tracer

    DEFF Research Database (Denmark)

    Shi, Keliang; Hou, Xiaolin; Roos, Per

    2013-01-01

    The concentration of 99Tc was determined in archived time series seaweed samples collected at Klint (Denmark). The results demonstrate a significant seasonal variation of 99Tc concentrations in Fucus vesiculosus, with maximum values in winter and minimum values in summer. The mechanism driving this seasonal variation is discussed. Concentration factors of (1.9 ± 0.5) × 10^5 L/kg were obtained. This indicates that F. vesiculosus can be used as a reliable bioindicator to monitor 99Tc concentration in seawater.

  13. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Science.gov (United States)

    Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.

    2016-03-01

    Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG

  14. Maximum Power Point Tracking Using Sliding Mode Control for Photovoltaic Array

    Directory of Open Access Journals (Sweden)

    J. Ghazanfari

    2013-09-01

    Full Text Available In this paper, a robust Maximum Power Point Tracking (MPPT) scheme for a PV array has been proposed using sliding mode control, by defining a new formulation for the sliding surface based on the incremental conductance (INC) method. The stability and robustness of the proposed controller are investigated with respect to load variations and environmental changes. Three different types of DC-DC converter are used in the maximum power point (MPP) system and the results obtained are given. The simulation results confirm the effectiveness of the proposed method in the presence of load variations and environmental changes for different types of DC-DC converter topologies.
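
    The INC criterion on which the sliding surface is built compares the incremental conductance dI/dV with the negative instantaneous conductance -I/V: the two are equal exactly at the maximum power point, where dP/dV = 0. A minimal sketch of the plain INC decision rule (without the sliding mode layer, and with illustrative interface names) is:

```python
# Plain incremental-conductance (INC) step: at the MPP, dI/dV == -I/V.
# All names and tolerances are illustrative, not from the paper.

def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.1, eps=1e-3):
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:                 # voltage unchanged: use dI alone
        if di > eps:
            v_ref += step
        elif di < -eps:
            v_ref -= step
    else:
        g = di / dv                    # incremental conductance
        if g > -i / v + eps:           # left of the MPP: increase voltage
            v_ref += step
        elif g < -i / v - eps:         # right of the MPP: decrease voltage
            v_ref -= step
    return v_ref                       # within tolerance: hold at the MPP
```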

  15. Metaleptic Variations

    OpenAIRE

    Pernot, Dominique

    2014-01-01

    Gabriel Josipovici's latest novels offer great variety, ranging from parody and light comic fiction, in Only Joking and Making Mistakes, to graver, more personal, ontological subjects. In a short novel, Everything Passes, and in a major novel, Goldberg: Variations, the reader is led to question the mysterious nature of reality, which is all too often accepted without question by many novelists...

  16. Work Conceptions and Beliefs: A Descriptive Study on Some Sources of Variation in a Working Sample

    Directory of Open Access Journals (Sweden)

    Elena Zubieta

    2008-12-01

    Full Text Available From a psycho-sociological view, work can be understood as a set of values and beliefs which individuals and groups construct before and during work process socialization. It is a flexible set of cognitions influenced by individuals' personal experiences and contextual changes (Salanova, Gracia & Peiró, 1996). Taking socialization at work as a starting point, and with the aim of exploring variation sources in terms of sociodemographic, contextual and psycho-sociological variables, a descriptive group differences study was carried out based on a convenience sample of 290 working participants from Buenos Aires city and surroundings. Results show the presence of beliefs associated with the Protestant Work Ethic and competitiveness, values of openness to change and self-transcendence, and particular configurations arising from variables such as sex, age, educational level, and aspects of work trajectory such as years of work, tenure in the organization and the position, interruptions in working activity, and work modality.

  17. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  18. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genetic mutations. (VT) [de

  19. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act... SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  20. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
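
    The constrained maximization described above is short enough to sketch. Assuming discrete support k = 1, 2, 3, ... and using one Lagrange multiplier per constraint, maximizing the Shannon entropy with a fixed mean of ln k yields a pure power law; the zeta-function normalization applies for this choice of support:

```latex
\max_{p}\ -\sum_{k\ge 1} p(k)\,\ln p(k)
\quad\text{subject to}\quad
\sum_{k\ge 1} p(k) = 1,
\qquad
\sum_{k\ge 1} p(k)\,\ln k = \chi .

% Stationarity of the Lagrangian gives \ln p(k) = -1-\mu-\alpha\ln k, hence
p(k) = \frac{k^{-\alpha}}{\zeta(\alpha)},
\qquad
\text{with } \alpha \text{ fixed by } -\frac{\zeta'(\alpha)}{\zeta(\alpha)} = \chi .
```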

  1. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  2. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
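
    The closing claim can be illustrated numerically (our construction, in the spirit of Reed-style mixtures rather than the authors' test): a lognormally evolving unit observed at an exponentially distributed age develops a Pareto tail, which a crude Hill estimate picks up.

        import numpy as np

        rng = np.random.default_rng(1)

        # Units evolve as geometric Brownian motion and are observed at an
        # exponentially distributed "age": the mixture has a lognormal body
        # and a power-law (Pareto) upper tail.
        t = rng.exponential(1.0, size=200_000)
        x = np.exp(0.5 * t + np.sqrt(t) * rng.standard_normal(t.size))

        # Crude Hill estimate of the tail index from the top k observations.
        xs = np.sort(x)
        k = 2000
        alpha_hat = 1.0 / np.mean(np.log(xs[-k:] / xs[-k - 1]))
        print(f"estimated tail index: {alpha_hat:.2f}")  # near 1 here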

  3. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  4. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
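
    Fitch's bottom-up pass is compact enough to sketch; the toy implementation below (ours, for binary trees and a single character) returns the root state set and the parsimony score:

        def fitch(tree, leaf_states):
            """Fitch's small-parsimony pass on a rooted binary tree.

            `tree` is a nested tuple such as (("a", "b"), ("c", "d"));
            `leaf_states` maps leaf names to observed character states.
            Returns (state set at the root, number of inferred changes)."""
            def post(node):
                if isinstance(node, str):              # leaf
                    return {leaf_states[node]}, 0
                (ls, lc), (rs, rc) = post(node[0]), post(node[1])
                inter = ls & rs
                if inter:                              # agreement: no change
                    return inter, lc + rc
                return ls | rs, lc + rc + 1            # union costs one change
            return post(tree)

        root_set, score = fitch((("a", "b"), ("c", "d")),
                                {"a": "A", "b": "G", "c": "A", "d": "A"})
        # root_set == {"A"}, score == 1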

  5. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  6. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  7. Atmospheric diurnal variations observed with GPS radio occultation soundings

    Directory of Open Access Journals (Sweden)

    F. Xie

    2010-07-01

    The diurnal variation, driven by solar forcing, is a fundamental mode in the Earth's weather and climate system. Radio occultation (RO) measurements from the six COSMIC satellites (Constellation Observing System for Meteorology, Ionosphere and Climate) provide nearly uniform global coverage with high vertical resolution, all-weather and diurnal sampling capability. This paper analyzes the diurnal variations of temperature and refractivity from three-year (2007–2009) COSMIC RO measurements in the troposphere and stratosphere between 30° S and 30° N. The RO observations reveal both propagating and trapped vertical structures of diurnal variations, including transition regions near the tropopause where data with high vertical resolution are critical. In the tropics the diurnal amplitude in refractivity shows a minimum around 14 km and increases to a local maximum around 32 km in the stratosphere. The upward propagating component of the migrating diurnal tides in the tropics is clearly captured by the GPS RO measurements, which show a downward progression in phase from stratopause to the upper troposphere with a vertical wavelength of about 25 km. At ~32 km the seasonal variation of the tidal amplitude maximizes on the opposite side of the equator relative to the solar forcing. The vertical structure of tidal amplitude shows strong seasonal variations and becomes asymmetric about the equator, tilted toward the summer hemisphere in the solstice months. Such asymmetry becomes less prominent in equinox months.

  8. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.
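
    In generic notation (ours, not the paper's), the controlled mean-field dynamics and cost functional in such problems have the form

        dX_t = b\bigl(t, X_t, P_{X_t}, u_t\bigr)\,dt
             + \sigma\bigl(t, X_t, P_{X_t}, u_t\bigr)\,dW_t ,

        J(u) = \mathbb{E}\Bigl[\int_0^T f\bigl(t, X_t, P_{X_t}, u_t\bigr)\,dt
             + \Phi\bigl(X_T, P_{X_T}\bigr)\Bigr],

    where P_{X_t} denotes the law of X_t and the control u takes values in a possibly non-convex open set.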

  9. Reduced oxygen at high altitude limits maximum size.

    Science.gov (United States)

    Peck, L S; Chapelle, G

    2003-11-07

    The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in sea water of a low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).

  10. Venus atmosphere profile from a maximum entropy principle

    Directory of Open Access Journals (Sweden)

    L. N. Epele

    2007-10-01

    The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases but with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer is located at the height of the cloud deck; and a third region is practically isothermal.

  11. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)

    2016-12-15

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  12. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles, especially if, as maximum and minimum principles, these can provide bounds and hence estimates of accuracy. The non-symmetric (and hence stationary rather than extremum principles) are seen however to play a significant role in optimisation theory. (Orig./A.B.)

  13. Experiencing variation

    DEFF Research Database (Denmark)

    Kobayashi, Sofie; Berge, Maria; Grout, Brian William Wilson

    2017-01-01

    This study contributes towards a better understanding of learning dynamics in doctoral supervision by analysing how learning opportunities are created in the interaction between supervisors and PhD students, using the notion of experiencing variation as a key to learning. Empirically, we have based the study on four video-recorded sessions, with four different PhD students and their supervisors, all from life sciences. Our analysis revealed that learning opportunities in the supervision sessions concerned either the content matter of research (for instance, understanding soil structure), or the research methods, more specifically how to produce valid results. Our results illustrate how supervisors and PhD students create a space of learning together in their particular discipline by varying critical aspects of their research in their discussions. Situations where more open-ended research issues...

  14. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  15. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
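
    The resampling scheme itself is generic; a minimal sketch follows, in which infer_reticulations is a hypothetical stand-in for any inference routine (not NEPAL's actual interface):

        import random
        from collections import Counter

        def bootstrap_support(alignment, infer_reticulations,
                              n_replicates=100, seed=0):
            """Nonparametric bootstrap support for reticulation events.

            `alignment` is a list of equal-length sequences; each replicate
            resamples alignment columns with replacement, reruns inference,
            and the support of an event is the fraction of replicates in
            which it is inferred."""
            rng = random.Random(seed)
            n_sites = len(alignment[0])
            counts = Counter()
            for _ in range(n_replicates):
                cols = [rng.randrange(n_sites) for _ in range(n_sites)]
                replicate = ["".join(seq[c] for c in cols)
                             for seq in alignment]
                counts.update(infer_reticulations(replicate))
            return {ev: n / n_replicates for ev, n in counts.items()}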

  16. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event-by-event basis. (orig.)

  17. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
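
    A toy version of the idea, assuming a known cracking-pattern matrix; the entropy-penalized least-squares objective below is a simplification of the paper's GME formulation, not the authors' code:

        import numpy as np
        from scipy.optimize import minimize

        def entropic_unmix(A, y, tau=1e-3):
            """Entropy-regularized unmixing of a mass spectrum y ~ A @ c.

            A: (n_channels, n_species) cracking-pattern matrix;
            returns non-negative species concentrations c."""
            n = A.shape[1]

            def objective(c):
                p = c / c.sum()
                entropy = -np.sum(p * np.log(p + 1e-12))
                return np.sum((A @ c - y) ** 2) - tau * entropy

            x0 = np.full(n, 1.0 / n)
            res = minimize(objective, x0, bounds=[(1e-9, None)] * n)
            return res.x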

  18. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  19. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  20. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
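
    Drawdown statistics are straightforward to compute from a price path; a minimal sketch:

        import numpy as np

        def max_drawdown(prices):
            """Maximum drawdown: the largest peak-to-trough relative fall."""
            prices = np.asarray(prices, dtype=float)
            running_peak = np.maximum.accumulate(prices)
            drawdowns = 1.0 - prices / running_peak
            return drawdowns.max()

        # Example: a bubble that rises and then crashes.
        p = [100, 120, 150, 200, 180, 90, 110]
        print(max_drawdown(p))  # (200 - 90) / 200 = 0.55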

  1. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
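
    A simplified version of such an estimator, assuming equal noise levels across channels (the cited model is more general), grid-searches the fundamental and scores the energy captured by a shared harmonic basis:

        import numpy as np

        def ml_pitch(channels, fs, f0_grid, n_harmonics=5):
            """Grid-search pitch estimate for multi-channel data with a
            shared fundamental but per-channel amplitudes and phases."""
            n = len(channels[0])
            t = np.arange(n) / fs
            best_f0, best_score = None, -np.inf
            for f0 in f0_grid:
                # Harmonic cos/sin basis shared by all channels.
                Z = np.column_stack([f(2 * np.pi * f0 * k * t)
                                     for k in range(1, n_harmonics + 1)
                                     for f in (np.cos, np.sin)])
                # Sum over channels of the energy captured by projection.
                score = 0.0
                for x in channels:
                    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
                    fit = Z @ coef
                    score += fit @ fit
                if score > best_score:
                    best_f0, best_score = f0, score
            return best_f0

        fs = 8000
        t = np.arange(2048) / fs
        chans = [np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(t.size)
                 for _ in range(2)]
        print(ml_pitch(chans, fs, np.arange(100, 400, 1.0)))  # about 220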

  2. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  3. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  4. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  5. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
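
    In symbols, following one common formalization of Maximum Loss over a Kullback-Leibler ball (our rendering, not a quotation from the paper):

        \mathrm{MaxLoss}(k) \;=\; \sup_{\,Q \,:\, D_{\mathrm{KL}}(Q \,\|\, P) \,\le\, k} \ \mathbb{E}_{Q}[L],

    with the multiperiod version applying one such ball per period and, per the abstract, an appropriate choice of the radii k.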

  6. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  7. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  8. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water. The total height of the chain was 11.5 cm. Replacement of the existing cadmium absorber with a B(10) absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, respectively; the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was obtained for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  10. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational

  11. On the maximum and minimum of two modified Gamma-Gamma variates with applications

    KAUST Repository

    Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim

    2014-01-01

    on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii

  12. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes' Theorem.

  13. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  14. Statistical analysis of yearly series of maximum daily rainfall in Spain. Análisis estadístico de las series anuales de máximas lluvias diarias en España

    Energy Technology Data Exchange (ETDEWEB)

    Ferrer Polo, J.; Ardiles Lopez, K. L. (CEDEX, Ministerio de Obras Publicas, Transportes y Medio ambiente, Madrid (Spain))

    1994-01-01

    Work on the statistical modelling of maximum daily rainfalls is presented, with a view to estimating the quantiles for different return periods. An index-flood approach has been adopted, in which the local quantiles result from rescaling a regional law by the mean of each series of values, used as a local scale factor. The annual maximum series were taken from 1,545 meteorological stations over a 30-year period and classified into 26 regions defined according to meteorological criteria, whose homogeneity has been checked by means of a statistical analysis of the sample coefficients of variation. Parameters were estimated for the following four distribution models: Two-Component Extreme Value (TCEV); General Extreme Value (GEV); Log-Pearson III (LP3); and the SQRT-Exponential Type Distribution of Maximum. The analysis of the quantiles obtained reveals only slight differences among the models, reducing the importance of model selection. The last of the above-mentioned distributions was finally chosen, for the following reasons: it is defined with fewer parameters; it is the only one proposed specifically for the analysis of daily rainfall maxima; it yields more conservative results than the traditional Gumbel distribution for high return periods; and it provides a good description of the main sampling statistics of the right-hand tail of the distribution, a fact checked with Monte Carlo simulation techniques. The choice of a distribution model with only two parameters led to the selection of the regional coefficient of variation as the only parameter determining the regional quantiles. This eliminates the quantile discontinuity of the classical regional approach, smoothing the values of that coefficient by means of an isoline map on a national scale.
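
    The index-flood step can be written compactly (our notation): the local T-year quantile is the local mean rescaled by a regional growth factor,

        x_T^{\text{site}} \;=\; \bar{x}_{\text{site}} \cdot K_T\!\left(\mathrm{CV}_{\text{region}}\right),

    where K_T depends only on the return period and, for the chosen two-parameter distribution, on the regional coefficient of variation.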

  15. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights in the usefulness of the method for modelling, estimating and simulating loss distributions.
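
    Under m moment constraints the ME density takes the exponential-family form (a standard result, in our notation):

        p_{\mathrm{ME}}(x) \;=\; \exp\Bigl(-\lambda_0 - \sum_{k=1}^{m} \lambda_k x^{k}\Bigr),
        \qquad
        \int x^{k}\, p_{\mathrm{ME}}(x)\,dx \;=\; \mu_k , \quad k = 1,\dots,m ,

    so that m = 1 recovers the exponential density and m = 2 a Gaussian-type density, consistent with the nesting of classical parametric families mentioned above.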

  16. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures

  17. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
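
    The Horvitz-Thompson estimator at the core of this approach has the usual form (our rendering):

        \hat{\theta}_{\mathrm{HT}} \;=\; \sum_{i \in s} \frac{y_i}{\pi_i},

    where s is the set of sampled units (nodes, edges or other graph motifs) and \pi_i is the inclusion probability of unit i under the snowball design.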

  18. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
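
    One compact way to express the trade-off described here (our notation, not necessarily the authors') is through the hydraulic supply function

        E_{\max} \;=\; \max_{\psi_L} \int_{\psi_L}^{\psi_{\mathrm{soil}}} k(\psi)\, d\psi ,

    where k(\psi) is the cavitation-limited xylem conductivity; because k falls toward zero as the leaf water potential \psi_L becomes more negative, the integral saturates and defines a finite maximum transpiration rate.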

  19. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  20. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  1. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as the Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs, with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
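
    For reference, the tapered Gutenberg-Richter distribution is usually written for scalar seismic moment (our rendering of the standard form):

        \Pr(M > m) \;=\; \Bigl(\frac{m_t}{m}\Bigr)^{\beta} \exp\Bigl(\frac{m_t - m}{m_c}\Bigr), \qquad m \ge m_t ,

    with threshold moment m_t, tail exponent \beta and corner moment m_c controlling the exponential taper.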

  2. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  3. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  4. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  5. SPATIAL VARIATION IN GROUNDWATER POLLUTION BY ...

    African Journals Online (AJOL)

    Osondu

    higher in Group A water samples, and reduced slightly in the Group B and then the Group C samples, ... Keywords: Spatial variation, Groundwater, Pollution, Abattoir, Effluents, Water quality. ... situation which may likely pose a threat to the.

  6. Diurnal variations of Titan

    Science.gov (United States)

    Cui, J.; Galand, M.; Yelle, R. V.; Vuitton, V.; Wahlund, J.-E.; Lavvas, P. P.; Mueller-Wodarg, I. C. F.; Kasprzak, W. T.; Waite, J. H.

    2009-04-01

    We present our analysis of the diurnal variations of Titan's ionosphere (between 1,000 and 1,400 km) based on a sample of Ion Neutral Mass Spectrometer (INMS) measurements in the Open Source Ion (OSI) mode obtained from 8 close encounters of the Cassini spacecraft with Titan. Though there is an overall ion depletion well beyond the terminator, the ion content on Titan's nightside is still appreciable, with a density plateau of ~700 cm-3 below ~1,300 km. Such a plateau is associated with the combination of distinct diurnal variations of light and heavy ions. Light ions (e.g. CH5+, HCNH+, C2H5+) show strong diurnal variation, with clear bite-outs in their nightside distributions. In contrast, heavy ions (e.g. c-C3H3+, C2H3CNH+, C6H7+) present modest diurnal variation, with significant densities observed on the nightside. We propose that the distinctions between light and heavy ions are associated with their different chemical loss pathways, with the former primarily through "fast" ion-neutral chemistry and the latter through "slow" electron dissociative recombination. The INMS data suggest day-to-night transport as an important source of ions on Titan's nightside, to be distinguished from the conventional scenario of auroral ionization by magnetospheric particles as the only ionizing source on the nightside. This is supported by the strong correlation between the observed night-to-day ion density ratios and the associated ion lifetimes. We construct a time-dependent ion chemistry model to investigate the effects of day-to-night transport on the ionospheric structures of Titan. The predicted diurnal variation has similar general characteristics to those observed, with some apparent discrepancies which could be reconciled by imposing fast horizontal thermal winds in Titan's upper atmosphere.

  7. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The maximum principle theory applied here is well suited to the problem. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  8. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum-current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
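
    The search logic amounts to a hill climb on the converter duty cycle; a minimal sketch, in which measure_current is a hypothetical stand-in for the hardware feedback loop:

        def track_maximum_current(measure_current, duty0=0.5,
                                  step=0.005, n_iters=200):
            """Perturb-and-observe style search that maximizes the SPE-side
            current; per the paper's argument, this also tracks the PV
            panel's maximum power point."""
            duty = duty0
            prev_i = measure_current(duty)
            direction = 1.0
            for _ in range(n_iters):
                duty = min(max(duty + direction * step, 0.0), 1.0)
                i = measure_current(duty)
                if i < prev_i:           # got worse: reverse the perturbation
                    direction = -direction
                prev_i = i
            return duty

        # Toy plant model standing in for real hardware (peak at duty = 0.62):
        print(track_maximum_current(lambda d: 1.0 - (d - 0.62) ** 2))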

  9. Variation in elemental quantification by X-ray fluorescence analysis in crystalline materials when applying pressure in sample preparation; Variación de la cuantificación elemental en el análisis por Fluorescencia de rayos X en materiales cristalinos al aplicar presión en la preparación de muestras

    Energy Technology Data Exchange (ETDEWEB)

    Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A. [Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico)

    2000-07-01

    In this work, X-ray diffraction and fluorescence techniques were used to determine the presence of elements in a known compound, ZrSiO{sub 4}, under different pressure conditions. In preparing the samples, pressures from 1600 down to 350 kN/m{sup 2} were applied, and apparent variations in the measured concentrations of the Zr and Si elements were detected. (Author)

  10. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  11. Pesticide residues in individual versus composite samples of apples after fine or coarse spray quality application

    DEFF Research Database (Denmark)

    Poulsen, Mette E.; Wenneker, Marcel; Withagen, Jacques

    2012-01-01

    In this study, field trials on fine and coarse spray quality application of pesticides on apples were performed. The main objectives were to study the variation of pesticide residue levels in individual fruits versus composite samples, and the effect of standard fine spray quality application... None of the results for the pesticide residues measured in individual apples exceeded the EU Maximum Residue Levels (MRLs). However, there was a large variation in the residue levels in the apples, with levels from 0.01 to 1.4 mg kg−1 for captan, the pesticide with the highest variation, and from 0.01 to 0.2 mg kg−1 for pyraclostrobin, the pesticide with the lowest variation. Residues of fenoxycarb and indoxacarb were only found in a few apples, probably due to the early application time of these two compounds. The evaluation of the effect of spray quality did not show any major difference between...

  12. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...

  13. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  14. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  15. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  16. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of the available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address only two-class classification and therefore cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an Alternating Optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm achieves acceptable results for hyperspectral data clustering.
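    A toy sketch of the alternating-optimization idea behind MMC: alternate between fitting an SVM to the current labels and relabelling points by the decision function, with a crude class-balance constraint to avoid the trivial single-cluster solution. The helper maximum_margin_clustering and all settings here are illustrative, not the authors' algorithm.

```python
# Two-cluster maximum margin clustering by alternating optimization.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

def maximum_margin_clustering(X, n_iter=20, min_frac=0.2, seed=0):
    y = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    for _ in range(n_iter):
        svm = LinearSVC(C=1.0).fit(X, y)     # step 1: fix labels, fit SVM
        scores = svm.decision_function(X)
        order = np.argsort(scores)
        k = int(min_frac * len(X))           # balance constraint:
        y_new = (scores > 0).astype(int)     # step 2: relabel by margin side,
        y_new[order[:k]] = 0                 # forcing the k most extreme
        y_new[order[-k:]] = 1                # points onto each side
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

X, _ = make_blobs(n_samples=200, centers=2, random_state=1)
labels = maximum_margin_clustering(X)
print("cluster sizes:", np.bincount(labels))
```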

  17. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  18. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3

  19. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method was validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is also presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
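    A minimal sketch of windowed cross-correlation time-delay estimation for leak location; a plain Hann taper stands in for the paper's maximum likelihood window, and the wave speed and sensor spacing are assumed values.

```python
# Time-delay estimation via windowed FFT cross-correlation.
import numpy as np

def time_delay(x, y, fs):
    """Delay of y relative to x (seconds) from the cross-correlation peak."""
    n = len(x)
    w = np.hanning(n)                  # stand-in for the ML window weighting
    X = np.fft.rfft(x * w, 2 * n)      # zero-padding gives linear correlation
    Y = np.fft.rfft(y * w, 2 * n)
    cc = np.fft.irfft(np.conj(X) * Y, 2 * n)
    lags = np.concatenate((np.arange(n), np.arange(-n, 0)))
    return lags[np.argmax(cc)] / fs

fs, c, L = 10_000.0, 1200.0, 300.0     # sample rate (Hz), wave speed (m/s), sensor spacing (m)
rng = np.random.default_rng(0)
leak = rng.normal(size=50_000)
d = 120                                # true delay in samples
x = leak[d:] + 0.1 * rng.normal(size=leak.size - d)   # sensor A (nearer)
y = leak[:-d] + 0.1 * rng.normal(size=leak.size - d)  # sensor B (farther)
tau = time_delay(x, y, fs)
print(f"delay = {tau * 1e3:.2f} ms, leak ~ {(L - c * tau) / 2:.1f} m from sensor A")
```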

  20. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
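    For a concrete feel of MP ancestral state inference, the sketch below runs the bottom-up pass of Fitch's algorithm on a small fully bifurcating tree; the root estimate is unambiguous exactly when the returned candidate set is a singleton.

```python
# Bottom-up pass of Fitch's maximum-parsimony algorithm.
def fitch(node):
    """node is either a leaf state like 'a' or a (left, right) tuple."""
    if isinstance(node, str):
        return {node}
    left, right = fitch(node[0]), fitch(node[1])
    common = left & right
    return common if common else left | right

# ((a,a),(a,c)): three of four leaves carry state 'a'.
tree = (("a", "a"), ("a", "c"))
print(fitch(tree))  # {'a'} -> MP unambiguously infers 'a' at the root
```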

  1. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  2. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  3. Understanding the Role of Reservoir Size on Probable Maximum Precipitation

    Science.gov (United States)

    Woldemichael, A. T.; Hossain, F.

    2011-12-01

    This study addresses the question 'Does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was the notion that the stationarity assumption implicit in the PMP used for dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for the use of regional atmospheric models as one way to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event was selected as the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were selected accordingly. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity in the model was raised to 100% from the ground up to the 500 mb level. The resulting model-based maximum 72-hr precipitation values were named extreme precipitation (EP) to distinguish them from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity implicit in the traditional estimation of PMP can be rendered invalid in large part by the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the

  4. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  5. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
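    Since this record is truncated, the sketch below shows only the simplest instance of the idea: the MAP covariance estimate under a plain inverse-Wishart prior (the paper's power inverse-Wishart prior generalizes this). The prior scale and degrees of freedom are illustrative.

```python
# MAP covariance under an inverse-Wishart prior IW(Psi, nu): the posterior is
# IW(Psi + scatter, nu + n) and its mode is (Psi + scatter) / (nu + n + p + 1).
import numpy as np

def map_covariance(X, Psi, nu):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    scatter = Xc.T @ Xc                    # n times the sample covariance
    return (Psi + scatter) / (nu + n + p + 1)

rng = np.random.default_rng(0)
p, n = 10, 8                               # fewer samples than dimensions
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

Psi = np.eye(p)                            # illustrative prior scale
Sigma_map = map_covariance(X, Psi, nu=p + 2)
print("MAP estimate is positive definite:",
      np.all(np.linalg.eigvalsh(Sigma_map) > 0))
```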

  6. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  7. Biological variation of cystatin C

    DEFF Research Database (Denmark)

    Reinhard, Mark; Erlandsen, Erland; Randers, Else

    2009-01-01

    Introduction: Cystatin C has been investigated as a marker of the glomerular filtration rate. However, previous studies have reported conflicting results concerning the biological variation of cystatin C. The aim of the present study was to evaluate the biological variation of cystatin C in comparison to creatinine. Methods: Eight weekly morning blood samples were taken from twenty healthy volunteers (13 females, 7 males) aged 25-61 years. Mean creatinine clearance was 99.7 ml/min/1.73 m2 (range 61.8-139.5) and mean body mass index 23.9 kg/m2 (range 20.3-28.7). A total of 155 samples were...

  8. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.

    Directory of Open Access Journals (Sweden)

    Yahya Karimipanah

    Full Text Available A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases of short-lasting and chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power law scaling of the sizes and durations of cascades of activity. Moreover, to what degree this hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is known to be spike irregularity of spike trains, which is measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in a maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy of the neural criticality hypothesis, we discuss the limitations of this approach to neural systems and to what degree they support the criticality hypothesis in real neural networks. Finally

  9. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

    The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production, through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested herein, with only the most important reaction steps, each carried out by a single type of microorganism following Monod kinetics. The mathematical model was developed for a well mixed reactor (CSTR, Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with μ_max = 8.64 day⁻¹ and K_S = 250 mg/L; acetogenesis, with μ_max = 2.64 day⁻¹ and K_S = 32 mg/L; and methanogenesis, with μ_max = 1.392 day⁻¹ and K_S = 100 mg/L. The yield coefficients were 0.1 g dry cells/g polymeric compound for acidogenesis, 0.1 g dry cells/g propionic acid and 0.1 g dry cells/g butyric acid for acetogenesis, and 0.1 g dry cells/g acetic acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After experimental validation of the model, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and the optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of the methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and
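    The residence-time optimum described above can be illustrated with a one-step reduction of the model: a chemostat with a single Monod-limited step, whose steady state follows from μ(S*) = 1/τ. The methane yield and inlet concentration below are assumed values, not the paper's.

```python
# Residence-time scan for methane production in a single-step Monod CSTR.
import numpy as np

mu_max, K_S = 1.392, 100.0      # 1/day, mg/L (methanogenesis values above)
S_in, V = 5000.0, 1.0           # inlet substrate (mg/L), reactor volume (L)
Y_ch4 = 0.25                    # g CH4 per g substrate consumed (assumed)

def methane_rate(tau):
    """Steady-state CH4 mass flow (g/day) at hydraulic residence time tau."""
    if mu_max * tau <= 1.0:
        return 0.0              # washout: growth cannot match dilution
    S_ss = K_S / (mu_max * tau - 1.0)   # from mu(S_ss) = 1/tau
    if S_ss >= S_in:
        return 0.0
    Q = V / tau                 # volumetric flow (L/day)
    return Y_ch4 * (S_in - S_ss) * Q / 1000.0   # mg/day -> g/day

taus = np.linspace(0.5, 30.0, 600)
rates = np.array([methane_rate(t) for t in taus])
print(f"optimal residence time ~ {taus[rates.argmax()]:.2f} days, "
      f"max CH4 ~ {rates.max():.3f} g/day")
```

Note how the rate collapses to zero just below the washout residence time, mirroring the sharp optima reported in the abstract.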

  10. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  11. Underwater Sediment Sampling Research

    Science.gov (United States)

    2017-01-01

    impacted sediments was found to be directly related to the concentration of crude oil detected in the sediment pore waters. Applying this mathematical... The USCG R&D Center sought to develop a bench top system to determine the amount of total... scattered. The approach here is to sample the interstitial water between the grains of sand and attempt to determine the amount of oil in and on

  12. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel

    2016-11-01

    The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.

  13. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
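    As a concrete illustration of the model family, the sketch below fits a static (stimulus-independent) pairwise maximum entropy model to synthetic spike words by exhaustive enumeration and gradient ascent; the SDME model of the paper adds a stimulus-dependent field on top of this.

```python
# Fit P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j) to spike-word data
# by moment matching (gradient ascent on the log-likelihood).
import itertools
import numpy as np

def fit_pairwise_maxent(words, n_iter=2000, lr=0.1):
    n = words.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), float)
    mean_obs = words.mean(0)
    corr_obs = (words.T @ words) / len(words)
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        E = states @ h + np.einsum('ki,ij,kj->k', states, J, states) / 2.0
        p = np.exp(E - E.max()); p /= p.sum()
        mean_mod = p @ states
        corr_mod = states.T @ (p[:, None] * states)
        h += lr * (mean_obs - mean_mod)          # match firing rates
        J += lr * (corr_obs - corr_mod)          # match pairwise correlations
        np.fill_diagonal(J, 0.0)
    return h, J, states, p

rng = np.random.default_rng(0)
words = (rng.random((5000, 4)) < [0.2, 0.3, 0.25, 0.4]).astype(float)
h, J, states, p = fit_pairwise_maxent(words)
print("fitted rates:", np.round(p @ states, 3))
```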

  14. Impact of maximum TF magnetic field on performance and cost of an advanced physics tokamak

    International Nuclear Information System (INIS)

    Reid, R.L.

    1983-01-01

    Parametric studies were conducted using the Fusion Engineering Design Center (FEDC) Tokamak Systems Code to investigate the impact of variation in the maximum value of the field at the toroidal field (TF) coils on the performance and cost of a low-q_ψ, quasi-steady-state tokamak. Marginal ignition, inductive current startup plus 100 s of inductive burn, and a constant value of epsilon (inverse aspect ratio) times beta poloidal were the global conditions imposed on this study. A maximum TF field of approximately 10 T was found to be appropriate for this device

  15. Anomalous maximum and minimum for the dissociation of a geminate pair in energetically disordered media

    Science.gov (United States)

    Govatski, J. A.; da Luz, M. G. E.; Koehler, M.

    2015-01-01

    We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.

  16. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, the initial vertical radioactivity distribution, the time after detonation, and the rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
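    A minimal sketch of how the maximum likelihood fit of a two-component (Gaussian) mixture is typically computed, via the EM algorithm on synthetic data; the paper's actual price/exchange-rate series and mixture specification are not reproduced here.

```python
# EM algorithm for a two-component Gaussian mixture (maximum likelihood fit).
import numpy as np
from scipy.stats import norm

def em_two_gaussians(x, n_iter=200):
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * norm.pdf(x[:, None], mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates of the component parameters
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(1, 1.0, 600)])
w, mu, sd = em_two_gaussians(x)
print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2))
```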

  18. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  19. A silicon pad shower maximum detector for a Shashlik calorimeter

    International Nuclear Information System (INIS)

    Alvsvaag, S.J.; Maeland, O.A.; Klovning, A.

    1995-01-01

    The new luminosity monitor of the DELPHI detector, STIC (Small angle TIle Calorimeter), was built using a Shashlik technique. This technique does not provide longitudinal sampling of the showers, which limits the measurement of the direction of the incident particles and the e-π separation. For these reasons STIC was equipped with a Silicon Pad Shower Maximum Detector (SPSMD). In order to match the silicon detectors to the Shashlik readout by wavelength shifter (WLS) fibers, the silicon wafers had to be drilled with a precision better than 10 μm without damaging the active area of the detectors. This paper describes the SPSMD with emphasis on the fabrication techniques and on the components used. Some preliminary results of the detector performance, from data taken with a 45 GeV electron beam at CERN, are presented. (orig.)

  20. Exact sampling from conditional Boolean models with applications to maximum likelihood inference

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Zwet, van E.W.

    2001-01-01

    We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate

  1. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  2. Analysis of monazite samples

    International Nuclear Information System (INIS)

    Kartiwa Sumadi; Yayah Rohayati

    1996-01-01

    The 'monazit' analytical program has been set up for the routine analysis of rare earth elements in monazite and xenotime mineral samples. The total relative error of the analysis is very low, less than 2.50%, and the reproducibility of the counting statistics and the stability of the instrument were excellent. The precision and accuracy of the analytical program are very good, with maximum relative percentages of 5.22% and 1.61%, respectively. The mineral compositions of the 30 monazite samples have also been calculated from their chemical constituents, and the results were compared with grain-counting microscopic analysis

  3. Variability and reliability of POP concentrations in multiple breast milk samples collected from the same mothers.

    Science.gov (United States)

    Kakimoto, Risa; Ichiba, Masayoshi; Matsumoto, Akiko; Nakai, Kunihiko; Tatsuta, Nozomi; Iwai-Shimada, Miyuki; Ishiyama, Momoko; Ryuda, Noriko; Someya, Takashi; Tokumoto, Ieyasu; Ueno, Daisuke

    2018-01-13

    Risk assessment of infants using realistic persistent organic pollutant (POP) exposure through breast milk is essential to devise future regulation of POPs. However, recent investigations have demonstrated that POP levels in breast milk collected from the same mother show a wide range of daily and monthly variation. To estimate the appropriate sample size of breast milk from the same mother needed to obtain reliable POP concentrations, breast milk samples were collected from five mothers living in Japan from 2006 to 2012. Milk samples from each mother were collected 3 to 6 times a day on 3 to 7 consecutive days. Food samples, obtained by the duplicate-diet method, were collected from two mothers during the period of breast milk sample collection. These were used for POP (PCBs, DDTs, chlordanes, and HCB) analysis. PCB concentrations detected in breast milk samples showed a wide range of variation, with relative standard deviations (RSDs) of up to 63% and 60% on a lipid and wet weight basis, respectively. The time course of these variations did not show any typical pattern among the mothers. A larger PCB intake through food seemed to affect the concentrations in breast milk (on a lipid weight basis) about 10 h later. Intraclass correlation coefficient (ICC) analyses indicated that obtaining reproducible POP concentrations in breast milk requires at least two samples, on both a lipid and wet weight basis.
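    A sketch of the one-way random-effects intraclass correlation, ICC(1,1) = (MSB − MSW)/(MSB + (k−1)·MSW), of the kind used above to judge sample-size adequacy; the data are simulated and the variance components are assumptions.

```python
# One-way random-effects ICC(1,1) on simulated repeated measurements.
import numpy as np

def icc_oneway(data):
    """data: (subjects, repeats). Returns ICC(1,1)."""
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
mothers, repeats = 5, 6
true_level = rng.normal(100.0, 30.0, size=(mothers, 1))   # between-mother
data = true_level + rng.normal(0.0, 15.0, size=(mothers, repeats))
print(f"ICC(1,1) = {icc_oneway(data):.2f}")
```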

  4. Comparative study of maximum isometric grip strength in different sports

    Directory of Open Access Journals (Sweden)

    Noé Gomes Borges Junior

    2009-06-01

    Full Text Available The objective of this study was to compare maximum isometric grip strength (Fmax) between different sports and between the dominant (FmaxD) and non-dominant (FmaxND) hands. Twenty-nine male aikido (AI), jiu-jitsu (JJ), judo (JU) and rowing (RO) athletes and 21 non-athletes (NA) participated in the study. The hand strength test consisted of maintaining maximum isometric grip strength for 10 seconds using a hand dynamometer. The position of the subjects was that suggested by the American Society of Hand Therapy. A factorial 2×5 ANOVA with Bonferroni correction, followed by a paired t test and Tukey test, was used for statistical analysis. The highest Fmax values were observed for the JJ group when using the dominant hand, followed by the JU, RO, AI and NA groups. Variation in Fmax could be attributed to hand dominance (30.9%), sports modality (39.9%) and the interaction between hand dominance and sport (21.3%). The present results demonstrated significant differences in Fmax between the JJ and AI groups and between the JJ and NA groups for both the dominant and non-dominant hand. Significant differences in Fmax between the dominant and non-dominant hand were only observed in the AI and NA groups. The results indicate that Fmax can be used for comparison between different sports modalities, and to identify differences between the dominant and non-dominant hand. Studies involving a larger number of subjects will permit the identification of differences between other modalities.

  5. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  6. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is assessed by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.

  7. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
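    A small sketch of the entropy production rate of a stationary Markov chain, e_p = Σ_ij π_i P_ij ln(π_i P_ij / (π_j P_ji)), which vanishes exactly for reversible (detailed-balance) chains; the three-state transition matrix is an illustrative irreversible example.

```python
# Entropy production rate of a stationary Markov chain.
import numpy as np

def stationary(P):
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def entropy_production(P):
    pi = stationary(P)
    flux = pi[:, None] * P                 # flux[i, j] = pi_i * P_ij
    mask = (flux > 0) & (flux.T > 0)       # skip forbidden transitions
    return np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask]))

# An irreversible 3-state chain (probability cycles one way around a ring).
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
print(f"entropy production rate: {entropy_production(P):.4f} nats/step")
```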

  8. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  9. Crystallite size variation of TiO₂ samples as a function of heat treatment time

    Energy Technology Data Exchange (ETDEWEB)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A., E-mail: amandagmgalante@gmail.com [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Departamento de Fisica e Quimica; Spada, E.R. [Universidade de Sao Paulo (USP), Ilha Solteira, SP (Brazil). Instituto de Fisica

    2016-07-01

    Titanium dioxide (TiO₂) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO₂ powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a residence time of 5 hours onward, two distinct phases coexisted: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)

  10. Diurnal TSH variations in hypothyroidism.

    Science.gov (United States)

    Weeke, J; Laurberg, P

    1976-07-01

    There is a circadian variation in serum TSH in euthyroid subjects. A similar diurnal variation has been demonstrated in patients with hypothyroidism. In the present study the 24-hour pattern of serum TSH was investigated in eight patients with hypothyroidism of varying severity and in five hypothyroid patients treated with thyroxine (T4). There was a circadian variation in serum TSH in patients with hypothyroidism of moderate degree, and in patients treated for severe hypothyroidism with thyroxine. The pattern was similar to that found in normal subjects, i.e., low TSH levels in the daytime and higher levels at night. In severely hypothyroid patients, no diurnal variation in serum TSH was observed. A practical consequence is that blood samples for TSH measurements in patients with moderately elevated TSH levels are best taken after 1100 h, when the low day levels are reached.

  11. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  12. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  13. Solar cycle variations in IMF intensity

    International Nuclear Information System (INIS)

    King, J.H.

    1979-01-01

    Annual averages of logarithms of hourly interplanetary magnetic field (IMF) intensities, obtained from geocentric spacecraft between November 1963 and December 1977, reveal the following solar cycle variation. For 2-3 years at each solar minimum period, the IMF intensity is depressed by 10-15% relative to its mean value realized during a broad 9-year period centered at solar maximum. No systematic variations occur during this 9-year period. The solar minimum decrease, although small in relation to variations in some other solar wind parameters, is both statistically and physically significant

  14. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA models were chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by the maximum-likelihood method, with standard errors of the residuals. The adequacy of the selected model is assessed by correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and by normality diagnostics (the kernel and normal density curves of the histogram and the Q-Q plot). Finally, a forecast of the monthly maximum and minimum temperature patterns of India for the next 3 years is presented with the help of the selected model.
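    A minimal sketch of the fit described above using statsmodels, applying the SARIMA(1,0,0)×(0,1,1)₁₂ specification to a log-transformed synthetic monthly series and forecasting 3 years ahead; the synthetic series stands in for the India data.

```python
# SARIMA(1,0,0)x(0,1,1)_12 fit and 36-month forecast on a synthetic series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("1981-01", "2015-12", freq="MS")
t = np.arange(len(months))
temp = (30 + 5 * np.sin(2 * np.pi * t / 12)        # seasonal cycle
        + 0.001 * t + rng.normal(0, 0.8, len(t)))  # weak trend + noise
y = pd.Series(temp, index=months)

model = SARIMAX(np.log(y), order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)                        # maximum likelihood fit
forecast = np.exp(res.forecast(steps=36))          # 3 years, back-transformed
print(res.aic, res.bic)
print(forecast.head(12).round(1))
```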

  15. Seasonal variation in heavy metal concentration in mangrove foliage

    Digital Repository Service at National Institute of Oceanography (India)

    Untawale, A.G.; Wafar, S.; Bhosle, N.B.

    Seasonal variation in the concentration of some heavy metals in the leaves of seven species of mangrove vegetation from Goa, revealed that maximum concentration of iron and manganese occurs during the monsoon season without any significant toxic...

  16. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  17. Maximum entropy models of ecosystem functioning

    International Nuclear Information System (INIS)

    Bertram, Jason

    2014-01-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example

  18. Maximum entropy models of ecosystem functioning

    Energy Technology Data Exchange (ETDEWEB)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  19. Sample-to-SNP kit: a reliable, easy and fast tool for the detection of HFE p.H63D and p.C282Y variations associated to hereditary hemochromatosis.

    Science.gov (United States)

    Nielsen, Peter B; Petersen, Maja S; Ystaas, Viviana; Andersen, Rolf V; Hansen, Karin M; Blaabjerg, Vibeke; Refstrup, Mette

    2012-10-01

    Classical hereditary hemochromatosis involves the HFE gene and diagnostic analysis of the DNA variants HFE p.C282Y (c.845G>A; rs1800562) and HFE p.H63D (c.187C>G; rs1799945). The affected protein alters iron homeostasis, resulting in iron overload in various tissues. The aim of this study was to validate the TaqMan-based Sample-to-SNP protocol for the analysis of the HFE p.C282Y and p.H63D variants with regard to accuracy, usefulness and reproducibility compared to an existing SNP protocol. The Sample-to-SNP protocol uses an approach where the DNA template is made accessible from a cell lysate, followed by TaqMan analysis. Besides the HFE SNPs, eight other SNPs were used as well. These SNPs were: coagulation factor II gene F2 c.20210G>A; coagulation factor V gene F5 p.R506Q (c.1517G>A; rs121917732); mitochondrial SNP mt7028 G>A; mitochondrial SNP mt12308 A>G; proprotein convertase subtilisin/kexin type 9 gene PCSK9 p.R46L (c.137G>T); glutathione S-transferase pi 1 gene GSTP1 p.I105V (c313A>G; rs1695); LXR g.-171 A>G; ZNF202 g.-118 G>T. In conclusion, the Sample-to-SNP kit proved to be an accurate, reliable, robust, easy to use and rapid TaqMan-based SNP detection protocol, which could be quickly implemented in a routine diagnostic or research facility. Copyright © 2012. Published by Elsevier B.V.

  20. Maximum Langmuir Fields in Planetary Foreshocks Determined from the Electrostatic Decay Threshold

    Science.gov (United States)

    Robinson, P. A.; Cairns, Iver H.

    1995-01-01

    Maximum electric fields of Langmuir waves at planetary foreshocks are estimated from the threshold for electrostatic decay, assuming it saturates beam-driven growth, and incorporating the heliospheric variation of plasma density and temperature. Comparison with spacecraft observations yields good quantitative agreement. Observations in type III radio sources are also in accord with this interpretation. A single mechanism can thus account for the highest fields of beam-driven waves in both contexts.

  1. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We also demonstrate how our sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that, on the whole, our sampling technique compares favourably with other methods, especially in the case of exploratory research.

  2. Intraindividual variation in levels of serum testosterone and other reproductive and adrenal hormones in men.

    Science.gov (United States)

    Brambilla, Donald J; O'Donnell, Amy B; Matsumoto, Alvin M; McKinlay, John B

    2007-12-01

    Estimates of intraindividual variation in hormone levels provide the basis for interpreting hormone measurements clinically and for developing eligibility criteria for trials of hormone replacement therapy. However, reliable systematic estimates of such variation are lacking. To estimate intraindividual variation of serum total, free and bioavailable testosterone (T), dihydrotestosterone (DHT), SHBG, LH, dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulphate (DHEAS), oestrone, oestradiol and cortisol, and the contributions of biological and assay variation to the total. Paired blood samples were obtained 1-3 days apart at entry and again 3 months and 6 months later (maximum six samples per subject). Each sample consisted of a pool of equal aliquots of two blood draws 20 min apart. Men aged 30-79 years were randomly selected from the respondents to the Boston Area Community Health Survey, a study of the health of the general population of Boston, MA, USA. Analysis was based on 132 men, including 121 who completed all six visits, 8 who completed the first two visits and 3 who completed the first four visits. Day-to-day and 3-month (long-term) intraindividual standard deviations, after transforming measurements to logarithms to eliminate the contribution of hormone level to intraindividual variation. Biological variation generally accounted for more of total intraindividual variation than did assay variation. Day-to-day biological variation accounted for more of the total than did long-term biological variation. Short-term variability was greater in hormones with pulsatile secretion (e.g. LH) than those that exhibit less ultradian variation. Depending on the hormone, the intraindividual standard deviations imply that a clinician can expect to see a difference exceeding 18-28% about half the time when two measurements are made on a subject. The difference will exceed 27-54% about a quarter of the time. Given the level of intraindividual variability in hormone

  3. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  4. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum utilization of the drive system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  5. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  6. Sampling procedure in a willow plantation for estimation of moisture content

    DEFF Research Database (Denmark)

    Nielsen, Henrik Kofoed; Lærke, Poul Erik; Liu, Na

    2015-01-01

    Heating value and fuel quality of wood is closely connected to moisture content. In this work the variation of moisture content (MC) of short rotation coppice (SRC) willow shoots is described for five clones during one harvesting season. Subsequently an appropriate sampling procedure minimising...... labour costs and sampling uncertainty is proposed, where the MC of a single stem section with the length of 10–50 cm corresponds to the mean shoot moisture content (MSMC) with a bias of maximum 11 g kg−1. This bias can be reduced by selecting the stem section according to the particular clone...

  7. The ultraviolet variations of iota Cas

    Science.gov (United States)

    Molnar, M. R.; Mallama, A. D.; Soskey, D. G.; Holm, A. V.

    1976-01-01

    The Ap variable star iota Cas was observed with the photometers on OAO-2 covering the spectral range 1430-4250 A. The ultraviolet light curves show a double wave with primary minimum and maximum at phase ≈ 0.00 and 0.35, respectively. Secondary minimum light is at phase ≈ 0.65 with secondary maximum at phase ≈ 0.85. The light curves longward of 3150 A vary in opposition to those shortward of this 'null region'. Ground-based coude spectra show that the Fe II and Cr II line strengths have a double-wave variation such that maximum strength occurs at minimum ultraviolet light. We suggest that the strong ultraviolet opacities due to photoionization and line blanketing by these metals may cause the observed photometric variations. We have also constructed an oblique-rotator model which shows iron and chromium lying in a great circle band rather than in circular spots.

  8. Conditions for maximum isolation of stable condensate during separation in gas-condensate systems

    Energy Technology Data Exchange (ETDEWEB)

    Trivus, N.A.; Belkina, N.A.

    1969-02-01

    A thermodynamic analysis is made of the gas-liquid separation process in order to determine the relationship between conditions of maximum stable condensate separation and physico-chemical nature and composition of condensate. The analysis was made by considering the multicomponent gas-condensate fluid produced from Zyrya field as a ternary system, composed of methane, an intermediate component (propane and butane) and a heavy residue, C/sub 6+/. Composition of 5 ternary systems was calculated for a wide variation in separator conditions. At each separator pressure there is maximum condensate production at a certain temperature. This occurs because solubility of condensate components changes with temperature. Results of all calculations are shown graphically. The graphs show conditions of maximum stable condensate separation.

  9. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
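    The core loop, stepping along a simulated ascent direction with decaying gain and perturbation sequences, can be sketched generically (a toy SPSA-style iteration on a one-parameter simulator, not the authors' implementation; the simulator and constants are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(theta, n=200):
    """Toy simulator: the summary statistic is the mean of n draws from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n).mean()

def objective(theta, s_obs):
    """Monte Carlo proxy for the log-likelihood of the observed summary:
    negative squared distance between simulated and observed summaries."""
    return -(simulate_summary(theta) - s_obs) ** 2

def spsa_mle(s_obs, theta=0.0, iters=500):
    """Stochastic approximation: move along a simulated ascent direction,
    with gain a_k and perturbation c_k sequences shrinking over iterations."""
    for k in range(1, iters + 1):
        a_k, c_k = 0.5 / k, 0.1 / k ** 0.25
        delta = rng.choice([-1.0, 1.0])          # random perturbation direction
        grad = (objective(theta + c_k * delta, s_obs)
                - objective(theta - c_k * delta, s_obs)) / (2 * c_k * delta)
        theta += a_k * grad                      # ascent step toward the MLE
    return theta

print(spsa_mle(s_obs=1.3))   # drifts toward the maximizing value, ~1.3
```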

  10. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting...... on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable....
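    The quantity studied, the maximum over 0 <= i < j <= n of S_j - S_i, is easy to compute with a running-minimum scan, so its heavy-tailed behaviour can be explored by simulation (a toy illustration with iid Pareto jumps; the paper's setting allows dependent jump sequences):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_increment(jumps):
    """Max over 0 <= i < j <= n of S_j - S_i, via a running minimum of the walk."""
    s = np.concatenate([[0.0], np.cumsum(jumps)])
    return float(np.max(s - np.minimum.accumulate(s)))

# iid Pareto jumps are regularly varying; centering removes the drift.
alpha, n, reps = 1.5, 10_000, 300
maxima = []
for _ in range(reps):
    jumps = rng.pareto(alpha, n) + 1.0
    maxima.append(max_increment(jumps - jumps.mean()))
print(np.quantile(maxima, [0.5, 0.9, 0.99]))  # heavy (Frechet-like) upper tail
```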

  11. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or one whose preparation is no more complex than dissolution of the sample in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  12. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  13. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  14. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  15. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  16. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  17. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  18. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  19. Variational principles for locally variational forms

    International Nuclear Information System (INIS)

    Brajercik, J.; Krupka, D.

    2005-01-01

    We present the theory of higher order local variational principles in fibered manifolds, in which the fundamental global concept is a locally variational dynamical form. Any two Lepage forms defining a local variational principle for this form differ, on the intersection of their domains, by a variationally trivial form. In this sense, but in a different geometric setting, the local variational principles satisfy properties analogous to those of the variational functionals of the Chern-Simons type. The resulting theory of extremals and symmetries extends the first order theories of the Lagrange-Souriau form, presented by Grigore and Popp, and closed equivalents of the first order Euler-Lagrange forms of Hakova and Krupkova. Conceptually, our approach differs from that of Prieto, who uses the Poincare-Cartan forms, which do not have higher order global analogues

  20. The Influence of Creatine Monohydrate on Strength and Endurance After Doing Physical Exercise With Maximum Intensity

    Directory of Open Access Journals (Sweden)

    Asrofi Shicas Nabawi

    2017-11-01

    The purpose of this study was: (1) to analyze the effect of creatine monohydrate on strength after physical exercise at maximum intensity and on endurance after physical exercise at maximum intensity; (2) to analyze the corresponding effects without creatine monohydrate; and (3) to analyze the difference between the creatine and non-creatine groups in strength and endurance after exercise at maximum intensity. The research was quantitative, using quasi-experimental methods with a pretest and posttest control group design, and the data were analyzed with a paired sample t-test. Data were collected by testing leg muscle strength with a back and leg dynamometer, sit-ups for 1 minute, push-ups for 30 seconds, and a VO2max test with a Cosmed Quark CPET, during the pretest and posttest. The data were analyzed using SPSS 22.0. The results showed: (1) an effect of creatine administration on strength after exercise at maximum intensity; (2) an effect of creatine administration on endurance after exercise at maximum intensity; (3) an effect on strength without creatine after exercise at maximum intensity; (4) an effect on endurance without creatine after exercise at maximum intensity; and (5) a significant difference between the creatine and non-creatine groups, with the creatine group showing the larger increase in strength and endurance after exercise at maximum intensity. Based on the above analysis, it can be concluded that strength and endurance increased in each group after the exercise program.
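    The paired sample t-test named above is simple to reproduce; a minimal sketch with hypothetical pretest/posttest values (the study itself used SPSS 22.0, and its raw data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest leg-strength scores (kg) for one group;
# these numbers are illustrative, not the study's measurements.
pretest = np.array([152.0, 148.0, 160.0, 145.0, 155.0, 150.0])
posttest = np.array([158.0, 151.0, 167.0, 149.0, 162.0, 153.0])

# Paired-sample t-test on the pre/post differences.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```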

  1. Micro-organism distribution sampling for bioassays

    Science.gov (United States)

    Nelson, B. A.

    1975-01-01

    The purpose of distribution sampling is to characterize sample-to-sample variation so that statistical tests may be applied, to estimate error due to sampling (confidence limits), and to evaluate observed differences between samples. The distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.

  2. Variation and Mathematics Pedagogy

    Science.gov (United States)

    Leung, Allen

    2012-01-01

    This discussion paper put forwards variation as a theme to structure mathematical experience and mathematics pedagogy. Patterns of variation from Marton's Theory of Variation are understood and developed as types of variation interaction that enhance mathematical understanding. An idea of a discernment unit comprising mutually supporting variation…

  3. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly arising in mathematical geosciences. Although the book often refers to original contributions, the authors made them accessible to (graduate) students and scientists not only from mathematics but also from geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic, geophysical as well as other scientific branches like neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  4. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
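    The maximum- and minimum-based aggregation across multiple baseline models can be sketched with a simple z-score stand-in for the baselines (the paper's baseline model is trained on sampled traffic; everything below is a hypothetical simplification):

```python
import numpy as np

def ensemble_detect(traffic, baselines, threshold, mode="max"):
    # Each baseline is a (mean, std) pair estimated from one sampled sub-trace;
    # the score is a simple deviation measure against that baseline.
    scores = np.array([np.abs(traffic - m) / s for m, s in baselines])
    # "max" flags a point if ANY baseline finds it anomalous (high sensitivity);
    # "min" requires ALL baselines to agree (fewer false positives).
    agg = scores.max(axis=0) if mode == "max" else scores.min(axis=0)
    return agg > threshold

baselines = [(100.0, 8.0), (103.0, 9.5), (98.0, 7.0)]   # hypothetical models
traffic = np.array([101.0, 99.0, 140.0, 97.0])          # one anomalous burst
print(ensemble_detect(traffic, baselines, threshold=3.0, mode="max"))
print(ensemble_detect(traffic, baselines, threshold=3.0, mode="min"))
```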

  5. Calculus of variations

    CERN Document Server

    Elsgolc, L E; Stark, M

    1961-01-01

    Calculus of Variations aims to provide an understanding of the basic notions and standard methods of the calculus of variations, including the direct methods of solution of the variational problems. The wide variety of applications of variational methods to different fields of mechanics and technology has made it essential for engineers to learn the fundamentals of the calculus of variations. The book begins with a discussion of the method of variation in problems with fixed boundaries. Subsequent chapters cover variational problems with movable boundaries and some other problems; sufficiency

  6. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  7. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
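    Two ingredients of the QBME approach, location-specific quantiles and local weighted smoothing across space, can be illustrated in isolation (this sketch omits the Bayesian maximum entropy machinery entirely; sites, data and bandwidth are hypothetical):

```python
import numpy as np

def smoothed_site_quantiles(obs_by_site, coords, q=0.9, bandwidth=1.0):
    # Location-specific quantile of each site's concentration series ...
    raw = np.array([np.quantile(o, q) for o in obs_by_site])
    # ... smoothed across space with Gaussian kernel weights.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ raw) / w.sum(axis=1)

rng = np.random.default_rng(3)
coords = rng.uniform(0.0, 10.0, (5, 2))                 # hypothetical sites
obs = [rng.lognormal(2.0, 0.4, 365) for _ in range(5)]  # hypothetical daily PM10 series
print(smoothed_site_quantiles(obs, coords))
```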

  8. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)
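    For intuition about the proximity operators such algorithms are built from: the proximity operator of a scaled l1-norm has the closed soft-thresholding form, and a constraint set corresponds to a projection, for example onto the nonnegative orthant (a generic illustration of these building blocks, not PAPA itself):

```python
import numpy as np

def prox_l1(v, lam):
    # prox of lam*||.||_1: soft-thresholding, an elementary closed-form
    # proximity operator of the kind alternated with constraint projections.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def project_nonneg(v):
    # Projection onto the nonnegativity constraint (prox of its indicator).
    return np.maximum(v, 0.0)

v = np.array([-1.5, -0.2, 0.1, 0.8, 2.0])
print(prox_l1(v, 0.5))      # [-1. -0.  0.  0.3  1.5]
print(project_nonneg(v))    # [0.  0.  0.1  0.8  2. ]
```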

  9. Comparative study of maximum isometric grip strength in different sports

    Directory of Open Access Journals (Sweden)

    Noé Gomes Borges Junior

    2009-01-01

    http://dx.doi.org/10.5007/1980-0037.2009v11n3p292   The objective of this study was to compare maximum isometric grip strength (Fmax) between different sports and between the dominant (FmaxD) and non-dominant (FmaxND) hand. Twenty-nine male aikido (AI), jiujitsu (JJ), judo (JU) and rowing (RO) athletes and 21 non-athletes (NA) participated in the study. The hand strength test consisted of maintaining maximum isometric grip strength for 10 seconds using a hand dynamometer. The position of the subjects was that suggested by the American Society of Hand Therapy. Factorial 2x5 ANOVA with Bonferroni correction, followed by a paired t test and Tukey test, was used for statistical analysis. The highest Fmax values were observed for the JJ group when using the dominant hand, followed by the JU, RO, AI and NA groups. Variation in Fmax could be attributed to hand dominance (30.9%), sports modality (39.9%) and the interaction between hand dominance and sport (21.3%). The present results demonstrated significant differences in Fmax between the JJ and AI groups and between the JJ and NA groups for both the dominant and non-dominant hand. Significant differences in Fmax between the dominant and non-dominant hand were only observed in the AI and NA groups. The results indicate that Fmax can be used for comparison between different sports modalities, and to identify differences between the dominant and non-dominant hand. Studies involving a larger number of subjects will permit the identification of differences between other modalities.

  10. Variation in the annual unsatisfactory rates of selected pathogens and indicators in ready-to-eat food sampled from the point of sale or service in Wales, United Kingdom.

    Science.gov (United States)

    Meldrum, R J; Garside, J; Mannion, P; Charles, D; Ellis, P

    2012-12-01

    The Welsh Food Microbiological Forum "shopping basket" survey is a long-running, structured surveillance program examining ready-to-eat food randomly sampled from the point of sale or service in Wales, United Kingdom. The annual unsatisfactory rates for selected indicators and pathogens for 1998 through 2008 were examined. All the annual unsatisfactory rates for the selected pathogens were <0.5%, and no pattern in the annual rates was observed. There was also no discernible trend in the annual rates of Listeria spp. (not monocytogenes), with all rates <0.5%. However, a trend was observed for Escherichia coli, with a decrease in rate between 1998 and 2003, rapid in the first few years, and then a gradual increase in rate up to 2008. It was concluded that there was no discernible pattern in the annual unsatisfactory rates for Listeria spp. (not monocytogenes), L. monocytogenes, Staphylococcus aureus, and Bacillus cereus, but that a definite trend had been observed for E. coli.

  11. Adjustable focus laser sheet module for generating constant maximum width sheets for use in optical flow diagnostics

    International Nuclear Information System (INIS)

    Hult, J; Mayer, S

    2011-01-01

    A general design of a laser light sheet module with adjustable focus is presented, in which the maximum sheet width is preserved over a fixed region. In contrast, conventional focusing designs are associated with a variation in maximum sheet width with focal position. A four-lens design is proposed here, where the first three lenses are employed for focusing and the last for sheet expansion. A maximum sheet width of 1100 µm was maintained over a 50 mm long distance, for focal distances ranging from 75 to 500 mm, when a 532 nm laser beam with a beam quality factor M² = 29 was used for illumination.

  12. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    Science.gov (United States)

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in exercise testing of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied under the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the two methods: the 220-age formula versus the Sheffield table. The maximum heart rate is similar with both protocols. This parameter in normal individuals is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.

  13. Study of temporal variation of radon concentrations in public drinking water supplies

    International Nuclear Information System (INIS)

    York, E.L.

    1995-01-01

    The Environmental Protection Agency (EPA) has proposed a Maximum Contaminant Level (MCL) for radon-222 in public drinking water supplies of 300 pCi/L. Proposed monitoring requirements include collecting quarterly grab samples for the first year, then annual samples for the remainder of the compliance cycle provided first-year quarterly samples average below the MCL. The focus of this research was to study the temporal variation of groundwater radon concentrations to investigate how reliably one can predict an annual average radon concentration based on the results of grab samples. Using a 'slow-flow' collection method and liquid scintillation analysis, biweekly water samples were taken from ten public water supply wells in North Carolina (6-month to 11-month sampling periods). Based on study results, temporal variations exist in groundwater radon concentrations. Statistical analysis performed on the data indicates that grab samples taken from each of the ten wells during the study period would exhibit groundwater radon concentrations within 30% of their average radon concentration

  14. Statistics and sampling in transuranic studies

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Gilbert, R.O.

    1980-01-01

    The existing data on transuranics in the environment exhibit a remarkably high variability from sample to sample (coefficients of variation of 100% or greater). This chapter stresses the necessity of adequate sample size and suggests various ways to increase sampling efficiency. Objectives in sampling are regarded as being of great importance in making decisions as to sampling methodology. Four different classes of sampling methods are described: (1) descriptive sampling, (2) sampling for spatial pattern, (3) analytical sampling, and (4) sampling for modeling. A number of research needs are identified in the various sampling categories along with several problems that appear to be common to two or more such areas

  15. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  16. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
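    spsann itself is an R package; the bare skeleton of spatial simulated annealing with the MSSD criterion looks like the following (a Python sketch for illustration only; the grid, cooling schedule and constants are arbitrary choices, not spsann defaults):

```python
import numpy as np

rng = np.random.default_rng(42)

def mssd(sample, grid):
    # Mean squared shortest distance from every prediction node to the sample.
    d2 = ((grid[:, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def anneal(grid, n_pts=10, iters=2000, t0=1.0, cooling=0.995, max_shift=0.2):
    # Perturb one point at a time; accept worse states with a probability
    # that decays with the temperature (robustness against local optima).
    sample = grid[rng.choice(len(grid), n_pts, replace=False)].copy()
    energy, t = mssd(sample, grid), t0
    for _ in range(iters):
        cand = sample.copy()
        i = rng.integers(n_pts)
        cand[i] = np.clip(cand[i] + rng.uniform(-max_shift, max_shift, 2), 0, 1)
        e_new = mssd(cand, grid)
        if e_new < energy or rng.random() < np.exp((energy - e_new) / t):
            sample, energy = cand, e_new
        t *= cooling
    return sample, energy

grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
print(f"optimized MSSD: {anneal(grid)[1]:.4f}")
```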

  17. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states in the steady state; this corresponds to the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
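    The link between the two entropies can be made concrete with a toy constrained optimization: maximizing Shannon entropy over a small state distribution under a normalization constraint returns the uniform distribution (a stand-in for the paper's mass and free-energy constraints, not its actual kinetic model):

```python
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    """Negative Shannon entropy, sum p_i ln p_i (clipped away from zero)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

# Maximize Shannon entropy over a 4-state distribution subject to
# normalization; the optimum is uniform, mirroring the paper's finding
# that the optimal rate constants yield the most uniform enzyme states.
n = 4
res = minimize(neg_entropy, x0=np.array([0.4, 0.3, 0.2, 0.1]),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print(res.x)   # ~[0.25 0.25 0.25 0.25]
```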

  18. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  19. Size variation in Middle Pleistocene humans.

    Science.gov (United States)

    Arsuaga, J L; Carretero, J M; Lorenzo, C; Gracia, A; Martínez, I; Bermúdez de Castro, J M; Carbonell, E

    1997-08-22

    It has been suggested that European Middle Pleistocene humans, Neandertals, and prehistoric modern humans had a greater sexual dimorphism than modern humans. Analysis of body size variation and cranial capacity variation in the large sample from the Sima de los Huesos site in Spain showed instead that the sexual dimorphism is comparable in Middle Pleistocene and modern populations.

  20. Secular Variation and Physical Characteristics Determination of the HADS Star EH Lib

    Science.gov (United States)

Pena, J. H.; Villarreal, C.; Pina, D. S.; Renteria, A.; Soni, A.; Guillen, J.; Calderon, J.

    2017-12-01

    Physical parameters of EH Lib have been determined based on photometric observations carried out in 2015. These observations also served, along with samples from the years 1969 and 1986, to analyse the frequency content of EH Lib with Fourier transforms. Recent CCD observations added twelve new times of maximum, which helped us study the secular variation of the period with a method based on the minimization of the standard deviation of the O-C residuals. It is concluded that there may be a long-term period change.
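    The O-C (observed minus computed) scatter-minimization idea can be sketched for the simpler constant-period case: scan trial periods and keep the one that minimizes the standard deviation of the residuals (the times of maximum below are simulated, not EH Lib's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

def oc_std(tmax, period, epoch):
    """Scatter of the O-C residuals: observed times of maximum minus the
    ephemeris times epoch + E*period, with E the nearest integer cycle."""
    cycles = np.round((tmax - epoch) / period)
    return np.std(tmax - (epoch + cycles * period))

def best_period(tmax, p_lo, p_hi, n_trials=50_000):
    """Grid search for the trial period minimizing the O-C standard deviation."""
    trials = np.linspace(p_lo, p_hi, n_trials)
    scatter = [oc_std(tmax, p, tmax[0]) for p in trials]
    return trials[int(np.argmin(scatter))]

# Twelve simulated times of maximum (days) for a true period of 0.08841 d:
cycles = np.sort(rng.choice(np.arange(5000), 12, replace=False))
tmax = cycles * 0.08841 + rng.normal(0.0, 0.0005, 12)
print(best_period(tmax, 0.088, 0.089))   # recovers ~0.08841
```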

  1. Microenvironmental variation in preassay rearing conditions can ...

    Indian Academy of Sciences (India)

    alternatively in the presence of some random environmental noise affecting the ... variation leading to a systematic increase or decrease in the fecundity of all pairs of flies that ... can potentially arise due to nonrandom sampling across the.

  2. Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions

    NARCIS (Netherlands)

    Herman, P.M.J.; Heip, C.H.R.

    1999-01-01

    In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available

  3. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  4. Dependence of US hurricane economic loss on maximum wind speed and storm size

    International Nuclear Information System (INIS)

    Zhai, Alice R; Jiang, Jonathan H

    2014-01-01

    Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependences of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power-law relation with maximum wind speed (V_max) and size (R), L = 10^c V_max^a R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4–12 and 2–4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy's size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy's loss would be approximately 20 times smaller if its size were of the average size with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that are only dependent on maximum wind speed to include both maximum wind speed and size as predictors. (letters)
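    The quoted power law is fit as an ordinary least-squares problem in log space; a minimal sketch (with synthetic data generated from the power law itself, not the study's 73-storm data set):

```python
import numpy as np

def fit_loss_model(loss, vmax, size):
    """Fit log10(L) = c + a*log10(Vmax) + b*log10(R) by least squares,
    i.e. the power law L = 10^c * Vmax^a * R^b."""
    X = np.column_stack([np.ones_like(vmax), np.log10(vmax), np.log10(size)])
    coeffs, *_ = np.linalg.lstsq(X, np.log10(loss), rcond=None)
    c, a, b = coeffs
    return c, a, b

# Hypothetical illustrative data, not the study's observations:
vmax = np.array([40.0, 50.0, 65.0, 80.0])        # maximum wind speed
size = np.array([150.0, 200.0, 300.0, 450.0])    # storm size
loss = 10 ** (-10) * vmax ** 6 * size ** 3       # synthetic losses on the power law
print(fit_loss_model(loss, vmax, size))          # recovers roughly (-10, 6, 3)
```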

  5. Calculus of variations

    CERN Document Server

    Elsgolc, Lev D

    2007-01-01

    This concise text offers both professionals and students an introduction to the fundamentals and standard methods of the calculus of variations. In addition to surveys of problems with fixed and movable boundaries, it explores highly practical direct methods for the solution of variational problems.Topics include the method of variation in problems with fixed boundaries; variational problems with movable boundaries and other problems; sufficiency conditions for an extremum; variational problems of constrained extrema; and direct methods of solving variational problems. Each chapter features nu

  6. Installation of the MAXIMUM microscope at the ALS

    International Nuclear Information System (INIS)

    Ng, W.; Perera, R.C.C.; Underwood, J.H.; Singh, S.; Solak, H.; Cerrina, F.

    1995-10-01

    The MAXIMUM scanning x-ray microscope, developed at the Synchrotron Radiation Center (SRC) at the University of Wisconsin, Madison, was implemented on the Advanced Light Source in August of 1995. The microscope's initial operation at SRC successfully demonstrated the use of a multilayer-coated Schwarzschild objective for focusing 130 eV x-rays to a spot size of better than 0.1 micron with an electron energy resolution of 250 meV. The performance of the microscope was severely limited because of the relatively low brightness of SRC, which limits the available flux at the focus of the microscope. The high brightness of the ALS is expected to increase the usable flux at the sample by a factor of 1,000. The authors will report on the installation of the microscope on bending magnet beamline 6.3.2 at the ALS and the initial measurement of optical performance on the new source, and preliminary experiments with the surface chemistry of HF-etched Si will be described

  7. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  8. Skin dose variation: influence of energy

    International Nuclear Information System (INIS)

    Cheung, T.; Yu, P.K.N.; Butson, M.J.; Cancer Services, Wollongong, NSW

    2004-01-01

    Full text: This research aimed to quantitatively evaluate the differences in percentage dose of maximum for 6MV and 18MV x-ray beams within the first 1 cm of interactions, and thus provide quantitative information on the basal, dermal and subcutaneous dose differences achievable with these two types of high-energy x-ray beams. Percentage dose of maximum build-up curves were measured for most clinical field sizes using 6MV and 18MV x-ray beams. Calculations were performed to produce quantitative results highlighting the percentage dose of maximum differences delivered to various depths within the skin and subcutaneous tissue region by these two beams. Results have shown that basal cell layer doses are not significantly different for 6MV and 18MV x-ray beams. At depths beyond the surface and basal cell layer there is a measurable and significant difference in delivered dose. This variation increases to 20% of maximum and 22% of maximum at 1 mm and 1 cm depths, respectively. The percentage variations are larger for smaller field sizes, where the photon in-phantom component of the delivered dose is the most significant contributor to dose. By producing graphs or tables of % dose differences in the build-up region we can provide quantitative information to the oncologist for consideration (if skin and subcutaneous tissue doses are of importance) during the beam energy selection process for treatment. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine

  9. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks improvement of the obtained power by varying the duty cycle.
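    The article does not spell out its tracking algorithm; the classic perturb-and-observe scheme fits the description of measuring power and adjusting the duty cycle, and can be sketched as follows (the power curve and constants below are invented for illustration):

```python
def pv_power(duty):
    """Toy PV-plus-converter power curve with a single maximum near duty = 0.62."""
    return max(0.0, 100.0 - 900.0 * (duty - 0.62) ** 2)

def perturb_and_observe(duty=0.5, step=0.01, iters=60):
    """Perturb-and-observe MPPT: nudge the duty cycle, keep the direction
    that increased measured power, reverse it otherwise."""
    prev_power, direction = pv_power(duty), 1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        power = pv_power(duty)
        if power < prev_power:
            direction = -direction   # power fell: perturb the other way
        prev_power = power
    return duty, prev_power

print(perturb_and_observe())         # settles near the duty = 0.62 maximum
```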

  10. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137 Cs and other fallout radionuclides, such as excess 210 Pb and 7 Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137 Cs in erosion studies has been widely developed, while the application of fallout 210 Pb and 7 Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137 Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137 Cs. However, fallout 210 Pb and 7 Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories

  11. Seasonal Variation of Cistus ladanifer L. Diterpenes

    Directory of Open Access Journals (Sweden)

    Juan Carlos Alías

    2012-07-01

    The exudate of Cistus ladanifer L. consists mainly of two families of secondary metabolites: flavonoids and diterpenes. The amount of flavonoids present in the leaves has a marked seasonal variation, being maximum in summer and minimum in winter. In the present study, we demonstrate that the amount of diterpenes also varies seasonally, but with a different pattern: maximum concentration in winter and minimum in spring-summer. Experiments under controlled conditions have shown that temperature, and in particular low temperature, influences diterpene production. Given this pattern, the functions that these compounds perform in C. ladanifer are probably different.

  12. Comparison of maximum viscosity and viscometric methods for the identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscometric and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speed and irradiation dose. This trend was similar for maximum viscosity. Regression coefficients and expressions of viscosity and maximum viscosity with increasing irradiation dose were 0.9823 (y = 335.02e^(-0.3366x)) at 120 rpm and 0.9939 (y = -42.544x + 730.26), respectively. This trend in viscosity was similar for all stirring speeds. Parameter A, B and C values showed a dose-dependent relation and were a better parameter for detecting irradiation treatment than maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. Therefore, the authors think that the maximum viscosity method can be proposed as one of the new methods to detect irradiation treatment of sweet potato starch.
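    The reported regressions are a log-linear (exponential) fit for viscosity and a straight-line fit for maximum viscosity; both reduce to ordinary least squares (the data below are synthetic, built from the reported coefficients purely for illustration):

```python
import numpy as np

def fit_exponential(dose, viscosity):
    """Fit y = A * exp(k * x) by linear least squares on log(y)."""
    k, log_a = np.polyfit(dose, np.log(viscosity), 1)
    return np.exp(log_a), k

def fit_linear(dose, viscosity):
    """Fit y = m * x + b for the maximum-viscosity relation."""
    m, b = np.polyfit(dose, viscosity, 1)
    return m, b

dose = np.array([0.0, 2.5, 5.0, 7.5, 10.0])       # hypothetical doses
viscosity = 335.02 * np.exp(-0.3366 * dose)       # synthetic, from the fitted curve
max_viscosity = -42.544 * dose + 730.26           # synthetic, from the fitted line
print(fit_exponential(dose, viscosity))           # ~ (335.02, -0.3366)
print(fit_linear(dose, max_viscosity))            # ~ (-42.544, 730.26)
```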

  13. Quantum Variational Calculus

    OpenAIRE

    Malinowska , Agnieszka B.; Torres , Delfim

    2014-01-01

    Introduces readers to the treatment of the calculus of variations with q-differences and Hahn difference operators. Provides the reader with the first extended treatment of quantum variational calculus. Shows how the techniques described can be applied to economic models as well as other mathematical systems. This Brief puts together two subjects, quantum and variational calculi, by considering variational problems involving Hahn quantum operators. The main advantage of it...

  14. Bilateral renal artery variation

    OpenAIRE

    Üçerler, Hülya; Üzüm, Yusuf; İkiz, Z. Aslı Aktan

    2014-01-01

    Each kidney is supplied by a single renal artery, although renal artery variations are common. Variations of the renal artery have become important with the increasing number of renal transplantations. Numerous studies describe variations in renal artery anatomy. The left renal artery in particular is among the most critical arterial variations, because it is the preferred side for resecting the donor kidney. During routine dissection in a formalin-fixed male cadaver, we have found a bilateral renal...

  15. Genetics and variation

    Science.gov (United States)

    John R. Jones; Norbert V. DeByle

    1985-01-01

    The broad genotypic variability in quaking aspen (Populus tremuloides Michx.), that results in equally broad phenotypic variability among clones is important to the ecology and management of this species. This chapter considers principles of aspen genetics and variation, variation in aspen over its range, and local variation among clones. For a more...

  16. Studying Variation in Tunes

    NARCIS (Netherlands)

    Janssen, B.; van Kranenburg, P.

    2014-01-01

    Variation in music can be caused by different phenomena: conscious, creative manipulation of musical ideas; but also unconscious variation during music recall. It is the latter phenomenon that we wish to study: variation which occurs in oral transmission, in which a melody is taught without the help

  17. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  18. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  19. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  20. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  1. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm ...
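    A particle swarm MPPT search treats the converter duty cycle as the particle position and measured power as the fitness; a toy sketch (the power curve and PSO constants are invented, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(duty):
    """Toy power curve over the converter duty cycle, peaking near 0.62."""
    return np.maximum(0.0, 100.0 - 900.0 * (duty - 0.62) ** 2)

def pso_mppt(n_particles=8, iters=40, w=0.6, c1=1.5, c2=1.5):
    """Basic particle swarm search for the duty cycle giving peak power."""
    pos = rng.uniform(0.0, 1.0, n_particles)
    vel = np.zeros(n_particles)
    best_pos, best_val = pos.copy(), pv_power(pos)
    g_best = best_pos[np.argmax(best_val)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        val = pv_power(pos)
        improved = val > best_val
        best_pos[improved], best_val[improved] = pos[improved], val[improved]
        g_best = best_pos[np.argmax(best_val)]
    return g_best, float(pv_power(g_best))

print(pso_mppt())   # converges near duty = 0.62
```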

  2. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    By constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
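    The soft, temperature-controlled memberships that distinguish maximum-entropy clustering from hard C-means can be sketched directly (a generic deterministic-annealing-style iteration, not the paper's exact algorithm; the data are synthetic):

```python
import numpy as np

def max_entropy_clustering(X, init, temperature=0.5, iters=100):
    """Soft clustering in the maximum-entropy style: memberships follow a
    Gibbs distribution over squared distances, and lowering the
    'temperature' recovers hard C-means assignments."""
    centers = X[list(init)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / temperature)
        w /= w.sum(axis=1, keepdims=True)             # maximum-entropy memberships
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # entropy-weighted centroids
    return centers, w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, memberships = max_entropy_clustering(X, init=(0, len(X) - 1))
print(centers)    # one center near (0, 0), the other near (3, 3)
```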

  3. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  4. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  5. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  6. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  7. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next-generation sequencing technologies producing data sets comprising thousands of sequences, robustly identifying the tree topology that is optimal according to standard criteria such as maximum parsimony, maximum likelihood, or posterior probability is a computationally very demanding task for phylogenetic inference methods. Here, we ...
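
    The record is truncated, but the maximum parsimony criterion itself is compact. Below is a minimal sketch of the classic Fitch algorithm for scoring a single character on a fixed binary topology; PTree's pattern-based stochastic search over topologies is not reproduced here, and the toy tree and states are assumptions for illustration.

```python
def fitch(node, states):
    """Fitch small parsimony: returns (state set, mutation count) for the
    subtree at `node`. A node is a leaf name or a (left, right) tuple."""
    if isinstance(node, str):
        return {states[node]}, 0
    (ls, lc), (rs, rc) = fitch(node[0], states), fitch(node[1], states)
    inter = ls & rs
    if inter:
        return inter, lc + rc
    return ls | rs, lc + rc + 1

# toy alignment column for four taxa on the tree ((A,B),(C,D))
tree = (("A", "B"), ("C", "D"))
column = {"A": "G", "B": "G", "C": "T", "D": "G"}
print(fitch(tree, column))   # ({'G'}, 1): one substitution suffices
```

    A maximum parsimony search repeats this scoring over every alignment column and over many candidate topologies, keeping the tree with the lowest total substitution count.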

  8. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  9. New results on the mid-latitude midnight temperature maximum

    Science.gov (United States)

    Mesquita, Rafael L. A.; Meriwether, John W.; Makela, Jonathan J.; Fisher, Daniel J.; Harding, Brian J.; Sanders, Samuel C.; Tesema, Fasil; Ridley, Aaron J.

    2018-04-01

    Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures and winds show the detection and successful determination of the latitudinal distribution of the midnight temperature maximum (MTM) in the continental mid-eastern United States. These results were obtained through the operation of the five FPI observatories in the North American Thermosphere Ionosphere Observing Network (NATION) located at the Pisgah Astronomic Research Institute (PAR) (35.2° N, 82.8° W), Virginia Tech (VTI) (37.2° N, 80.4° W), Eastern Kentucky University (EKU) (37.8° N, 84.3° W), Urbana-Champaign (UAO) (40.2° N, 88.2° W), and Ann Arbor (ANN) (42.3° N, 83.8° W). A new approach for analyzing the MTM phenomenon is developed, which features the combination of a method of harmonic thermal background removal followed by a 2-D inversion algorithm to generate sequential 2-D temperature residual maps at 30 min intervals. The simultaneous study of the temperature data from these FPI stations represents a novel analysis of the MTM and its large-scale latitudinal and longitudinal structure. The major finding in examining these maps is the frequent detection of a secondary MTM peak occurring during the early evening hours, nearly 4.5 h prior to the timing of the primary MTM peak that generally appears after midnight. The analysis of these observations shows a strong night-to-night variability for this double-peaked MTM structure. A statistical study of the behavior of the MTM events was carried out to determine the extent of this variability with regard to the seasonal and latitudinal dependence. The results show the presence of the MTM peak(s) in 106 out of the 472 determinable nights (when the MTM presence, or lack thereof, can be determined with certainty in the data set) selected for analysis (22 %) out of the total of 846 nights available. The MTM feature is seen to appear slightly more often during the summer (27 %), followed by fall (22 %), winter (20 %), and spring

  10. Geometrical prediction of maximum power point for photovoltaics

    International Nuclear Information System (INIS)

    Kumar, Gaurav; Panchal, Ashish K.

    2014-01-01

    Highlights: • Direct MPP finding by parallelogram constructed from geometry of I–V curve of cell. • Exact values of V and P at MPP obtained by Lagrangian interpolation exploration. • Extensive use of Lagrangian interpolation for implementation of proposed method. • Method programming on C platform with minimum computational burden. - Abstract: It is important to drive a solar photovoltaic (PV) system to its utmost capacity using maximum power point (MPP) tracking algorithms. This paper presents a direct MPP prediction method for a PV system that exploits the geometry of the I–V characteristic of a solar cell and a module. In the first step, known as parallelogram exploration (PGE), the MPP is determined from a parallelogram constructed using the open circuit (OC) and short circuit (SC) points of the I–V characteristic and Lagrangian interpolation. In the second step, accurate values of voltage and power at the MPP, defined as Vmp and Pmp respectively, are determined by the Lagrangian interpolation formula, known as the Lagrangian interpolation exploration (LIE). Notably, this method works with a few (V, I) data points, whereas most MPP algorithms work with (P, V) data points. The performance of the method is examined for several PV technologies, including silicon, copper indium gallium selenide (CIGS), copper zinc tin sulphide selenide (CZTSSe), organic, dye-sensitized solar cell (DSSC) and organic tandem cells, using data previously reported in the literature. The effectiveness of the method is tested experimentally on several silicon cells' I–V characteristics under variation in light intensity and temperature. Finally, the method is also employed for a 10 W silicon module tested in the field. To verify the precision of the method, the absolute value of the derivative of power (P) with respect to voltage (V), defined as (dP/dV), is evaluated and plotted against V. The method estimates the MPP parameters with high accuracy for any
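
    As a hedged illustration of the interpolation step described above (the paper's parallelogram construction and exact point selection are not reproduced), the sketch below passes a Lagrange polynomial through a few assumed (V, I) measurements and scans V·I(V) for its maximum:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# a few assumed (V, I) points between short circuit and open circuit
V = np.array([0.0, 12.0, 17.0, 21.0])
I = np.array([3.80, 3.60, 2.80, 0.00])
grid = np.linspace(0, 21, 2101)
P = grid * np.array([lagrange_eval(V, I, v) for v in grid])
k = P.argmax()
print(f"V_mp ~ {grid[k]:.2f} V, P_mp ~ {P[k]:.2f} W")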

  11. Delay Variation Model with Two Service Queues

    Directory of Open Access Journals (Sweden)

    Filip Rezac

    2010-01-01

    Full Text Available Delay is a very unpleasant issue in VoIP technology, and therefore voice packet prioritization must be ensured. To maintain high call quality, the maximum information delivery time from the sender to the recipient is set to 150 ms. This paper focuses on the design of a mathematical model of the end-to-end delay of a VoIP connection, in particular on delay variation. It describes all partial delay components and mechanisms, their generation, facilities, and mathematical formulations. A new approach to the delay variation model is presented, and its validation has been done by experimentation.

  12. An investigation of rugby scrimmaging posture and individual maximum pushing force.

    Science.gov (United States)

    Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen

    2007-02-01

    Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in sports medicine, little research has directly examined the factors associated with the performance of game-specific skills using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights x 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles in different body postures and explore the relationship between these angles and the individual maximum pushing force. Ten national rugby players were invited to participate in the examination. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that foot position (parallel versus nonparallel) does not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than in posture II (38% body height). In addition, hip, knee, and ankle angles under parallel foot positioning were closely and negatively related to the maximum pushing force in scrimmaging. In cross-feet postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we can conclude that standing in an appropriate starting position at the early stage of scrimmaging benefits forward force production.

  13. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
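
    A minimal numerical sketch of the discrete-time equation above with the Nagumo bistable nonlinearity: for a sufficiently small time step the explicit update preserves the invariant interval [0, 1], which is the weak maximum principle in action. The step size, rate constant, threshold, and periodic (ring) boundary used to approximate the lattice ℤ are illustrative assumptions.

```python
import numpy as np

k, h = 1.0, 0.2                        # diffusion rate and time step
f = lambda u: u * (1 - u) * (u - 0.3)  # bistable (Nagumo) nonlinearity

u = np.random.default_rng(1).random(40)        # initial data in [0, 1]
for _ in range(200):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # lattice Laplacian (ring)
    u = u + h * (k * lap + f(u))                   # explicit discrete update
    assert 0.0 <= u.min() and u.max() <= 1.0       # [0, 1] is preserved
print(u.round(3))
```

    Raising h well beyond this value breaks the convex-combination structure of the update, and the assertion fails: exactly the time-step dependence of the discrete maximum principle that the abstract describes.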

  14. Scleroderma prevalence: demographic variations in a population-based sample.

    Science.gov (United States)

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  15. Nonuniform sampling by quantiles

    Science.gov (United States)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, however higher dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license.
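
    As a minimal 1-D illustration of the scheduling idea (the corner forcing, triangular backfill, and nD jittering described above are omitted; the exponential weight and grid size are illustrative assumptions), the sketch below cuts the weighting function's cumulative distribution into regions of equal probability and acquires the grid point at each region's median quantile:

```python
import numpy as np

def quantile_schedule(n_grid=256, n_samples=64, decay=2.0):
    """Quantile-directed NUS sketch: one sample per equal-probability
    region of an exponential weighting function on the Nyquist grid."""
    t = np.arange(n_grid)
    w = np.exp(-decay * t / n_grid)           # sampling weight per grid point
    cdf = np.cumsum(w) / w.sum()
    probs = (np.arange(n_samples) + 0.5) / n_samples   # region midpoints
    idx = np.searchsorted(cdf, probs)
    return np.unique(idx)                     # grid indices to acquire

sched = quantile_schedule()
print(len(sched), sched[:12])                 # dense early, sparse late
```

    Because the schedule is fully determined by the weighting function, a 1-D schedule built this way is deterministic and seed-independent, matching the reproducibility property claimed above.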

  16. Comparison of Extremum-Seeking Control Techniques for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chen-Han Wu

    2011-12-01

    Full Text Available Due to Japan’s recent nuclear crisis and petroleum price hikes, the search for renewable energy sources has become an issue of immediate concern. A promising candidate attracting much global attention is solar energy, as it is green and inexhaustible. A maximum power point tracking (MPPT) controller is employed so that the output power provided by a photovoltaic (PV) system is boosted to its maximum level. However, in the context of abrupt changes in irradiance, conventional MPPT controller approaches suffer from insufficient robustness against ambient variation, inferior transient response, and a loss of output power as a consequence of the long duration required by tracking procedures. Accordingly, in this work maximum power point tracking is carried out successfully using a sliding mode extremum-seeking control (SMESC) method, and the tracking performance of three controllers is compared by simulation: an extremum-seeking controller, a sinusoidal extremum-seeking controller, and a sliding mode extremum-seeking controller. Being able to track the maximum power point promptly in the case of an abrupt change in irradiance, the SMESC approach is shown by simulations to be superior in terms of dynamic and steady-state responses, and excellent robustness along with system stability is demonstrated as well.

  17. An extension of the maximum principle to multidimensional systems and its application in nuclear engineering problems

    International Nuclear Information System (INIS)

    Gilai, D.

    1976-01-01

    The Maximum Principle deals with optimization problems for systems that are governed by ordinary differential equations and that include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding, and other nuclear devices with many demands for optimization and savings, so it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it was widely used in both structural concept design and dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, e.g. one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multidimensional systems, and the principle which has been derived is of a more general form and is applicable to any system which can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)

  19. Seasonal variations of equatorial spread-F

    Directory of Open Access Journals (Sweden)

    K. S. V. Subbarao

    1994-01-01

    Full Text Available The occurrence of spread-F at Trivandrum (8.5°N, 77°E, dip 0.5°N has been investigated on a seasonal basis in sunspot maximum and minimum years in terms of the growth rate of irregularities by the generalized collisional Rayleigh-Taylor (GRT instability mechanism which includes the gravitational and cross-field instability terms. The occurrence statistics of spread-F at Trivandrum have been obtained using quarter hourly ionograms. The nocturnal variations of the growth rate of irregularities by the GRT mechanism have been estimated for different seasons in sunspot maximum and minimum years at Trivandrum using h'F values and vertical drift velocities obtained from ionograms. It is found that the seasonal variation of spread-F occurrence at Trivandrum can, in general, be accounted for on the basis of the GRT mechanism.

  20. Seasonal variations in 228Ra/226Ra ratio within coastal waters of the Sea of Japan: implications for water circulation patterns in coastal areas

    International Nuclear Information System (INIS)

    Inoue, M.; Tanaka, K.; Watanabe, S.; Kofuji, H.; Yamamoto, M.; Komura, K.

    2006-01-01

    In this study, low-background γ-spectrometry was used to determine the 228Ra/226Ra ratio of 131 coastal water samples from various environments around Honshu Island, Japan (mainly around the Noto Peninsula) at 1-3 month intervals from April 2003 until September 2005. Spatial variation in 228Ra/226Ra ratios was also assessed by analyzing 34 coastal water samples from five areas within the Sea of Japan during May and June 2004. The 228Ra/226Ra ratio of coastal water from all sites around the Noto Peninsula shows seasonal variation, with minimum values during summer (228Ra/226Ra = 0.7) and maximum values during autumn-winter (228Ra/226Ra = 1.7-2). This seasonal variation is similar to that recorded for coastal water between the Tsushima Strait and the Noto Peninsula. The measured lateral variation in 228Ra/226Ra ratios within coastal water between the Tsushima Strait and the Noto Peninsula is only minor (0.5-0.7; May-June 2004). Coastal waters from two other sites (the Pacific shore and the Tsugaru Strait, north Honshu) show no clear seasonal variation in the 228Ra/226Ra ratio. These measured variations in the 228Ra/226Ra ratio, especially the temporal variations, have important implications for seasonal changes in patterns of coastal water circulation within the Sea of Japan

  1. Determination of the wind power systems load to achieve operation in the maximum energy area

    Science.gov (United States)

    Chioncel, C. P.; Tirian, G. O.; Spunei, E.; Gillich, N.

    2018-01-01

    This paper analyses the operation of the wind turbine, WT, at the maximum power point, MPP, by linking the load of the Permanent Magnet Synchronous Generator, PMSG, to the wind speed value. Load control methods for wind power systems that aim at optimum energy performance are based on the fact that the energy captured by the wind turbine depends significantly on the mechanical angular speed of the wind turbine. The presented control method consists of determining the optimal mechanical angular speed, ωOPTIM, using an auxiliary low-power wind turbine, WTAUX, operating without load at maximum angular velocity, ωMAX. The method relies on the fact that the ratio ωOPTIM/ωMAX has a constant value for a given wind turbine and does not depend on the time variation of the wind speed values.
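
    Once the turbine-specific constant is known, the control law reduces to a one-line computation. In the sketch below the ratio value 0.67 is purely illustrative: the abstract only states that ωOPTIM/ωMAX is constant for a given turbine and independent of wind-speed variation.

```python
def optimal_speed(omega_max_aux, ratio=0.67):
    """Auxiliary-turbine method sketch: the unloaded WT_AUX runs at
    omega_MAX, and omega_OPTIM = ratio * omega_MAX, where `ratio` is a
    turbine-specific constant (0.67 is an illustrative placeholder)."""
    return ratio * omega_max_aux

# track the MPP as wind speed (and hence omega_MAX) fluctuates
for omega_max in [8.0, 11.5, 9.3]:     # rad/s read from the unloaded WT_AUX
    print(f"load the PMSG so that omega = {optimal_speed(omega_max):.2f} rad/s")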

  2. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergence rates. Systems with finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to converge Q-linearly. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.

  3. Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the profit to be maximized, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraints limit the polymer concentration and injection amount. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  4. Implementation of Maximum Power Point Tracking (MPPT) Solar Charge Controller using Arduino

    Science.gov (United States)

    Abdelilah, B.; Mouna, A.; KouiderM’Sirdi, N.; El Hossain, A.

    2018-05-01

    The Arduino platform, together with a number of standard sensors, can be used as the core of an electronic system for acquiring measurements and implementing controls. This paper presents the design of a low-cost and effective solar charge controller. The system includes several elements such as the solar panel, a DC/DC converter, a battery, an MPPT circuit using a microcontroller, sensors, and the MPPT algorithm. The MPPT (Maximum Power Point Tracker) algorithm has been implemented on an Arduino Nano. The panel voltage and current are measured, and the algorithm drives the operating point to the MPP. This paper provides details of the solar charge control device at the maximum power point. The results include the change of the duty cycle with the change in load, and hence the variation of the buck converter output voltage and current controlled by the MPPT algorithm.
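
    The abstract does not name the MPPT algorithm variant, so the sketch below uses perturb-and-observe, a common choice for Arduino-class controllers: perturb the converter duty cycle, observe the panel power, and reverse direction when power falls. The duty-to-voltage mapping is a toy stand-in for the real ADC readings and buck converter.

```python
def simulate_panel(duty, v_oc=21.0, i_sc=3.8):
    """Toy mapping from converter duty cycle to panel (V, I); on real
    hardware these would come from the Arduino's ADC channels."""
    v = v_oc * (1.0 - duty)
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v, i

def perturb_and_observe(d=0.5, step=0.01, iters=200):
    v, i = simulate_panel(d)
    p_prev, direction = v * i, 1
    for _ in range(iters):
        d = min(max(d + direction * step, 0.05), 0.95)  # perturb duty cycle
        v, i = simulate_panel(d)
        p = v * i
        if p < p_prev:              # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return d, p_prev

print(perturb_and_observe())        # duty cycle near the MPP and its power
```

    In steady state the duty cycle oscillates within one step of the MPP, which is the usual trade-off between tracking speed and ripple for this algorithm family.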

  5. Three dimensional winds: A maximum cross-correlation application to elastic lidar data

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, William Tillman [Univ. of Texas, Austin, TX (United States)

    1996-05-01

    Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser doppler velocimeter. The wind fields showed persistent large scale eddies as well as general terrain-following winds in the Rio Grande valley.
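
    A toy sketch of the basic maximum cross-correlation step (without the deterministic vector median filtering used at LANL): shift one frame over the other, score each shift by correlation, and keep the best; dividing the winning shift by the frame interval gives a transverse velocity estimate. Frame sizes and the search window are illustrative.

```python
import numpy as np

def mcc_displacement(frame0, frame1, max_shift=5):
    """Return the (dy, dx) shift of frame1 that best matches frame0,
    together with the correlation at that shift."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)
            c = np.corrcoef(frame0.ravel(), shifted.ravel())[0, 1]
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift, best

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)   # aerosol field advected
print(mcc_displacement(a, b))                     # ((-3, 2), ~1.0)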

  6. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.

  7. Anomalous Capacitance Maximum of the Glassy Carbon-Ionic Liquid Interface through Dilution with Organic Solvents.

    Science.gov (United States)

    Bozym, David J; Uralcan, Betül; Limmer, David T; Pope, Michael A; Szamreta, Nicholas J; Debenedetti, Pablo G; Aksay, Ilhan A

    2015-07-02

    We use electrochemical impedance spectroscopy to measure the effect of diluting a hydrophobic room temperature ionic liquid with miscible organic solvents on the differential capacitance of the glassy carbon-electrolyte interface. We show that the minimum differential capacitance increases with dilution and reaches a maximum value at ionic liquid contents near 5-10 mol% (i.e., ∼1 M). We provide evidence that mixtures with 1,2-dichloroethane, a low-dielectric constant solvent, yield the largest gains in capacitance near the open circuit potential when compared against two traditional solvents, acetonitrile and propylene carbonate. To provide a fundamental basis for these observations, we use a coarse-grained model to relate structural variations at the double layer to the occurrence of the maximum. Our results reveal the potential for the enhancement of double-layer capacitance through dilution.

  8. Optimal control of a double integrator a primer on maximum principle

    CERN Document Server

    Locatelli, Arturo

    2017-01-01

    This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...

  9. Local application of zoledronate for maximum anchorage during space closure.

    Science.gov (United States)

    Ortega, Adam J A J; Campbell, Phillip M; Hinton, Robert; Naidu, Aparna; Buschang, Peter H

    2012-12-01

    Orthodontists have used various compliance-dependent physical means such as headgears and intraoral appliances to prevent anchorage loss. The aim of this study was to determine whether 1 local application of the bisphosphonate zoledronate could be used to prevent anchorage loss during extraction space closure in rats. Thirty rats had their maxillary left first molars extracted and their maxillary left second molars protracted into the extraction space with a 10-g nickel-titanium closing coil for 21 days. Fifteen control rats received a local injection of phosphate-buffered saline solution, and 15 experimental rats received 16 μg of the bisphosphonate zoledronate. Bisphosphonate was also delivered directly into the extraction site and left undisturbed for 5 minutes. Cephalograms and incremental thickness gauges were used to measure tooth movements. Tissues were analyzed by microcomputed tomography and histology. The control group demonstrated significant (P <0.05) tooth movements throughout the 21-day period. They showed significantly greater tooth movements than the experimental group beginning in the second week. The experimental group showed no significant tooth movement after the first week. The microcomputed tomography and histologic observations showed significant bone loss in the extraction sites and around the second molars of the controls. In contrast, the experimental group had bone preservation and bone fill. There was no evidence of bisphosphonate-associated osteonecrosis in any sample. A single small, locally applied dose of zoledronate provided maximum anchorage and prevented significant bone loss. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  10. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  11. SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)

    2015-06-15

    Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are noted as the percentage-volume doses Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. With η and δ varying between 0 and 2, the TCP change was up to 2.4%. With η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
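
    For concreteness, the sketch below evaluates a logit-form TCP for an inhomogeneously irradiated tumor using the D50 and γ50 quoted above; whether this matches the study's exact model is an assumption, and the three-bin DVH is an illustrative caricature of the piecewise-linear model described.

```python
import numpy as np

def tcp_logit(doses, volumes, d50=74.5, gamma50=3.52):
    """One common logit TCP form: TCP_i = 1/(1 + (D50/D_i)**(4*gamma50))
    per sub-volume, combined as prod(TCP_i ** v_i) assuming independent
    sub-volume response. D50, gamma50 are the values quoted above."""
    doses = np.asarray(doses, float)
    volumes = np.asarray(volumes, float)
    tcp_i = 1.0 / (1.0 + (d50 / doses) ** (4.0 * gamma50))
    return float(np.prod(tcp_i ** volumes))

# three-bin caricature of the DVH for the 70 Gy plan described above
print(tcp_logit([59.5, 70.0, 84.0], [0.02, 0.93, 0.05]))
```

    Shifting volume from the 70 Gy bin toward the 59.5 Gy bin (larger δ) drags the product down, which is the mechanism behind the TCP differences reported in the Results.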

  13. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  14. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  15. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital, since flooding threatens human life financially, environmentally, and in terms of security. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference technique that estimates parameters via the posterior distribution, based on Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
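
    A minimal sketch of the approach described above: random-walk Metropolis-Hastings over the GEV parameters (μ, σ, ξ) with flat priors, using scipy's genextreme likelihood (note scipy's shape convention c = −ξ). Priors, step sizes, and chain length are illustrative choices, not the study's settings.

```python
import numpy as np
from scipy import stats

def mh_gev(x, n_iter=20000, step=(0.05, 0.05, 0.02), seed=1):
    """Random-walk Metropolis-Hastings for GEV(mu, sigma, xi),
    flat priors; rejects proposals with sigma <= 0 or zero likelihood."""
    rng = np.random.default_rng(seed)
    theta = np.array([x.mean(), x.std(), 0.1])     # mu, sigma, xi

    def loglik(t):
        mu, sig, xi = t
        if sig <= 0:
            return -np.inf
        # scipy's genextreme uses c = -xi relative to the usual GEV xi
        return stats.genextreme.logpdf(x, c=-xi, loc=mu, scale=sig).sum()

    ll, chain = loglik(theta), []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)         # random-walk proposal
        llp = loglik(prop)
        if np.log(rng.random()) < llp - ll:        # MH accept/reject
            theta, ll = prop, llp
        chain.append(theta.copy())
    return np.array(chain)

x = stats.genextreme.rvs(c=-0.1, loc=100, scale=20, size=40, random_state=0)
chain = mh_gev(x)
print(chain[5000:].mean(axis=0))   # posterior means after burn-in
```

    With only 40 annual maxima, the posterior spread makes the small-sample uncertainty explicit, which is exactly the advantage over a point MLE that the abstract argues for.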

  16. Magnification of starting torques of dc motors by maximum power point trackers in photovoltaic systems

    Science.gov (United States)

    Appelbaum, J.; Singer, S.

    1989-01-01

    A calculation of the starting torque ratio of permanent magnet, series, and shunt-excited dc motors powered by solar cell arrays is presented for two cases, i.e., with and without a maximum-power-point tracker (MPPT). Defining motor torque magnification by the ratio of the motor torque with an MPPT to the motor torque without an MPPT, a magnification of 3 for the permanent magnet motor and a magnification of 7 for both the series and shunt motors are obtained. The study also shows that all motor types are less sensitive to solar insolation variation in systems including MPPTs as compared to systems without MPPTs.

  17. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation but, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  18. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
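
    The tangential profile described above (velocity falling off as 1/r outside the circle of maximum wind, solid-body rotation inside it) is the classic Rankine vortex; a sketch follows with illustrative numbers. The paper's added radial component and its derived formula relating maximum velocity to angular momentum, radius of maximum wind, and spiral angle are not reproduced here.

```python
import numpy as np

def tangential_wind(r, r_max=40.0, v_max=50.0):
    """Rankine-type tangential wind: solid-body rotation inside the
    radius of maximum wind r_max, v ~ 1/r outside it.
    r in km, v in m/s; the numbers are illustrative only."""
    r = np.asarray(r, float)
    return np.where(r <= r_max, v_max * r / r_max, v_max * r_max / r)

print(tangential_wind([10, 40, 80, 160]))   # [12.5, 50., 25., 12.5]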

  19. Temperature dependence of attitude sensor coalignments on the Solar Maximum Mission (SMM)

    Science.gov (United States)

    Pitone, D. S.; Eudell, A. H.; Patt, F. S.

    1990-01-01

    The temperature correlation of the relative coalignment between the fine-pointing sun sensor and fixed-head star trackers measured on the Solar Maximum Mission (SMM) is analyzed. An overview of the SMM, including mission history and configuration, is given. Possible causes of the misalignment variation are discussed, with focus placed on spacecraft bending due to solar-radiation pressure, electronic or mechanical changes in the sensors, uncertainty in the attitude solutions, and mounting-plate expansion and contraction due to thermal effects. Yaw misalignment variation from the temperature profile is assessed, and suggestions for spacecraft operations are presented, involving methods to incorporate flight measurements of the temperature-versus-alignment function and its variance in operational procedures and the spacecraft structure temperatures in the attitude telemetry record.

  20. Variation in provider vaccine purchase prices and payer reimbursement.

    Science.gov (United States)

    Freed, Gary L; Cowan, Anne E; Gregory, Sashi; Clark, Sarah J

    2009-12-01

    The purpose of this work was to collect data regarding vaccine prices and reimbursements in private practices. Amid reports of physicians losing money on vaccines, there are limited supporting data to show how much private practices are paying for vaccines and how much they are being reimbursed by third-party payers. We conducted a cross-sectional survey of a convenience sample of private practices in 5 states (California, Georgia, Michigan, New York, and Texas) that purchase vaccines for administration to privately insured children/adolescents. Main outcome measures included prices paid to purchase vaccines recommended for children and adolescents and reimbursement from the 3 most common, non-Medicaid payers for vaccine purchase and administration. Detailed price and reimbursement data were provided by 76 practices. There was a considerable difference between the maximum and minimum prices paid by practices, ranging from $4 to more than $30 for specific vaccines. There was also significant variation in insurance reimbursement for vaccine purchase, with maximum and minimum reimbursements for a single vaccine differing from $8 to more than $80. Mean net yield per dose (reimbursement for vaccine purchase minus price paid per dose) varied across vaccines from a low of approximately $3 to more than $24. Reimbursement for the first dose of vaccine administered ranged from $0 to more than $26, with a mean of $16.62. There is a wide range of prices paid by practices for the same vaccine product and in the reimbursement for vaccines and administration fees by payers. This variation highlights the need for individual practices to understand their own costs and reimbursements and to seek opportunities to reduce costs and increase reimbursements.