WorldWideScience

Sample records for maximum sampling frequency

  1. Effect of Training Frequency on Maximum Expiratory Pressure

    Science.gov (United States)

    Anand, Supraja; El-Bashiti, Nour; Sapienza, Christine

    2012-01-01

    Purpose: To determine the effects of expiratory muscle strength training (EMST) frequency on maximum expiratory pressure (MEP). Method: We assigned 12 healthy participants to 2 groups of training frequency (3 days per week and 5 days per week). They completed a 4-week training program on an EMST trainer (Aspire Products, LLC). MEP was the primary…

  2. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  3. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N), and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text has quite differently shaped distributions when written in words and in Chinese characters, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text changes the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction contains no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations of the measured frequencies from the RGF-prediction. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon model for texts, and the present results is discussed. PMID:25955175
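
    The RGF-prediction described above is driven entirely by three numbers that are easy to extract from any text. As a minimal illustration (the function name and the toy sentence are ours, not the paper's), the triple (M, N, kmax) can be computed with a word counter:

```python
from collections import Counter

def rgf_inputs(text):
    """Extract the three a priori values used by the RGF-prediction:
    M (total word tokens), N (distinct words), and kmax (count of the
    most common word)."""
    counts = Counter(text.lower().split())
    M = sum(counts.values())
    N = len(counts)
    kmax = counts.most_common(1)[0][1] if counts else 0
    return M, N, kmax

# Taking only a part of a longer text changes (M, N, kmax), and with
# them the predicted shape of the frequency distribution.
sample = "the cat sat on the mat and the dog sat on the rug"
print(rgf_inputs(sample))  # → (13, 8, 4)
```

    For a text in Chinese characters the same function would be applied per character rather than per whitespace-separated word.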

  4. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola

    2016-01-01

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected

  5. Geodesic acoustic eigenmode for tokamak equilibrium with maximum of local GAM frequency

    Energy Technology Data Exchange (ETDEWEB)

    Lakhin, V.P. [NRC “Kurchatov Institute”, Moscow (Russian Federation); Sorokina, E.A., E-mail: sorokina.ekaterina@gmail.com [NRC “Kurchatov Institute”, Moscow (Russian Federation); Peoples' Friendship University of Russia, Moscow (Russian Federation)

    2014-01-24

    The geodesic acoustic eigenmode for a tokamak equilibrium with a maximum of the local GAM frequency is found analytically within the framework of the MHD model. The analysis is based on the asymptotic matching technique.

  6. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng

    2016-11-14

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.

  7. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation of non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
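
    The ML principle behind such DOA estimators can be sketched for the simplest case: one narrowband source on a uniform linear array, where deterministic ML reduces to a grid search maximizing the signal power projected onto the steering vector. The array size, spacing, and angles below are illustrative assumptions, not values from the paper:

```python
import cmath, math

def steering(theta_deg, n_sensors, d=0.5):
    """Steering vector of a uniform linear array; spacing d in wavelengths."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * d * k * s) for k in range(n_sensors)]

def ml_doa_single(snapshots, n_sensors, grid):
    """Single-source deterministic ML: maximize the projected power
    sum_t |a(theta)^H x_t|^2 / ||a||^2 over a grid of candidate DOAs."""
    best, best_p = None, -1.0
    for theta in grid:
        a = steering(theta, n_sensors)
        p = sum(abs(sum(ak.conjugate() * xk for ak, xk in zip(a, x))) ** 2
                for x in snapshots) / n_sensors  # ||a||^2 = n_sensors
        if p > best_p:
            best_p, best = p, theta
    return best

# Noise-free simulation: 8-sensor array, one source at 20 degrees.
n = 8
a_true = steering(20.0, n)
snapshots = [[s * ak for ak in a_true] for s in (1.0, -0.7, 0.3)]
grid = [t / 2 for t in range(-180, 181)]  # -90..90 deg in 0.5 deg steps
print(ml_doa_single(snapshots, n, grid))  # → 20.0
```

    The t-f ML method of the paper applies the same principle after first localizing the signals in the time-frequency domain, so each region contains fewer sources than the full problem.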

  8. Evaluating Annual Maximum and Partial Duration Series for Estimating Frequency of Small Magnitude Floods

    Directory of Open Access Journals (Sweden)

    Fazlul Karim

    2017-06-01

    Understanding the nature of frequent floods is important for characterising channel morphology, riparian and aquatic habitat, and informing river restoration efforts. This paper presents results from an analysis of frequency estimates of low-magnitude floods using annual maximum and partial series data compared to the actual flood series. Five frequency distribution models were fitted to data from 24 gauging stations in the Great Barrier Reef (GBR) lagoon catchments in north-eastern Australia. Based on the goodness-of-fit test, Generalised Extreme Value, Generalised Pareto and Log Pearson Type 3 models were used to estimate flood frequencies across the study region. Results suggest frequency estimates based on a partial series are better than those based on an annual series for small to medium floods, while both methods produce similar results for large floods. Although both methods converge at a higher recurrence interval, the convergence recurrence interval varies between catchments. Results also suggest frequency estimates vary slightly between two or more partial series, depending on the flood threshold, and the differences are large for catchments that experience less frequent floods. While a partial series produces better frequency estimates, it can underestimate or overestimate the frequency if the flood threshold differs greatly from bankfull discharge. These results have significant implications for calculating the dependency of floodplain ecosystems on the frequency of flooding and their subsequent management.

  9. The effect of electric field maximum on the Rabi flopping and generated higher frequency spectra

    International Nuclear Information System (INIS)

    Niu Yueping; Cui Ni; Xiang Yang; Li Ruxin; Gong Shangqing; Xu Zhizhan

    2008-01-01

    We investigate the effect of the electric field maximum on the Rabi flopping and the properties of the generated higher-frequency spectra by solving the Maxwell-Bloch equations without invoking any standard approximations. It is found that the maximum of the electric field will lead to carrier-wave Rabi flopping (CWRF) through reversion dynamics, which will be more evident when the applied field enters the sub-one-cycle regime. Therefore, under the interaction of sub-one-cycle pulses, the Rabi flopping follows the transient electric field tightly through the oscillation and reversion dynamics, in contrast to conventional envelope Rabi flopping. Complete or incomplete population inversion can be realized through control of the carrier-envelope phase (CEP). Furthermore, the generated higher-frequency spectra change from distinct to continuous or irregular with variation of the CEP. Our results demonstrate that, due to the evident maximum behavior of the electric field, pulses with different CEP give rise to different CWRFs, and the resulting different degrees of interference lead to different higher-frequency spectral features.

  10. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric fields energies represented by stray capacitance in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  11. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  12. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  13. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  14. Measures of maximum magnetic field in 3 GHz radio frequency superconducting cavities; Mesures du gradient accelerateur maximum dans des cavites supraconductrices en regime impulsionnel a 3 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Catherine [Paris-11 Univ., 91 Orsay (France)

    2000-01-19

    Theoretical models have shown that the maximum magnetic field in radio frequency superconducting cavities is the superheating field H_sh. For niobium, H_sh is 25-30% higher than the thermodynamical critical field H_c: H_sh lies within 240-274 mT. However, the maximum magnetic field observed so far is H_c,max = 152 mT for the best 1.3 GHz Nb cavities. This field is lower than the critical field H_c1 above which the superconductor breaks up into divided normal and superconducting zones (H_c1 <= H_c). Thermal instabilities are responsible for this low value. In order to reach H_sh before thermal breakdown, high power short pulses are used. The cavity then needs to be strongly over-coupled. The dedicated test bed has been built through a collaboration between the Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genoa, and the Service d'Etudes et Realisation d'Accelerateurs (SERA) of the Laboratoire de l'Accelerateur Lineaire (LAL). The maximum magnetic field, H_rf,max, measurements on INFN cavities give lower results than the theoretical predictions and are in agreement with previous results. The superheating magnetic field is linked to the magnetic penetration depth. This superconducting characteristic length can be used to determine the quality of niobium through the ratio between the resistivity measured at 300 K and at 4.2 K in the normal conducting state (RRR). Results have been compared to previous ones and agree well. They show that the RRR measured on cavities is superficial and lower than the RRR measured on samples, which concerns the volume. (author)

  15. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  16. [Polish guidelines of 2001 for maximum admissible intensities in high frequency EMF versus European Union recommendations].

    Science.gov (United States)

    Aniołczyk, Halina

    2003-01-01

    In 1999, a draft of amendments to maximum admissible intensities (MAI) of electromagnetic fields (0 Hz-300 GHz) was prepared by Professor H. Korniewicz of the Central Institute for Labour Protection, Warsaw, in cooperation with the Nofer Institute of Occupational Medicine, Łódź (radio- and microwaves) and the Military Institute of Hygiene and Epidemiology, Warsaw (pulse radiation). Before 2000, the development of the national MAI guidelines for the frequency range of 0.1 MHz-300 GHz was based on the knowledge of biological and health effects of EMF exposure available at the turn of the 1960s. A current basis for establishing the international MAI standards is the well-documented thermal effect, measured by the value of the specific absorption rate (SAR), whereas the effects of resonant absorption determine the nature of the functional dependence on EMF frequency. The Russian standards, already thoroughly analyzed, still take so-called non-thermal effects and the concept of energetic load over a work shift with its progressive averaging (see the hazardous zone in the Polish guidelines) as a basis for setting maximum admissible intensities. The World Health Organization recommends harmonization of the EMF protection guidelines existing in different countries with the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), and its position is supported by the European Union.

  17. Photovoltaic High-Frequency Pulse Charger for Lead-Acid Battery under Maximum Power Point Tracking

    Directory of Open Access Journals (Sweden)

    Hung-I. Hsieh

    2013-01-01

    A photovoltaic pulse charger (PV-PC) using a high-frequency pulse train for charging a lead-acid battery (LAB) is proposed, not only to explore the charging behavior with maximum power point tracking (MPPT) but also to delay sulfating crystallization in the electrode pores of the LAB and so prolong battery life, which is achieved by a brief pulse break between adjacent pulses that refreshes the discharging of the LAB. Maximum energy transfer between the PV module and a boost current converter (BCC) is modeled to maximize the charging energy for the LAB under different solar insolation. A duty control, guided by a power-increment-aided incremental-conductance MPPT (PI-INC MPPT), is implemented in the BCC so that it operates at the maximum power point (MPP) against random insolation. A 250 W PV-PC system for charging a four-in-series LAB (48 Vdc) is examined. The charging behavior of the PV-PC system is compared with that of a CC-CV charger. Four scenarios of charging status of the PV-PC system under different solar insolation changes are investigated and compared with those using INC MPPT.

  18. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy sets theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  19. Sampling frequency affects ActiGraph activity counts

    DEFF Research Database (Denmark)

    Brønd, Jan Christian; Arvidsson, Daniel

    that is normally performed at frequencies higher than 2.5 Hz. With the ActiGraph model GT3X one has the option to select a sample frequency from 30 to 100 Hz. This study investigated the effect of the sampling frequency on the output of the bandpass filter. Methods: A synthetic frequency sweep of 0-15 Hz was generated… in Matlab and sampled at frequencies of 30-100 Hz. Also, acceleration signals during indoor walking and running were sampled at 30 Hz using the ActiGraph GT3X and resampled in Matlab to frequencies of 40-100 Hz. All data was processed with the ActiLife software. Results: Acceleration frequencies between 5…-15 Hz escaped the bandpass filter when sampled at 40, 50, 70, 80 and 100 Hz, while this was not the case when sampled at 30, 60 and 90 Hz. During the ambulatory activities this artifact resulted in different activity count output from the ActiLife software with different sampling frequency…

  20. A software sampling frequency adaptive algorithm for reducing spectral leakage

    Institute of Scientific and Technical Information of China (English)

    PAN Li-dong; WANG Fei

    2006-01-01

    Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important cause of reduced accuracy in spectral analysis and harmonic measurement. This paper presents a software sampling-frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated by the software algorithm and modifies the sampling frequency adaptively. It can reduce synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. As the simulations show, this algorithm has high precision, and it can be a practical method for power system harmonic analysis since it can be implemented easily.
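
    The core of such an algorithm — estimate the true signal frequency in software, then retune the sampling rate to an integer number of samples per period so an analysis window covers whole periods — can be sketched as follows. The zero-crossing estimator and all numbers are illustrative assumptions, not the paper's implementation:

```python
import math

def estimate_frequency(samples, fs):
    """Estimate signal frequency from positive-going zero crossings."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 2:
        return None
    # Average samples per period between the first and last crossing.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return fs / period

def synchronized_fs(f_signal, samples_per_period=64):
    """Pick a sampling frequency that is an exact multiple of the signal
    frequency, so a fixed-length window spans whole periods and
    spectral leakage from truncation is suppressed."""
    return f_signal * samples_per_period

fs0 = 3200.0   # nominal sampling rate (assumed)
f_true = 50.3  # drifted power-system frequency (assumed)
x = [math.sin(2 * math.pi * f_true * t / fs0) for t in range(2000)]
f_est = estimate_frequency(x, fs0)
fs_new = synchronized_fs(f_est)
print(round(f_est, 2), round(fs_new))
```

    A real implementation would refine the crossing positions by interpolation and iterate as the line frequency drifts.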

  1. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy sets theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.

  2. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal...... be extended to various other cases including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set....
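
    The first step — maximum-likelihood estimation of a site's allele frequency from uncertain genotype data — is commonly done with an EM iteration over per-individual genotype likelihoods under Hardy-Weinberg proportions. The sketch below illustrates that generic idea, not the authors' software, and the likelihood values are invented:

```python
def allele_freq_em(geno_likes, iters=100):
    """ML estimate of an allele frequency from per-individual genotype
    likelihoods L(data | g), g in {0, 1, 2} copies of the allele,
    assuming Hardy-Weinberg proportions (EM iteration)."""
    p = 0.5  # starting value
    for _ in range(iters):
        total = 0.0
        for l0, l1, l2 in geno_likes:
            prior = ((1 - p) ** 2, 2 * p * (1 - p), p * p)
            post = [l * w for l, w in zip((l0, l1, l2), prior)]
            z = sum(post)
            # expected allele count for this individual
            total += (post[1] + 2 * post[2]) / z
        p = total / (2 * len(geno_likes))
    return p

# With certain genotypes (2 hom-ref, 2 het, 1 hom-alt) the estimate
# reduces to the simple count: (0 + 0 + 1 + 1 + 2) / 10 = 0.4.
certain = [(1, 0, 0), (1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 0, 1)]
print(round(allele_freq_em(certain), 3))  # → 0.4
```

    With noisy NGS likelihoods the posterior weights shift smoothly between genotypes instead of being 0/1, which is exactly why frequency estimation from called genotypes is biased at low depth.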

  3. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used…

  4. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional single-frequency DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
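
    The shape this result predicts for the indicator — a J0-type peak at the scatterer that decays away from it — can be checked numerically from the series definition of the Bessel function. The wavenumber below is an arbitrary assumption for illustration:

```python
import math

def bessel_j0(x, terms=30):
    """Bessel function of the first kind, order zero, via its series:
    J0(x) = sum_m (-1)^m (x/2)^(2m) / (m!)^2."""
    return sum((-1) ** m * (x / 2) ** (2 * m) / math.factorial(m) ** 2
               for m in range(terms))

# A DSM-type indicator for a point scatterer at z behaves like
# |J0(k * |x - z|)|: maximal at the scatterer, decaying away from it.
k = 2 * math.pi  # wavenumber for unit wavelength (assumed)
vals = [abs(bessel_j0(k * r)) for r in (0.0, 0.2, 0.5, 1.0)]
print([round(v, 3) for v in vals])
```

    The sidelobes of J0 are one source of the "ghost" imaging artifacts that the multi-frequency version is designed to suppress, since the sidelobe positions move with frequency while the main peak does not.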

  5. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office reach a state of maximum proficiency.

  6. Radio Frequency Transistors Using Aligned Semiconducting Carbon Nanotubes with Current-Gain Cutoff Frequency and Maximum Oscillation Frequency Simultaneously Greater than 70 GHz.

    Science.gov (United States)

    Cao, Yu; Brady, Gerald J; Gui, Hui; Rutherglen, Chris; Arnold, Michael S; Zhou, Chongwu

    2016-07-26

    In this paper, we report record radio frequency (RF) performance of carbon nanotube transistors based on the combined use of a self-aligned T-shape gate structure and well-aligned, high-semiconducting-purity, high-density polyfluorene-sorted semiconducting carbon nanotubes, which were deposited using a dose-controlled, floating evaporative self-assembly method. These transistors show outstanding direct current (DC) performance, with an on-current density of 350 μA/μm, transconductance as high as 310 μS/μm, and superior current saturation with normalized output resistance greater than 100 kΩ·μm. These transistors set a record for carbon nanotube RF transistors by demonstrating both a current-gain cutoff frequency (ft) and a maximum oscillation frequency (fmax) greater than 70 GHz. Furthermore, these transistors exhibit good linearity performance, with a 1 dB gain compression point (P1dB) of 14 dBm and an input third-order intercept point (IIP3) of 22 dBm. Our study advances the state of the art of carbon nanotube RF electronics, which have the potential to be made flexible and may find broad applications in signal amplification, wireless communication, and wearable/flexible electronics.

  7. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than that required to estimate ODBA, swimming speed and metabolic rate. While the optimal sampling frequency probably varies among species owing to tail-beat frequency and swimming style, this study provides a reference point from a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
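
    The "at least twice the maximum tail-beat frequency" rule is the Nyquist criterion, and its violation is easy to demonstrate with a toy discrete Fourier transform. The 5 Hz tail-beat signal and the two sampling rates below are illustrative assumptions:

```python
import math

def dominant_freq(samples, fs):
    """Return the frequency (Hz) of the largest-magnitude DFT bin,
    searching only up to the Nyquist frequency fs/2."""
    n = len(samples)
    best_k, best_mag = 0, -1.0
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_mag, best_k = mag, k
    return best_k * fs / n

def tail_beat(f, fs, seconds=1):
    """Pure sinusoid standing in for a tail-beat acceleration trace."""
    return [math.sin(2 * math.pi * f * i / fs) for i in range(int(fs * seconds))]

f_tail = 5.0  # hypothetical 5 Hz tail-beat frequency
print(dominant_freq(tail_beat(f_tail, fs=25), fs=25))  # fs > 2*f: 5 Hz recovered
print(dominant_freq(tail_beat(f_tail, fs=8), fs=8))    # fs < 2*f: aliased to 3 Hz
```

    At 8 Hz the 5 Hz signal folds to |8 − 5| = 3 Hz, so an undersampled logger would report a plausible-looking but wrong tail-beat frequency rather than an obvious error.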

  8. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
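
    For correlated, normally distributed estimates of a common quantity, the maximum likelihood estimate is the inverse-covariance-weighted average. A sketch of that standard formula (not the SAM-CE/VIM implementation), assuming the covariance matrix is known:

```python
import numpy as np

def ml_common_mean(x, cov):
    """ML estimate of a common mean from correlated normal estimates:
    mu_hat = (1' C^-1 x) / (1' C^-1 1), with variance 1 / (1' C^-1 1)."""
    x = np.asarray(x, dtype=float)
    ones = np.ones_like(x)
    w = np.linalg.solve(np.asarray(cov, dtype=float), ones)  # C^-1 1
    mu = (w @ x) / (w @ ones)
    var = 1.0 / (w @ ones)
    return mu, var

# Two correlated eigenvalue estimates (illustrative numbers only)
x = [1.002, 0.998]
cov = [[1e-4, 4e-5],
       [4e-5, 1e-4]]
mu, var = ml_common_mean(x, cov)  # var is below either individual variance
```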

  9. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High-frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  10. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics.

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
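
    The iterative "most environmentally dissimilar site" selection described above can be approximated with a simple greedy max-min rule over standardized environmental variables. A toy sketch of that idea, not the Maxent software itself; site data and factor count are illustrative:

```python
import numpy as np

def greedy_dissimilar_sites(env, k):
    """Greedily pick k sites, each maximizing its minimum Euclidean
    environmental distance to the sites already chosen.

    env: (n_sites, n_factors) array of standardized environmental values.
    Returns the indices of the selected sites.
    """
    env = np.asarray(env, dtype=float)
    # Start from the site farthest from the environmental centroid.
    start = int(np.argmax(np.linalg.norm(env - env.mean(axis=0), axis=1)))
    chosen = [start]
    while len(chosen) < k:
        # Distance from every site to its nearest already-chosen site.
        d = np.min(
            np.linalg.norm(env[:, None, :] - env[chosen][None, :, :], axis=2),
            axis=1,
        )
        d[chosen] = -np.inf  # never re-pick a chosen site
        chosen.append(int(np.argmax(d)))
    return chosen

# Four factors per site (e.g. temperature, precipitation, elevation, vegetation)
rng = np.random.default_rng(0)
sites = rng.normal(size=(100, 4))
picked = greedy_dissimilar_sites(sites, 8)  # 8 mutually dissimilar sites
```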

  12. Dental anthropology of a Brazilian sample: Frequency of nonmetric traits.

    Science.gov (United States)

    Tinoco, Rachel Lima Ribeiro; Lima, Laíse Nascimento Correia; Delwing, Fábio; Francesquini, Luiz; Daruge, Eduardo

    2016-01-01

    Dental elements are valuable tools in the study of ancient populations and species, and key features for human identification; within the field of dental anthropology, nonmetric traits, standardized by ASUDAS, are closely related to ancestry. This study analyzed the frequency of six nonmetric traits in a sample from Southeast Brazil composed of 130 dental casts from individuals aged between 18 and 30, without foreign parents or grandparents. A single examiner recorded the presence or absence of shoveling, Carabelli's cusp, fifth cusp, 3-cusped UM2, sixth cusp, and 4-cusped LM2. The frequencies obtained differed from those reported by other studies for Amerindian and South American samples and were closer to European and sub-Saharan frequencies, showing the influence of these groups on the current Brazilian population. Sexual dimorphism was found in the frequencies of Carabelli's cusp, 3-cusped UM2, and sixth cusp. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. A New High Frequency Injection Method Based on Duty Cycle Shifting without Maximum Voltage Magnitude Loss

    DEFF Research Database (Denmark)

    Wang, Dong; Lu, Kaiyuan; Rasmussen, Peter Omand

    2015-01-01

    The conventional high frequency signal injection method is to superimpose a high frequency voltage signal on the commanded stator voltage before space vector modulation. Therefore, the magnitude of the voltage used for machine torque production is limited. In this paper, a new high frequency injection method, in which the high frequency signal is generated by shifting the duty cycle between two neighboring switching periods, is proposed. This method allows injecting a high frequency signal at half of the switching frequency without the necessity to sacrifice the machine fundamental voltage amplitude. This may be utilized to develop a new position estimation algorithm without involving the inductance in the medium to high speed range. As an application example, a developed inductance-independent position estimation algorithm using the proposed high frequency injection method is applied to drive...

  14. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced curation is tasked with developing the procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT [4]) to characterize samples. These efforts will be useful for Mars Sample Return

  16. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...

  17. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  18. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δ t that is sufficient to eliminate these violations have recommended a Δ t that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δ x . This explicitly demonstrates that the effect of coarsening Δ x is to reduce the limitation on Δ t , which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  19. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.

  20. Maximum power gains of radio-frequency-driven two-energy-component tokamak reactors

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1974-11-01

    Two-energy-component fusion reactors in which the suprathermal component (D) is produced by harmonic cyclotron 'runaway' of resonant ions are considered. In one ideal case, the fast hydromagnetic wave at ω = 2ω_cD produces an energy distribution f(W) that is approximately constant (up to W_max) and includes all deuterons, which then thermalize and react with the cold tritons. In another ideal case, f(W) approximately constant is maintained by the fast wave at ω = ω_cD. If one neglects (1) direct rf input to the bulk-plasma electrons and tritons, and (2) the fact that many deuterons are not resonantly accelerated, then the maximum ideal power gain is about 0.85 Q_m in the first case and 1.05 Q_m in the second case, where Q_m is the maximum fusion gain in the beam-injection scheme (e.g., Q_m = 1.9 at T_e = 10 keV). Because of nonideal effects, the cyclotron runaway phenomenon may find its most practical use in the heating of 50:50 D-T plasmas to ignition. (auth)

  1. Examination into the maximum rotational frequency for an in-plane switched active waveplate device

    International Nuclear Information System (INIS)

    Davidson, A J; Elston, S J; Raynes, E P

    2005-01-01

    An examination of an active waveplate device using a one-dimensional model, giving numerical and analytical results, is presented. The model calculates the director and twist configuration by minimizing the free energy of the system with simple homeotropic boundary conditions. The effect of varying the in-plane electric field in both magnitude and direction is examined, and it is shown that the twist through the cell is constant in time as the field is rotated. As the electric field is rotated, the director field lags behind by an angle that increases as the frequency of the electric field rotation increases. When this angle reaches approximately π/4, the director field no longer follows the electric field in a uniform way. Using mathematical analysis it is shown that the conditions under which the director profile will fail to follow the rotating electric field depend on the frequency of electric field rotation, the magnitude of the electric field, the dielectric anisotropy and the viscosity of the liquid crystal.

  2. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr, which enables estimating regression models with variables sampled at different frequencies within the MIDAS regression framework put forward by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model, both in terms of numerical convergence and the statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of MIDAS regression applications.
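
    The core MIDAS idea is to regress a low-frequency variable on a parsimoniously weighted lag polynomial of a high-frequency one. A minimal Python illustration of that weighting step using exponential Almon weights; this is a sketch of the concept, not the midasr API, and all numbers are made up:

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights, w_j proportional to
    exp(theta1*j + theta2*j^2), normalized to sum to one."""
    j = np.arange(n_lags, dtype=float)
    w = np.exp(theta1 * j + theta2 * j * j)
    return w / w.sum()

def midas_regressor(x_high, n_lags, theta1, theta2, ratio):
    """Collapse a high-frequency series into one weighted regressor per
    low-frequency period (e.g. ratio=3 for monthly-to-quarterly).
    The n_lags most recent high-frequency values (newest first) are
    combined using the Almon weights."""
    w = exp_almon_weights(n_lags, theta1, theta2)
    x = np.asarray(x_high, dtype=float)
    z = []
    for end in range(ratio - 1, len(x), ratio):  # last hf obs of each lf period
        if end + 1 >= n_lags:
            z.append(w @ x[end - n_lags + 1:end + 1][::-1])
    return np.array(z)

# Demo: build one quarterly regressor from 10 years of monthly data
rng = np.random.default_rng(42)
x_monthly = rng.normal(size=120)
z = midas_regressor(x_monthly, 6, 0.2, -0.1, 3)  # one value per usable quarter
```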

  3. Effects of different strength training frequencies on maximum strength, body composition and functional capacity in healthy older individuals.

    Science.gov (United States)

    Turpela, Mari; Häkkinen, Keijo; Haff, Guy Gregory; Walker, Simon

    2017-11-01

    There is controversy in the literature regarding the dose-response relationship of strength training in healthy older participants. The present study determined training frequency effects on maximum strength, muscle mass and functional capacity over 6 months following an initial 3-month preparatory strength training period. One hundred and six 64-75-year-old volunteers were randomly assigned to one of four groups: performing strength training one (EX1), two (EX2), or three (EX3) times per week, and a non-training control (CON) group. Whole-body strength training was performed using 2-5 sets and 4-12 repetitions per exercise and 7-9 exercises per session. Before and after the intervention, maximum dynamic leg press (1-RM) and isometric knee extensor and plantarflexor strength, body composition and quadriceps cross-sectional area, as well as functional capacity (maximum 7.5 m forward and backward walking speed, timed-up-and-go test, loaded 10-stair climb test) were measured. All experimental groups increased leg press 1-RM more than CON (EX1: 3±8%, EX2: 6±6%, EX3: 10±8%, CON: -3±6%, P<0.05). There was no clear evidence that higher training frequency would induce greater benefit to maximum walking speed (i.e. functional capacity) despite a clear dose-response in dynamic 1-RM strength, at least when predominantly using machine weight-training. It appears that beneficial functional capacity improvements can be achieved through low-frequency training (i.e. 1-2 times per week) in previously untrained healthy older participants. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Implications of Microwave Holography Using Minimum Required Frequency Samples for Weakly- and Strongly-Scattering Indications

    Science.gov (United States)

    Fallahpour, M.; Case, J. T.; Kharkovsky, S.; Zoughi, R.

    2010-01-01

    Microwave imaging techniques, an integral component of nondestructive testing and evaluation (NDTE), have received significant attention in the past decade. These techniques have included the implementation of synthetic aperture focusing (SAF) algorithms for obtaining high spatial resolution images. The next important step in these developments is the implementation of 3-D holographic imaging algorithms. These are well-known wideband imaging techniques that require swept-frequency data and, unlike SAF, which is a single-frequency technique, are not easily performed on a real-time basis. This is due to the fact that a significant number of data points (in the frequency domain) must be obtained within the frequency band of interest. This not only complicates imaging system design, it also significantly increases the image-production time. Consequently, in an attempt to reduce the measurement time and system complexity, an investigation was conducted to determine the minimum required number of frequency samples needed to image a specific object while preserving a desired maximum measurement range and range resolution. To this end the 3-D holographic algorithm was modified to use properly interpolated frequency data. Measurements of the complex reflection coefficient for several samples were conducted using a swept-frequency approach. Subsequently, holographic images were generated using data containing a relatively large number of frequency samples and were compared with images generated from the reduced data sets. Quantitative metrics such as average, contrast, and signal-to-noise ratio were used to evaluate the quality of images generated using reduced data sets. Furthermore, this approach was applied to both weakly- and strongly-scattering indications. This paper presents the methods used and the results of this investigation.
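
    For a stepped-frequency measurement, the standard free-space radar relations tie the bandwidth and frequency step to range resolution and maximum unambiguous range, which is what bounds the minimum number of frequency samples. A sketch using those textbook formulas, with illustrative numbers rather than the paper's:

```python
C = 299_792_458.0  # speed of light, m/s

def min_frequency_samples(bandwidth_hz, max_range_m):
    """Range resolution: dR = c / (2B); unambiguous range: R_max = c / (2*df).
    Hence df <= c / (2*R_max), and the sample count is N = floor(B/df) + 1."""
    df_max = C / (2.0 * max_range_m)
    n = int(bandwidth_hz / df_max) + 1
    resolution = C / (2.0 * bandwidth_hz)
    return n, resolution

# Illustrative: 10 GHz of bandwidth, 1 m maximum measurement range
n, dr = min_frequency_samples(10e9, 1.0)
# df_max ~ 149.9 MHz -> n = 67 samples; dr ~ 15 mm
```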

  5. Reducing the sampling frequency of groundwater monitoring wells

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.M.; Ridley, M.N. [Lawrence Livermore National Lab., CA (United States); Tuckfield, R.C.; Anderson, R.A. [Westinghouse, Savannah River Co., Aiken, SC (United States)

    1996-01-01

    As part of a joint LLNL/SRTC project, a methodology for selecting sampling frequencies is evolving that introduces statistical thinking and cost-effectiveness into the sampling schedule selection practices now commonly employed on environmental projects. Our current emphasis is on descriptive rather than inferential statistics. Environmental monitoring data are inherently messy, being plagued by such problems as extremely high variability and left-censoring. As a result, real data often fail to meet the assumptions required for the appropriate application of many statistical methods. Rather than abandon the quantitative approach in these cases, however, the methodology employs simple statistical techniques to bring a measure of objectivity and reproducibility to the process. The techniques are applied within a framework of decision logic, which interprets the numerical results from the standpoint of chemistry-related professional judgment and the regulatory context. This paper presents the methodology's basic concepts together with early implementation results, showing the estimated cost savings. 6 refs., 3 figs.

  6. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    (Figure panels omitted: Wigner distribution, L-Wigner distribution, and their auto-correlation functions.) Although bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  7. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  8. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    Science.gov (United States)

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa . A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. 
We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using
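
    The sinc-function interpolation referred to above is the textbook Whittaker-Shannon reconstruction of a band-limited record on a denser time grid. A minimal sketch of that standard method, not the authors' code, with illustrative sample rates:

```python
import numpy as np

def sinc_resample(x, fs_old, fs_new):
    """Whittaker-Shannon interpolation: evaluate a band-limited signal,
    sampled at fs_old, on a denser grid at fs_new (fs_new > fs_old).
    x(t) = sum_n x[n] * sinc(fs_old * t - n), with normalized sinc."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))                                   # original indices
    t_new = np.arange(int(len(x) * fs_new / fs_old)) / fs_new
    return np.sinc(fs_old * t_new[:, None] - n[None, :]) @ x

# Upsample a 5 Hz sine recorded at 20 samples/s to 100 samples/s
fs = 20.0
t = np.arange(40) / fs
sig = np.sin(2 * np.pi * 5.0 * t)
sig_up = sinc_resample(sig, fs, 100.0)  # passes exactly through the originals
```

Note that the finite record length introduces edge effects; in practice the record is windowed or padded before resampling.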

  9. Evaluation of maximum voided volume in Korean children by use of a 48-h frequency volume chart.

    Science.gov (United States)

    Kim, Sun-Ouck; Kim, Kyung Do; Kim, Young Sig; Kim, Jun Mo; Moon, Du Geon; Park, Sungchan; Lee, Sang Don; Chung, Jae Min; Cho, Won Yeol

    2012-08-01

    Study Type - Diagnostic (validating cohort). Level of Evidence 2a. What's known on the subject? and What does the study add? The relationship between maximum voided volume and age followed a linear curve. The formula presented, bladder capacity (mL) = 12 × [age (years) + 11], is thought to be a reasonable one for Korean children. Korean children have a smaller bladder capacity than that reported in previous Western studies. • To develop practical guidelines for the prediction of normal bladder capacity in Korean children as measured by a frequency volume chart (FVC); maximum voided volume (MVV) is an important factor in the diagnosis of children with abnormal voiding function. • In all, 298 children, aged 3-13 years, with no history of voiding disorders volunteered for the study. The MVV was determined in 219 subjects by use of a completely recorded FVC. • Linear regression analysis was used to define the exact relationship between age and bladder capacity. An approximate formula relating age to bladder capacity is as follows: bladder capacity (mL) = 12 × [age (years) + 11]. • The relationship between the MVV measured by FVC and age (3-13 years) in Korean children followed a linear curve. • When applied to normal voiding patterns, the formula presented appears to be a reasonable one for Korean children. © 2011 BJU INTERNATIONAL.
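
    The study's linear formula is straightforward to apply directly; a trivial sketch, restricted to the 3-13-year age range for which the formula was derived:

```python
def bladder_capacity_ml(age_years):
    """Estimated bladder capacity for Korean children, per the study's
    formula: capacity (mL) = 12 * (age in years + 11)."""
    if not 3 <= age_years <= 13:
        raise ValueError("formula was derived for ages 3-13")
    return 12 * (age_years + 11)

print(bladder_capacity_ml(5))   # 192 mL
print(bladder_capacity_ml(10))  # 252 mL
```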

  10. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  11. MalHaploFreq: A computer programme for estimating malaria haplotype frequencies from blood samples

    Directory of Open Access Journals (Sweden)

    Smith Thomas A

    2008-07-01

    Abstract. Background: Molecular markers, particularly those associated with drug resistance, are important surveillance tools that can inform policy choice. People infected with falciparum malaria often carry several genetically distinct clones of the parasite; genotyping a patient's blood reveals whether or not the marker is present (i.e. its prevalence), but does not reveal its frequency. For example, a person with four malaria clones may contain both mutant and wildtype forms of a marker, but it is not possible to distinguish their relative frequencies, i.e. 1:3, 2:2 or 3:1. Methods: An appropriate method for obtaining frequencies from prevalence data is maximum likelihood analysis. A computer programme has been developed that allows the frequency of markers, and of haplotypes defined by up to three codons, to be estimated from blood phenotype data. Results: The programme has been fully documented [see Additional File 1] and provided with a user-friendly interface suitable for large-scale analyses. It returns accurate frequencies and 95% confidence intervals from simulated data sets and has been extensively tested on field data sets. Conclusion: The programme is included [see Additional File 2] and may be freely downloaded from [1]. It can then be used to extract molecular marker and haplotype frequencies from their prevalence in human blood samples. This should enhance the use of frequency data to inform antimalarial drug policy choice.
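
    The prevalence-to-frequency step can be sketched directly: if a marker has population frequency q and a patient carries n independent clones, the probability of observing the marker in that patient is 1 - (1 - q)^n, and maximizing the resulting likelihood over all patients recovers q. A minimal sketch of that principle (not MalHaploFreq itself), assuming each patient's clone count is known:

```python
import math
import random

def neg_log_likelihood(q, data):
    """data: list of (n_clones, marker_detected) pairs.
    With marker frequency q, P(detected | n clones) = 1 - (1 - q)**n."""
    ll = 0.0
    for n, detected in data:
        p = 1.0 - (1.0 - q) ** n
        ll += math.log(p if detected else 1.0 - p)
    return -ll

def ml_frequency(data, grid=500):
    """Grid-search maximum likelihood estimate of the marker frequency."""
    qs = [(i + 0.5) / grid for i in range(grid)]
    return min(qs, key=lambda q: neg_log_likelihood(q, data))

# Simulated survey: true frequency 0.3, patients carrying 1-4 clones each
random.seed(1)
data = []
for _ in range(1000):
    n = random.randint(1, 4)
    detected = any(random.random() < 0.3 for _ in range(n))
    data.append((n, detected))
q_hat = ml_frequency(data)  # should land close to the true 0.3
```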

  12. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method... was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies, including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate...

  13. A Comparative Frequency Analysis of Maximum Daily Rainfall for a SE Asian Region under Current and Future Climate Conditions

    Directory of Open Access Journals (Sweden)

    Velautham Daksiya

    2017-01-01

    Full Text Available The impact of changing climate on the frequency of daily rainfall extremes in Jakarta, Indonesia, is analysed and quantified. The study used three different methods to assess the changes in rainfall characteristics. The first method involves the use of the weather generator LARS-WG to quantify changes between historical and future daily rainfall maxima. The second approach consists of statistically downscaling general circulation model (GCM) output based on historical empirical relationships between GCM output and station rainfall. Lastly, the study employed recent statistically downscaled global gridded rainfall projections to characterize the impact of climate change on rainfall structure. Both annual and seasonal rainfall extremes are studied. The results show significant changes in annual maximum daily rainfall, with an average increase as high as 20% in the 100-year return period daily rainfall. The uncertainty arising from the use of different GCMs was found to be much larger than the uncertainty from the emission scenarios. Furthermore, the annual and wet seasonal analyses exhibit similar behaviors with increased future rainfall, but the dry season is not consistent across the models. The GCM uncertainty is larger in the dry season compared to the annual and wet season analyses.
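A 100-year return level like the one discussed above is typically obtained by fitting an extreme-value distribution to a series of annual maxima. The sketch below uses a simple method-of-moments Gumbel fit; this is only one common choice, not necessarily the distribution used in the study, and the synthetic data are illustrative.

```python
import math
import statistics

def gumbel_return_level(annual_maxima, T):
    """Method-of-moments Gumbel fit; returns the T-year return level.
    (A simple choice; GEV or L-moment fits are also common for rainfall maxima.)"""
    m = statistics.fmean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi            # Gumbel scale parameter
    mu = m - 0.5772156649 * beta                 # location (Euler-Mascheroni const.)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# synthetic annual-maximum series drawn from a known Gumbel(mu=80, beta=15)
# via plotting-position quantiles, so the fit can be sanity-checked
n = 2000
maxima = [80.0 - 15.0 * math.log(-math.log((i + 0.5) / n)) for i in range(n)]
x100 = gumbel_return_level(maxima, 100)          # true value is about 149
```

A climate-change assessment would compare `x100` computed from historical maxima against the same quantity computed from downscaled future projections.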

  14. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.
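The REML-based hierarchical analysis itself is beyond a short sketch, but the quantity it feeds, the variogram, can be illustrated with the standard method-of-moments estimator: half the average squared difference between sample pairs whose separation is close to a lag h. The coordinates and values below are synthetic, and this simple estimator is not the REML procedure used in the study.

```python
import math
from itertools import combinations

def empirical_variogram(coords, values, lags, tol=0.5):
    """Method-of-moments semivariance: gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over all pairs whose separation distance is within tol of the lag h."""
    gamma = {}
    for h in lags:
        sq, npairs = 0.0, 0
        for (ci, zi), (cj, zj) in combinations(list(zip(coords, values)), 2):
            d = math.dist(ci, cj)
            if abs(d - h) <= tol:
                sq += (zi - zj) ** 2
                npairs += 1
        gamma[h] = 0.5 * sq / npairs if npairs else float("nan")
    return gamma

# sanity check on a synthetic 1-D transect with z = x: gamma(h) = h^2 / 2
coords = [(float(i), 0.0) for i in range(50)]
values = [float(i) for i in range(50)]
g = empirical_variogram(coords, values, lags=[10.0])
```

In a nested design like the one above, the lags would be chosen to match the nested separating distances, and the hierarchical variance components would then be accumulated into the variogram estimate.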

  15. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  16. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  17. Measurement of the Maximum Frequency of Electroglottographic Fluctuations in the Expiration Phase of Volitional Cough as a Functional Test for Cough Efficiency.

    Science.gov (United States)

    Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-10-01

    The hypotheses of the present study were that the maximum frequency of fluctuation of electroglottographic (EGG) signals in the expiration phase of volitional cough (VC) reflects the cough efficiency and that this EGG parameter is affected by impaired laryngeal closure, expiratory effort strength, and gender. For 20 normal healthy adults and 20 patients diagnosed with unilateral vocal fold paralysis (UVFP), each participant was fitted with EGG electrodes on the neck, had a transnasal laryngo-fiberscope inserted, and was asked to perform weak/strong VC tasks while EGG signals and a high-speed digital image of the larynx were recorded. The maximum frequency was calculated in the EGG fluctuation region coinciding with vigorous vocal fold vibration in the laryngeal HSDIs. In addition, each participant underwent spirometry for measurement of three aerodynamic parameters, including peak expiratory air flow (PEAF), during weak/strong VC tasks. Significant differences were found for both maximum EGG frequency and PEAF between the healthy and UVFP groups and between the weak and strong VC tasks. Among the three cough aerodynamic parameters, PEAF showed the highest positive correlation with the maximum EGG frequency. The correlation coefficients between the maximum EGG frequency and PEAF recorded simultaneously were 0.574 for the whole group, and 0.782/0.717/0.823/0.688 for the male/female/male-healthy/male-UVFP subgroups, respectively. Consequently, the maximum EGG frequency measured in the expiration phase of VC was shown to reflect the velocity of expiratory airflow to some extent and was suggested to be affected by vocal fold physical properties, glottal closure condition, and the expiratory function.

  18. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed using a simulation method under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
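The separation problem described above can be made concrete with a tiny numeric sketch (illustrative data, probit link as in the paper): when a covariate perfectly splits the two response categories, the probit log-likelihood keeps increasing as the slope grows, so no finite MLE exists and iterative fitting diverges. Firth's approach avoids this by penalizing the likelihood with a Jeffreys-prior term, which is not reproduced here.

```python
import math

def Phi(z):
    """Standard normal CDF (probit link)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_loglik(b, xs, ys):
    ll = 0.0
    for x, y in zip(xs, ys):
        p = min(max(Phi(b * x), 1e-12), 1.0 - 1e-12)   # clip for numerical safety
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

# perfectly separated toy data: every y=0 has x < 0 and every y=1 has x > 0
xs, ys = [-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1]
lls = [probit_loglik(b, xs, ys) for b in (0.5, 1.0, 2.0, 4.0)]
# the log-likelihood keeps rising toward its supremum of 0 as the slope grows,
# so there is no finite maximiser and Newton/Fisher scoring will not converge
```

This monotone likelihood is exactly the non-convergence the abstract refers to; Firth's penalized score keeps the maximiser finite even on such data.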

  19. 7 CFR 58.643 - Frequency of sampling.

    Science.gov (United States)

    2010-01-01

    ...each type of mix, and for the finished frozen product one sample from each flavor made. (b) Composition...

  20. Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

    This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of 1) the types and magnitudes of gases that could be present in the project cask, and 2) the degradation mechanisms that could change gas compositions, culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.

  1. Releasable activity and maximum permissible leakage rate within a transport cask of Tehran Research Reactor fuel samples

    Directory of Open Access Journals (Sweden)

    Rezaeian Mahdi

    2015-01-01

    Full Text Available Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.

  2. A Dictionary of Basic Pashto Frequency List I, Project Description and Samples, and Frequency List II.

    Science.gov (United States)

    Heston, Wilma

    The three-volume set of materials describes and presents the results to date of a federally-funded project to develop Pashto-English and English-Pashto dictionaries. The goal was to produce a list of 12,000 basic Pashto words for English-speaking users. Words were selected based on frequency in various kinds of oral and written materials, and were…

  3. Maximum generation power evaluation of variable frequency offshore wind farms when connected to a single power converter

    Energy Technology Data Exchange (ETDEWEB)

    Gomis-Bellmunt, Oriol; Sumper, Andreas [Centre d' Innovacio Tecnologica en Convertidors Estatics i Accionaments (CITCEA-UPC), Universitat Politecnica de Catalunya UPC, Av. Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); IREC Catalonia Institute for Energy Research, Barcelona (Spain); Junyent-Ferre, Adria; Galceran-Arellano, Samuel [Centre d' Innovacio Tecnologica en Convertidors Estatics i Accionaments (CITCEA-UPC), Universitat Politecnica de Catalunya UPC, Av. Diagonal, 647, Pl. 2, 08028 Barcelona (Spain)

    2010-10-15

    The paper deals with the evaluation of power generated by variable and constant frequency offshore wind farms connected to a single large power converter. A methodology to analyze different wind speed scenarios and system electrical frequencies is presented and applied to a case study, where it is shown that the variable frequency wind farm concept (VF) with a single power converter obtains 92% of the total available power, obtained with individual power converters in each wind turbine (PC). The PC scheme needs multiple power converters implying drawbacks in terms of cost, maintenance and reliability. The VF scheme is also compared to a constant frequency scheme CF, and it is shown that a significant power increase of more than 20% can be obtained with VF. The case study considers a wind farm composed of four wind turbines based on synchronous generators. (author)

  4. Critical frequency and maximum electron density of F2 region over four stations in the North American sector

    Czech Academy of Sciences Publication Activity Database

    Ezquer, R. G.; Cabrera, M. A.; López, J. L.; Albornoz, M. R.; Mosert, M.; Marcó, P.; Burešová, Dalia

    2011-01-01

    Roč. 73, č. 4 (2011), s. 420-429 ISSN 1364-6826 Institutional research plan: CEZ:AV0Z30420517 Keywords : Ionosphere * F2 region * Critical frequency * Electron density * Model Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.596, year: 2011 http://www.sciencedirect.com/science/article/pii/S1364682610002786

  5. Draft evaluation of the frequency for gas sampling for the high burnup confirmatory data project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-26

    This report fulfills the M3 milestone M3FT-15SN0802041, “Draft Evaluation of the Frequency for Gas Sampling for the High Burn-up Storage Demonstration Project” under Work Package FT-15SN080204, “ST Field Demonstration Support – SNL”. This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of 1) the types and magnitudes of gases that could be present in the project cask, and 2) the degradation mechanisms that could change gas compositions, culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations. Gas sampling will provide information on the presence of residual water (and byproducts associated with its reactions and decomposition) and breach of cladding, which could inform the decision of when to open the project cask.

  6. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    Science.gov (United States)

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may recover >75% of the information on the seasonal variance, whereas the traditional 4-sampling explains considerably less. At higher sampling frequencies, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. The Importance of Pressure Sampling Frequency in Models for Determination of Critical Wave Loadingson Monolithic Structures

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Meinert, Palle

    2008-01-01

    This paper discusses the influence of wave load sampling frequency on calculated sliding distance in an overall stability analysis of a monolithic caisson. It is demonstrated by a specific example of caisson design that for this kind of analysis the sampling frequency in a small scale model could be as low as 100 Hz in model scale. However, for design of structure elements like the wave wall on the top of a caisson the wave load sampling frequency must be much higher, in the order of 1000 Hz in the model. Elastic-plastic deformations of foundation and structure were not included in the analysis.

  8. Frequency Response of the Sample Vibration Mode in Scanning Probe Acoustic Microscope

    International Nuclear Information System (INIS)

    Ya-Jun, Zhao; Qian, Cheng; Meng-Lu, Qian

    2010-01-01

    Based on the interaction mechanism between tip and sample in the contact mode of a scanning probe acoustic microscope (SPAM), an active mass of the sample is introduced in the mass-spring model. The tip motion and frequency response of the sample vibration mode in the SPAM are calculated by the Lagrange equation with dissipation function. For the silicon tip and glass assemblage in the SPAM the frequency response is simulated and it is in agreement with the experimental result. The living myoblast cells on the glass slide are imaged at resonance frequencies of the SPAM system, which are 20 kHz, 30 kHz and 120 kHz. It is shown that good contrast of SPAM images could be obtained when the system is operated at the resonance frequencies of the system in the high- and low-frequency regions.
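The frequency-response idea can be illustrated with a generic driven, damped mass-spring mode. This is a simplified stand-in, not the paper's Lagrange model with an active sample mass, and the resonance frequency and quality factor below are assumed values.

```python
import math

def response_amplitude(f, f0, q, drive=1.0):
    """Steady-state amplitude of a driven, damped mass-spring mode
    (normalised units) with resonance frequency f0 and quality factor q."""
    w, w0 = 2.0 * math.pi * f, 2.0 * math.pi * f0
    return drive / math.sqrt((w0**2 - w**2) ** 2 + (w0 * w / q) ** 2)

# assumed contact-resonance values; scan the drive frequency for the peak
f0, q = 30e3, 20.0
freqs = [i * 100.0 for i in range(1, 2000)]        # 100 Hz .. 199.9 kHz grid
peak = max(freqs, key=lambda f: response_amplitude(f, f0, q))
```

Operating near `peak` maximises the vibration amplitude, which is why imaging at the system's resonance frequencies gives the best contrast.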

  9. Gate-Recessed AlGaN/GaN MOSHEMTs with the Maximum Oscillation Frequency Exceeding 120 GHz on Sapphire Substrates

    International Nuclear Information System (INIS)

    Kong Xin; Wei Ke; Liu Guo-Guo; Liu Xin-Yu

    2012-01-01

    Gate-recessed AlGaN/GaN metal-oxide-semiconductor high electron mobility transistors (MOSHEMTs) on sapphire substrates are fabricated. The devices with a gate length of 160 nm and a gate periphery of 2 × 75 μm exhibit two orders of magnitude reduction in gate leakage current and enhanced off-state breakdown characteristics, compared with conventional HEMTs. Furthermore, the extrinsic transconductance of an MOSHEMT is 237.2 mS/mm, only 7% lower than that of a Schottky-gate HEMT. An extrinsic current gain cutoff frequency f_T of 65 GHz and a maximum oscillation frequency f_max of 123 GHz are deduced from RF small-signal measurements. The high f_max demonstrates that gate-recessed MOSHEMTs are of great potential at millimeter wave frequencies. (cross-disciplinary physics and related areas of science and technology)

  10. Frequency Mixing Magnetic Detection Scanner for Imaging Magnetic Particles in Planar Samples.

    Science.gov (United States)

    Hong, Hyobong; Lim, Eul-Gyoon; Jeong, Jae-Chan; Chang, Jiho; Shin, Sung-Woong; Krause, Hans-Joachim

    2016-06-09

    The setup of a planar Frequency Mixing Magnetic Detection (p-FMMD) scanner for performing Magnetic Particles Imaging (MPI) of flat samples is presented. It consists of two magnetic measurement heads on both sides of the sample mounted on the legs of a u-shaped support. The sample is locally exposed to a magnetic excitation field consisting of two distinct frequencies, a stronger component at about 77 kHz and a weaker field at 61 Hz. The nonlinear magnetization characteristics of superparamagnetic particles give rise to the generation of intermodulation products. A selected sum-frequency component of the high and low frequency magnetic field incident on the magnetically nonlinear particles is recorded by a demodulation electronics. In contrast to a conventional MPI scanner, p-FMMD does not require the application of a strong magnetic field to the whole sample because mixing of the two frequencies occurs locally. Thus, the lateral dimensions of the sample are just limited by the scanning range and the supports. However, the sample height determines the spatial resolution. In the current setup it is limited to 2 mm. As examples, we present two 20 mm × 25 mm p-FMMD images acquired from samples with 1 µm diameter maghemite particles in silanol matrix and with 50 nm magnetite particles in aminosilane matrix. The results show that the novel MPI scanner can be applied for analysis of thin biological samples and for medical diagnostic purposes.
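The frequency-mixing principle can be demonstrated in a few lines: driving a nonlinear (saturating) magnetization curve with two tones creates intermodulation products, including the sum-frequency component at f_hi + 2·f_lo that frequency mixing detection selects. The toy cubic magnetization curve and tone placement below are illustrative, not the instrument's actual response.

```python
import numpy as np

N = 4096
n = np.arange(N)
k_hi, k_lo = 770, 61      # two excitation tones placed on exact DFT bins
h = 0.8 * np.cos(2 * np.pi * k_hi * n / N) + 0.4 * np.cos(2 * np.pi * k_lo * n / N)

def spectrum(m):
    """Single-sided amplitude spectrum."""
    return 2.0 * np.abs(np.fft.rfft(m)) / N

m_linear = h                 # a linear response produces no mixing products
m_nonlin = h - h**3 / 3.0    # toy odd (saturating) magnetisation curve

k_mix = k_hi + 2 * k_lo      # sum-frequency intermodulation bin, f_hi + 2*f_lo
```

For this cubic toy curve the component at `k_mix` has amplitude a·b²/4 = 0.032, while the linear response has exactly nothing there; this is the signature generated only by magnetically nonlinear (superparamagnetic) material.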

  11. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single-mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and analog-to-digital converter (ADC) is capable of estimating SFOs accurately; the technique can also compensate SFO effects to within a small residual SFO caused by deviation of the SFO estimation and a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.

  12. Full on-chip and area-efficient CMOS LDO with zero to maximum load stability using adaptive frequency compensation

    Energy Technology Data Exchange (ETDEWEB)

    Ma Haifeng; Zhou Feng, E-mail: fengzhou@fudan.edu.c [State Key Laboratory of ASIC and System, Fudan University, Shanghai 201203 (China)

    2010-01-15

    A full on-chip and area-efficient low-dropout linear regulator (LDO) is presented. By using the proposed adaptive frequency compensation (AFC) technique, full on-chip integration is achieved without compromising the LDO's stability in the full output current range. Meanwhile, the use of a compact pass transistor (the compact pass transistor serves as the gain fast roll-off output stage in the AFC technique) has enabled the LDO to be very area-efficient. The proposed LDO is implemented in standard 0.35 μm CMOS technology and occupies an active area as small as 220 × 320 μm², which is a reduction to 58% compared to state-of-the-art designs using technologies with the same feature size. Measurement results show that the LDO can deliver 0-60 mA output current with 54 μA quiescent current consumption and the regulated output voltage is 1.8 V with an input voltage range from 2 to 3.3 V. (semiconductor integrated circuits)

  13. Full on-chip and area-efficient CMOS LDO with zero to maximum load stability using adaptive frequency compensation

    International Nuclear Information System (INIS)

    Ma Haifeng; Zhou Feng

    2010-01-01

    A full on-chip and area-efficient low-dropout linear regulator (LDO) is presented. By using the proposed adaptive frequency compensation (AFC) technique, full on-chip integration is achieved without compromising the LDO's stability in the full output current range. Meanwhile, the use of a compact pass transistor (the compact pass transistor serves as the gain fast roll-off output stage in the AFC technique) has enabled the LDO to be very area-efficient. The proposed LDO is implemented in standard 0.35 μm CMOS technology and occupies an active area as small as 220 × 320 μm², which is a reduction to 58% compared to state-of-the-art designs using technologies with the same feature size. Measurement results show that the LDO can deliver 0-60 mA output current with 54 μA quiescent current consumption and the regulated output voltage is 1.8 V with an input voltage range from 2 to 3.3 V. (semiconductor integrated circuits)

  14. Effect of Sampling Frequency for Real-Time Tablet Coating Monitoring Using Near Infrared Spectroscopy.

    Science.gov (United States)

    Igne, Benoît; Arai, Hiroaki; Drennen, James K; Anderson, Carl A

    2016-09-01

    While the sampling of pharmaceutical products typically follows well-defined protocols, the parameterization of spectroscopic methods and their associated sampling frequency is not standard. Whereas for blending the sampling frequency is limited by the nature of the process, in other processes, such as tablet film coating, practitioners must determine the best approach to collecting spectral data. The present article studied how sampling practices affected the interpretation of the results provided by a near-infrared spectroscopy method for the monitoring of tablet moisture and coating weight gain during a pan-coating experiment. Several coating runs were monitored with different sampling frequencies (with or without co-adds (also known as sub-samples)) and with spectral averaging corresponding to processing cycles (1 to 15 pan rotations). Beyond integrating the sensor into the equipment, the present work demonstrated that it is necessary to have a good sense of the underlying phenomena that have the potential to affect the quality of the signal. The effects of co-adds and averaging were significant with respect to the quality of the spectral data. However, the type of output obtained from a sampling method dictated the type of information that one can gain on the dynamics of a process. Thus, different sampling frequencies may be needed at different stages of process development. © The Author(s) 2016.

  15. Using high-frequency sampling to detect effects of atmospheric pollutants on stream chemistry

    Science.gov (United States)

    Stephen D. Sebestyen; James B. Shanley; Elizabeth W. Boyer

    2009-01-01

    We combined information from long-term (weekly over many years) and short-term (high-frequency during rainfall and snowmelt events) stream water sampling efforts to understand how atmospheric deposition affects stream chemistry. Water samples were collected at the Sleepers River Research Watershed, VT, a temperate upland forest site that receives elevated atmospheric...

  16. Impact of sampling frequency in the analysis of tropospheric ozone observations

    Directory of Open Access Journals (Sweden)

    M. Saunois

    2012-08-01

    Full Text Available Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a typical frequency of 4 to 12 profiles a month. Here we quantify the uncertainty introduced by low frequency sampling in the determination of means and trends. To do this, the high frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profiles over airports, such as Frankfurt, have been subsampled at two typical ozone sonde frequencies of 4 and 12 profiles per month. We found the lowest sampling uncertainty on seasonal means at 700 hPa over Frankfurt, with around 5% for a frequency of 12 profiles per month and 10% for a 4 profile-a-month frequency. However the uncertainty can reach up to 15 and 29% at the lowest altitude levels. As a consequence, the sampling uncertainty at the lowest frequency could be higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the 95% confidence limit on the seasonal mean derived from the subsamples is similar to the sampling uncertainty, and we suggest using it as an estimate of the sampling uncertainty. Similar results are found at six other Northern Hemisphere sites. We show that the sampling substantially affects the inter-annual variability and the trend derived over the period 1998–2008, both in magnitude and in sign, throughout the troposphere. Also, a tropical case is discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008. For this site, we found that the sampling uncertainty in the free troposphere is around 8 and 12% at 12 and 4 profiles a month respectively.
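The subsampling exercise described above is easy to reproduce in miniature: draw a synthetic "high-frequency" seasonal record, repeatedly subsample it at the two ozone-sonde frequencies, and compare the resulting errors in the seasonal mean. The data and magnitudes below are made up; only the qualitative effect, that sparser sampling gives larger uncertainty, is the point.

```python
import random
import statistics

random.seed(42)
# synthetic "high-frequency" record: one value per day over a 90-day season
season = [50.0 + 10.0 * random.gauss(0.0, 1.0) for _ in range(90)]
true_mean = statistics.fmean(season)

def sampling_uncertainty(profiles_per_month, draws=2000):
    """Mean relative error of the seasonal mean under sparse subsampling."""
    n = profiles_per_month * 3                    # three months per season
    errs = []
    for _ in range(draws):
        sub = random.sample(season, n)            # random subset of the record
        errs.append(abs(statistics.fmean(sub) - true_mean) / abs(true_mean))
    return statistics.fmean(errs)

u12 = sampling_uncertainty(12)   # "12 profiles a month" sonde frequency
u4 = sampling_uncertainty(4)     # "4 profiles a month" sonde frequency
```

Because the subsample mean's error scales roughly with the inverse square root of the number of profiles, `u4` exceeds `u12`, mirroring the 10% versus 5% uncertainties reported above.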

  17. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC) with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled data methods. This is the first book on ILC built on frequency domain and sampled data methodologies. The frequency domain design methods offer ILC users insights into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC controlled robotic system. The experimental results show that the machines can work in much h...

  18. Variable Sampling Composite Observer Based Frequency Locked Loop and its Application in Grid Connected System

    Directory of Open Access Journals (Sweden)

    ARUN, K.

    2016-05-01

    Full Text Available A modified digital signal processing procedure is described for the on-line estimation of the DC, fundamental and harmonic components of a periodic signal. A frequency locked loop (FLL) incorporated within the parallel structure of observers is proposed to accommodate a wide range of frequency drift. The frequency error generated under drifting frequencies is used to change the sampling frequency of the composite observer, so that the number of samples per cycle of the periodic waveform remains constant. A standard coupled oscillator with automatic gain control is used as a numerically controlled oscillator (NCO) to generate the enabling pulses for the digital observer. The NCO gives an integer multiple of the fundamental frequency, making it suitable for power quality applications. Another observer, with DC and second-harmonic blocks in the feedback path, acts as a filter and reduces the double-frequency content. A systematic study of the FLL is carried out and a method is proposed to design the controller. The performance of the FLL is validated through simulation and experimental studies. To illustrate applications of the new FLL, the estimation of individual harmonics from a nonlinear load and the design of a variable sampling resonant controller for a single-phase grid-connected inverter are presented.
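The core idea of varying the sampling frequency so that the number of samples per cycle stays constant can be illustrated with a toy loop. The sketch below uses a crude zero-crossing frequency estimate in place of the paper's composite observer and NCO; the drift scenario and all parameters are assumptions for illustration:

```python
import math

N_PER_CYCLE = 64  # desired samples per fundamental cycle

def estimate_frequency(samples, fs):
    """Crude frequency estimate from the spacing of rising zero crossings."""
    rising = [i for i in range(1, len(samples))
              if samples[i - 1] < 0 <= samples[i]]
    if len(rising) < 2:
        return None
    samples_per_period = (rising[-1] - rising[0]) / (len(rising) - 1)
    return fs / samples_per_period

# The "grid" frequency drifts upward; after each batch of data the sampling
# frequency is retuned so that the samples-per-cycle count stays locked.
fs = 50.0 * N_PER_CYCLE
for f_true in (50.0, 50.5, 51.0, 51.5):
    n = int(4 * fs / f_true)  # about four cycles of data
    sig = [math.sin(2 * math.pi * f_true * k / fs) for k in range(n)]
    f_est = estimate_frequency(sig, fs)
    fs = f_est * N_PER_CYCLE  # lock samples per cycle
    print(f"true {f_true:.2f} Hz, estimated {f_est:.2f} Hz, new fs {fs:.0f} Hz")
```

A real FLL tracks continuously and rejects harmonics; the point here is only the retuning step that keeps the per-cycle sample count fixed.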

  19. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. The method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. It first uses three FFT samples to determine the frequency searching scope; then the estimated values of amplitude, phase and DC component, in addition to the frequency, are obtained by minimizing the least squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, an accurate frequency estimate can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
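A minimal sketch of the two ingredients named above: a three-parameter least-squares sine fit at a fixed trial frequency, and a golden-section search over frequency that minimizes the fitting residual. The search bracket is assumed known here (in the paper it comes from three FFT samples), and the test tone is noiseless:

```python
import math

def solve3(M, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def sine_fit_residual(t, y, f):
    """Three-parameter LS fit y ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C at a
    fixed trial frequency f; returns the squared fitting error."""
    c = [math.cos(2 * math.pi * f * tk) for tk in t]
    s = [math.sin(2 * math.pi * f * tk) for tk in t]
    cs = sum(ci * si for ci, si in zip(c, s))
    M = [[sum(ci * ci for ci in c), cs, sum(c)],
         [cs, sum(si * si for si in s), sum(s)],
         [sum(c), sum(s), len(t)]]
    rhs = [sum(ci * yi for ci, yi in zip(c, y)),
           sum(si * yi for si, yi in zip(s, y)),
           sum(y)]
    A, B, C = solve3(M, rhs)
    return sum((yi - A * ci - B * si - C) ** 2
               for yi, ci, si in zip(y, c, s))

def golden_section(fun, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal function on [a, b]."""
    g = (math.sqrt(5) - 1) / 2
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = fun(x1), fun(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a)
            f1 = fun(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a)
            f2 = fun(x2)
    return (a + b) / 2

# Noiseless test tone: 12.34 Hz sampled at 100 Hz, N = 128 samples.
fs, n = 100.0, 128
t = [k / fs for k in range(n)]
y = [1.7 * math.sin(2 * math.pi * 12.34 * tk + 0.4) + 0.5 for tk in t]
f_est = golden_section(lambda f: sine_fit_residual(t, y, f), 12.0, 13.0)
print(f"estimated frequency: {f_est:.4f} Hz")
```

The residual is unimodal only within the main lobe of the spectral peak, which is why the initial bracket from the FFT samples matters.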

  20. Experimental Determination of Operating and Maximum Power Transfer Efficiencies at Resonant Frequency in a Wireless Power Transfer System using PP Network Topology with Top Coupling

    Science.gov (United States)

    Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.

    2017-08-01

    A two-port network model of a wireless power transfer system, taking into account the distributed capacitances, is developed in this work using a PP network topology with top coupling. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified experimentally with a high-power home lighting load (230 V, 100 W) tested at two forced resonant frequencies, namely 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.

  1. Implementation of PLL and FLL trackers for signals with high harmonic content and low sampling frequency

    DEFF Research Database (Denmark)

    Mathe, Laszlo; Iov, Florin; Sera, Dezso

    2014-01-01

    The accurate tracking of the phase, frequency, and amplitude of different frequency components of a measured signal is an essential requirement for much digitally controlled equipment. The accurate and robust tracking of a frequency component from a complex signal has been successfully applied, for example...... in: grid-connected inverters, sensorless motor control for rotor position estimation, grid voltage monitoring for ac-dc converters, etc. Usually, the design of such trackers is done in the continuous time domain. The discretization introduces errors which change the performance, especially when the input...... signal is rich in harmonics and the sampling frequency is close to the tracked frequency component. In this paper, different discretization methods and implementation issues, such as Tustin and Backward-Forward Euler, are discussed and compared. A special case is analyzed, when the input signal is rich...

  2. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate...... new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image; making the resulting image statistically...... indistinguishable from the training image....

  3. An extension of command shaping methods for controlling residual vibration using frequency sampling

    Science.gov (United States)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems, such as robots, whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency sampling provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.
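For background, the baseline that the frequency-sampling extension generalizes, a conventional two-impulse zero-vibration (ZV) shaper for a single known mode, can be sketched as follows; the mode frequency, damping and simulation settings are assumed:

```python
import math

def zv_shaper(f_n, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode at f_n Hz, damping zeta."""
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    Td = 1.0 / (f_n * math.sqrt(1 - zeta ** 2))  # damped period
    return [(0.0, 1 / (1 + K)), (Td / 2, K / (1 + K))]  # (time, amplitude)

def simulate(impulses, f_n, zeta, t_end, dt=1e-4):
    """Euler simulation of a unit second-order mode driven by a shaped step."""
    wn = 2 * math.pi * f_n
    x = v = 0.0
    out = []
    for k in range(int(t_end / dt)):
        t = k * dt
        u = sum(amp for (ti, amp) in impulses if t >= ti)  # shaped step command
        acc = wn * wn * (u - x) - 2 * zeta * wn * v
        v += acc * dt
        x += v * dt
        out.append(x)
    return out

f_n, zeta = 2.0, 0.05
shaped = simulate(zv_shaper(f_n, zeta), f_n, zeta, 3.0)
unshaped = simulate([(0.0, 1.0)], f_n, zeta, 3.0)

def residual(xs):
    """Peak deviation from the 1.0 set point over the final third of the run."""
    tail = xs[2 * len(xs) // 3:]
    return max(abs(xi - 1.0) for xi in tail)

print(f"residual vibration: unshaped {residual(unshaped):.3f}, "
      f"shaped {residual(shaped):.4f}")
```

The ZV constraints cancel vibration exactly only at the design frequency; the frequency-sampling extension instead imposes constraints over a band of frequencies so the sequence stays effective as the resonance moves.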

  4. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond this point, precision continued to increase, but each additional GP yielded a smaller gain. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
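The between-person/within-person variance trade-off described above can be made concrete with a small calculation. The two variance components below are assumed values for illustration and are not the study's estimates:

```python
import math

def ci_halfwidth(n_gps, m_per_gp, sd_between=3.0, sd_within=40.0, z=1.96):
    """Approximate 95% CI half-width (hours/week) for mean weekly working hours
    when n_gps participants each contribute m_per_gp time-sampling measurements.
    Variance = between-person/n + within-person/(n*m); the two standard
    deviations are assumed for illustration, not taken from the study."""
    var = sd_between ** 2 / n_gps + sd_within ** 2 / (n_gps * m_per_gp)
    return z * math.sqrt(var)

# One text every 3 h for a week is ~56 answers per GP; one per hour is ~168.
print(f"300 GPs x 56 measurements: +/- {ci_halfwidth(300, 56):.2f} h")
print(f"100 GPs x 56 measurements: +/- {ci_halfwidth(100, 56):.2f} h")
print(f"100 GPs x 168 measurements: +/- {ci_halfwidth(100, 168):.2f} h")
```

With these assumed components, tripling the measurement frequency lets 100 GPs reach a precision that 100 GPs at the lower frequency cannot, mirroring the trade-off reported in the abstract.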

  6. FREQUENCY OF ANEUPLOID SPERMATOZOA STUDIED BY MULTICOLOR FISH IN SERIAL SEMEN SAMPLES

    Science.gov (United States)

    Frequency of aneuploid spermatozoa studied by multicolor FISH in serial semen samples. M. Vozdova (1), S. D. Perreault (2), O. Rezacova (1), D. Zudova (1), Z. Zudova (3), S. G. Selevan (4), J. Rubes (1,5). (1) Veterinary Research Institute, Brno, Czech Republic; (2) U.S. Environmental Protection A...

  7. On the Berry-Esséen bound of frequency polygons for ϕ-mixing samples.

    Science.gov (United States)

    Huang, Gan-Ji; Xing, Guodong

    2017-01-01

    Under some mild assumptions, the Berry-Esséen bound of frequency polygons for ϕ -mixing samples is presented. By the bound derived, we obtain the corresponding convergence rate of uniformly asymptotic normality, which is nearly [Formula: see text] under the given conditions.

  8. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view, the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with multiple multiscale targets, even when the different components are close to each other.

  9. Note: Radio frequency surface impedance characterization system for superconducting samples at 7.5 GHz.

    Science.gov (United States)

    Xiao, B P; Reece, C E; Phillips, H L; Geng, R L; Wang, H; Marhauser, F; Kelley, M J

    2011-05-01

    A radio frequency (RF) surface impedance characterization (SIC) system that uses a novel sapphire-loaded niobium cavity operating at 7.5 GHz has been developed as a tool to measure the RF surface impedance of flat superconducting material samples. The SIC system can presently make direct calorimetric RF surface impedance measurements on the central 0.8 cm² area of 5 cm diameter disk samples from 2 to 20 K exposed to RF magnetic fields up to 14 mT. To illustrate system utility, we present first measurement results for a bulk niobium sample.

  10. Estimating an appropriate sampling frequency for monitoring ground water well contamination

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    1994-01-01

    Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time
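The independence concept mentioned above can be sketched as follows: subsample an autocorrelated concentration series at increasing intervals until successive samples are effectively uncorrelated. The AR(1) process and its parameters are assumptions for illustration, not SRS data:

```python
import random

random.seed(1)

# Hypothetical daily concentrations with strong day-to-day correlation: AR(1).
phi = 0.95
x = [0.0]
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0, 1))

def lag1_autocorr(series):
    """Sample lag-1 autocorrelation."""
    n = len(series)
    mu = sum(series) / n
    num = sum((series[i] - mu) * (series[i + 1] - mu) for i in range(n - 1))
    den = sum((s - mu) ** 2 for s in series)
    return num / den

# Daily or weekly samples are strongly redundant here, while samples taken a
# quarter apart are nearly independent, suggesting a much sparser schedule.
for interval in (1, 7, 30, 60, 90):
    r1 = lag1_autocorr(x[::interval])
    print(f"every {interval:2d} days: lag-1 autocorrelation {r1:+.2f}")
```

For an AR(1) process the lag-k correlation is phi**k, so the "independence interval" can also be read off analytically; the empirical version above is what one would apply to real monitoring records.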

  11. Distortions in frequency spectra of signals associated with sampling-pulse shapes

    International Nuclear Information System (INIS)

    Njau, E.C.

    1983-04-01

    A method developed earlier by the author [IC/82/44; IC/82/45] is used to investigate distortions introduced into frequency spectra of signals by the shapes of the sampling pulses involved. Conditions are established under which the use of trapezoid or exponentially-edged pulses to digitize signals can make the frequency spectra of the resultant data samples devoid of the main features of the signals. This observation does not, however, apply in any way to cosinusoidally-edged pulses or to pulses with cosine-squared edges. Since parts of the Earth's surface and atmosphere receive direct solar energy in discrete samples (i.e. only from sunrise to sunset) we have extended the technique and attempted to develop a theory that explains the observed solar terrestrial relationships. A very good agreement is obtained between the theory and previous long-term and short-term observations. (author)

  12. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable-star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
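Lomb's periodogram, used above as a baseline, can be computed directly on the irregular time grid without any resampling. A minimal pure-Python sketch with an assumed synthetic "light curve":

```python
import math
import random

random.seed(2)

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for an irregularly sampled series."""
    ybar = sum(y) / len(y)
    d = [yi - ybar for yi in y]
    pgram = []
    for f in freqs:
        w = 2 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum(di * ci for di, ci in zip(d, c))
        ys = sum(di * si for di, si in zip(d, s))
        cc = sum(ci * ci for ci in c)
        ss = sum(si * si for si in s)
        pgram.append(0.5 * (yc * yc / cc + ys * ys / ss))
    return pgram

# Irregularly sampled "light curve": a 0.31 cycles/day pulsation plus noise.
t = sorted(random.uniform(0, 60) for _ in range(120))
y = [math.sin(2 * math.pi * 0.31 * ti) + random.gauss(0, 0.3) for ti in t]
freqs = [0.01 + 0.005 * k for k in range(140)]  # 0.01 .. 0.705 cycles/day
p = lomb_scargle(t, y, freqs)
f_peak = freqs[p.index(max(p))]
print(f"periodogram peak at {f_peak:.3f} cycles/day")
```

The time-shift tau makes the estimate invariant to the time origin; production code would use an optimized routine (e.g. the SciPy or Astropy implementations) rather than this O(N x F) loop.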

  13. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency

    Directory of Open Access Journals (Sweden)

    Sérgio Luiz Gomes Antunes

    2012-03-01

    Full Text Available Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that, independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  14. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency.

    Science.gov (United States)

    Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes

    2012-03-01

    Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  15. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion

  16. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency are proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage, by up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with a lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improved the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, supports the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
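The effect of cubic interpolation on low-rate data can be sketched as follows. This uses cubic Hermite interpolation with centered-difference (Catmull-Rom) slopes rather than the PCHIP/SPLINE routines used in the paper, and a hypothetical smooth pulse instead of real ECG data:

```python
import math

def catmull_rom(xs, ys, x):
    """Cubic Hermite interpolation with centered-difference (Catmull-Rom)
    slopes on a uniform grid."""
    h = xs[1] - xs[0]
    i = min(max(int((x - xs[0]) / h), 0), len(xs) - 2)

    def slope(j):
        if j == 0:
            return (ys[1] - ys[0]) / h
        if j == len(ys) - 1:
            return (ys[-1] - ys[-2]) / h
        return (ys[j + 1] - ys[j - 1]) / (2 * h)

    u = (x - xs[i]) / h
    h00 = 2 * u ** 3 - 3 * u ** 2 + 1
    h10 = u ** 3 - 2 * u ** 2 + u
    h01 = -2 * u ** 3 + 3 * u ** 2
    h11 = u ** 3 - u ** 2
    return (h00 * ys[i] + h10 * h * slope(i) +
            h01 * ys[i + 1] + h11 * h * slope(i + 1))

# A hypothetical smooth "R-wave-like" pulse, sampled at a low 50 Hz rate ...
def true(t):
    return math.exp(-((t - 0.5) / 0.06) ** 2)

lo_t = [k / 50 for k in range(51)]
lo_y = [true(t) for t in lo_t]

# ... then upsampled to 500 Hz: cubic interpolation vs. sample-and-hold.
hi_t = [k / 500 for k in range(501)]
err_cubic = max(abs(catmull_rom(lo_t, lo_y, t) - true(t)) for t in hi_t)
err_hold = max(abs(lo_y[min(int(t * 50), 50)] - true(t)) for t in hi_t)
print(f"max upsampling error: hold {err_hold:.3f}, cubic {err_cubic:.4f}")
```

In practice one would call `scipy.interpolate.PchipInterpolator` or `CubicSpline`; the point of the sketch is only that a smooth interpolant recovers the between-sample waveform far better than holding the last sample.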

  17. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
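A closely related iterative scheme, the classical EM algorithm for a two-component normal mixture with known unit variances, can be sketched as follows. Note this is not the paper's deflected-gradient procedure, and the sample here is fully unidentified (no labeled subsample):

```python
import math
import random

random.seed(3)

# Unidentified sample from a two-component normal mixture, unit variances.
data = [random.gauss(-2, 1) for _ in range(400)] + \
       [random.gauss(3, 1) for _ in range(600)]

def em_two_normals(x, iters=50):
    """EM for a 2-component normal mixture with known unit variances:
    estimates the mixing weight p and the two means m1, m2."""
    p, m1, m2 = 0.5, min(x), max(x)  # crude initialisation
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = []
        for xi in x:
            a = p * math.exp(-0.5 * (xi - m1) ** 2)
            b = (1 - p) * math.exp(-0.5 * (xi - m2) ** 2)
            r.append(a / (a + b))
        # M-step: re-estimate the weight and the component means.
        s = sum(r)
        p = s / len(x)
        m1 = sum(ri * xi for ri, xi in zip(r, x)) / s
        m2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - s)
    return p, m1, m2

p, m1, m2 = em_two_normals(data)
print(f"p = {p:.2f}, means = {m1:.2f}, {m2:.2f}")
```

A partially identified sample, as in the paper, would simply fix the responsibilities of the labeled observations to 0 or 1 in the E-step.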

  18. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
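The dependence of MCP area on the number of locations is easy to reproduce: since the convex hull of a subset lies inside the hull of the full set, MCP area can only grow as locations are added. A sketch with simulated fixes (assumed coordinates, not the Kenai data):

```python
import random

random.seed(4)

def convex_hull(pts):
    """Andrew's monotone chain convex hull."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula."""
    if len(poly) < 3:
        return 0.0
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] -
            poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2

# Hypothetical GPS fixes (km) for one animal: MCP area grows with sample size.
fixes = [(random.gauss(0, 5), random.gauss(0, 5)) for _ in range(400)]
for n in (15, 60, 400):
    a = polygon_area(convex_hull(fixes[:n]))
    print(f"{n:3d} locations: MCP area {a:7.1f} km^2")
```

This monotone growth is exactly why MCP estimates from a dozen aerial fixes per bear understate the areas obtained from hundreds of GPS fixes.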

  19. Estimating species–area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  20. Fiber optics frequency comb enabled linear optical sampling with operation wavelength range extension.

    Science.gov (United States)

    Liao, Ruolin; Wu, Zhichao; Fu, Songnian; Zhu, Shengnan; Yu, Zhe; Tang, Ming; Liu, Deming

    2018-02-01

    Although the linear optical sampling (LOS) technique is powerful enough to characterize various advanced modulation formats with high symbol rates, the central wavelength of a pulsed local oscillator (LO) needs to be carefully set according to that of the signal under test, due to the coherent mixing operation. Here, we experimentally demonstrate wideband LOS enabled by a fiber optics frequency comb (FOFC). Meanwhile, when the broadband FOFC acts as the pulsed LO, we propose a scheme to mitigate the enhanced sampling error arising from the non-ideal response of a balanced photodetector. Finally, precise characterization of arbitrary 128 Gbps PDM-QPSK wavelength channels from 1550 to 1570 nm is successfully achieved, using a 101.3 MHz frequency-spaced comb with a 3 dB spectral power ripple of 20 nm.

  1. Measuring saccade peak velocity using a low-frequency sampling rate of 50 Hz.

    Science.gov (United States)

    Wierts, Roel; Janssen, Maurice J A; Kingma, Herman

    2008-12-01

    During the last decades, small head-mounted video eye trackers have been developed in order to record eye movements. Real-time systems, with a low sampling frequency of 50/60 Hz, are used for clinical vestibular practice, but are generally considered unsuited for measuring fast eye movements. In this paper, it is shown that saccadic eye movements with an amplitude of at least 5 degrees can, to a good approximation, be considered band-limited up to a frequency of 25-30 Hz. Using the Nyquist theorem to reconstruct saccadic eye movement signals at higher temporal resolutions, it is shown that accurate values for saccade peak velocities recorded at 50 Hz can be obtained, but saccade peak accelerations and decelerations cannot. In conclusion, video eye trackers sampling at 50/60 Hz are appropriate for detecting the clinically relevant saccade peak velocities, in contrast to what has been stated up till now.
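The reconstruction argument can be illustrated with Whittaker-Shannon (sinc) interpolation of a band-limited, saccade-like velocity pulse sampled at 50 Hz; all waveform parameters below are assumed for illustration:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon (sinc) reconstruction from uniform samples at rate fs."""
    return sum(s * sinc(t * fs - k) for k, s in enumerate(samples))

# A saccade-like velocity profile (assumed numbers): a smooth ~50 ms pulse
# peaking at 200 deg/s, whose spectrum lies well below the 25 Hz Nyquist
# frequency of a 50 Hz eye tracker.
def vel(t):
    return 200.0 * math.exp(-((t - 0.513) / 0.03) ** 2)

fs = 50.0
samples = [vel(k / fs) for k in range(51)]  # one second of 50 Hz data

raw_peak = max(samples)  # naive peak velocity read off the raw samples
rec_peak = max(reconstruct(samples, fs, 0.4 + k * 0.0002) for k in range(1000))
print(f"true 200.0, raw-sample peak {raw_peak:.1f}, reconstructed {rec_peak:.1f}")
```

Because the true peak generally falls between sampling instants, the raw samples underestimate peak velocity, while the band-limited reconstruction recovers it almost exactly, which is the paper's central point.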

  2. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

In this paper the authors discuss the problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  3. The T-lock: automated compensation of radio-frequency induced sample heating

    International Nuclear Information System (INIS)

    Hiller, Sebastian; Arthanari, Haribabu; Wagner, Gerhard

    2009-01-01

Modern high-field NMR spectrometers can stabilize the nominal sample temperature at a precision of less than 0.1 K. However, the actual sample temperature may differ from the nominal value by several degrees because the sample heating caused by high-power radio frequency pulses is not readily detected by the temperature sensors. Without correction, transfer of chemical shifts between different experiments causes problems in the data analysis. In principle, the temperature differences can be corrected by manual procedures, but this is cumbersome and not fully reliable. Here, we introduce the concept of a 'T-lock', which automatically maintains the sample at the same reference temperature over the course of different NMR experiments. The T-lock works by continuously measuring the resonance frequency of a suitable spin and simultaneously adjusting the temperature control, thus locking the sample temperature at the reference value. For three different nuclei, 13C, 17O and 31P in the compounds alanine, water, and phosphate, respectively, the T-lock accuracy was found to be <0.1 K. The use of dummy scan periods with variable lengths allows a reliable establishment of thermal equilibrium before the acquisition of an experiment starts.
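The feedback idea behind the T-lock can be sketched as a simple control loop. The plant model, frequency-temperature coefficient, and gain below are all hypothetical, not the authors' implementation:

```python
def t_lock_step(f_meas, f_ref, setpoint, coeff_hz_per_k, gain=0.5):
    """One T-lock iteration: convert the resonance-frequency error into an
    apparent temperature error and move the regulation setpoint against it."""
    t_error = (f_meas - f_ref) / coeff_hz_per_k  # apparent temperature error (K)
    return setpoint - gain * t_error

# Toy plant: the actual sample temperature follows the setpoint plus a
# constant RF-heating offset that the sensors do not see; the lock
# resonance depends linearly on temperature (all numbers illustrative).
COEFF = -5.0      # Hz/K, hypothetical slope of the lock resonance
F_REF = 1000.0    # Hz, resonance frequency at the reference temperature
T_REF = 298.0     # K

rf_heating = 2.5  # K of unnoticed heating from high-power pulses
setpoint = T_REF
for _ in range(30):
    t_actual = setpoint + rf_heating
    f_meas = F_REF + COEFF * (t_actual - T_REF)
    setpoint = t_lock_step(f_meas, F_REF, setpoint, COEFF)

# After convergence the actual temperature sits at T_REF: the loop has
# "locked" the sample temperature despite the invisible RF heating.
```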

  4. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
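The core projection idea, before wavelets, trends, and CARMA noise enter, can be sketched as a least-squares fit of a single sinusoid on an irregular time grid. This is only the Lomb-Scargle-flavoured building block, not the WAVEPAL scalogram itself:

```python
import numpy as np

def ls_amplitude(t, x, f):
    """Least-squares amplitude of a sinusoid of frequency f fitted on an
    irregular time grid (no trend model, no CARMA noise model here)."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return float(np.hypot(*coef))

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))  # irregular sampling times
x = 2.0 * np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(t.size)

amp_on = ls_amplitude(t, x, 0.10)   # at the true frequency: ~2
amp_off = ls_amplitude(t, x, 0.37)  # away from it: small leakage + noise
```

Scanning `f` over a grid yields a periodogram-like amplitude estimate; the paper's scalogram localizes the same least-squares idea in time with a Morlet wavelet.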

  5. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimations: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking that a given timing pick-off method maintains constant timing resolution regardless of source location. Lastly, a performance comparison of several digital timing methods is also shown.
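Band-limited interpolation by spectral zero-padding, combined with a leading-edge threshold pick-off, can be sketched as follows. The Gaussian test pulse and the rates are illustrative stand-ins, not the LSO pulse shape or digitizer used in the study:

```python
import numpy as np

def fft_upsample(x, factor):
    """Band-limited interpolation by zero-padding the spectrum;
    valid when x was sampled above its Nyquist rate."""
    n = x.size
    X = np.fft.rfft(x)
    Xp = np.zeros(n * factor // 2 + 1, dtype=complex)
    Xp[:X.size] = X
    return np.fft.irfft(Xp, n * factor) * factor  # rescale for the longer IFFT

def threshold_time(x, dt, level):
    """Leading-edge pick-off: first crossing of `level`, refined by
    linear interpolation between the two bracketing samples."""
    i = int(np.argmax(x >= level))
    frac = (level - x[i - 1]) / (x[i] - x[i - 1])
    return (i - 1 + frac) * dt

# Hypothetical band-limited detector pulse (times in ns)
dt, sigma, t0 = 5.0, 10.0, 103.7
t = np.arange(256) * dt
pulse = np.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2))

t_coarse = threshold_time(pulse, dt, 0.5)                  # raw samples
t_fine = threshold_time(fft_upsample(pulse, 8), dt / 8, 0.5)  # 8x upsampled
t_true = t0 - sigma * np.sqrt(2.0 * np.log(2.0))           # exact half-max time
```

With the pulse sampled above its Nyquist rate, the interpolated pick-off recovers the crossing time far more accurately than linear interpolation on the raw samples.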

  6. Flux pinning characteristics in cylindrical niobium samples used for superconducting radio frequency cavity fabrication

    Science.gov (United States)

    Dhavale, Asavari S.; Dhakal, Pashupati; Polyanskii, Anatolii A.; Ciovati, Gianluigi

    2012-06-01

    We present the results from DC magnetization and penetration depth measurements of cylindrical bulk large-grain (LG) and fine-grain (FG) niobium samples used for the fabrication of superconducting radio frequency (SRF) cavities. The surface treatment consisted of electropolishing and low-temperature baking as they are typically applied to SRF cavities. The magnetization data are analyzed using a modified critical state model. The critical current density Jc and pinning force Fp are calculated from the magnetization data and their temperature dependence and field dependence are presented. The LG samples have lower critical current density and pinning force density compared to FG samples, favorable to lower flux trapping efficiency. This effect may explain the lower values of residual resistance often observed in LG cavities than FG cavities.

  7. Flux pinning characteristics in cylindrical niobium samples used for superconducting radio frequency cavity fabrication

    International Nuclear Information System (INIS)

    Dhavale, Asavari S; Dhakal, Pashupati; Ciovati, Gianluigi; Polyanskii, Anatolii A

    2012-01-01

We present the results from DC magnetization and penetration depth measurements of cylindrical bulk large-grain (LG) and fine-grain (FG) niobium samples used for the fabrication of superconducting radio frequency (SRF) cavities. The surface treatment consisted of electropolishing and low-temperature baking as they are typically applied to SRF cavities. The magnetization data are analyzed using a modified critical state model. The critical current density Jc and pinning force Fp are calculated from the magnetization data and their temperature dependence and field dependence are presented. The LG samples have lower critical current density and pinning force density compared to FG samples, favorable to lower flux trapping efficiency. This effect may explain the lower values of residual resistance often observed in LG cavities than FG cavities. (paper)

  8. High frequency of parvovirus B19 DNA in bone marrow samples from rheumatic patients

    DEFF Research Database (Denmark)

    Lundqvist, Anders; Isa, Adiba; Tolfvenstam, Thomas

    2005-01-01

    BACKGROUND: Human parvovirus B19 (B19) polymerase chain reaction (PCR) is now a routine analysis and serves as a diagnostic marker as well as a complement or alternative to B19 serology. The clinical significance of a positive B19 DNA finding is however dependent on the type of tissue or body fluid...... analysed and of the immune status of the patient. OBJECTIVES: To analyse the clinical significance of B19 DNA positivity in bone marrow samples from rheumatic patients. STUDY DESIGN: Parvovirus B19 DNA was analysed in paired bone marrow and serum samples by nested PCR technique. Serum was also analysed...... negative group. A high frequency of parvovirus B19 DNA was thus detected in bone marrow samples in rheumatic patients. The clinical data does not support a direct association between B19 PCR positivity and rheumatic disease manifestation. Therefore, the clinical significance of B19 DNA positivity in bone...

  9. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating a single test flight of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
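The resampling half of the approach can be sketched with a plain percentile bootstrap on a small sample; the grey-model GM(1,1) half of GBM and the frequency-varying treatment are omitted, and the data are made up:

```python
import numpy as np

def bootstrap_interval(x, stat=np.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for a statistic of a small sample:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    stats = np.array([stat(rng.choice(x, size=x.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1.0 - alpha / 2])

# Made-up small sample, e.g. peak vibration levels from repeated runs
x = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4])
lo, hi = bootstrap_interval(x)  # 95% interval for the mean
```

The bootstrap sidesteps the large-sample and known-distribution requirements of EVEM/STM/ISTM; GBM additionally feeds the resampled series through a grey prediction model to track the frequency-varying behaviour.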

  10. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
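For intuition, the simplest special case of the expected SFS, a single neutral population of constant size, can be computed directly: E[xi_i] = theta/i for i = 1, ..., n-1 derived alleles. This is the textbook formula, not momi's multi-population machinery:

```python
import numpy as np

def expected_sfs(n, theta):
    """Expected site frequency spectrum for a neutral, panmictic,
    constant-size population: E[xi_i] = theta / i, i = 1..n-1."""
    return theta / np.arange(1, n)

sfs = expected_sfs(10, theta=5.0)
# singletons dominate and the spectrum decays as 1/i; the expected total
# number of segregating sites is theta * sum_{i=1}^{n-1} 1/i
```

Complex demography (size changes, splits, migration) distorts this 1/i shape, which is exactly what SFS-based inference methods exploit.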

  11. Frequency, Antimicrobial Resistance and Genetic Diversity of Klebsiella pneumoniae in Food Samples.

    Directory of Open Access Journals (Sweden)

    Yumei Guo

Full Text Available This study aimed to assess the frequency of Klebsiella pneumoniae in food samples and to detect antibiotic resistance phenotypes, antimicrobial resistance genes and the molecular subtypes of the recovered isolates. A total of 998 food samples were collected, and 99 (9.9%) K. pneumoniae strains were isolated; the frequencies were 8.2% (4/49) in fresh raw seafood, 13.8% (26/188) in fresh raw chicken, 11.4% (34/297) in frozen raw food and 7.5% (35/464) in cooked food samples. Antimicrobial resistance was observed against 16 antimicrobials. The highest resistance rate was observed for ampicillin (92.3%), followed by tetracycline (31.3%), trimethoprim-sulfamethoxazole (18.2%), and chloramphenicol (10.1%). Two K. pneumoniae strains were identified as extended-spectrum β-lactamase (ESBL) producers; one strain had three beta-lactamase genes (blaSHV, blaCTX-M-1, and blaCTX-M-10) and one had only the blaSHV gene. Nineteen multidrug-resistant (MDR) strains were detected; the percentage of MDR strains in fresh raw chicken samples was significantly higher than in other sample types (P<0.05). Six of the 18 trimethoprim-sulfamethoxazole-resistant strains carried the folate pathway inhibitor gene (dhfr). Four isolates were screened by PCR for quinolone resistance genes; aac(6')-Ib-cr, qnrB, qnrA and qnrS were detected. In addition, gyrA gene mutations such as T247A (Ser83Ile), C248T (Ser83Phe), and A260C (Asp87Ala) and a parC C240T (Ser80Ile) mutation were identified. Five isolates were screened for aminoglycoside resistance genes; aacA4, aacC2, and aadA1 were detected. Pulsed-field gel electrophoresis-based subtyping identified 91 different patterns. Our results indicate that food, especially fresh raw chicken, is a reservoir of antimicrobial-resistant K. pneumoniae, and the potential health risks posed by such strains should not be underestimated. Our results demonstrated a high prevalence, antibiotic resistance rate and genetic diversity of K. pneumoniae in food in China. Improved

  12. Frequency, stability and differentiation of self-reported school fear and truancy in a community sample

    Directory of Open Access Journals (Sweden)

    Metzke Christa

    2008-07-01

Full Text Available Abstract. Background: Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data from adolescents. Methods: Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two time points, with average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comparisons included a control group without indicators of school fear or truancy. The three groups were compared across questionnaires measuring emotional and behavioural problems, life events, self-related cognitions, perceived parental behaviour, and perceived school environment. Results: The frequency of self-reported school fear decreased over time (6.9 vs. 3.6%) whereas there was an increase in truancy (5.0 vs. 18.4%). Subjects with school fear displayed a pattern of associated internalizing problems, and truants were characterized by associated delinquent behaviour. Among other associated psychosocial features, the distress arising from the perceived school environment in students with school fear is most noteworthy. Conclusion: These findings from a community study show that school fear and truancy are frequent and display different developmental trajectories. Furthermore, previous results based on smaller, selected clinical samples are corroborated, indicating that the two groups display distinct types of school-related behaviour.

  13. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.
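The first-order effect being compensated can be sketched as follows: with a relative sampling-frequency offset delta, subcarrier k of OFDM symbol m accumulates a phase rotation proportional to k, m, and delta, which is removed by the conjugate rotation. The FFT size, cyclic prefix, and 50 ppm offset below are illustrative, and the sketch ignores the ISI term the paper also addresses:

```python
import numpy as np

N, N_cp = 64, 16  # FFT size and cyclic prefix length (illustrative)
delta = 50e-6     # 50 ppm sampling frequency offset

def sfo_phase(k, m):
    """First-order SFO phase rotation of subcarrier k at OFDM symbol m:
    2*pi * k * delta * m * (N + N_cp) / N."""
    return 2 * np.pi * k * delta * m * (N + N_cp) / N

rng = np.random.default_rng(7)
qpsk = (rng.choice([1.0, -1.0], (20, N))
        + 1j * rng.choice([1.0, -1.0], (20, N))) / np.sqrt(2)

k = np.arange(N)
m = np.arange(20)[:, None]
received = qpsk * np.exp(1j * sfo_phase(k, m))         # rotated by the SFO
corrected = received * np.exp(-1j * sfo_phase(k, m))   # conjugate rotation
```

In the proposed scheme this rotation is applied centrally at the OLT (via phase rotation modulation and length-adaptive frames) so that the ONUs need no SFO processing of their own.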

  14. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  15. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    International Nuclear Information System (INIS)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S.

    2010-01-01

The performance of superconducting radio frequency niobium cavities is sometimes limited by contamination present on the cavity surface. In recent years, extensive research has been done to enhance cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcoholic rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, small quantities of sulfur and fluorine were found in secondary ion mass spectroscopy (SIMS) analysis. In this article, the authors present analyses of surface contamination for a series of EPed niobium samples located at various positions of a single-cell niobium cavity followed by ultrapure water rinsing, as well as our endeavor to understand the aging effect of the EP acid solution in terms of the contamination present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  16. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    Energy Technology Data Exchange (ETDEWEB)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S. [GUAS, Tsukuba, Ibaraki 305-0801 (Japan); KEK, Tsukuba, Ibaraki 305-0801 (Japan); KAKEN Inc., Hokota, Ibaraki 311-1416 (Japan); GUAS, Tsukuba, Ibaraki 305-0801 (Japan) and KEK, Tsukuba, Ibaraki 305-0801 (Japan)

    2010-07-15

The performance of superconducting radio frequency niobium cavities is sometimes limited by contamination present on the cavity surface. In recent years, extensive research has been done to enhance cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcoholic rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, small quantities of sulfur and fluorine were found in secondary ion mass spectroscopy (SIMS) analysis. In this article, the authors present analyses of surface contamination for a series of EPed niobium samples located at various positions of a single-cell niobium cavity followed by ultrapure water rinsing, as well as our endeavor to understand the aging effect of the EP acid solution in terms of the contamination present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  17. Characteristics of selected-frequency luminescence for samples collected in deserts north of Beijing

    International Nuclear Information System (INIS)

    Li Dongxu; Wei Mingjian; Wang Junping; Pan Baolin; Zhao Shiyuan; Liu Zhaowen

    2009-01-01

Surface sand samples were collected at eight sites in the Horqin and Otindag deserts, located north of Beijing. A BG2003 luminescence spectrograph was used to analyze the emitted photons, and characteristic spectra of the selected-frequency luminescence were obtained. High intensities of emitted photons were found under thermal stimulation from 85 degree C to 135 degree C and from 350 degree C to 400 degree C, belonging to traps of 4.13 eV (300 nm), 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm); the emitted photons belonging to traps of 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm) were stimulated by green laser. Sand samples from all eight sites respond to increases in a definite radiation dose at each wavelength, which provides a radiation dosimetry basis for dating. Their characteristic spectra show definite regional features. (authors)

  18. Frequency and antimicrobial susceptibility of acinetobacter species isolated from blood samples of paediatric patients

    International Nuclear Information System (INIS)

    Javed, A.; Zafar, A.; Ejaz, H.; Zubair, M.

    2012-01-01

Objective: Acinetobacter species is a major nosocomial pathogen causing serious infections in immuno-compromised and hospitalized patients. The aim of this study was to determine the frequency and antimicrobial susceptibility pattern of Acinetobacter species in blood samples of paediatric patients. Methodology: This cross-sectional observational study was conducted from January to October, 2011 at The Children's Hospital and Institute of Child Health, Lahore. A total of 12,032 blood samples were analysed during the study period. Acinetobacter species were isolated and tested for antimicrobial susceptibility by the Kirby-Bauer disc diffusion method. Results: The blood cultures showed growth in 1,141 cases, out of which 46 (4.0%) were Acinetobacter species. The gender distribution of Acinetobacter species was 29 (63.0%) in males and 17 (37.0%) in females. A good antimicrobial susceptibility pattern of Acinetobacter species was seen with sulbactam-cefoperazone (93.0%) and imipenem and meropenem (82.6%), whereas susceptibility to cephalosporins (30.4%) was poor. Conclusion: The results of the present study show a high rate of resistance of Acinetobacter species to cephalosporins in nosocomial infections. Sulbactam-cefoperazone, carbapenems and piperacillin-tazobactam showed effective antimicrobial susceptibility against Acinetobacter species. (author)

  19. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) method with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences repeatedly, ten times. The ten landslide susceptibility maps were obtained from the integration of causative factors with assigned FR scores. The landslide susceptibility maps were validated by using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, i.e., consistently above 89%. Moreover, the ten-times iterative FR modeling may contribute to a better understanding of a regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can also be extended to other areas.
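The FR score itself is a simple ratio of densities: the share of landslide cells falling in a factor class divided by the share of all cells in that class. A minimal sketch on a toy raster (the slope classes and landslide mask are hypothetical):

```python
import numpy as np

def frequency_ratio(cls, landslide):
    """FR score per class of a causative factor:
    (landslide cells in class / all landslide cells)
    / (cells in class / all cells).
    FR > 1 marks classes where landslides are over-represented."""
    fr = {}
    for c in np.unique(cls):
        in_c = cls == c
        fr[int(c)] = ((landslide[in_c].sum() / landslide.sum())
                      / (in_c.sum() / cls.size))
    return fr

# Toy raster flattened to 1-D: slope class 2 holds most landslide cells
slope_class = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
landslide = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1])

fr = frequency_ratio(slope_class, landslide)
```

Summing the per-factor FR scores cell by cell yields the susceptibility index; the paper repeats this with ten random training/validation splits to stabilize the scores.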

  20. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L_[OII]/W) > ~35 [or radio luminosity log10(L_151/(W/Hz.sr)) > ~26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta_trans of approximately 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta_trans and/or a gradual increase in the fraction of lightly-reddened quasars with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  1. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from recordings of a large population of retinal neurons, we show how our algorithm outperforms the steepest descent method.
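The moment-matching dynamics being analyzed can be sketched on a toy problem. The sketch below uses exact enumeration of a 3-spin Ising model and plain gradient ascent, i.e. the steepest-descent baseline, not the paper's rectified, Gibbs-sampled dynamics:

```python
import numpy as np
from itertools import product

# All 8 states of 3 binary (+/-1) units
states = np.array(list(product([-1, 1], repeat=3)), dtype=float)

def model_moments(h, J):
    """Exact first and second moments of the pairwise maxent (Ising)
    model p(s) ~ exp(h.s + 0.5*s'Js), by enumerating the 8 states."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E)
    p /= p.sum()
    m = p @ states                                   # <s_i>
    C = np.einsum('s,si,sj->ij', p, states, states)  # <s_i s_j>
    return m, C

# Synthetic "data" moments generated from a known target model
h_true = np.array([0.3, -0.2, 0.1])
J_true = np.array([[0.0, 0.5, -0.3],
                   [0.5, 0.0, 0.2],
                   [-0.3, 0.2, 0.0]])
m_data, C_data = model_moments(h_true, J_true)

# Gradient ascent on the log-likelihood: the update direction is simply
# (data moments - model moments); this is the slow baseline the paper
# improves on by rectifying the parameter space.
h = np.zeros(3)
J = np.zeros((3, 3))
for _ in range(10000):
    m, C = model_moments(h, J)
    h += 0.1 * (m_data - m)
    J += 0.1 * (C_data - C)
    np.fill_diagonal(J, 0.0)  # s_i^2 = 1, so diagonal terms are constants
```

With exact moments the concave log-likelihood has a unique optimum, so the loop recovers the generating parameters; with finite data and Gibbs sampling, the same dynamics instead fluctuates around the optimum, which is the regime the paper studies.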

  2. Maternal obesity alters immune cell frequencies and responses in umbilical cord blood samples.

    Science.gov (United States)

    Wilson, Randall M; Marshall, Nicole E; Jeske, Daniel R; Purnell, Jonathan Q; Thornburg, Kent; Messaoudi, Ilhem

    2015-06-01

    Maternal obesity is one of the several key factors thought to modulate neonatal immune system development. Data from murine studies demonstrate worse outcomes in models of infection, autoimmunity, and allergic sensitization in offspring of obese dams. In humans, children born to obese mothers are at increased risk for asthma. These findings suggest a dysregulation of immune function in the children of obese mothers; however, the underlying mechanisms remain poorly understood. The aim of this study was to examine the relationship between maternal body weight and the human neonatal immune system. Umbilical cord blood samples were collected from infants born to lean, overweight, and obese mothers. Frequency and function of major innate and adaptive immune cell populations were quantified using flow cytometry and multiplex analysis of circulating factors. Compared to babies born to lean mothers, babies of obese mothers had fewer eosinophils and CD4 T helper cells, reduced monocyte and dendritic cell responses to Toll-like receptor ligands, and increased plasma levels of IFN-α2 and IL-6 in cord blood. These results support the hypothesis that maternal obesity influences programming of the neonatal immune system, providing a potential link to increased incidence of chronic inflammatory diseases such as asthma and cardiovascular disease in the offspring. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
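A discrete sketch of the switch-based random demodulator: chip the input with a pseudo-random +/-1 sequence, low-pass filter with an integrate-and-dump stage, sample at the low rate, then recover the sparse tone. The reconstruction below scans conjugate bin pairs by least squares rather than using a generic compressed sensing solver, and all sizes and rates are illustrative, not the prototype's:

```python
import numpy as np

N, M = 256, 16                      # Nyquist-rate grid and low-rate samples
rng = np.random.default_rng(3)
chips = rng.choice([-1.0, 1.0], N)  # the four-switch mixing sequence
F = np.fft.ifft(np.eye(N))          # x = F @ s for a sparse spectrum s
B = np.kron(np.eye(M), np.full(N // M, M / N))  # integrate-and-dump + decimate
A = B @ np.diag(chips) @ F          # overall measurement matrix

# Unknown tone: a unit-amplitude cosine on DFT bin 64 (a conjugate pair in s)
s_true = np.zeros(N, complex)
s_true[64] = s_true[N - 64] = N / 2
x = (F @ s_true).real               # Nyquist-rate waveform (never digitized)
y = B @ (chips * x)                 # what the low-rate ADC actually records

def recover_tone(A, y, N):
    """Scan candidate bins; for each conjugate pair (k, N-k), least-squares
    fit the two columns of A and keep the bin with the smallest residual."""
    best = (np.inf, None, None)
    for k in range(1, N // 2):
        Ak = A[:, [k, N - k]]
        coef, *_ = np.linalg.lstsq(Ak, y.astype(complex), rcond=None)
        resid = np.linalg.norm(y - Ak @ coef)
        if resid < best[0]:
            best = (resid, k, coef)
    return best[1], best[2]

k_hat, coef = recover_tone(A, y, N)  # recovers bin 64 and amplitude N/2
```

Because the chipping spreads every input frequency across the low-rate measurements, the 16 samples still pinpoint the tone's frequency and amplitude even though the ADC runs 16 times below the Nyquist-rate grid.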

  4. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measuring these properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measurement of the seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, these tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative to the conventional resonant bar test is presented. This technique uses only a small, jacketed rock or sediment core sample sandwiched between a pair of long metal extension bars with an attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly compared to that of the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, the extension- and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of the sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.
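    As a minimal illustration of the first step (measuring resonance frequency and attenuation), the sketch below extracts f0 and the quality factor Q from an amplitude response via the half-power (-3 dB) bandwidth method; attenuation is then 1/(2Q). The damped-oscillator response and its f0/Q values are assumed for the example, not taken from the paper.

```python
import numpy as np

# Synthetic resonance curve of a damped oscillator (assumed f0 and Q)
f = np.linspace(500.0, 1500.0, 20001)          # frequency axis, Hz
f0_true, Q_true = 1000.0, 50.0
amp = 1.0 / np.sqrt((1 - (f / f0_true) ** 2) ** 2 + (f / (f0_true * Q_true)) ** 2)

# Pick the peak, then find the half-power (-3 dB) bandwidth around it
i_pk = int(np.argmax(amp))
f0 = f[i_pk]
half = amp[i_pk] / np.sqrt(2.0)
above = np.where(amp >= half)[0]               # contiguous for a unimodal peak
df = f[above[-1]] - f[above[0]]                # half-power bandwidth

Q = f0 / df                                    # quality factor
attenuation = 1.0 / (2.0 * Q)                  # corresponding attenuation
```

For a high-Q resonance the half-power bandwidth is approximately f0/Q, so the recovered Q is close to the assumed value of 50.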

  5. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

    Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little is known about the factors that predict response. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and an increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.
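    The reported univariate comparison can be checked approximately from the summary statistics in the abstract: Welch's t-statistic computed from the means, SDs and group sizes for anal canal length gives |t| ≈ 2.7 at about 28 degrees of freedom, consistent with the reported P = 0.02. A stdlib-only sketch:

```python
import math

# Summary statistics as reported in the abstract (mean, SD, n)
m_nr, s_nr, n_nr = 4.53, 0.50, 14   # nonresponders, anal canal length (cm)
m_r,  s_r,  n_r  = 4.08, 0.56, 31   # responders

# Welch's t-statistic and Welch-Satterthwaite degrees of freedom
se2 = s_nr ** 2 / n_nr + s_r ** 2 / n_r
t = (m_nr - m_r) / math.sqrt(se2)
df = se2 ** 2 / ((s_nr ** 2 / n_nr) ** 2 / (n_nr - 1)
                 + (s_r ** 2 / n_r) ** 2 / (n_r - 1))
```

A two-sided p-value for t ≈ 2.7 at ~28 df is on the order of 0.01, in line with the P = 0.02 reported from the authors' own test.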

  6. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    Science.gov (United States)

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at a unique frequency, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then time-dependent signals consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration, encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve increased with extended acquisition times and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow an increased number of samples to be multiplexed.
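    The deconvolution step can be sketched numerically: two contributions pulsed at distinct frequencies are summed into one simulated ion current, and an FFT recovers each sample's relative concentration from the peak height at its pulsing frequency. The sampling rate, frequencies and concentrations below are illustrative assumptions, chosen so each tone falls on an exact FFT bin (no spectral leakage).

```python
import numpy as np

fs, T = 10.0, 400.0                      # sampling rate (Hz) and record length (s), assumed
t = np.arange(0.0, T, 1.0 / fs)
f_a, f_b = 0.05, 0.12                    # unique pulsing frequencies of samples A and B (Hz)
c_a, c_b = 3.0, 1.5                      # relative analyte concentrations (arbitrary units)

# Reconstructed ion current: sum of the two modulated contributions
ion = c_a * (1 + np.cos(2 * np.pi * f_a * t)) / 2 \
    + c_b * (1 + np.cos(2 * np.pi * f_b * t)) / 2

# Demultiplex: amplitude spectrum; the peak at each pulsing frequency
# encodes that sample's concentration (here c/2, the modulation depth)
spec = np.abs(np.fft.rfft(ion)) * 2.0 / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
a_hat = spec[np.argmin(np.abs(freqs - f_a))]
b_hat = spec[np.argmin(np.abs(freqs - f_b))]
```

The ratio a_hat/b_hat equals c_a/c_b, which is why relative quantitation survives mixing the samples into one stream.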

  7. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies of each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis testing and also for risk-based natural resource management.
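    The paper's method addresses the general finite diploid case; as a simpler large-population illustration of why roughly 30 diploid individuals are needed for ±0.05 precision, the sketch below computes a 95% Wilson score interval for a sample allele frequency. The counts (n = 30 individuals, hence 2n = 60 allele copies, of which 18 are assumed to carry the allele) are hypothetical.

```python
import math

def wilson_ci(k, m, z=1.96):
    """Wilson score interval for a proportion from k successes in m trials."""
    p = k / m
    denom = 1.0 + z * z / m
    centre = (p + z * z / (2 * m)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / m + z * z / (4 * m * m))
    return centre - half, centre + half

# 30 diploid individuals -> 60 allele copies; 18 copies carry the allele (p_hat = 0.3)
lo, hi = wilson_ci(18, 60)
```

The resulting interval is roughly (0.20, 0.43), a half-width well above 0.05, consistent with the abstract's point that samples of >30 individuals are often required for that precision.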

  8. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies of each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis testing and also for risk-based natural resource management.

  9. Suicide in bipolar disorder in a national English sample, 1996-2009: frequency, trends and characteristics.

    Science.gov (United States)

    Clements, C; Morriss, R; Jones, S; Peters, S; Roberts, C; Kapur, N

    2013-12-01

    Bipolar disorder (BD) has been reported to be associated with high risk of suicide. We aimed to investigate the frequency and characteristics of suicide in people with BD in a national sample. Suicide in BD in England from 1996 to 2009 was explored using descriptive statistics on data collected by the National Confidential Inquiry into Suicide and Homicide by People with Mental Illness (NCI). Suicide cases with a primary diagnosis of BD were compared to suicide cases with any other primary diagnosis. During the study period 1489 individuals with BD died by suicide, an average of 116 cases/year. Compared to other primary diagnosis suicides, those with BD were more likely to be female, more than 5 years post-diagnosis, current/recent in-patients, to have more than five in-patient admissions, and to have depressive symptoms. In BD suicides the most common co-morbid diagnoses were personality disorder and alcohol dependence. Approximately 40% were not prescribed mood stabilizers at the time of death. More than 60% of BD suicides were in contact with services the week prior to suicide but were assessed as low risk. Given the high rate of suicide in BD and the low estimates of risk, it is important that health professionals can accurately identify patients most likely to experience poor outcomes. Factors such as alcohol dependence/misuse, personality disorder, depressive illness and current/recent in-patient admission could characterize a high-risk group. Future studies need to operationalize clinically useful indicators of suicide risk in BD.

  10. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...
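    The setting can be illustrated with the simplest case: for high-frequency observations of dX_t = σ dW_t on a fixed interval [0, T], the realized quadratic variation of the increments consistently estimates σ²T, even though a drift would not be identified on a fixed interval. A minimal simulation sketch (all parameter values assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 100_000                # fixed time interval, high sampling frequency
dt = T / n
sigma_true = 0.8

# Euler scheme for dX_t = sigma dW_t (drift omitted: not identified on fixed T)
dW = rng.normal(0.0, np.sqrt(dt), size=n)
X = np.concatenate(([0.0], np.cumsum(sigma_true * dW)))

# Realized quadratic variation / T estimates sigma^2
sigma2_hat = np.sum(np.diff(X) ** 2) / T
```

The estimator's relative error shrinks like sqrt(2/n), so with n = 100,000 observations the estimate is within a fraction of a percent of σ² = 0.64.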

  11. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Tecnology (Indonesia); BMKG (Indonesia)

    2012-06-20

    A new approach to determining the magnitude using the displacement amplitude (A), epicenter distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-source earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (S waves) before the duration ends, so it is difficult to isolate the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log Δ + 0.69 log (t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitudes from this new approach are quite reliable and faster to compute, making them useful for early warning.
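    The fitted relation can be applied directly; the sketch below evaluates Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46 for hypothetical input values (the A, Δ and t used here are illustrative, not from the catalog).

```python
import math

def moment_magnitude(A, delta, t):
    """Mw from displacement amplitude A (m), epicenter distance delta (km),
    and high-frequency radiation duration t (s), per the fitted relation."""
    return (0.78 * math.log10(A)
            + 0.83 * math.log10(delta)
            + 0.69 * math.log10(t)
            + 6.46)

# Hypothetical event: 0.1 mm displacement at 100 km with a 5 s duration
mw = moment_magnitude(A=1e-4, delta=100.0, t=5.0)   # ~Mw 5.5
```

Because only three logarithms and a few multiplications are needed per event, the estimate is fast enough for early-warning use, as the abstract notes.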

  12. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    Science.gov (United States)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-01

    A new approach to determining the magnitude using the displacement amplitude (A), epicenter distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-source earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (S waves) before the duration ends, so it is difficult to isolate the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log Δ + 0.69 log (t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitudes from this new approach are quite reliable and faster to compute, making them useful for early warning.

  13. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    International Nuclear Information System (INIS)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-01-01

    A new approach to determining the magnitude using the displacement amplitude (A), epicenter distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-source earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (S waves) before the duration ends, so it is difficult to isolate the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log Δ + 0.69 log (t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitudes from this new approach are quite reliable and faster to compute, making them useful for early warning.

  14. Flaw-size measurement in a weld sample by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.

    1975-01-01

    An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique uses a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each of the flaws. From the two spectra, the size and orientation of the flaw are determined using an analytic model proposed earlier. (auth)

  15. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    International Nuclear Information System (INIS)

    Scogin, J. H.

    2016-01-01

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of TGA-MS analysis that reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  16. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    Energy Technology Data Exchange (ETDEWEB)

    Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-24

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of TGA-MS analysis that reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.

  17. Effects of diurnal emission patterns and sampling frequency on precision of measurement methods for daily ammonia emissions from animal houses

    NARCIS (Netherlands)

    Estelles, F.; Calvet, S.; Ogink, N.W.M.

    2010-01-01

    Ammonia concentrations and airflow rates are the main parameters needed to determine ammonia emissions from animal houses. It is possible to classify their measurement methods into two main groups according to the sampling frequency: semi-continuous and daily average measurements. In the first

  18. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  19. Partner wealth predicts self-reported orgasm frequency in a sample of Chinese women

    NARCIS (Netherlands)

    Pollet, T.V.; Nettle, D.

    There has been considerable speculation about the adaptive significance of the human female orgasm, with one hypothesis being that it promotes differential affiliation or conception with high-quality males. We investigated the relationship between women's self-reported orgasm frequency and the

  20. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/Cl2 Plasma

    International Nuclear Information System (INIS)

    Upadhyay, Janardan; Phillips, Larry; Valente, Anne-Marie

    2011-01-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk Niobium (Nb) removal rate and the surface roughness after the plasma etchings are equal to or better than wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve the purpose of diagnostic ports for the measurement of the plasma parameters and for the holding of the Nb sample to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside of this cavity. The niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  1. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/Cl2 Plasma

    Energy Technology Data Exchange (ETDEWEB)

    Janardan Upadhyay, Larry Phillips, Anne-Marie Valente

    2011-09-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk Niobium (Nb) removal rate and the surface roughness after the plasma etchings are equal to or better than wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve the purpose of diagnostic ports for the measurement of the plasma parameters and for the holding of the Nb sample to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside of this cavity. The niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  2. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    Science.gov (United States)

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

    Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2 days, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3⁻-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at the monthly sampling frequency. Yet, RC highly overestimated the loads for parameters that showed a dilution effect, such as NO3⁻-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) using a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3⁻-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise
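    Two of the compared estimators can be sketched on synthetic data: the global mean (GM) method multiplies the mean sampled concentration by the mean sampled flow and scales to a year, while the flow-weighted (FW) method scales the flow-weighted mean concentration by the full annual flow record. The flow/concentration model and monthly grab-sample schedule below are assumptions for illustration; with concentration rising with flow (as for suspended solids), GM tends to underestimate the true load, which is in the same direction as the negative bias reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic year of daily flow (m3/day) and concentration (g/m3);
# concentration increases with flow, as is typical for suspended solids
Q = np.exp(rng.normal(10.0, 0.8, size=365))
C = 0.002 * Q ** 0.5 * np.exp(rng.normal(0.0, 0.2, size=365))

true_load = np.sum(C * Q)                     # reference "true" annual load (g)

sampled = np.arange(0, 365, 30)               # monthly grab samples
Cs, Qs = C[sampled], Q[sampled]

gm_load = Cs.mean() * Qs.mean() * 365         # global mean (GM) method
fwmc = np.sum(Cs * Qs) / np.sum(Qs)           # flow-weighted mean concentration
fw_load = fwmc * np.sum(Q)                    # FW uses the continuous flow record

gm_bias = (gm_load - true_load) / true_load   # relative bias of each method
fw_bias = (fw_load - true_load) / true_load
```

Repeating this over many synthetic years and sampling schedules is how bias and RMSE comparisons of the kind reported above are built up.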

  3. Signature of a possible relationship between the maximum CME speed index and the critical frequencies of the F1 and F2 ionospheric layers: Data analysis for a mid-latitude ionospheric station during the solar cycles 23 and 24

    Science.gov (United States)

    Kilcik, Ali; Ozguc, Atila; Yiǧit, Erdal; Yurchyshyn, Vasyl; Donmez, Burcin

    2018-06-01

    We analyze temporal variations of two solar indices, the monthly mean Maximum CME Speed Index (MCMESI) and the International Sunspot Number (ISSN), as well as the monthly median ionospheric critical frequencies (foF1 and foF2) for the period 1996-2013, which covers the entire solar cycle 23 and the ascending branch of cycle 24. We found that the maxima of foF1 and foF2 occurred during the first and second maximum, respectively, of the ISSN solar activity index in solar cycle 23. We compared these data sets using cross-correlation and hysteresis analysis and found that both foF1 and foF2 show a higher correlation with ISSN than with the MCMESI during the investigated period, although when significance levels are considered, the correlation coefficients between the indices become comparable. Cross-correlation analysis showed that the agreement between these data sets (solar indices and ionospheric critical frequencies) is more pronounced during the ascending phases of solar cycles, while they display significant deviations during the descending phase. We conclude that there exists a signature of a possible relationship between the MCMESI and foF1 and foF2, which means that the MCMESI could be used as a possible indicator of solar and geomagnetic activity, even though further investigations are needed.
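    The cross-correlation comparison can be sketched generically: compute the Pearson correlation between two monthly series over a range of leads and lags and locate the lag that maximizes it. The toy series below (a noisy sinusoid copied with a 3-sample shift) is an assumption for illustration, not the MCMESI/foF2 data.

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """corr[k]: Pearson correlation between x[n] and y[n+k] (k >= 0: y lags x)."""
    out = {}
    n = len(x)
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = x[:n - k], y[k:]
        else:
            a, b = x[-k:], y[:n + k]
        out[k] = float(np.corrcoef(a, b)[0, 1])
    return out

# Toy monthly series: y reproduces x delayed by 3 samples, plus noise in x
rng = np.random.default_rng(7)
x = np.sin(np.linspace(0, 8 * np.pi, 216)) + 0.1 * rng.normal(size=216)
y = np.roll(x, 3)

corr = lagged_corr(x, y, max_lag=6)
best_lag = max(corr, key=lambda k: corr[k])   # lag at which y best matches x
```

The correlation peaks at lag 3, recovering the built-in delay; applied to solar and ionospheric indices, the same construction quantifies how tightly and with what delay the series track each other.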

  4. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    International Nuclear Information System (INIS)

    Bittel, R.; Mancel, J.

    1968-10-01

    The most important carriers of radioactive contamination to man are foodstuffs as a whole, not only ingested water or inhaled air. That is the reason why, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors permits determination of these maximum levels, or derivation of a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulas are considered in the same way, taking care to attach the greatest possible importance to local situations. (authors) [fr

  5. Frequency, stability and differentiation of self-reported school fear and truancy in a community sample

    OpenAIRE

    Steinhausen, Hans-Christoph; Müller, Nora; Metzke, Christa Winkler

    2008-01-01

    Abstract. Background: Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data of adolescents. Methods: Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two times, with average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comp...

  6. Allele Frequency Data for 17 Short Tandem Repeats in a Czech Population Sample

    Czech Academy of Sciences Publication Activity Database

    Šimková, H.; Faltus, Václav; Marván, Richard; Pexa, T.; Stenzl, V.; Brouček, J.; Hořínek, A.; Mazura, Ivan; Zvárová, Jana

    2009-01-01

    Vol. 4, No. 1 (2009), e15-e17 ISSN 1872-4973 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords: short tandem repeat (STR) * allelic frequency * PowerPlex 16 System * AmpflSTR Identifiler * population genetics * Czech Republic Subject RIV: EB - Genetics; Molecular Biology Impact factor: 2.421, year: 2009

  7. Measurement of flaw size in a weld sample by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Whaley, H.L. Jr.; Adler, L.; Cook, K.V.; McClung, R.W.

    1975-05-01

    An ultrasonic frequency analysis technique has been developed and applied to the measurement of flaws in an 8-in.-thick heavy-section steel specimen belonging to the Pressure Vessel Research Committee program. Using this technique, the flaws occurring in the weld area were characterized in quantitative terms of both dimension and orientation. Several modifications of the technique were made during the study, including the application of several transducers and consideration of ultrasonic mode conversion. (U.S.)

  8. Enhancing the Frequency Adaptability of Periodic Current Controllers with a Fixed Sampling Rate for Grid-Connected Power Converters

    DEFF Research Database (Denmark)

    Yang, Yongheng; Zhou, Keliang; Blaabjerg, Frede

    2016-01-01

Grid-connected power converters should employ advanced current controllers, e.g., Proportional Resonant (PR) and Repetitive Controllers (RC), in order to produce high-quality feed-in currents that are required to be synchronized with the grid. The synchronization is actually to detect the instantaneous grid information (e.g., frequency and phase of the grid voltage) for the current control, which is commonly performed by a Phase-Locked-Loop (PLL) system. Hence, harmonics and deviations in the estimated frequency by the PLL could lead to current tracking performance degradation, especially... of the resonant controllers and by approximating the fractional delay using a Lagrange interpolating polynomial for the RC, respectively, the frequency-variation-immunity of these periodic current controllers with a fixed sampling rate is improved. Experiments on a single-phase grid-connected system are presented...
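The abstract mentions approximating the repetitive controller's fractional delay with a Lagrange interpolating polynomial. As background (a sketch of the standard order-N Lagrange fractional-delay FIR design, not the paper's code; the sampling rate and grid frequency below are illustrative):

```python
import numpy as np

def lagrange_fd_coeffs(d, order):
    """FIR filter coefficients approximating a fractional delay of d samples
    via an order-N Lagrange interpolating polynomial:
        h[n] = prod_{i != n} (d - i) / (n - i),  n = 0..N."""
    h = np.ones(order + 1)
    for n in range(order + 1):
        for i in range(order + 1):
            if i != n:
                h[n] *= (d - i) / (n - i)
    return h

# Illustrative numbers: at a fixed 10 kHz sampling rate, a 49.95 Hz grid
# gives a fundamental period of 10000/49.95 ≈ 200.2 samples, i.e. an
# integer delay of 200 samples plus a 0.2-sample fractional part that
# this filter approximates.
h = lagrange_fd_coeffs(0.2, 3)
```

The coefficients always sum to 1 (unity DC gain), and for an integer delay d the filter reduces to a pure delay, which makes the design easy to sanity-check.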

  9. Frequency of Aggressive Behaviors in a Nationally Representative Sample of Iranian Children and Adolescents: The CASPIAN-IV Study.

    Science.gov (United States)

    Sadinejad, Morteza; Bahreynian, Maryam; Motlagh, Mohammad-Esmaeil; Qorbani, Mostafa; Movahhed, Mohsen; Ardalan, Gelayol; Heshmat, Ramin; Kelishadi, Roya

    2015-01-01

This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. This nationwide study was performed on a multi-stage sample of students aged 6-18 years, living in 30 provinces in Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying, and being bullied in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). In total, physical fighting was more prevalent among boys than girls (48% vs. 31%, P < 0.001). Being bullied and bullying other classmates also had a higher frequency among boys than girls (29% vs. 25%, P < 0.001 for being bullied; 20% vs. 14%, P < 0.001 for bullying others). Physical fighting was more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was more common among urban students (27% vs. 26%, P = 0.69). Although in this study the frequency of aggressive behaviors was lower than in many other populations, these findings emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression prevention programming are recommended.

  10. Frequency of single nucleotide polymorphisms of some immune response genes in a population sample from São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Léa Campos de Oliveira

    2011-09-01

Full Text Available Objective: To present the frequency of single nucleotide polymorphisms of a few immune response genes in a population sample from São Paulo City (SP), Brazil. Methods: Data on allele frequencies of known polymorphisms of innate and acquired immunity genes were presented, the majority with proven impact on gene function. Data were gathered from a sample of healthy individuals, non-HLA-identical siblings of bone marrow transplant recipients from the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, obtained between 1998 and 2005. The number of samples varied for each single nucleotide polymorphism analyzed by polymerase chain reaction followed by restriction enzyme cleavage. Results: Allele and genotype distributions of 41 different gene polymorphisms, mostly cytokines, but also including other immune response genes, were presented. Conclusion: We believe that the data presented here can be of great value for case-control studies, to define which polymorphisms are present in biologically relevant frequencies and to assess targets for therapeutic intervention in polygenic diseases with a component of immune and inflammatory responses.

  11. Surface Characterization of Nb Samples Electro-polished Together With Real Superconducting Radio-frequency Accelerator Cavities

    International Nuclear Information System (INIS)

    Zhao, Xin; Geng, Rong-Li; Tyagi, P.V.; Hayano, Hitoshi; Kato, Shigeki; Nishiwaki, Michiru; Saeki, Takayuki; Sawabe, Motoaki

    2010-01-01

    We report the results of surface characterizations of niobium (Nb) samples electropolished together with a single cell superconducting radio-frequency accelerator cavity. These witness samples were located in three regions of the cavity, namely at the equator, the iris and the beam-pipe. Auger electron spectroscopy (AES) was utilized to probe the chemical composition of the topmost four atomic layers. Scanning electron microscopy with energy dispersive X-ray for elemental analysis (SEM/EDX) was used to observe the surface topography and chemical composition at the micrometer scale. A few atomic layers of sulfur (S) were found covering the samples non-uniformly. Niobium oxide granules with a sharp geometry were observed on every sample. Some Nb-O granules appeared to also contain sulfur.

  12. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    Unknown

    1ARC-Animal Improvement Institute, Private Bag X5013, Stellenbosch 7599, South Africa; 2Department of Animal. Science, University of Stellenbosch, Stellenbosch, ... weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional ...

  13. Frequency of Haemophilus spp. in urinary and and genital tract samples

    Directory of Open Access Journals (Sweden)

    Tatjana Marijan,

    2010-02-01

Full Text Available Aim To determine the prevalence and antibiotic susceptibility of Haemophilus influenzae and H. parainfluenzae isolated from the urinary and genital tracts. Methods Identification of Haemophilus spp. strains was carried out using the API NH identification system, and antibiotic susceptibility testing was performed by the Kirby-Bauer disk diffusion method. Results A total of 50 (0.03%) H. influenzae and 14 (0.01%) H. parainfluenzae strains (out of 180,415 samples) were isolated from the genitourinary tract. From urine samples of girls under 15 years of age these bacteria were isolated in 13 (0.88%) and two (0.13%) cases, respectively, and in only one case (0.11%) of UTI in boys (H. influenzae). In persons of fertile age, only H. influenzae was found in urine samples, from five women (0.04%) and three men (0.22%). As a cause of vulvovaginitis, H. influenzae was isolated in four (5.63%) and H. parainfluenzae in two (2.82%) girls. In persons of fertile age, H. influenzae was isolated from 10 (0.49%) cervical smears and from nine (1.74%) male samples; H. parainfluenzae was isolated from seven (1.36%) male samples (p<0.01). Susceptibility testing of H. influenzae and H. parainfluenzae revealed that both pathogens were significantly resistant only to cotrimoxazole (26.0% and 42.9%, respectively). Conclusion In the etiology of genitourinary infections of girls during childhood, of genital infections of women of fertile age (especially pregnant women), and of epididymitis and/or orchitis in men, it is important to consider this rare bacterium, which is demanding in terms of cultivation.

  14. Frequency of isolation of Campylobacter from roasted chicken samples from Mexico City.

    Science.gov (United States)

    Quiñones-Ramírez, E I; Vázquez-Salinas, C; Rodas-Suárez, O R; Ramos-Flores, M O; Rodríguez-Montaño, R

    2000-01-01

    The presence of Campylobacter spp. was investigated in 100 samples of roasted chicken tacos sold in well-established commercial outlets and semisettled street stands in Mexico City. From 600 colonies displaying Campylobacter morphology only 123 isolates were positive. From these isolates, 51 (41%) were identified as C. jejuni, 23 (19%) as C. coli, and 49 (40%) as other species of this genus. All of the 27 positive samples came from one location where handling practices allowed cross-contamination of the cooked product. The results indicate that these ready-to-consume products are contaminated with these bacteria, representing a potential risk for consumers, especially in establishments lacking adequate sanitary measures to prevent cross-contamination.

  15. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps and accounts for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first- and second-order limit theory from the usual... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB to be sufficiently general to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory.... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT- and LDWB-aided local...
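The locally dependent wild bootstrap is specific to this paper, but the underlying wild-bootstrap idea it builds on can be illustrated in a few lines: resample centered observations with i.i.d. Rademacher multipliers, so each draw preserves every observation's own magnitude. A minimal sketch (a plain wild bootstrap for a sample mean, not the paper's block-dependent scheme; all names and numbers are illustrative):

```python
import numpy as np

def wild_bootstrap_mean(x, n_boot=2000, seed=0):
    """Plain wild bootstrap for the sample mean: multiply centered
    observations by i.i.d. Rademacher signs, which preserves each point's
    own scale (robust to a heteroskedastic 'stochastic scale')."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    centered = x - x.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_boot, x.size))
    # Each row is one bootstrap replicate of the (centered) sample mean.
    return x.mean(), (signs * centered).mean(axis=1)

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0 + np.linspace(0.0, 2.0, 200))  # heteroskedastic data
est, boot = wild_bootstrap_mean(x)
ci = est + np.quantile(boot, [0.025, 0.975])  # 95% bootstrap interval
```

The paper's LDWB additionally resamples in blocks to mimic the local dependence induced by sampling errors; the sign-multiplier mechanism is the shared core.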

  16. The frequency of sexual dysfunctions in male partners of women with vaginismus in a Turkish sample.

    Science.gov (United States)

    Dogan, S; Dogan, M

    2008-01-01

    The aim of this investigation is to determine the sexual history traits, sexual satisfaction level and frequency of sexual dysfunctions in men whose partners have vaginismus. The study included 32 male partners of vaginismic patients, who presented at a psychiatry department. Subjects were evaluated by a semi-structured questionnaire. The questionnaire was developed by researchers for assessing sexually dysfunctional patients and included detailed questions with regard to socio-demographic variables, general medical and sexual history. All participants also received the Golombok Rust Inventory of Sexual Satisfaction (GRISS). According to DSM-IV-TR criteria, 65.6% of the investigated males were diagnosed with one or more sexual dysfunctions. The most common problem was premature ejaculation (50%) and the second one was erectile dysfunction (28%). The transformed GRISS subscale scores provided similar data. It is concluded that the assessment of sexual functions of males who have vaginismic partners should be an integral part of the management procedure of vaginismus for optimal outcome.

  17. Frequency of hepatitis E and Hepatitis A virus in water sample collected from Faisalabad, Pakistan

    Directory of Open Access Journals (Sweden)

    Tahir Ahmad

    2015-12-01

Full Text Available Hepatitis E and Hepatitis A viruses are both highly prevalent in Pakistan, presenting mainly as sporadic disease. The aim of the current study was to isolate and characterize the specific genotype of Hepatitis E virus from water bodies of Faisalabad, Pakistan. Drinking and sewage water samples were qualitatively analyzed using RT-PCR. An HEV genotype 1 strain was recovered from sewage water of Faisalabad. The prevalence of HEV and HAV in sewage water suggests the possibility of a gradual decline in the level of protection from the vaccine circulating in the Pakistani population.

  18. Evaluation of the Problem Behavior Frequency Scale-Teacher Report Form for Assessing Behavior in a Sample of Urban Adolescents.

    Science.gov (United States)

    Farrell, Albert D; Goncy, Elizabeth A; Sullivan, Terri N; Thompson, Erin L

    2018-02-01

This study evaluated the structure and validity of the Problem Behavior Frequency Scale-Teacher Report Form (PBFS-TR) for assessing students' frequency of specific forms of aggression and victimization, and positive behavior. Analyses were conducted on two waves of data from 727 students from two urban middle schools (Sample 1) who were rated by their teachers on the PBFS-TR and the Social Skills Improvement System (SSIS), and on data collected from 1,740 students from three urban middle schools (Sample 2) for whom data on both the teacher and student report version of the PBFS were obtained. Confirmatory factor analyses supported first-order factors representing 3 forms of aggression (physical, verbal, and relational), 3 forms of victimization (physical, verbal and relational), and 2 forms of positive behavior (prosocial behavior and effective nonviolent behavior), and higher-order factors representing aggression, victimization, and positive behavior. Strong measurement invariance was established over gender, grade, intervention condition, and time. Support for convergent validity was found based on correlations between corresponding scales on the PBFS-TR and teacher ratings on the SSIS in Sample 1. Significant correlations were also found between teacher ratings on the PBFS-TR and student ratings of their behavior on the Problem Behavior Frequency Scale-Adolescent Report (PBFS-AR) and a measure of nonviolent behavioral intentions in Sample 2. Overall the findings provided support for the PBFS-TR and suggested that teachers can provide useful data on students' aggressive and prosocial behavior and victimization experiences within the school setting.

  19. Frequency of aggressive behaviors in a nationally representative sample of Iranian children and adolescents: The CASPIAN-IV study

    Directory of Open Access Journals (Sweden)

    Morteza Sadinejad

    2015-01-01

    Full Text Available Background: This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. Methods: This nationwide study was performed on a multi-stage sample of 6-18 years students, living in 30 provinces in Iran. Students were asked to confidentially report the frequency of aggressive behaviors including physical fighting, bullying and being bullied in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. Results: In this cross-sectional study, 13,486 students completed the study (90.6% participation rate; they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65. In total, physical fight was more prevalent among boys than girls (48% vs. 31%, P < 0.001. Higher rates of involvement in two other behaviors namely being bullied and bulling to other classmates had a higher frequency among boys compared to girls (29% vs. 25%, P < 0.001 for being bullied and (20% vs. 14%, P < 0.001 for bulling to others. Physical fighting was more prevalent among rural residents (40% vs. 39%, respectively, P = 0.61, while being bullied was more common among urban students (27% vs. 26%, respectively, P = 0.69. Conclusions: Although in this study the frequency of aggressive behaviors was lower than many other populations, still these findings emphasize on the importance of designing preventive interventions that target the students, especially in early adolescence, and to increase their awareness toward aggressive behaviors. Implications for future research and aggression prevention programming are recommended.

  20. Identification of hydrologic and geochemical pathways using high frequency sampling, REE aqueous sampling and soil characterization at Koiliaris Critical Zone Observatory, Crete

    Energy Technology Data Exchange (ETDEWEB)

    Moraetis, Daniel, E-mail: moraetis@mred.tuc.gr [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece); Stamati, Fotini; Kotronakis, Manolis; Fragia, Tasoula; Paranychnianakis, Nikolaos; Nikolaidis, Nikolaos P. [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece)

    2011-06-15

Highlights: > Identification of hydrological and geochemical pathways within a complex watershed. > Water increased N-NO3 concentration and E.C. values during flash flood events. > Soil degradation and impact on water infiltration within the Koiliaris watershed. > Analysis of Rare Earth Elements in water bodies for identification of karstic water. - Abstract: Koiliaris River watershed is a Critical Zone Observatory that represents severely degraded soils due to intensive agricultural activities and biophysical factors. It has typical Mediterranean soils under the imminent threat of desertification which is expected to intensify due to projected climate change. High frequency hydro-chemical monitoring with targeted sampling for Rare Earth Elements (REE) analysis of different water bodies and geochemical characterization of soils were used for the identification of hydrologic and geochemical pathways. The high frequency monitoring of water chemical data highlighted the chemical alterations of water in Koiliaris River during flash flood events. Soil physical and chemical characterization surveys were used to identify erodibility patterns within the watershed and the influence of soils on surface and ground water chemistry. The methodology presented can be used to identify the impacts of degraded soils to surface and ground water quality as well as in the design of methods to minimize the impacts of land use practices.

  1. Implication of the first decision on visual information-sampling in the spatial frequency domain in pulmonary nodule recognition

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan

    2010-02-01

Aim: To investigate the impact of the image-based properties of background locations, in dwelled regions where the first overt decision was made, on visual sampling strategy and pulmonary nodule recognition. Background: Recent studies in mammography show that the first overt decision (TP or FP) influences further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background at decision sites and the correctness of the first decision has been reported. Methods: Subjects with different levels of radiological experience were eye-tracked during detection of pulmonary nodules in PA chest radiographs. The number of outcomes and the overall quality of performance were analysed for cases where correct or incorrect decisions were made, applying JAFROC methodology. The spatial frequency properties of selected local backgrounds related to particular decisions were studied; ANOVA was used to compare the logarithmic values of the energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TPs as a first decision and the JAFROC score (r = 0.74), while the number of FPs as a first decision was negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles of outcomes depend on the correctness of the first choice.

  2. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    Science.gov (United States)

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time-domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution by its ability to analyze without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to severe apodization. NUS and CRAFT are thus complementary techniques to improve intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals its direct-observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.
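The IST step of the NUS-IST-CRAFT chain can be sketched with a toy 1D example. The scheme below is illustrative only (not the authors' implementation; the signal parameters and decay schedule are invented): it alternates an FFT, soft thresholding of small spectral amplitudes with a threshold lowered each pass, and re-insertion of the measured time-domain points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy t1 interferogram: two decaying complex sinusoids, 25% of points kept.
n = 256
t = np.arange(n)
fid = (np.exp(2j * np.pi * 0.11 * t) +
       0.5 * np.exp(2j * np.pi * 0.31 * t)) * np.exp(-t / 120)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 4, replace=False)] = True

def ist_reconstruct(fid, mask, n_iter=100, decay=0.95):
    """Iterative soft thresholding: shrink small spectral amplitudes,
    lower the threshold each pass, and restore the measured points."""
    x = np.where(mask, fid, 0)
    lam0 = np.abs(np.fft.fft(x)).max()
    for it in range(n_iter):
        spec = np.fft.fft(x)
        lam = lam0 * decay ** it                 # threshold schedule
        mag = np.abs(spec)
        spec *= np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-30)
        x = np.fft.ifft(spec)
        x[mask] = fid[mask]                      # data-consistency step
    return np.fft.fft(x)

spectrum = ist_reconstruct(fid, mask)
```

The data-consistency step is what distinguishes this from plain thresholding: the measured quarter of the points is never altered, so the iteration only fills in the unmeasured gaps.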

  3. Differences in Orgasm Frequency Among Gay, Lesbian, Bisexual, and Heterosexual Men and Women in a U.S. National Sample.

    Science.gov (United States)

    Frederick, David A; John, H Kate St; Garcia, Justin R; Lloyd, Elisabeth A

    2018-01-01

    There is a notable gap between heterosexual men and women in frequency of orgasm during sex. Little is known, however, about sexual orientation differences in orgasm frequency. We examined how over 30 different traits or behaviors were associated with frequency of orgasm when sexually intimate during the past month. We analyzed a large US sample of adults (N = 52,588) who identified as heterosexual men (n = 26,032), gay men (n = 452), bisexual men (n = 550), lesbian women (n = 340), bisexual women (n = 1112), and heterosexual women (n = 24,102). Heterosexual men were most likely to say they usually-always orgasmed when sexually intimate (95%), followed by gay men (89%), bisexual men (88%), lesbian women (86%), bisexual women (66%), and heterosexual women (65%). Compared to women who orgasmed less frequently, women who orgasmed more frequently were more likely to: receive more oral sex, have longer duration of last sex, be more satisfied with their relationship, ask for what they want in bed, praise their partner for something they did in bed, call/email to tease about doing something sexual, wear sexy lingerie, try new sexual positions, anal stimulation, act out fantasies, incorporate sexy talk, and express love during sex. Women were more likely to orgasm if their last sexual encounter included deep kissing, manual genital stimulation, and/or oral sex in addition to vaginal intercourse. We consider sociocultural and evolutionary explanations for these orgasm gaps. The results suggest a variety of behaviors couples can try to increase orgasm frequency.

  4. Evolution of concentration-discharge relations revealed by high frequency diurnal sampling of stream water during spring snowmelt

    Science.gov (United States)

    Olshansky, Y.; White, A. M.; Thompson, M.; Moravec, B. G.; McIntosh, J. C.; Chorover, J.

    2017-12-01

Concentration-discharge (C-Q) relations contain potentially important information on critical zone (CZ) processes, including weathering reactions, water flow paths, and nutrient export. To examine C-Q relations in a small (3.3 km2) headwater catchment, La Jara Creek, located in the Jemez River Basin Critical Zone Observatory, daily diurnal stream water samples were collected during the 2017 spring snowmelt from two flumes located at the outlet of La Jara Creek and at a high-elevation zero-order basin within this catchment. Previous studies from this site (McIntosh et al., 2017) suggested that high-frequency sampling was needed to improve interpretation of C-Q relations. The dense sampling covered two ascending and two descending limbs of the snowmelt hydrograph, from March 1 to May 15, 2017. While Na showed an inverse correlation (dilution) with discharge, most other solutes (K, Mg, Fe, Al, dissolved organic carbon) exhibited positive (concentration) or chemostatic trends (Ca, Mn, Si, dissolved inorganic carbon and dissolved nitrogen). Hysteresis in the C-Q relation was most pronounced for bio-cycled cations (K, Mg) and for Fe, which exhibited concentration during the first ascending limb followed by a chemostatic trend. A pulsed increase in Si concentration immediately after the first ascending limb in both flumes suggests mixing of deep groundwater with surface water. A continual increase in the Ge/Si ratio followed by a rapid decrease after the second rising limb may suggest a fast transition from soil water to groundwater dominating the stream flow. Fourier transform infrared spectroscopy of selected samples across the hydrograph demonstrated pronounced changes in dissolved organic matter molecular composition with the advancement of the spring snowmelt. X-ray micro-spectroscopy of colloidal material isolated from the collected water samples indicated a significant role for organic matter in the transport of inorganic colloids. Analyses of high
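A common way to quantify the dilution, chemostatic, and concentration trends described above (a standard diagnostic, not taken from this abstract) is the log-log slope b of the power law C = aQ^b: b near 0 is chemostatic, b < 0 indicates dilution (as for Na here), and b > 0 indicates concentration with discharge. A minimal sketch with invented numbers:

```python
import numpy as np

def cq_slope(conc, discharge):
    """Fit C = a * Q**b by linear regression in log-log space; return b."""
    b, log_a = np.polyfit(np.log(discharge), np.log(conc), 1)
    return b

q = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # discharge (arbitrary units)
na = 10.0 * q ** -0.4                      # dilution-type solute
k = 3.0 * q ** 0.25                        # concentration-type solute

b_na = cq_slope(na, q)   # recovers b = -0.4 (dilution)
b_k = cq_slope(k, q)     # recovers b = +0.25 (concentration)
```

On real records like the ones in this study, the scatter and hysteresis around the fitted line are themselves informative, which is why separate fits for rising and falling limbs are often reported.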

  5. Method of estimating maximum VOC concentration in void volume of vented waste drums using limited sampling data: Application in transuranic waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Connolly, M.J.

    1995-01-01

    A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model effectiveness in estimating VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste
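The material-balance idea in this record can be illustrated with the simplest possible steady-state sketch (hypothetical; the actual model, layer geometry, and parameters are in the paper): if the VOC flux across the innermost bag equals the flux escaping through the drum's filter vent, the inner-layer concentration follows from the measured headspace concentration and two lumped conductances.

```python
def inner_voc_concentration(c_headspace, k_vent, k_bag):
    """Steady state: k_bag * (C_inner - C_hs) = k_vent * C_hs
    =>  C_inner = C_hs * (1 + k_vent / k_bag).
    k_vent, k_bag: effective transport conductances of the drum vent and
    of the innermost confinement layer (illustrative lumped parameters)."""
    return c_headspace * (1.0 + k_vent / k_bag)

# Hypothetical numbers: 120 ppm measured in the drum headspace, with the
# vent three-and-a-half times more conductive than the inner bag would
# not be; the values below are invented for illustration.
c_inner = inner_voc_concentration(120.0, k_vent=0.05, k_bag=0.02)
```

The same series-resistance logic extends to drums with several nested layers by chaining one balance equation per void volume, which is the structure the abstract describes.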

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because such loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.

  7. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because such loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
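The correntropy objective these two records describe can be sketched for a linear predictor (an illustrative reimplementation, not the authors' algorithm; the gradient-ascent scheme and all hyperparameters are invented for the example). Maximizing the mean Gaussian-kernel similarity between predictions and labels down-weights outlying samples, unlike a squared loss that weights all samples equally:

```python
import numpy as np

def correntropy(err, sigma=1.0):
    """Empirical correntropy: mean Gaussian kernel of the errors.
    Gross errors contribute almost nothing, giving robustness to noisy labels."""
    return float(np.mean(np.exp(-err ** 2 / (2.0 * sigma ** 2))))

def fit_mcc_linear(X, y, lam=0.001, lr=0.5, n_iter=2000, sigma=1.0):
    """Gradient ascent on J(w) = mean exp(-(y - Xw)^2 / 2s^2) - lam * ||w||^2,
    i.e. a regularized MCC objective for a linear predictor (a sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        err = y - X @ w
        kernel_weight = np.exp(-err ** 2 / (2.0 * sigma ** 2))
        grad = X.T @ (kernel_weight * err) / (sigma ** 2 * len(y)) - 2.0 * lam * w
        w += lr * grad
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
y = X @ np.array([0.5, -0.3])
w = fit_mcc_linear(X, y)  # recovers approximately [0.5, -0.3] on clean data
```

Note how the gradient is a kernel-reweighted least-squares gradient: samples with large errors get weight exp(-err^2/2s^2) close to 0, which is exactly the mechanism that makes MCC-style learning tolerant of label noise.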

  8. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10^-3 g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.

  9. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10^-3 g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.
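The underestimation by grab sampling that these two records report can be reproduced with a toy record (all numbers synthetic, not the study's data): a solute whose concentration spikes with discharge during a short flood is largely missed when only one sample per day is taken.

```python
import numpy as np

t = np.arange(240)                             # 10 days, hourly time steps
pulse = np.exp(-0.5 * ((t - 110) / 6.0) ** 2)  # flood centred on hour 110
q = 20.0 + 200.0 * pulse                       # discharge, L/s
c = 0.05 + 2.0 * pulse                         # pesticide concentration, ug/L
dt = 3600.0                                    # seconds per step

# Reference load integrated from the full hourly record (ug -> g).
true_load = np.sum(c * q * dt) / 1e6

# Grab-sampling strategy: one sample per day, combined with mean discharge.
grab_c = c[::24]
grab_load = np.sum(grab_c * q.mean() * 24 * dt) / 1e6

print(f"hourly: {true_load:.1f} g, daily grabs: {grab_load:.1f} g")
```

Because most of the load transits during a few flood hours that the daily grabs straddle, the grab-based estimate here captures well under half of the true load, mirroring the study's conclusion that event-triggered or high-frequency sampling is needed for reliable flux estimates.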

  10. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  11. Executive control resources and frequency of fatty food consumption: findings from an age-stratified community sample.

    Science.gov (United States)

    Hall, Peter A

    2012-03-01

    Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span.

  12. Polymorphisms in the Innate Immune IFIH1 Gene, Frequency of Enterovirus in Monthly Fecal Samples during Infancy, and Islet Autoimmunity

    Science.gov (United States)

    Witsø, Elisabet; Tapia, German; Cinek, Ondrej; Pociot, Flemming Michael; Stene, Lars C.; Rønningen, Kjersti S.

    2011-01-01

    Interferon induced with helicase C domain 1 (IFIH1) senses and initiates antiviral activity against enteroviruses. Genetic variants of IFIH1, one common and four rare SNPs, have been associated with lower risk for type 1 diabetes. Our aim was to test whether these type 1 diabetes-associated IFIH1 polymorphisms are associated with the occurrence of enterovirus infection in the gut of healthy children, or influence the lack of association between gut enterovirus infection and islet autoimmunity. After testing of 46,939 Norwegian newborns, 421 children carrying the high risk genotype for type 1 diabetes (HLA-DR4-DQ8/DR3-DQ2) as well as 375 children without this genotype were included for monthly fecal collections from 3 to 35 months of age, and genotyped for the IFIH1 polymorphisms. A total of 7,793 fecal samples were tested for presence of enterovirus RNA using real time reverse transcriptase PCR. We found no association with frequency of enterovirus in the gut for the common IFIH1 polymorphism rs1990760, or either of the rare variants of rs35744605, rs35667974, rs35337543, while the enterovirus prevalence marginally differed in samples from the 8 carriers of a rare allele of rs35732034 (26.1%, 18/69 samples) as compared to wild-type homozygotes (12.4%, 955/7724 samples); odds ratio 2.5, p = 0.06. The association was stronger when infections were restricted to those with high viral loads (odds ratio 3.3, 95% CI 1.3–8.4, p = 0.01). The lack of association between enterovirus frequency and islet autoimmunity reported in our previous study was not materially influenced by the IFIH1 SNPs. We conclude that the type 1 diabetes-associated IFIH1 polymorphisms have no, or only minor, influence on the occurrence, quantity or duration of enterovirus infection in the gut. Its effect on the risk of diabetes is likely to lie elsewhere in the pathogenic process than in the modification of gut infection. PMID:22110759

  13. Polymorphism discovery and allele frequency estimation using high-throughput DNA sequencing of target-enriched pooled DNA samples

    Directory of Open Access Journals (Sweden)

    Mullen Michael P

    2012-01-01

    Background: The central role of the somatotrophic axis in animal post-natal growth, development and fertility is well established. Therefore, the identification of genetic variants affecting quantitative traits within this axis is an attractive goal. However, large sample numbers are a prerequisite for the identification of genetic variants underlying complex traits and, although technologies are improving rapidly, high-throughput sequencing of large numbers of complete individual genomes remains prohibitively expensive. Using a pooled DNA approach coupled with target enrichment and high-throughput sequencing, the aim of this study was therefore to identify polymorphisms and estimate allele frequency differences across 83 candidate genes of the somatotrophic axis in 150 Holstein-Friesian dairy bulls divided into two groups divergent for genetic merit for fertility. Results: In total, 4,135 SNPs and 893 indels were identified during the resequencing of the 83 candidate genes. Nineteen percent (n = 952) of the variants were located within 5' and 3' UTRs. Seventy-two percent (n = 3,612) were intronic and 9% (n = 464) were exonic, including 65 indels and 236 SNPs resulting in non-synonymous substitutions (NSS). Significant differences in allele frequencies between the two groups were observed; allele frequencies of 43 SNPs were also determined by individual genotyping using Sequenom® MassARRAY. No significant differences (P > 0.1) were observed between the two methods for any of the 43 SNPs across both pools (i.e., 86 tests in total). Conclusions: The results of the current study support previous findings of the use of DNA sample pooling and high-throughput sequencing as a viable strategy for polymorphism discovery and allele frequency estimation. Using this approach we have characterised the genetic variation within genes of the somatotrophic axis and related pathways, central to mammalian post-natal growth and development and subsequent lactogenesis and fertility. We have identified a large number of variants segregating at significantly different frequencies between cattle groups divergent for calving

  14. Task 08/41, Low temperature loop at the RA reactor, Review IV - Maximum temperature values in the samples without forced cooling; Zadatak 08/41, Niskotemperaturna petlja u reaktoru 'RA', Pregled IV - Maksimalne temperature u uzorcima bez prinudnog hladjenja

    Energy Technology Data Exchange (ETDEWEB)

    Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1961-12-15

    The quantity of heat generated in the sample was calculated in Review III. In the stationary regime this heat is transferred through the air layer between the sample and the channel wall to the heavy water or graphite, and a certain maximum temperature t{sub 0} is reached in the sample. The objective of this review is the determination of this temperature. [Serbo-Croat] Kolicina toplote generisana u uzorku, izracunata u pregledu III, u ravnoteznom stanju odvodi se kroz vazdusni sloj izmedju uzorka i zida kanala na tesku vodu odnosno grafit, pri cemu se u uzorku dostize izvesna maksimalna temperatura t{sub 0}. Odredjivanje ove temperature je predmet ovog pregleda.

  15. High-frequency, long-duration water sampling in acid mine drainage studies: a short review of current methods and recent advances in automated water samplers

    Science.gov (United States)

    Chapin, Thomas

    2015-01-01

    Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.

  16. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer 1985; Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...

  17. D1S80 (pMCT118) allele frequencies in a Malay population sample from Malaysia.

    Science.gov (United States)

    Koh, C L; Lim, M E; Ng, H S; Sam, C K

    1997-01-01

    The D1S80 allele frequencies in 124 unrelated Malays from the Malaysian population were determined and 51 genotypes and 19 alleles were encountered. The D1S80 frequency distribution met Hardy-Weinberg expectations. The observed heterozygosity was 0.80 and the power of discrimination was 0.96.
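    The observed heterozygosity and power of discrimination reported above are standard forensic summary statistics. A minimal sketch of how they are computed from genotype data follows; the toy genotype sample is invented for illustration and is much smaller than the actual 124-person D1S80 data set.

```python
from collections import Counter

# Toy genotype sample (invented): each entry is one individual's two alleles.
genotypes = [("18", "24"), ("18", "18"), ("24", "31"), ("18", "31"),
             ("24", "24"), ("18", "24"), ("31", "37"), ("18", "37")]

n = len(genotypes)
# Observed heterozygosity: fraction of individuals with two different alleles.
het_obs = sum(a != b for a, b in genotypes) / n

# Power of discrimination: 1 minus the probability that two random
# individuals share the same genotype (sum of squared genotype frequencies).
geno_freq = Counter(tuple(sorted(g)) for g in genotypes)
match_prob = sum((c / n) ** 2 for c in geno_freq.values())
pd = 1 - match_prob

print(f"observed heterozygosity = {het_obs:.2f}")
print(f"power of discrimination = {pd:.2f}")
```

With the real allele and genotype counts in place of the toy list, the same two formulas yield the record's values of 0.80 and 0.96.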

  18. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
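    For a fixed tree, the MP score referred to above (the minimum number of substitutions the tree requires) can be computed with Fitch's small-parsimony algorithm. A hedged sketch for one character on an invented four-leaf tree; the paper addresses the harder problem of optimizing over trees, cast as a Steiner tree problem:

```python
def fitch(tree, leaf_states):
    """Return the parsimony score of `tree` for one character.
    `tree` is a nested 2-tuple; leaves are names keyed in `leaf_states`."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: singleton state set
            return {leaf_states[node]}
        left, right = (post(child) for child in node)
        if left & right:                     # sets intersect: no substitution
            return left & right
        changes += 1                         # disjoint sets: one substitution
        return left | right

    post(tree)
    return changes

# Tree ((A,B),(C,D)) with states A=0, B=0, C=1, D=0 needs one substitution.
tree = (("A", "B"), ("C", "D"))
score = fitch(tree, {"A": "0", "B": "0", "C": "1", "D": "0"})
print(score)  # prints 1
```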

  19. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    Science.gov (United States)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
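    The subsampling metrics described above can be sketched numerically. The synthetic hourly series with a 24 h cycle below is an invented stand-in for the GEOS-5 Nature Run cloud fraction, not the study's data; it illustrates how sampling at the period of an underlying cycle can leave the mean nearly intact while the subsampled series drifts away from the "truth".

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 365)                          # one year of hourly steps
truth = 0.6 + 0.1 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size)

results = {}
for h in (1, 4, 8, 12, 24):                      # sampling interval in hours
    sub_t, sub = t[::h], truth[::h]
    recon = np.interp(t, sub_t, sub)             # rebuild full series from samples
    results[h] = (abs(sub.mean() - truth.mean()),            # error in the mean
                  float(np.corrcoef(recon, truth)[0, 1]))    # fidelity to "truth"

for h, (err, r) in results.items():
    print(f"{h:2d} h: |mean error| = {err:.4f}, r = {r:.3f}")
```

As in the paper's analysis, the correlation with the full-resolution series degrades as the interval grows, even when the error in the annual mean stays small.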

  20. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historical overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.

  1. Zoonotic species of the genus Arcobacter in poultry from different regions of Costa Rica: frequency of isolation and comparison of two types of sampling

    International Nuclear Information System (INIS)

    Valverde Bogantes, Esteban

    2014-01-01

    The presence of the zoonotic species of Arcobacter was evaluated in laying hens, broilers, ducks and geese in Costa Rica. The frequency of isolation of the genus Arcobacter was determined in poultry samples using culture methods and molecular techniques, and the performance of cloacal swab sampling was compared with that of fecal collection for the isolation of Arcobacter. The isolation frequencies of Arcobacter species in poultry indicate a potential public health problem in Costa Rica, with poultry identified as sources of contamination and dispersion of the bacteria.

  2. Comparison of allele frequencies of Plasmodium falciparum merozoite antigens in malaria infections sampled in different years in a Kenyan population.

    Science.gov (United States)

    Ochola-Oyier, Lynette Isabella; Okombo, John; Wagatua, Njoroge; Ochieng, Jacob; Tetteh, Kevin K; Fegan, Greg; Bejon, Philip; Marsh, Kevin

    2016-05-06

    Plasmodium falciparum merozoite antigens elicit antibody responses in malaria-endemic populations, some of which are clinically protective, which is one of the reasons why merozoite antigens are the focus of malaria vaccine development efforts. Polymorphisms in several merozoite antigen-encoding genes are thought to arise as a result of selection by the human immune system. The allele frequency distribution of 15 merozoite antigens over a two-year period, 2007 and 2008, was examined in parasites obtained from children with uncomplicated malaria. In the same population, allele frequency changes pre- and post-anti-malarial treatment were also examined. Any gene which showed a significant shift in allele frequencies was also assessed longitudinally in asymptomatic and complicated malaria infections. Fluctuating allele frequencies were identified in codons 147 and 148 of reticulocyte-binding homologue (Rh) 5, with a shift from HD to YH haplotypes over the two-year period in uncomplicated malaria infections. However, in both the asymptomatic and complicated malaria infections YH was the dominant and stable haplotype over the two-year and ten-year periods, respectively. A logistic regression analysis of all three malaria infection populations between 2007 and 2009 revealed that the chance of being infected with the HD haplotype decreased with time from 2007 to 2009 and increased in the uncomplicated and asymptomatic infections. Rh5 codons 147 and 148 showed heterogeneity at both an individual and population level and may be under some degree of immune selection.

  3. Laser ablation: Laser parameters: Frequency, pulse length, power, and beam character play significant roles with regard to sampling complex samples for ICP/MS analysis

    International Nuclear Information System (INIS)

    Smith, M.R.; Alexander, M.L.; Hartman, J.S.; Koppenaal, D.W.

    1996-01-01

    Inductively coupled plasma mass spectrometry is used to investigate the influence of laser parameters with regard to sampling complex matrices ranging from relatively homogeneous glasses to multi-phase sludge/slurry materials, including radioactive Hanford tank waste. The composition of the plume produced by the pulsed laser is evaluated as a function of wavelength, pulse energy, pulse length, focus, and beam power profile. The authors' studies indicate that these parameters play varying and often synergistic roles in determining quantitative results. (In a companion paper, particle transport and size distribution studies are presented.) The work described here illustrates how other laser parameters, such as focusing (and consequently power density) and beam power profile, influence precision and accuracy. Representative sampling by the LA approach depends largely on the sample's optical properties as well as on the laser parameters. Experimental results indicate that optimal laser parameters, namely short wavelength (UV), relatively low power (300 mJ), low-to-sub-ns pulse lengths, and laser beams with well-behaved power distributions (i.e., Gaussian or top-hat beam profiles), provide superior precision and accuracy. Remote LA-ICP/MS analyses of radioactive sludges are used to illustrate these optimal conditions for laser ablation sampling.

  4. The importance of the sampling frequency in determining short-time-averaged irradiance and illuminance for rapidly changing cloud cover

    International Nuclear Information System (INIS)

    Delaunay, J.J.; Rommel, M.; Geisler, J.

    1994-01-01

    The sampling interval is an important parameter which must be chosen carefully if measurements of the direct, global, and diffuse irradiance or illuminance are carried out to determine their averages over a given period. Using measurements from a day with rapidly moving clouds, we investigated the influence of the sampling interval on the uncertainty of the calculated 15-min averages. We conclude, for this averaging period, that the sampling interval should not exceed 60 s and 10 s for measurement of the diffuse and global components respectively, to keep the influence of the sampling interval below 2%. For the direct component, even a 5 s sampling interval is too long to reach this level for days with extremely quickly changing insolation conditions. (author)
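    The logic of the experiment can be sketched as follows; the random-walk "irradiance" is invented for illustration only (the study used measured direct, global, and diffuse components), but it shows how the error of a 15-min average grows with the sampling interval for a rapidly fluctuating signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15 * 60                                        # one 15-min period at 1 s steps
irr = 500.0 + np.cumsum(rng.normal(0.0, 5.0, n))   # W/m^2, cloud-driven drift

ref = irr.mean()                                   # reference 15-min average
results = {}
for step in (1, 5, 10, 30, 60):                    # sampling interval in seconds
    est = irr[::step].mean()                       # average from sparser samples
    results[step] = 100.0 * abs(est - ref) / abs(ref)
    print(f"{step:3d} s sampling: error in 15-min average = {results[step]:.3f} %")
```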

  5. Dual frequency modulation with two cantilevers in series: a possible means to rapidly acquire tip–sample interaction force curves with dynamic AFM

    International Nuclear Information System (INIS)

    Solares, Santiago D; Chawla, Gaurav

    2008-01-01

    One common application of atomic force microscopy (AFM) is the acquisition of tip–sample interaction force curves. However, this can be a slow process when the user is interested in studying non-uniform samples, because existing contact- and dynamic-mode methods require that the measurement be performed at one fixed surface point at a time. This paper proposes an AFM method based on dual frequency modulation using two cantilevers in series, which could be used to measure the tip–sample interaction force curves and topography of the entire sample with a single surface scan, in a time that is comparable to the time needed to collect a topographic image with current AFM imaging modes. Numerical simulation results are provided along with recommended parameters to characterize tip–sample interactions resembling those of conventional silicon tips and carbon nanotube tips tapping on silicon surfaces

  6. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  7. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and avoids the storage and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  8. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  9. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  10. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma-detecting efficiency. (U.S.)

  11. A study on reducing update frequency of the forecast samples in the ensemble-based 4DVar data assimilation method

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Aimei; Xu, Daosheng [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Chinese Academy of Meteorological Sciences, Beijing (China). State Key Lab. of Severe Weather; Qiu, Xiaobin [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Tianjin Institute of Meteorological Science (China); Qiu, Chongjian [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province

    2013-02-15

    In the ensemble-based four dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an expansion in the orthogonal bases formed by those eigenvectors. Experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and the rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variable are the major factors that damage the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with a shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the number of members in the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and to reduce the computational cost. A series of experiments were conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that this approach is promising. Further experiments with the WRF model show that this approach is also suitable for the real atmospheric data assimilation problem, but the update frequency of the forecast error covariances should not be too low. (orig.)

  12. Energy drink use frequency among an international sample of people who use drugs: Associations with other substance use and well-being

    OpenAIRE

    Peacock, Amy; Bruno, Raimondo; Ferris, Jason; Winstock, Adam

    2017-01-01

    Objective: The study aims were to identify (i) energy drink (ED), caffeine tablet, and caffeine intranasal spray use amongst a sample who reported drug use, and (ii) the association between ED use frequency and demographic profile, drug use, hazardous drinking, and wellbeing. Method: Participants (n = 74,864) who reported drug use completed the online 2014 Global Drug Survey. They provided data on demographics, ED use, and alcohol and drug use, and completed the Alcohol Use Disorders Identification ...

  13. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  14. Gamma radiation effects on the frequency of toxigenic fungus on sene (Cassia angustifolia) and green tea (Camelia sinensis) samples

    International Nuclear Information System (INIS)

    Aquino, S.; Villavicencio, A.L.C.H.

    2006-01-01

    The levels of contamination and the effects of gamma radiation on the reduction of toxigenic filamentous fungi were analyzed in two types of medicinal plants. Aspergillus and Penicillium were the predominant genera, and 73.80% of the samples showed high levels of fungal contamination.

  15. The Trembling Earth Before Wenchuan Earthquake: Recognition of Precursory Anomalies through High Frequency Sampling Data of Groundwater

    Science.gov (United States)

    Huang, F.

    2017-12-01

    With a magnitude of MS8.0, the 2008 Wenchuan earthquake is classified as one of the "great earthquakes", which are potentially the most destructive, since it occurred at shallow depth close to a highly populated area. It was not predicted, because no confirmative precursors were detected in the large amount of newly available digital observation data. Scientists specializing in routine prediction work were condemned, and condemned themselves, for a long time afterwards. Once the pain of that defeat had passed, they began to re-analyze the old observation data from new perspectives: over longer temporal processes, across multiple disciplines, and at different frequencies. This presentation will show preliminary results from groundwater level and temperature observed in 3 wells located along tectonic block boundaries, both near to and far from the Wenchuan earthquake rupture.

  16. Frequency of hepatitis E and hepatitis A virus in water samples collected from Faisalabad, Pakistan.

    Science.gov (United States)

    Ahmad, Tahir; Anjum, Sadia; Sadaf Zaidi, Najam-us-Sahar; Ali, Amjad; Waqas, Muhammad; Afzal, Muhammad Sohail; Arshad, Najma

    2015-01-01

    Hepatitis E and hepatitis A viruses are both highly prevalent in Pakistan, occurring mainly as sporadic disease. The aim of the current study was to isolate and characterize the specific genotype of hepatitis E virus present in the water bodies of Faisalabad, Pakistan. Drinking water and sewage samples were qualitatively analyzed using RT-PCR. An HEV genotype 1 strain was recovered from the sewage water of Faisalabad. The prevalence of HEV and HAV in sewage water suggests a possible gradual decline in the protection level of the vaccine circulating in the Pakistani population.

  17. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  18. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  19. Polymorphisms in the innate immune IFIH1 gene, frequency of enterovirus in monthly fecal samples during infancy, and islet autoimmunity

    DEFF Research Database (Denmark)

    Witsø, Elisabet; Tapia, German; Cinek, Ondrej

    2011-01-01

Interferon induced with helicase C domain 1 (IFIH1) senses and initiates antiviral activity against enteroviruses. Genetic variants of IFIH1, one common and four rare SNPs, have been associated with lower risk for type 1 diabetes. Our aim was to test whether these type 1 diabetes-associated IFIH1 polymorphisms are associated with the occurrence of enterovirus infection in the gut of healthy children, or influence the lack of association between gut enterovirus infection and islet autoimmunity. After testing of 46,939 Norwegian newborns, 421 children carrying the high risk genotype for type 1 diabetes (HLA-DR4-DQ8/DR3-DQ2) as well as 375 children without this genotype were included for monthly fecal collections from 3 to 35 months of age, and genotyped for the IFIH1 polymorphisms. A total of 7,793 fecal samples were tested for presence of enterovirus RNA using real time reverse transcriptase PCR...

  20. Variation in the human lymphocyte sister chromatid exchange frequency as a function of time: results of daily and twice-weekly sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, J.D.; Christensen, M.L.; Strout, C.L.; McGee, K.A.; Carrano, A.V.

    1987-01-01

    The variation in lymphocyte sister chromatid exchange (SCE) frequency was investigated in healthy nonsmokers who were not taking any medication. Two separate studies were undertaken. In the first, blood was drawn from four women twice a week for 8 weeks. These donors recorded the onset and termination of menstruation and times of illness. In the second study, blood was obtained from two women and two men for 5 consecutive days on two separate occasions initiated 14 days apart. Analysis of the mean SCE frequencies in each study indicated that significant temporal variation occurred in each donor, and that more variation occurred in the longer study. Some of the variation was found to be associated with the menstrual cycle. In the daily study, most of the variation appeared to be random, but occasional day-to-day changes occurred that were greater than those expected by chance. To determine how well a single SCE sample estimated the pooled mean for each donor in each study, the authors calculated the number of samples that encompassed that donor's pooled mean within 1 or more standard errors. For both studies, about 75% of the samples encompassed the pooled mean within 2 standard errors. An analysis of high-frequency cells (HFCs) was also undertaken. The results for each study indicate that the proportion of HFCs, compared with the use of Fisher's Exact test, is significantly more constant than the means, which were compared by using the t-test. These results coupled with our previous work suggest that HFC analysis may be the method of choice when analyzing data from human population studies.

  1. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and for systems with larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control
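The hill-climbing (perturb-and-observe) control described in this record can be sketched as follows; the panel power curve, step size, and starting voltage below are hypothetical illustrations, not the paper's implementation:

```python
def mppt_hill_climb(power_at, v0=17.0, dv=0.1, steps=200):
    """Perturb-and-observe hill climbing: nudge the operating voltage and
    reverse direction whenever the measured power drops."""
    v, step = v0, dv
    p_prev = power_at(v)
    for _ in range(steps):
        v += step
        p = power_at(v)
        if p < p_prev:      # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Hypothetical panel power curve with a single maximum at 18 V.
def panel(v):
    return max(0.0, 100.0 - (v - 18.0) ** 2)

v_mpp = mppt_hill_climb(panel)   # settles near 18 V, oscillating by about dv
```

Once near the peak, the operating point oscillates around it by roughly one step size, which is the usual trade-off of perturb-and-observe tracking.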

  2. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  3. Frequency of Cannabis Use and Medical Cannabis Use Among Persons Living With HIV in the United States: Findings From a Nationally Representative Sample.

    Science.gov (United States)

    Pacek, Lauren R; Towe, Sheri L; Hobkirk, Andrea L; Nash, Denis; Goodwin, Renee D

    2018-04-01

Little is known about cannabis use frequency, medical cannabis use, or correlates of use among persons living with HIV (PLWH) in United States nationally representative samples. Data came from 626 PLWH in the 2005-2015 National Survey on Drug Use and Health. Logistic regression identified characteristics associated with frequency of cannabis use. Chi-square tests identified characteristics associated with medical cannabis use. Non-daily and daily cannabis use was reported by 26.9% and 8.0%, respectively. Greater perceived risk of cannabis use was negatively associated with daily and non-daily use. Younger age, substance use, and binge drinking were positively associated with non-daily cannabis use. Smoking and depression were associated with both non-daily and daily use. One-quarter reported medical cannabis use. Medical users were more likely to be White, married, and nondrinkers. Cannabis use was common among PLWH. Findings help to differentiate between cannabis users based on frequency of use and medical versus recreational use.

  4. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  5. Mental health problems are associated with low-frequency fluctuations in reaction time in a large general population sample. The TRAILS study.

    Science.gov (United States)

    Bastiaansen, J A; van Roon, A M; Buitelaar, J K; Oldehinkel, A J

    2015-02-01

Increased intra-subject reaction time variability (RT-ISV), as coarsely measured by the standard deviation of RT (RT-SD), has been associated with many forms of psychopathology. Low-frequency RT fluctuations, which have been associated with intrinsic brain rhythms occurring approximately every 15-40 s, have been shown to add unique information for ADHD. In this study, we investigated whether these fluctuations also relate to attentional problems in the general population, and whether they contribute to the two major domains of psychopathology: externalizing and internalizing problems. RT was monitored throughout a self-paced sustained attention task (duration: 9.1 ± 1.2 min) in a Dutch population cohort of young adults (n=1455, mean age: 19.0 ± 0.6 years, 55.1% girls). To characterize temporal fluctuations in RT, we performed direct Fourier Transform on externally validated frequency bands based on frequency ranges of neuronal oscillations: Slow-5 (0.010-0.027 Hz), Slow-4 (0.027-0.073 Hz), and three additional higher frequency bands. Relative magnitude of Slow-4 fluctuations was the primary predictor in regression models for attentional, internalizing and externalizing problems (measured by the Adult Self-Report questionnaire). Additionally, stepwise regression models were created to investigate (a) whether Slow-4 significantly improved the prediction of problem behaviors beyond the RT-SD and (b) whether the other frequency bands provided important additional information. The magnitude of Slow-4 fluctuations significantly predicted attentional and externalizing problems and even improved model fit after modeling RT-SD first (R(2) change=0.6%); none of the other frequency bands provided additional information. Low-frequency RT fluctuations have added predictive value for attentional and externalizing, but not internalizing problems beyond global differences in variability. This study extends previous findings in clinical samples of children with ADHD to adolescents from the general population and
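The band-limited measure used above, the relative magnitude of RT fluctuations inside a frequency band such as Slow-4, can be sketched with a discrete Fourier transform; the synthetic reaction-time series, sampling interval, and injected oscillation below are illustrative assumptions, not the TRAILS data:

```python
import numpy as np

def relative_band_magnitude(rt, dt, f_lo, f_hi):
    """Share of the total spectral magnitude of a demeaned reaction-time
    series that falls inside the band [f_lo, f_hi) in Hz."""
    rt = np.asarray(rt, dtype=float)
    rt = rt - rt.mean()                      # remove the DC component
    mag = np.abs(np.fft.rfft(rt))
    freqs = np.fft.rfftfreq(rt.size, d=dt)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return mag[band].sum() / mag[1:].sum()   # exclude the zero-frequency bin

# Illustrative self-paced task: one response every 2 s for 9 minutes, with an
# injected slow oscillation at 0.05 Hz, inside the Slow-4 band (0.027-0.073 Hz).
rng = np.random.default_rng(0)
t = np.arange(0, 540, 2.0)
rt = 0.45 + 0.05 * np.sin(2 * np.pi * 0.05 * t) + 0.01 * rng.standard_normal(t.size)
slow4 = relative_band_magnitude(rt, dt=2.0, f_lo=0.027, f_hi=0.073)
```

Because the injected oscillation sits inside the Slow-4 band, that band captures a disproportionate share of the spectrum relative to its width.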

  6. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  7. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
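As a minimal illustration of the quantity being maximized, the empirical mutual information between a discretized classification response and the true label can be computed directly from joint frequencies; the paper itself models mutual information via entropy estimation for a linear classifier, so the following is only a sketch of the objective, not the authors' method:

```python
import numpy as np

def mutual_information(responses, labels):
    """Empirical mutual information I(R; Y), in nats, between discrete
    classifier responses and true class labels."""
    responses, labels = np.asarray(responses), np.asarray(labels)
    mi = 0.0
    for r in np.unique(responses):
        for y in np.unique(labels):
            p_ry = np.mean((responses == r) & (labels == y))
            if p_ry > 0:
                p_r = np.mean(responses == r)
                p_y = np.mean(labels == y)
                mi += p_ry * np.log(p_ry / (p_r * p_y))
    return mi

labels = np.array([0, 0, 1, 1])
mi_perfect = mutual_information(labels, labels)               # responses identical to labels
mi_none = mutual_information(np.zeros(4, dtype=int), labels)  # constant, uninformative response
```

A perfect classifier attains I(R; Y) = H(Y) (here log 2 nats), while a constant response carries zero information, which is exactly why maximizing this quantity reduces the uncertainty of the true label given the response.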

  8. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  9. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
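The Mean Energy Model mentioned in this abstract has a well-known closed form: the entropy-maximizing distribution under a mean-energy constraint is exponential in the energy, p_i proportional to exp(-beta * e_i), with beta fixed by the constraint. A minimal numerical sketch on a finite state space, with beta located by bisection (an illustrative choice, not the paper's treatment):

```python
import numpy as np

def maxent_mean_energy(energies, target_mean, beta_lo=-50.0, beta_hi=50.0):
    """Maximum-entropy distribution over a finite state space subject to a
    prescribed mean energy: p_i proportional to exp(-beta * e_i), with beta
    found by bisection (the mean energy is decreasing in beta)."""
    e = np.asarray(energies, dtype=float)

    def distribution(beta):
        w = np.exp(-beta * (e - e.min()))    # shift energies for stability
        return w / w.sum()

    for _ in range(200):
        mid = 0.5 * (beta_lo + beta_hi)
        if distribution(mid) @ e > target_mean:
            beta_lo = mid                    # mean too high: increase beta
        else:
            beta_hi = mid
    return distribution(0.5 * (beta_lo + beta_hi))

# Three states with energies 0, 1, 2 and a required mean energy of 0.5:
p = maxent_mean_energy([0.0, 1.0, 2.0], target_mean=0.5)
```

With a target mean below the uniform average, beta comes out positive and the resulting weights decrease with energy, the familiar Gibbs form.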

  10. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  11. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  12. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  13. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  14. Dependence of B1+ and B1- Field Patterns of Surface Coils on the Electrical Properties of the Sample and the MR Operating Frequency.

    Science.gov (United States)

    Vaidya, Manushka V; Collins, Christopher M; Sodickson, Daniel K; Brown, Ryan; Wiggins, Graham C; Lattanzi, Riccardo

    2016-02-01

In high field MRI, the spatial distribution of the radiofrequency magnetic (B1) field is usually affected by the presence of the sample. For hardware design and to aid interpretation of experimental results, it is important both to anticipate and to accurately simulate the behavior of these fields. Fields generated by a radiofrequency surface coil were simulated using dyadic Green's functions, or experimentally measured over a range of frequencies inside an object whose electrical properties were varied to illustrate a variety of transmit [Formula: see text] and receive [Formula: see text] field patterns. In this work, we examine how changes in polarization of the field and interference of propagating waves in an object can affect the B1 spatial distribution. Results are explained conceptually using Maxwell's equations and intuitive illustrations. We demonstrate that the electrical conductivity alters the spatial distribution of distinct polarized components of the field, causing "twisted" transmit and receive field patterns, and asymmetries between [Formula: see text] and [Formula: see text]. Additionally, interference patterns due to wavelength effects are observed at high field in samples with high relative permittivity and near-zero conductivity, but are not present in lossy samples due to the attenuation of propagating EM fields. This work provides a conceptual framework for understanding B1 spatial distributions for surface coils and can provide guidance for RF engineers.

  15. Energy drink use frequency among an international sample of people who use drugs: Associations with other substance use and well-being.

    Science.gov (United States)

    Peacock, Amy; Bruno, Raimondo; Ferris, Jason; Winstock, Adam

    2017-05-01

The study aims were to identify: i) energy drink (ED), caffeine tablet, and caffeine intranasal spray use amongst a sample who report drug use, and ii) the association between ED use frequency and demographic profile, drug use, hazardous drinking, and wellbeing. Participants (n=74,864) who reported drug use completed the online 2014 Global Drug Survey. They provided data on demographics, ED use, and alcohol and drug use, completed the Alcohol Use Disorders Identification Test (AUDIT) and Personal Wellbeing Index (PWI), and reported whether they wished to reduce alcohol use. Lifetime ED, caffeine tablet and intranasal caffeine spray use were reported by 69.2%, 24.5% and 4.9%, respectively. Median age of ED initiation was 16 years. For those aged 16-37, median years of ED consumption increased from 4 to 17 years, declining thereafter. Greater ED use frequency was associated with: being male; being under 21 years of age; studying; and past year caffeine tablet/intranasal spray, tobacco, cannabis, amphetamine, MDMA, and cocaine use. Past year, infrequent (1-4 days) and frequent (≥5 days) past month ED consumers reported higher AUDIT scores and lower PWI scores than lifetime abstainers; past month consumers were less likely to report a desire to reduce alcohol use. ED use is part of a complex interplay of drug use, alcohol problems, and poorer personal wellbeing, and ED use frequency may be a flag for current/future problems. Prospective research is required exploring where ED use fits within the trajectory of other alcohol and drug use. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering.

    Science.gov (United States)

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-05-31

The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency component of the IMFs, long-window TFPF is employed for the high-frequency component, and the noise component is discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
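Sample entropy, the statistic used above to classify the IMFs, can be sketched directly from its definition; this simplified version (template length m, tolerance r as a fraction of the signal's standard deviation) is an illustration, not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Simplified sample entropy SampEn(m, r): the negative log of the
    conditional probability that sequences matching for m points (within
    tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(x.size - mm)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = templates.shape[0]
        return (np.sum(d <= r) - n) / 2      # drop self-matches, count pairs once

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular oscillation is more predictable than white noise, so it should
# receive a lower sample entropy (illustrative signals, not gyroscope data).
rng = np.random.default_rng(1)
t = np.arange(500)
se_regular = sample_entropy(np.sin(0.2 * t))
se_noise = sample_entropy(rng.standard_normal(500))
```

Low sample entropy flags the structured (signal-dominated) IMFs and high sample entropy flags the noise-like ones, which is what makes it usable as the classification criterion in SEEMD.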

  17. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    Directory of Open Access Journals (Sweden)

    Chong Shen

    2016-05-01

Full Text Available The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency component of the IMFs, long-window TFPF is employed for the high-frequency component, and the noise component is discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.

  18. Adjustment and Assessment of the Measurements of Low and High Sampling Frequencies of GPS Real-Time Monitoring of Structural Movement

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-11-01

Full Text Available Global Positioning System (GPS) structural health monitoring data collection is one of the important systems in structure movement monitoring. However, GPS measurement error and noise limit the application of such systems. Many attempts have been made to adjust GPS measurements and eliminate their errors. Comparing common nonlinear methods used in the adjustment of GPS positioning for the monitoring of structures is the main objective of this study. Nonlinear adaptive recursive least squares (RLS), the extended Kalman filter (EKF), and wavelet principal component analysis (WPCA) are presented and applied to improve the quality of GPS time series observations. Two real monitoring systems, for the Mansoura railway bridge and the long-span Yonghe bridge, are utilized to examine which methods are suitable for assessing bridge behavior under different load conditions. From the analysis of the results, it is concluded that wavelet principal component analysis is the best method for smoothing both low and high sampling frequency GPS observations. The evaluation of the bridges reveals the ability of GPS systems to detect the behavior and damage of structures in both the time and frequency domains.
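As a minimal illustration of the recursive least squares idea applied to a GPS coordinate series, the one-state sketch below uses a forgetting factor to track a slowly varying displacement; the data, forgetting factor, and unit-variance noise model are assumptions for illustration, not the paper's bridge measurements:

```python
import random

def rls_smooth(samples, lam=0.98):
    """One-state recursive least squares with forgetting factor lam: a
    minimal smoother for a slowly varying coordinate series (unit
    measurement-noise variance assumed)."""
    est, p = samples[0], 1.0
    out = [est]
    for z in samples[1:]:
        p = p / lam                  # forgetting step inflates the covariance
        k = p / (p + 1.0)            # gain
        est = est + k * (z - est)    # measurement update
        p = (1.0 - k) * p
        out.append(est)
    return out

# Hypothetical noisy displacement series around a true value of 5.0 mm.
random.seed(0)
series = [5.0 + random.gauss(0.0, 1.0) for _ in range(500)]
smoothed = rls_smooth(series)
```

With lam close to 1 the steady-state gain is small, so the smoother averages over many past samples and the output scatter is far below that of the raw series.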

  19. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  20. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  1. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  2. Frequency distribution of specific activities and radiological hazard assessment in surface beach sand samples collected in Bangsaen beach in Chonburi province, Thailand

    Science.gov (United States)

    Changkit, N.; Boonkrongcheep, R.; Youngchauy, U.; Polthum, S.; Kessaratikoon, P.

    2017-09-01

    The specific activities of natural radionuclides (40K, 226Ra and 232Th) in 50 surface beach sand samples collected from Bangsaen beach in Chonburi province in the eastern region of Thailand were measured and evaluated. Experimental results were obtained using a high-purity germanium (HPGe) detector and a gamma spectrometry analysis system in the special laboratory at the Thailand Institute of Nuclear Technology (Public Organization). The IAEA-SOIL-375 reference material was used to analyze the concentrations of 40K, 226Ra and 232Th in all samples. The specific activities of 40K, 226Ra and 232Th ranged from 510.85 to 771.35, 8.17 to 17.06 and 4.25 to 15.68 Bq/kg, respectively. Furthermore, the frequency distributions of the specific activities were analyzed with a statistical computer program and found to be asymmetrical. Moreover, four radiological hazard indices for the investigated area were calculated using the median values of the specific activities of 40K, 226Ra and 232Th. The results were compared with annual report data of the Office of Atoms for Peace (OAP), Thailand, and with global radioactivity measurements and evaluations.
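
    Two of the radiological hazard indices commonly computed in such surveys can be sketched as follows. The formulas are the standard radium equivalent activity and external hazard index; the input activities are illustrative mid-range values from the intervals reported above, since the abstract does not give the medians actually used.

```python
# Standard radiological hazard indices for beach-sand activities.
# Input activities (Bq/kg) are illustrative mid-range values, not the
# paper's medians.

def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity, Bq/kg (recommended limit: 370 Bq/kg)."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def external_hazard_index(c_ra, c_th, c_k):
    """External hazard index H_ex (should be <= 1 for negligible hazard)."""
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

c_k, c_ra, c_th = 641.0, 12.6, 10.0   # Bq/kg, illustrative values

print(f"Ra_eq = {radium_equivalent(c_ra, c_th, c_k):.1f} Bq/kg")
print(f"H_ex  = {external_hazard_index(c_ra, c_th, c_k):.3f}")
```

    With values in these ranges both indices fall well below their limits, consistent with natural background levels.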

  3. Validation of a Food Frequency Questionnaire for Estimating Micronutrient Intakes in an Urban US Sample of Multi-Ethnic Pregnant Women.

    Science.gov (United States)

    Brunst, Kelly J; Kannan, Srimathi; Ni, Yu-Ming; Gennings, Chris; Ganguri, Harish B; Wright, Rosalind J

    2016-02-01

    To validate the Block98 food frequency questionnaire (FFQ) for estimating antioxidant, methyl-nutrient and polyunsaturated fatty acid (PUFA) intakes in a pregnant sample of ethnic/racial minority women in the United States (US). Participants (n = 42) were from the Programming of Intergenerational Stress Mechanisms study. Total micronutrient intakes from food and supplements were ascertained using the modified Block98 FFQ and two 24-h dietary recalls collected at random on nonconsecutive days subsequent to completion of the FFQ in mid-pregnancy. Correlation coefficients (r) corrected for attenuation from within-person variation in the recalls were calculated for antioxidants (n = 7), methyl-nutrients (n = 8), and PUFAs (n = 2). The sample was largely ethnic minorities (38 % Black, 33 % Hispanic), with 21 % being foreign born and 41 % having less than or equal to a high school degree. Significant and adequate deattenuated correlations (r ≥ 0.40) for total dietary intakes of antioxidants were observed for vitamin C, vitamin E, magnesium, and zinc. Reasonable deattenuated correlations were also observed for methyl-nutrient intakes of vitamin B6, betaine, iron, and n:6 PUFAs; however, they did not reach significance. Most women were classified into the same or adjacent quartiles (≥70 %) for total (dietary + supplements) estimates of antioxidants (5 out of 7) and methyl-nutrients (4 out of 5). The Block98 FFQ is an appropriate dietary method for evaluating antioxidants in pregnant ethnic-minority women in the US; it may be less efficient in measuring methyl-nutrient and PUFA intakes.
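
    The de-attenuation correction mentioned above can be sketched as follows. The formula is the standard Willett-style correction for random within-person error in the reference recalls; the observed correlation and variance ratio below are hypothetical, not values from this study.

```python
import math

# Willett-style de-attenuation of an FFQ-vs-recall correlation:
# r_c = r_o * sqrt(1 + (s_w^2 / s_b^2) / n), where s_w^2/s_b^2 is the
# within- to between-person variance ratio of the recalls and n is the
# number of recalls per participant. All numbers below are hypothetical.

def deattenuate(r_observed, within_var, between_var, n_recalls):
    """Correct an observed correlation for within-person recall error."""
    return r_observed * math.sqrt(1.0 + (within_var / between_var) / n_recalls)

# Hypothetical example: observed r = 0.32, variance ratio 1.5, two recalls
r_c = deattenuate(0.32, within_var=1.5, between_var=1.0, n_recalls=2)
print(f"deattenuated r = {r_c:.2f}")
```

    With only two recalls per participant, a sizable within-person variance ratio can noticeably inflate the corrected correlation, which is why the study collected the recalls on nonconsecutive days.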

  4. Frequency and risk factors of blood transfusion in abdominoplasty in post-bariatric surgery patients: data from the nationwide inpatient sample.

    Science.gov (United States)

    Masoomi, Hossein; Rimler, Jonathan; Wirth, Garrett A; Lee, Christine; Paydar, Keyianoosh Z; Evans, Gregory R D

    2015-05-01

    There are limited data regarding blood transfusion following abdominoplasty, especially in post-bariatric surgery patients. The purpose of this study was to evaluate (1) the frequency and outcomes of blood transfusion in post-bariatric surgery patients undergoing abdominoplasty and (2) the predictive risk factors for blood transfusion in this patient population. Using the Nationwide Inpatient Sample database, the authors examined the clinical data of patients with a history of bariatric surgery who underwent abdominoplasty from 2007 to 2011 in the United States. A total of 20,130 post-bariatric surgery patients underwent abdominoplasty during this period. Overall, 1871 patients (9.3 percent) received blood transfusion. Patients with chronic anemia had the highest rate of blood transfusion (25.6 percent). Post-bariatric surgery patients who received blood transfusion experienced a significantly higher complication rate (10.1 percent versus 4.8 percent). The blood transfusion rate in post-bariatric surgery abdominoplasty patients is not insignificant. Chronic anemia and congestive heart failure are the two major predictors of transfusion. Modifying risk factors such as anemia before abdominoplasty might significantly decrease the possibility of blood transfusion. Risk, III.

  5. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  6. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has previously been demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water-column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement, using the equation of state of seawater. However, water-column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4^2-) and cations (Na+, Mg^2+, Ca^2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl- and Mg^2+/Na+, and 0.4% for Ca^2+/Na+ and K+/Na+. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3^2-. Apparent partial molar densities in seawater were
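
    The density-to-salinity error propagation can be checked with a linearized equation of state. The haline sensitivity used below (~7.5x10^-4 g/mL per g/kg near typical seawater conditions) is an assumed textbook-order value, not one taken from the paper, so the result only shows order-of-magnitude agreement with the 0.002 g/kg quoted above.

```python
# Back-of-the-envelope propagation of density precision to salinity
# uncertainty, using a linearized equation of state. The haline
# sensitivity d(rho)/dS is an assumed approximate value, not the paper's.

drho_dS = 7.5e-4      # g/mL per (g/kg), assumed haline sensitivity near S=35
sigma_rho = 2.3e-6    # g/mL, density precision reported in the abstract

sigma_S = sigma_rho / drho_dS   # propagated salinity uncertainty, g/kg
print(f"salinity uncertainty ~ {sigma_S:.4f} g/kg")
```

    The exact figure depends on the local equation-of-state sensitivity at the sample's temperature and composition, which is why the porewater composition corrections matter.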

  7. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  8. Three-Dimensional Intrafractional Motion of Breast During Tangential Breast Irradiation Monitored With High-Sampling Frequency Using a Real-Time Tumor-Tracking Radiotherapy System

    International Nuclear Information System (INIS)

    Kinoshita, Rumiko; Shimizu, Shinichi; Taguchi, Hiroshi; Katoh, Norio; Fujino, Masaharu; Onimaru, Rikiya; Aoyama, Hidefumi; Katoh, Fumi; Omatsu, Tokuhiko; Ishikawa, Masayori; Shirato, Hiroki

    2008-01-01

    Purpose: To evaluate the three-dimensional intrafraction motion of the breast during tangential breast irradiation using a real-time tumor-tracking radiotherapy (RT) system with a high sampling frequency. Methods and Materials: A total of 17 patients with breast cancer who had received breast-conservation RT were included in this study. A 2.0-mm gold marker was placed on the skin near the nipple of the breast for RT. A fluoroscopic real-time tumor-tracking RT system was used to monitor the marker. The range of motion of each patient was calculated in three directions. Results: The mean ± standard deviation of the range of respiratory motion was 1.0 ± 0.6 mm (median, 0.9; 95% confidence interval [CI] of the marker position, 0.4-2.6), 1.3 ± 0.5 mm (median, 1.1; 95% CI, 0.5-2.5), and 2.6 ± 1.4 mm (median, 2.3; 95% CI, 1.0-6.9) for the right-left, craniocaudal, and anteroposterior directions, respectively. No correlation was found between the range of motion and body mass index or respiratory function. The mean ± standard deviation of the absolute value of the baseline shift in the right-left, craniocaudal, and anteroposterior directions was 0.2 ± 0.2 mm (range, 0.0-0.8 mm), 0.3 ± 0.2 mm (range, 0.0-0.7 mm), and 0.8 ± 0.7 mm (range, 0.1-1.8 mm), respectively. Conclusion: Both the range of motion and the baseline shift were within a few millimeters in each direction. As long as the conventional wedge-pair technique and proper immobilization are used, the intrafraction three-dimensional change in the breast surface does not substantially influence the dose distribution

  9. The frequency of Listeria monocytogenes strains recovered from clinical and non-clinical samples using phenotypic methods and confirmed by PCR

    Directory of Open Access Journals (Sweden)

    abazar pournajaf

    2013-09-01

    Full Text Available Background: Listeria monocytogenes is a facultative intracellular pathogen that causes listeriosis, which has extensive clinical manifestations. Infections with L. monocytogenes are a serious threat to immunocompromised persons. The aim of this study was to determine the frequency of L. monocytogenes strains recovered from clinical and non-clinical samples using phenotypic methods and confirmed by PCR. Materials and Methods: In this study, 617 specimens were analyzed. All specimens were cultured on the specific PALCAM agar. Colonies were initially identified by routine biochemical tests. Finally, PCR assays using primers specific for the inlA gene were performed. Results: In all, 46 (8.2%) L. monocytogenes isolates were recovered from the 617 specimens. Fourteen (8.2%) strains, including 4 (7.5%), 2 (5.7%), 5 (14.2%) and 3 (8.5%) isolates, were obtained from placental tissue, urine, vaginal and rectal swabs, respectively. In addition, 9 (7.4%) strains of L. monocytogenes were isolated from 107 different dairy products: cheese (5; 7.1%), cream (2; 10%) and kashk (2; 11.7%). Among the 11 (5.2%) strains isolated from 210 different meat products, 5 (5.5%), 4 (7.2%) and 2 (3%) strains belonged to sausage, meat and poultry extracts, respectively. Finally, 12 (9.2%) Listeria strains were recovered from 130 animal specimens: 6 (10%), 4 (8%) and 2 (10%) strains from goat, sheep and cattle, respectively. Furthermore, all Listeria isolates (100%) were found to carry the inlA gene in the PCR assay. Conclusion: The present study showed that both clinical and non-clinical specimens were contaminated with L. monocytogenes. It therefore seems necessary to use a simple and standard technique such as PCR for rapid detection of this organism from various sources.

  10. Is There a Relationship Between Tic Frequency and Physiological Arousal? Examination in a Sample of Children with Co-Occurring Tic and Anxiety Disorders

    Science.gov (United States)

    Conelea, Christine A.; Ramanujam, Krishnapriya; Walther, Michael R.; Freeman, Jennifer B.; Garcia, Abbe M.

    2014-01-01

    Stress is the contextual variable most commonly implicated in tic exacerbations. However, research examining associations between tics, stressors, and the biological stress response has yielded mixed results. This study examined whether tics occur at a greater frequency during discrete periods of heightened physiological arousal. Children with co-occurring tic and anxiety disorders (n = 8) completed two stress induction tasks (discussion of family conflict, public speech). Observational (tic frequencies) and physiological (heart rate) data were synchronized using The Observer XT, and tic frequencies were compared across periods of high and low heart rate. Tic frequencies across the entire experiment did not increase during periods of higher heart rate. During the speech task, tic frequencies were significantly lower during periods of higher heart rate. Results suggest that tic exacerbations may not be associated with heightened physiological arousal and highlight the need for further tic research using integrated measurement of behavioral and biological processes. PMID:24662238

  11. Is There a Relationship Between Tic Frequency and Physiological Arousal? Examination in a Sample of Children With Co-Occurring Tic and Anxiety Disorders.

    Science.gov (United States)

    Conelea, Christine A; Ramanujam, Krishnapriya; Walther, Michael R; Freeman, Jennifer B; Garcia, Abbe M

    2014-03-01

    Stress is the contextual variable most commonly implicated in tic exacerbations. However, research examining associations between tics, stressors, and the biological stress response has yielded mixed results. This study examined whether tics occur at a greater frequency during discrete periods of heightened physiological arousal. Children with co-occurring tic and anxiety disorders (n = 8) completed two stress-induction tasks (discussion of family conflict, public speech). Observational (tic frequencies) and physiological (heart rate [HR]) data were synchronized using The Observer XT, and tic frequencies were compared across periods of high and low HR. Tic frequencies across the entire experiment did not increase during periods of higher HR. During the speech task, tic frequencies were significantly lower during periods of higher HR. Results suggest that tic exacerbations may not be associated with heightened physiological arousal and highlight the need for further tic research using integrated measurement of behavioral and biological processes. © The Author(s) 2014.

  12. Fixed bin frequency distribution for the VNTR Loci D2S44, D4S139, D5S110, and D8S358 in a population sample from Minas Gerais, Brazil

    Directory of Open Access Journals (Sweden)

    Parreira Kleber Simônio

    2002-01-01

    Full Text Available Fixed bin frequencies for the VNTR loci D2S44, D4S139, D5S110, and D8S358 were determined in a population sample from Minas Gerais, Brazil. The data were generated by RFLP analysis of HaeIII-digested genomic DNA with chemiluminescent detection. All four VNTR loci met Hardy-Weinberg equilibrium expectations, and there was no association of alleles among the loci. The frequency data can be used in forensic analyses and paternity tests to estimate the frequency of a DNA profile in the general Brazilian population.
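
    The Hardy-Weinberg check behind reports like this one can be sketched with a chi-square comparison of observed genotype counts against HWE expectations. The example below uses hypothetical counts at a biallelic locus for clarity; real VNTR data are multi-allelic and analyzed over fixed bins, so this is a simplified illustration, not the study's procedure.

```python
# Minimal Hardy-Weinberg equilibrium check: compare observed genotype
# counts with HWE expectations via a chi-square statistic. Counts are
# hypothetical; real VNTR loci are multi-allelic and use fixed bins.

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square statistic for HWE at a biallelic locus (1 df)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # allele frequency of A
    q = 1.0 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = hwe_chi_square(30, 50, 20)   # hypothetical sample of 100 genotypes
print(f"chi-square = {chi2:.3f} (1 df; < 3.84 means no departure at the 5% level)")
```

    A small statistic, as here, means the genotype counts are consistent with random mating at the locus.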

  13. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  14. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the parametrized post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km/s for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km/s for carbon stars (the neutronization limit) and to 893 km/s for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  15. Securing maximum diversity of Non Pollen Palynomorphs in palynological samples

    DEFF Research Database (Denmark)

    Enevold, Renée; Odgaard, Bent Vad

    2015-01-01

    Palynology is no longer synonymous with analysis of pollen with the addition of a few fern spores. A wide range of Non Pollen Palynomorphs (NPPs) are now described and are potential palaeoenvironmental proxies in palynological surveys. The contribution of NPPs has proven important to the interpreta...

  16. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, such as different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
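
    The core idea of the record above, a shared fundamental with per-channel amplitudes and phases, can be illustrated with a toy grid search: for each candidate f0, fit a harmonic basis to each channel by least squares and sum the captured energy, which is the (white-noise) maximum likelihood criterion. This is a simplified sketch, not the paper's estimator; all signal parameters below are illustrative.

```python
import numpy as np

# Toy multi-channel ML pitch estimation: channels share one fundamental
# but have independent amplitudes/phases, so the white-noise ML cost is
# the summed projection energy onto a harmonic basis. Illustrative only.

def ml_pitch(channels, fs, f0_grid, n_harm=3):
    """Return the f0 on the grid maximizing summed harmonic projection energy."""
    t = np.arange(channels.shape[1]) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        # Harmonic (cos/sin) basis shared by all channels
        Z = np.concatenate([np.stack([np.cos(2 * np.pi * f0 * h * t),
                                      np.sin(2 * np.pi * f0 * h * t)])
                            for h in range(1, n_harm + 1)]).T
        score = 0.0
        for x in channels:                      # per-channel LS fit
            amps, *_ = np.linalg.lstsq(Z, x, rcond=None)
            score += float(np.sum((Z @ amps) ** 2))  # energy captured by the model
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

fs, f0_true = 8000.0, 210.0
t = np.arange(800) / fs
rng = np.random.default_rng(0)
ch1 = (np.sin(2 * np.pi * f0_true * t)
       + 0.3 * np.sin(2 * np.pi * 2 * f0_true * t)
       + 0.1 * rng.standard_normal(t.size))
ch2 = 0.5 * np.cos(2 * np.pi * f0_true * t + 0.7) + 0.1 * rng.standard_normal(t.size)
est = ml_pitch(np.stack([ch1, ch2]), fs, np.arange(100.0, 400.0, 1.0))
print(f"estimated f0 = {est} Hz")
```

    Summing the per-channel projection energies is what lets the channels have different amplitudes and phases while still pooling their evidence about the common fundamental.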

  17. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  18. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
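
    The "least bias" idea credited to Jaynes above has a classic worked example, his dice problem: among all distributions over faces 1-6 with a prescribed mean, maximum entropy picks p_k proportional to exp(-λk), with the Lagrange multiplier λ chosen to match the mean. The sketch below solves it by bisection; it is a generic illustration of the principle, not anything specific to drug discovery.

```python
import math

# Jaynes' dice: the maximum entropy distribution over faces 1..6 with a
# given mean is p_k ∝ exp(-λ k); λ is found by bisection on the mean.

def maxent_dice(target_mean, lo=-5.0, hi=5.0, iters=100):
    faces = range(1, 7)

    def mean_for(lam):
        w = [math.exp(-lam * k) for k in faces]
        z = sum(w)
        return sum(k * wk for k, wk in zip(faces, w)) / z

    for _ in range(iters):               # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:  # mean decreases as λ grows
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

p = maxent_dice(4.5)   # Jaynes' example: average roll of 4.5
print([round(pi, 3) for pi in p])
```

    The resulting distribution tilts smoothly toward the high faces, encoding the mean constraint and nothing else; that is exactly the "least bias" property the abstract refers to.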

  19. Effect of current on the maximum possible reward.

    Science.gov (United States)

    Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S

    1991-12-01

    Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.

  20. HLA-Cw Allele Frequency in Definite Meniere’s Disease Compared to Probable Meniere’s Disease and Healthy Controls in an Iranian Sample

    Directory of Open Access Journals (Sweden)

    Sasan Dabiri

    2016-05-01

    Full Text Available Introduction: Several lines of evidence support the contribution of autoimmune mechanisms to the pathogenesis of Meniere's disease. The aim of this study was to determine the association of HLA-Cw alleles with definite and probable Meniere's disease compared with a control group. Materials and Methods: HLA-Cw genotyping was performed in 23 patients with definite Meniere's disease, 24 with probable Meniere's disease, and 91 healthy normal subjects, using the sequence-specific primer polymerase chain reaction technique. Statistical analysis was performed using Stata 8 software. Results: There was a significant association of HLA-Cw*04 and HLA-Cw*16 with both definite and probable Meniere's disease compared with normal healthy controls. We observed a significant difference in HLA-Cw*12 frequency between patients with definite and patients with probable Meniere's disease (P=0.04). The frequency of HLA-Cw*18 was significantly higher in healthy controls (P=0.002). Conclusion: Our findings support a role of HLA-Cw alleles in both definite and probable Meniere's disease. In addition, the difference in HLA-Cw*12 frequency between definite and probable Meniere's disease in our study population might indicate distinct immune and inflammatory mechanisms involved in each condition.

  1. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  2. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  4. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer that stores, in an analog memory, the voltage corresponding to the maximum temperature is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr

  5. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
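
    Two properties highlighted above, maximizing the Poisson likelihood of the counts and keeping the unfolded spectrum positive at every step, are shared by the standard ML-EM iteration for Poisson data, sketched below on a toy response matrix. This is a generic illustration, not the authors' algorithm; the 3x3 response and true spectrum are made up.

```python
import numpy as np

# Standard ML-EM unfolding for Poisson-distributed counts: the
# multiplicative update keeps the spectrum positive and climbs the
# Poisson likelihood. R is a toy detector response matrix, not real data.

def mlem_unfold(R, counts, n_iter=1000):
    """ML-EM: phi <- phi / sum_j(R_j) * R^T (counts / (R phi))."""
    phi = np.ones(R.shape[1])              # positive initial guess
    sens = R.sum(axis=0)                   # detector sensitivity per energy bin
    for _ in range(n_iter):
        expected = R @ phi                 # forward-folded spectrum
        phi *= (R.T @ (counts / expected)) / sens
    return phi

# Toy 3-detector / 3-bin problem with a known true spectrum
R = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.1, 0.9]])
true_phi = np.array([100.0, 50.0, 25.0])
counts = R @ true_phi                      # noise-free "measured" counts
phi = mlem_unfold(R, counts)
print(np.round(phi, 2))
```

    With noise-free counts the iteration recovers the true spectrum; with real Poisson noise it converges to the maximum likelihood estimate while staying positive, which is the behavior the abstract emphasizes.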

  6. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  7. The role of eating frequency on total energy intake and diet quality in a low-income, racially diverse sample of schoolchildren.

    Science.gov (United States)

    Evans, E Whitney; Jacques, Paul F; Dallal, Gerard E; Sacheck, Jennifer; Must, Aviva

    2015-02-01

    The relationship of meal and snacking patterns with overall dietary intake and relative weight in children is unclear. The current study was done to examine how eating, snack and meal frequencies relate to total energy intake and diet quality. The cross-sectional associations of eating, meal and snack frequencies with total energy intake and diet quality, measured by the Healthy Eating Index 2005 (HEI-2005), were examined in separate multivariable mixed models. Differences were examined between elementary-school-age participants (9-11 years) and adolescents (12-15 years). Two non-consecutive 24 h diet recalls were collected from children attending four schools in the greater Boston area, MA, USA. One hundred and seventy-six schoolchildren, aged 9-15 years, participated. Overall, 82% of participants consumed three daily meals. Eating, meal and snack frequencies were statistically significantly and positively associated with total energy intake. Each additional reported meal and snack was associated with an 18·5% and a 9·4% increase in total energy intake, respectively. Associations with diet quality differed by age category. In elementary-school-age participants, total eating occasions and snacks increased HEI-2005 score. In adolescents, each additional meal increased HEI-2005 score by 5·40 points (P=0·01), whereas each additional snack decreased HEI-2005 score by 2·73 points (P=0·006). Findings suggest that snacking increases energy intake in schoolchildren. Snacking is associated with better diet quality in elementary-school-age children and lower diet quality in adolescents. Further research is needed to elucidate the role of snacking in excess weight gain in children and adolescents.

  8. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
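
    The observation scheme described above can be made concrete by simulation: an Ornstein-Uhlenbeck process is observed only through its integrals over sampling intervals, plus measurement error. The sketch below generates such data (estimation itself, via the simulated EM-algorithm with diffusion bridges, is beyond this snippet); all parameter values are illustrative.

```python
import numpy as np

# Data-generating setup for integrated-diffusion observations: an
# Ornstein-Uhlenbeck process dX = -theta (X - mu) dt + sigma dW, observed
# only through interval integrals plus measurement error. Values are
# illustrative, not from the paper.

rng = np.random.default_rng(1)
theta, mu, sigma = 2.0, 0.5, 0.3        # OU parameters (illustrative)
dt, n_fine, n_obs = 0.001, 100, 500     # fine grid; 100 fine steps per observation

x = mu                                   # start at the stationary mean
observations = []
for _ in range(n_obs):
    integral = 0.0
    for _ in range(n_fine):              # Euler-Maruyama on the fine grid
        integral += x * dt
        x += -theta * (x - mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    noise = 0.001 * rng.standard_normal()        # additive measurement error
    observations.append(integral + noise)

obs = np.array(observations)
# Each observation is approximately mu * (interval length) on average
print(f"mean observation = {obs.mean():.4f} (vs mu*Delta = {mu * n_fine * dt:.4f})")
```

    Note that the latent path x is discarded; only the noisy integrals survive, which is exactly the incomplete-data structure that motivates an EM-type estimator.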

  9. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or when pumps fail suddenly. Determining the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  10. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses, whose genomes have been sequenced, as well as plants, for which fossil-supported true phylogenetic trees are available. In this study, we generated single-gene trees of seven yeast species as well as single-gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  11. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the beam through the matching section and its injection into Linac-1 are discussed.

  12. Annealing effects on electrical and optical properties of ZnO thin-film samples deposited by radio frequency-magnetron sputtering on GaAs (001) substrates

    International Nuclear Information System (INIS)

    Liu, H. F.; Chua, S. J.; Hu, G. X.; Gong, H.; Xiang, N.

    2007-01-01

    The effects of thermal annealing on Hall-effect measurement and photoluminescence (PL) from undoped n-type ZnO/GaAs thin-film samples have been studied. The evolutions of carrier concentration, electrical resistivity, and PL spectrum at various annealing conditions reveal that the dominant mechanism that affects the electrical and PL properties is dependent on the amount of thermal energy and the ambient pressure applied during the annealing process. At low annealing temperatures, annihilation of native defects is dominant in reducing the carrier concentration and weakening the low-energy tail of the main PL peak, while the GaAs substrate plays only a minor role in carrier compensations. For the higher temperatures, diffusion of Ga atoms from the GaAs substrate into ZnO film leads to a more n-type conduction of the sample. As a result, the PL exhibits a high-energy tail due to the high-level doping

  13. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  14. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  15. Frequency, prevalence, incidence and risk factors associated with visual hallucinations in a sample of patients with Parkinson's disease: a longitudinal 4-year study.

    Science.gov (United States)

    Gibson, G; Mottram, P G; Burn, D J; Hindle, J V; Landau, S; Samuel, M; Hurt, C S; Brown, R G; M Wilson, K C

    2013-06-01

    To examine the prevalence, incidence and risk factors associated with visual hallucinations (VHs) amongst people suffering from Parkinson's disease (PD). We recruited 513 patients with PD from movement disorder and PD clinics within three sites in the UK. Patients were interviewed using a series of standardised clinical rating scales at baseline, 12, 24 and 36 months. Data relating to VHs were collected using the North-East Visual Hallucinations Interview. Prevalence rates for VHs at each assessment were recorded. Associations were determined using multiple regression analysis. Cross-sectional prevalence rates for VHs at baseline, 12, 24 and 36 months indicated VHs in approximately 50% of patients. A cumulative frequency of 82.7% of cases at the end of the study period exhibited VHs. The incidence rate for VHs was 457 cases per 1000 population. Longer disease duration, greater impairment in activities of daily living and higher rates of anxiety were most commonly associated with VHs. No factors predictive of VHs could be ascertained. When examined longitudinally, VHs affect more patients than is commonly assumed in cross-sectional prevalence studies. Clinicians should routinely screen for VHs throughout the disease course. Disease duration, impairment in activities of daily living and anxiety presented as co-morbidities associated with VHs in PD, and therefore those presenting with VHs should be screened for anxiety disorder and vice versa. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m^-2) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m^-1 K^-1). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm^-1 under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
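    The energy-balance argument can be checked with a back-of-the-envelope calculation: if a dry, poorly conducting surface must shed its absorbed shortwave flux by longwave emission alone, the Stefan-Boltzmann law gives the equilibrium temperature. This is a deliberately simplified sketch (unit emissivity; conduction, evaporation and sensible heat neglected), not the paper's full surface energy balance:

```python
# Equilibrium surface temperature when emitted longwave radiation balances
# the absorbed shortwave flux. Simplification: emissivity = 1, and all
# other heat-loss terms of the surface energy balance are neglected.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_surface_temp_c(absorbed_flux, emissivity=1.0):
    t_kelvin = (absorbed_flux / (emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

# 1000 W m^-2 absorbed flux gives roughly 91 degC, inside the quoted
# 90-100 degC range for dry, darkish soils
print(round(equilibrium_surface_temp_c(1000.0), 1))
```

With the neglected loss terms restored, the actual surface temperature would be somewhat lower, which is why the abstract treats 90°-100°C as an extreme upper bound.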

  17. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. Experiments have shown that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
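    The core of the time-arrival-difference method is picking the peak of the cross-correlation between the two sensor signals. A minimal pure-Python sketch of that step (without the maximum likelihood window itself, which would additionally weight the spectrum before correlating):

```python
import random

def cross_correlate(x, y):
    """Full linear cross-correlation of y against x (pure Python, O(n^2))."""
    n, m = len(x), len(y)
    out = []
    for lag in range(-(n - 1), m):        # lag of y relative to x, in samples
        s = 0.0
        for i, xi in enumerate(x):
            j = i + lag
            if 0 <= j < m:
                s += xi * y[j]
        out.append((lag, s))
    return out

def estimate_time_delay(x, y, fs):
    """Delay (seconds) of y relative to x from the cross-correlation peak."""
    lag, _ = max(cross_correlate(x, y), key=lambda p: p[1])
    return lag / fs

# synthetic leak-noise test: y is x delayed by 37 samples
random.seed(0)
fs = 1000.0
noise = [random.gauss(0, 1) for _ in range(400)]
d = 37
x = noise[:-d]
y = [0.0] * d + noise[:-d]
print(estimate_time_delay(x, y, fs))  # 0.037
```

Multiplying the cross-spectrum by a frequency-domain window before the inverse transform, as the abstract describes, sharpens this peak in the presence of band-limited leak noise.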

  18. Molecular Recognition of Human Papilloma Virus (HPV Using Proprietary PCR Method Based on L1 Gene and the Evaluation of its Frequency in Tissue Samples from Patients with Cervical Cancer

    Directory of Open Access Journals (Sweden)

    Roohollah Dorostkar

    2015-06-01

    Full Text Available Abstract Background: In 1970, human papillomavirus (HPV) was introduced as the main etiologic factor of cervical carcinoma. Since the virus and its subtypes cannot be detected using serological methods or cell culture, molecular methods such as PCR are particularly important for accurate, early and definite diagnosis of the virus. So, in this research, our goal was to use a proprietary PCR assay based on the L1 gene of human papillomavirus for molecular recognition of HPV and to evaluate its prevalence in patient samples. Materials and Methods: In this experimental study, after collection of samples from malignant cervical lesions, viral DNA was extracted from paraffin blocks of 50 clinical samples and PCR was performed with specific primers for the L1 gene of human papillomavirus in all samples. After the analysis of PCR products by 2% agarose gel electrophoresis, the sensitivity and specificity of the test were also evaluated. Results: Among the 50 patient samples, 33 cases were confirmed to be positive for HPV infection and 17 cases were negative, showing a high frequency of HPV in this patient population (about 66%). The results of the specificity assay were positive for papilloma samples, and the sensitivity of the assay was 20 copies of the recombinant construct containing L1 per reaction. Conclusion: This study showed that PCR with specific primers for the L1 gene of human papillomavirus is a proper and accurate method for detection of this virus, and the results confirm previous reports of a correlation between HPV and cervical carcinoma.

  19. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.

  20. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  1. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  2. Variable frequency iteration MPPT for resonant power converters

    Science.gov (United States)

    Zhang, Qian; Bataresh, Issa; Chen, Lin

    2015-06-30

    A method of maximum power point tracking (MPPT) uses an MPPT algorithm to determine a switching frequency for a resonant power converter, including initializing by setting an initial boundary frequency range that is divided into initial frequency sub-ranges bounded by initial frequencies including an initial center frequency and first and second initial bounding frequencies. A first iteration includes measuring initial powers at the initial frequencies to determine a maximum power initial frequency that is used to set a first reduced frequency search range centered or bounded by the maximum power initial frequency including at least a first additional bounding frequency. A second iteration includes calculating first and second center frequencies by averaging adjacent frequency values in the first reduced frequency search range and measuring second power values at the first and second center frequencies. The switching frequency is determined from measured power values including the second power values.
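    The iteration described above can be sketched as follows. `power_of` is a hypothetical stand-in for a converter output-power measurement, and the toy power curve (a peak at an assumed 100 kHz) exists only for illustration:

```python
def mppt_frequency_search(power_of, f_lo, f_hi, iterations=20):
    """Iteratively narrow a frequency window around the maximum-power
    switching frequency: measure power on a small frequency grid, keep the
    best point, then halve the window around it with new center frequencies
    obtained by averaging adjacent grid values."""
    freqs = [f_lo, (f_lo + f_hi) / 2.0, f_hi]
    for _ in range(iterations):
        best = max(freqs, key=power_of)
        i = freqs.index(best)
        lo = freqs[max(i - 1, 0)]
        hi = freqs[min(i + 1, len(freqs) - 1)]
        # reduced search range: neighbors of the best point, plus averages
        freqs = [lo, (lo + best) / 2.0, best, (best + hi) / 2.0, hi]
    return max(freqs, key=power_of)

def toy_power(f, peak=100e3, width=20e3):
    """Assumed unimodal power-vs-frequency curve for demonstration."""
    return 1.0 / (1.0 + ((f - peak) / width) ** 2)

best_f = mppt_frequency_search(toy_power, 50e3, 200e3)
print(round(best_f / 1e3, 1))  # ~100.0 (kHz)
```

Because each iteration halves the search window while keeping the peak bracketed (for a unimodal power curve), the window shrinks below 1 Hz after about 20 iterations starting from a 150 kHz span.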

  3. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  4. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  5. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
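    The maximum likelihood fit of a two-component normal mixture is conventionally computed with the EM algorithm. The sketch below is an illustrative stand-in on synthetic data (not the paper's economic series, whose fitting details are not given in the abstract):

```python
import math, random

def em_two_normal(data, iters=200):
    """EM for a two-component univariate normal mixture.
    Returns (weight of component 1, (mu1, sigma1), (mu2, sigma2))."""
    data = list(data)
    mu1, mu2 = min(data), max(data)          # crude initialization
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = pi1 * math.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            p2 = (1 - pi1) * math.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted updates of weight, means, standard deviations
        n1 = sum(r)
        n2 = len(data) - n1
        pi1 = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
    return pi1, (mu1, s1), (mu2, s2)

# synthetic mixture: half N(0, 1), half N(5, 1)
random.seed(1)
sample = [random.gauss(0, 1) for _ in range(300)] + \
         [random.gauss(5, 1) for _ in range(300)]
pi1, (m1, s1), (m2, s2) = em_two_normal(sample)
print(round(m1, 1), round(m2, 1))
```

Each EM iteration cannot decrease the likelihood, which is how the procedure realizes the maximum likelihood principle mentioned in the abstract.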

  6. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
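    The state-of-charge model behind such a calculation can be sketched as a simple recursion. The efficiency, capacity and toy price series below are assumptions for illustration, and the regulation-market term of the paper's model is omitted:

```python
def simulate_storage(prices, schedule, capacity, efficiency=0.9, soc0=0.0):
    """State-of-charge recursion and arbitrage revenue for a storage device.
    schedule[t] > 0 charges (buys energy), schedule[t] < 0 discharges
    (sells energy), both in MWh. A simplified sketch of the cash-flow model;
    an LP solver would search over `schedule` for the maximum revenue."""
    soc, revenue = soc0, 0.0
    for price, q in zip(prices, schedule):
        if q > 0:                    # charging: pay for purchased energy
            soc += q * efficiency    # round-trip losses charged on the way in
            revenue -= price * q
        else:                        # discharging: get paid for energy sold
            soc += q
            revenue -= price * q     # q < 0, so this adds income
        assert 0.0 <= soc <= capacity, "schedule violates SOC limits"
    return soc, revenue

prices = [20, 15, 60, 70]            # $/MWh, hypothetical price series
schedule = [5, 5, -4, -5]            # buy when cheap, sell when expensive
soc, rev = simulate_storage(prices, schedule, capacity=10)
print(soc, rev)                      # 0.0 415.0
```

The maximum-revenue problem is then the linear program of choosing `schedule` to maximize `rev` subject to the state-of-charge constraints that the `assert` checks here.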

  7. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. 
The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
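    On trees, the Fitch algorithm referenced above computes the optimum parsimony score for a single character under equal substitution costs. A minimal sketch of that tree case (the network extension with reticulate vertices described in the abstract is not shown):

```python
def fitch_parsimony(tree, leaf_states):
    """Fitch small-parsimony score of one character on a rooted binary tree.
    tree: nested 2-tuples with leaf-name strings at the tips;
    leaf_states: mapping from leaf name to character state."""
    def walk(node):
        if isinstance(node, str):                 # leaf: singleton state set
            return {leaf_states[node]}, 0
        (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
        inter = ls & rs
        if inter:
            return inter, lc + rc                 # children agree: no step
        return ls | rs, lc + rc + 1               # conflict: one substitution
    _, score = walk(tree)
    return score

# ((A,B),C) vs (D,E); one G->T substitution explains the leaf states
tree = ((("A", "B"), "C"), ("D", "E"))
states = {"A": "G", "B": "G", "C": "T", "D": "T", "E": "T"}
print(fitch_parsimony(tree, states))  # 1
```

The Sankoff variant generalizes this to arbitrary cost matrices, which is the form the abstract says extends naturally to networks once the conflicting assignments at reticulate vertices are handled.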

  8. Dual frequency echo data acquisition system for sea-floor classification

    Digital Repository Service at National Institute of Oceanography (India)

    Navelkar, G.S.; Desai, R.G.P.; Chakraborty, B.

    An echo data acquisition system is designed to digitize echo signal from a single beam shipboard echo-sounder for use in sea-floor classification studies using a 12 bit analog to digital (A/D) card with a maximum sampling frequency of 1 MHz. Both 33...

  9. Breakfast frequency among adolescents

    DEFF Research Database (Denmark)

    Pedersen, Trine Pagh; Holstein, Bjørn E; Damsgaard, Mogens Trab

    2016-01-01

    OBJECTIVE: To investigate (i) associations between adolescents' frequency of breakfast and family functioning (close relations to parents, quality of family communication and family support) and (ii) if any observed associations between breakfast frequency and family functioning vary...... (n 3054) from a random sample of forty-one schools. RESULTS: Nearly one-quarter of the adolescents had low breakfast frequency. Low breakfast frequency was associated with low family functioning measured by three dimensions. The OR (95 % CI) of low breakfast frequency was 1·81 (1·40, 2·33) for adolescents who reported no close relations to parents, 2·28 (1·61, 3·22) for adolescents who reported low level of quality of family communication and 2·09 (1·39, 3·15) for adolescents who reported low level of family support. Joint effect analyses suggested that the odds of low breakfast frequency among...

  10. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
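    Hershfield's frequency-factor method estimates PMP from the annual-maximum series as the mean plus K_m standard deviations. The sketch below uses the classical constant K_m = 15 and a hypothetical rainfall series; the revised method cited in the abstract instead derives a station-specific standardized factor from the maximum deviation in the sample:

```python
import statistics

def hershfield_pmp(annual_maxima, k_m=15.0):
    """Hershfield frequency-factor estimate of probable maximum precipitation:
    PMP = mean + K_m * (sample standard deviation) of the annual-maximum
    series. k_m = 15 is the classical upper-bound frequency factor."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    return mean + k_m * std

# hypothetical 24-h annual-maximum rainfall series (mm) for one station
series = [95, 120, 80, 150, 110, 130, 90, 105, 140, 100]
print(round(hershfield_pmp(series), 1))
```

Because the estimate is linear in the sample mean and standard deviation, its uncertainty for short records follows directly from the sampling distributions of those two statistics, which is the stable-data-size question the study quantifies.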

  11. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.

  12. Frequency standards

    CERN Document Server

    Riehle, Fritz

    2006-01-01

    Of all measurement units, frequency is the one that may be determined with the highest degree of accuracy. It equally allows precise measurements of other physical and technical quantities, whenever they can be measured in terms of frequency. This volume covers the central methods and techniques relevant for frequency standards developed in physics, electronics, quantum electronics, and statistics. After a review of the basic principles, the book looks at the realisation of commonly used components. It then continues with the description and characterisation of important frequency standards.

  13. A new algorithm for a high-modulation frequency and high-speed digital lock-in amplifier

    International Nuclear Information System (INIS)

    Jiang, G L; Yang, H; Li, R; Kong, P

    2016-01-01

    To increase the maximum modulation frequency of the digital lock-in amplifier in an online system, we propose a new algorithm using a square wave reference whose frequency is an odd sub-multiple of the modulation frequency, which is based on the odd harmonic components in the square wave reference. The sampling frequency is four times the modulation frequency to ensure the orthogonality of the reference sequences. Only additions and subtractions are used to implement phase-sensitive detection, which speeds up the computation in the lock-in. Furthermore, the maximum modulation frequency of a lock-in is enhanced considerably. The feasibility of this new algorithm is tested by simulation and experiments. (paper)
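    Sampling at four times the modulation frequency makes the square-wave reference sequences take only the values +1 and -1, so phase-sensitive detection reduces to additions and subtractions. The sketch below shows that basic idea, demodulating directly at the modulation frequency; the paper's algorithm additionally uses a reference at an odd sub-multiple of that frequency, which is not reproduced here:

```python
import math

def lockin_amplitude(samples):
    """Phase-sensitive detection with square-wave references at a sampling
    rate of four samples per modulation period. Because the references are
    +1/-1 sequences, demodulation needs only additions and subtractions."""
    i_ref = (1, 1, -1, -1)   # in-phase square-wave reference
    q_ref = (1, -1, -1, 1)   # quadrature reference (90 degrees shifted)
    acc_i = acc_q = 0.0
    for n, s in enumerate(samples):
        acc_i = acc_i + s if i_ref[n % 4] > 0 else acc_i - s
        acc_q = acc_q + s if q_ref[n % 4] > 0 else acc_q - s
    # for a sine input, |(I, Q)| accumulates (sqrt(2)/2) * amplitude
    # per sample, hence the scale factor below
    return math.sqrt(2.0) * math.hypot(acc_i, acc_q) / len(samples)

# simulated input: 0.5 V sine at f_mod with an arbitrary phase offset
fs, f_mod, amp, phase = 4000.0, 1000.0, 0.5, 0.3
sig = [amp * math.sin(2 * math.pi * f_mod * n / fs + phase) for n in range(400)]
print(round(lockin_amplitude(sig), 6))  # 0.5
```

Using `math.hypot` on the in-phase and quadrature accumulators makes the recovered amplitude independent of the unknown signal phase.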

  14. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  15. Frequency Synthesiser

    NARCIS (Netherlands)

    Drago, Salvatore; Sebastiano, Fabio; Leenaerts, Dominicus M.W.; Breems, Lucien J.; Nauta, Bram

    2016-01-01

    A low power frequency synthesiser circuit (30) for a radio transceiver, the synthesiser circuit comprising: a digital controlled oscillator configured to generate an output signal having a frequency controlled by an input digital control word (DCW); a feedback loop connected between an output and an

  16. Frequency synthesiser

    NARCIS (Netherlands)

    Drago, S.; Sebastiano, Fabio; Leenaerts, Dominicus Martinus Wilhelmus; Breems, Lucien Johannes; Nauta, Bram

    2010-01-01

    A low power frequency synthesiser circuit (30) for a radio transceiver, the synthesiser circuit comprising: a digital controlled oscillator configured to generate an output signal having a frequency controlled by an input digital control word (DCW); a feedback loop connected between an output and an

  17. Investigation on maximum transition temperature of phonon mediated superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)

    1989-05-01

    Three model effective phonon spectra are proposed to obtain plots of T_c versus omega and lambda versus omega. It can be concluded that there is no maximum limit of T_c in phonon mediated superconductivity for reasonable values of lambda. The importance of the high frequency LO phonon is also emphasized. Some discussions on high T_c are given.

  18. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]exp[-((1/lambda)+(1/theta))t] with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
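    Because the maximum likelihood estimate of an exponential mean is the sample mean, the plug-in estimator of A(t) follows directly from the formula above. A sketch with hypothetical failure/repair data (lambda and theta here are the mean time-to-failure and mean time-to-repair, matching the abstract's parameterization):

```python
import math

def availability_mle(failure_times, repair_times, t):
    """Plug-in maximum likelihood estimate of instantaneous availability
    A(t) = lam/(lam+th) + [th/(lam+th)] * exp(-((1/lam)+(1/th)) * t),
    where lam and th, the exponential means, are estimated by the
    sample means of the observed failure and repair times."""
    lam = sum(failure_times) / len(failure_times)   # MLE of mean TTF
    th = sum(repair_times) / len(repair_times)      # MLE of mean TTR
    steady = lam / (lam + th)
    return steady + (th / (lam + th)) * math.exp(-(1 / lam + 1 / th) * t)

x = [90.0, 110.0, 100.0, 120.0, 80.0]   # hypothetical hours to failure
y = [8.0, 12.0, 10.0, 9.0, 11.0]        # hypothetical hours to repair
print(round(availability_mle(x, y, 0.0), 3))   # A(0) = 1.0
print(round(availability_mle(x, y, 1e9), 3))   # steady state, lam/(lam+th)
```

At t = 0 the plant is operational with certainty, and as t grows the estimate decays to the steady-state availability lambda/(lambda+theta), here 100/110 ≈ 0.909.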

  19. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.

  20. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
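The same maximization idea can be sketched numerically for a hypothetical single-diode panel model; at the maximum-power voltage, dP/dV = 0. All constants below are assumptions for illustration, not values from the article:

```python
import numpy as np

# Hypothetical single-diode panel model (illustrative constants):
# I(V) = I_sc - I_0 * (exp(V / V_t) - 1)
I_SC, I_0, V_T = 5.0, 5e-9, 1.0

def power(v):
    """Electrical power P(V) = V * I(V) for the assumed I-V curve."""
    return v * (I_SC - I_0 * np.expm1(v / V_T))

# Locate the maximum of P(V) on a fine grid; dP/dV vanishes at that point
v = np.linspace(0.0, 25.0, 100001)
p = power(v)
v_mp = v[np.argmax(p)]     # voltage of maximum power
i_mp = power(v_mp) / v_mp  # current of maximum power
p_max = float(p.max())     # maximum power
```

Repeating this for each time of day (i.e. for each measured I-V curve) reproduces the plots described in the abstract.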

  1. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
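McGarr's (2014) deterministic bound can be made concrete: the upper limit on seismic moment is the product of shear modulus and net injected fluid volume, converted to moment magnitude with the standard Hanks-Kanamori relation. A sketch with illustrative numbers (the modulus and volume are assumptions, not data from the abstract):

```python
import math

def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    """Deterministic limit of McGarr (2014): seismic moment M0 <= G * dV,
    converted to moment magnitude via Mw = (2/3) * (log10(M0) - 9.1)."""
    m0_max = shear_modulus_pa * injected_volume_m3  # maximum seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# Illustrative numbers: G = 30 GPa, net injected volume 10,000 m^3
mw = mcgarr_max_magnitude(30e9, 1e4)  # → about Mw 3.6
```

This is why the McGarr approach suggests capping the net injected volume as a risk-management lever, in contrast to the monitoring-based approaches of Shapiro et al. and van der Elst et al.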

  2. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  3. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)
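The operating characteristic curve and the two risks listed above can be computed directly from the binomial distribution for a single sampling plan by attributes. A minimal sketch (the plan n = 80, c = 2 and the quality levels are an assumed example, not taken from ISO 2859):

```python
from math import comb

def prob_accept(p, n, c):
    """Operating characteristic curve point: the probability of accepting a lot
    with fraction defective p, for sample size n and acceptance number c."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Assumed single sampling plan: n = 80, c = 2
producers_risk = 1 - prob_accept(0.01, 80, 2)  # alpha: rejecting a lot at 1% defective
consumers_risk = prob_accept(0.10, 80, 2)      # beta: accepting a lot at 10% defective
```

Sweeping p from 0 to, say, 0.15 traces out the full OC curve of the plan.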

  4. High frequency mesozooplankton monitoring: Can imaging systems and automated sample analysis help us describe and interpret changes in zooplankton community composition and size structure — An example from a coastal site

    Science.gov (United States)

    Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars

    2016-10-01

    The present work aims to show that high throughput imaging systems can be useful to estimate mesozooplankton community size and taxonomic descriptors that can be the basis for consistent large scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol has been applied to a high frequency time series (every second day between April 2003 and April 2004 on average) of zooplankton obtained in a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred and eighty-four net-collected mesozooplankton samples were analysed with a Zooscan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification with more than 10,000 objects enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Also, total copepod size spectra underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high throughput imaging systems is of great interest to extract relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative

  5. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  6. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  7. Frequency spirals

    International Nuclear Information System (INIS)

    Ottino-Löffler, Bertrand; Strogatz, Steven H.

    2016-01-01

    We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call “frequency spirals.” These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.

  8. Frequency spirals

    Energy Technology Data Exchange (ETDEWEB)

    Ottino-Löffler, Bertrand; Strogatz, Steven H., E-mail: strogatz@cornell.edu [Center for Applied Mathematics, Cornell University, Ithaca, New York 14853 (United States)

    2016-09-15

    We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call “frequency spirals.” These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.

  9. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  10. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  11. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula can be used to evaluate accurately the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. Therefore, it can guide us to select appropriate parameters to generate Airy beams with a long nondiffracting propagation distance, which have potential application in the fields of laser weapons or optical communications.

  12. A generic statistical methodology to predict the maximum pit depth of a localized corrosion process

    International Nuclear Information System (INIS)

    Jarrah, A.; Bigerelle, M.; Guillemot, G.; Najjar, D.; Iost, A.; Nianga, J.-M.

    2011-01-01

    Highlights: → We propose a methodology to predict the maximum pit depth in a corrosion process. → The Generalized Lambda Distribution and the Computer Based Bootstrap Method are combined. → The GLD fits a large variety of distributions both in their central and tail regions. → The minimum thickness preventing perforation can be estimated with a safety margin. → Considering its applications, this new approach can help to size industrial pieces. - Abstract: This paper outlines a new methodology to predict accurately the maximum pit depth related to a localized corrosion process. It combines two statistical methods: the Generalized Lambda Distribution (GLD), to determine a model of distribution fitting with the experimental frequency distribution of depths, and the Computer Based Bootstrap Method (CBBM), to generate simulated distributions equivalent to the experimental one. In comparison with conventionally established statistical methods that are restricted to the use of inferred distributions constrained by specific mathematical assumptions, the major advantage of the methodology presented in this paper is that both the GLD and the CBBM enable a statistical treatment of the experimental data without making any preconceived choice either on the unknown theoretical parent distribution underlying the pit depths, which characterizes the global corrosion phenomenon, or on the unknown associated theoretical extreme value distribution which characterizes the deepest pits. Considering an experimental distribution of depths of pits produced on an aluminium sample, estimations of the maximum pit depth using a GLD model are compared to similar estimations based on the usual Gumbel and Generalized Extreme Value (GEV) methods proposed in the corrosion engineering literature. The GLD approach is shown to have smaller bias and dispersion in the estimation of the maximum pit depth than the Gumbel approach, both for its realization and its mean. This leads to comparing the GLD approach to the GEV one
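The bootstrap side of the methodology (the CBBM) can be sketched without the full GLD fit: resample the measured depths with replacement and record the maximum of each replicate, yielding an empirical distribution of the deepest pit. The depth values below are invented for illustration:

```python
import numpy as np

def bootstrap_max_depth(depths, n_boot=2000, seed=0):
    """CBBM-style sketch: generate bootstrap replicates of the measured pit
    depths and collect the maximum of each, giving an empirical distribution
    for the deepest pit (no GLD fit performed here)."""
    rng = np.random.default_rng(seed)
    depths = np.asarray(depths, dtype=float)
    maxima = np.array([rng.choice(depths, size=depths.size).max()
                       for _ in range(n_boot)])
    return maxima.mean(), float(np.percentile(maxima, 95))

# Invented pit depths (micrometres) on an aluminium sample
depths = [12.1, 15.4, 9.8, 22.3, 18.0, 14.2, 25.7, 11.5, 19.9, 16.3]
mean_max, p95_max = bootstrap_max_depth(depths)
```

A high quantile of the bootstrap maxima is the kind of quantity one would compare against a GLD- or Gumbel-based extreme-value estimate.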

  13. Influence of modulation frequency in rubidium cell frequency standards

    Science.gov (United States)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.

  14. Maximum entropy models of ecosystem functioning

    International Nuclear Information System (INIS)

    Bertram, Jason

    2014-01-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example

  15. Maximum entropy models of ecosystem functioning

    Energy Technology Data Exchange (ETDEWEB)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  16. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Science.gov (United States)

    Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.

    2016-03-01

    Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG
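The perturb and observe algorithm evaluated above can be sketched as a simple hill-climb; the perturbation step dv and the number of updates per second (the MPPT sampling frequency) are exactly the compromise the abstract mentions. The power curve below is a toy stand-in for the emulated TEG, not measured data:

```python
def perturb_and_observe(measure_power, v_start, dv, steps):
    """Standard P&O hill-climb: keep perturbing the operating point in the
    direction that last increased the measured power. Larger dv tracks fast
    temperature gradients better but increases steady-state ripple."""
    v, direction = v_start, 1.0
    p_prev = measure_power(v)
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy power curve standing in for the emulated TEG, peak at v = 6.0
curve = lambda v: 36.0 - (v - 6.0) ** 2
v_mpp = perturb_and_observe(curve, v_start=2.0, dv=0.1, steps=200)
```

Once converged, the operating point oscillates within about one step dv of the true maximum power point, which is the steady-state ripple referred to above.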

  17. Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech

    2015-01-01

    to overcome the above mentioned issues. The high speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sinusoidal tone of known frequency and phase. The proposed algorithm utilizes vibration signals...

  18. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  19. Frequency dependence of p-mode frequency shifts induced by magnetic activity in Kepler solar-like stars

    Science.gov (United States)

    Salabert, D.; Régulo, C.; Pérez Hernández, F.; García, R. A.

    2018-04-01

    The variations of the frequencies of the low-degree acoustic oscillations in the Sun induced by magnetic activity show a dependence on radial order. The frequency shifts are observed to increase towards higher-order modes to reach a maximum of about 0.8 μHz over the 11-yr solar cycle. A comparable frequency dependence is also measured in two other main sequence solar-like stars, the F-star HD 49933, and the young 1 Gyr-old solar analog KIC 10644253, although with different amplitudes of the shifts of about 2 μHz and 0.5 μHz, respectively. Our objective here is to extend this analysis to stars with different masses, metallicities, and evolutionary stages. From an initial set of 87 Kepler solar-like oscillating stars with known individual p-mode frequencies, we identify five stars showing frequency shifts that can be considered reliable using selection criteria based on Monte Carlo simulations and on the photospheric magnetic activity proxy Sph. The frequency dependence of the frequency shifts of four of these stars could be measured for the l = 0 and l = 1 modes individually. Given the quality of the data, the results could indicate that a physical source of perturbation different from that in the Sun is dominating in this sample of solar-like stars.

  20. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  1. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  2. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables

  3. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view; the theory of the maximum principle applied here is well suited to this application. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  4. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions which are applied to resolve the motion redundancy

  5. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  6. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  7. Maximum Available Accuracy of FM-CW Radars

    Directory of Open Access Journals (Sweden)

    V. Ricny

    2009-12-01

    Full Text Available This article deals with the principles and, above all, with an analysis of the maximum available measuring accuracy of FM-CW (Frequency Modulated Continuous Wave) radars, which are usually employed for distance and velocity measurements of moving objects in road traffic, as well as in air traffic and other applications. These radars often form an important part of the active safety equipment of high-end cars, the so-called anticollision systems. They usually work in the millimetre-wave frequency bands (24, 35, 77 GHz). The functional principles and an analysis of the factors that dominantly influence the distance measurement accuracy of this equipment, especially in the modulation and demodulation parts, are presented in the paper.
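
The basic FM-CW relation behind such distance measurements can be sketched numerically. This is an illustrative example, not code from the article; the sweep parameters (a 200 MHz sweep over 1 ms, typical of automotive radar) are assumed values.

```python
# Hedged sketch: range estimate from the beat frequency of an FM-CW radar,
# using R = c * f_beat * T_sweep / (2 * B). The numbers are illustrative
# assumptions, not values from the article.

C = 299_792_458.0  # speed of light [m/s]

def fmcw_range(f_beat_hz: float, sweep_bw_hz: float, sweep_time_s: float) -> float:
    """Range of a stationary target from the measured beat frequency."""
    return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

def range_resolution(sweep_bw_hz: float) -> float:
    """Best achievable range resolution, set by the sweep bandwidth alone."""
    return C / (2.0 * sweep_bw_hz)

if __name__ == "__main__":
    B, T = 200e6, 1e-3                # 200 MHz sweep over 1 ms (assumed)
    print(fmcw_range(100e3, B, T))    # a 100 kHz beat -> roughly 75 m
    print(range_resolution(B))        # roughly 0.75 m
```

A beat frequency of 100 kHz then corresponds to roughly 75 m of range, and improving the resolution requires widening the sweep bandwidth, which is one reason these radars occupy the wide millimetre-wave bands.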

  8. EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON ...

    African Journals Online (AJOL)

    EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON OUTPUT OF ... the use of Ordinary Least Square (OLS) estimation technique was used in analyzing ... frequency of cutting that would produce maximum output of the vegetable as ...

  9. Acoustic Imaging Frequency Dynamics of Ferroelectric Domains by Atomic Force Microscopy

    International Nuclear Information System (INIS)

    Kun-Yu, Zhao; Hua-Rong, Zeng; Hong-Zhang, Song; Sen-Xing, Hui; Guo-Rong, Li; Qing-Rui, Yin; Shimamura, Kiyoshi; Kannan, Chinna Venkadasamy; Villora, Encarnacion Antonia Garcia; Takekawa, Shunji; Kitamura, Kenji

    2008-01-01

    We report the acoustic imaging frequency dynamics of ferroelectric domains by low-frequency acoustic probe microscopy based on a commercial atomic force microscope (AFM). It is found that ferroelectric domains can be visualized by AFM-based acoustic microscopy at frequencies down to 0.5 kHz. The frequency-dependent acoustic signal revealed a strong acoustic response in the frequency range from 7 kHz to 10 kHz, reaching a maximum at 8.1 kHz. The acoustic contrast mechanism can be ascribed to the different elastic responses of the ferroelectric microstructures to local elastic stress fields, which are induced by the acoustic wave transmitting through the sample while the piezoelectric transducer vibrates and excites the acoustic wave under ac electric fields due to the normal piezoelectric effect. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  10. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples; Contribution a l'etude des niveaux limites relatifs a des rejets d'effluents radioactifs liquides dans les eaux continentales et oceaniques. Traitement de quelques exemples types

    Energy Technology Data Exchange (ETDEWEB)

    Bittel, R; Mancel, J [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire

    1968-10-01

    The most important carriers of radioactive contamination to man are foodstuffs as a whole, not merely ingested water or inhaled air. That is why, in keeping with the spirit of the recent ICRP recommendations, it is proposed to substitute the notion of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels for the two primary carriers of contamination (continental and sea waters). The notions of critical diet, critical radioelement and waste disposal formulae are considered in the same spirit, attaching the greatest possible importance to local situations. (authors)

  11. High frequency energy measurements

    International Nuclear Information System (INIS)

    Stotlar, S.C.

    1981-01-01

    High-frequency (> 100 MHz) energy measurements present special problems to the experimenter. The environment or the available electronics often limit the applicability of a given detector type. The physical properties of many detectors are frequency dependent, and in some cases the physical effect employed can itself be frequency dependent. State-of-the-art measurements generally involve a detection scheme in association with high-speed electronics and a method of data recording. Events can be single-shot or repetitive, requiring real-time, sampling, or digitizing data recording. Potential modification of the pulse by the detector and the associated electronics should not be overlooked. This presentation will review typical applications, methods of choosing a detector, and high-speed detectors. Special considerations and limitations of some applications and devices will be described

  12. Automatic frequency control system for driving a linear accelerator

    International Nuclear Information System (INIS)

    Helgesson, A.L.

    1976-01-01

    An automatic frequency control system is described for maintaining the drive frequency applied to a linear accelerator to produce maximum particle output from the accelerator. The particle output amplitude is measured and the frequency of the radio frequency source powering the linear accelerator is adjusted to maximize particle output amplitude
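
The control loop described above amounts to a maximum-seeking (perturb-and-observe) adjustment: nudge the drive frequency, watch the particle output, and reverse when it falls. A minimal sketch, assuming a made-up resonance at 2856 MHz and an idealized amplitude readback; neither is from the patent record.

```python
# Illustrative perturb-and-observe loop (not the patented circuit): step the
# RF drive frequency in the direction that increases the measured particle
# output amplitude, reversing and shrinking the step when the output falls.

def output_amplitude(freq_mhz: float) -> float:
    """Stand-in for the measured beam amplitude: peaked at 2856 MHz (assumed)."""
    return 1.0 / (1.0 + (freq_mhz - 2856.0) ** 2)

def track_peak(freq_mhz: float, step: float = 0.5, iters: int = 200) -> float:
    """Climb toward the frequency that maximizes the output amplitude."""
    direction = 1.0
    last = output_amplitude(freq_mhz)
    for _ in range(iters):
        freq_mhz += direction * step
        now = output_amplitude(freq_mhz)
        if now < last:            # output fell: reverse and shrink the step
            direction = -direction
            step *= 0.5
        last = now
    return freq_mhz

if __name__ == "__main__":
    print(track_peak(2850.0))     # converges near the assumed 2856 MHz peak
```

The shrinking step makes the loop settle at the peak instead of oscillating across it, which is the behavior an analog AFC dither loop approximates in hardware.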

  13. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.

  14. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners

  15. Frequency-Modulation Correlation Spectrometer

    Science.gov (United States)

    Margolis, J. S.; Martonchik, J. V.

    1985-01-01

    New type of correlation spectrometer eliminates need to shift between two cells, one empty and one containing reference gas. An electrooptical phase modulator sinusoidally shifts the frequencies of the sample transmission spectrum.

  16. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  17. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
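
As a concrete illustration of one of the algorithms named in the excerpt, here is a minimal First-Fit-Increasing sketch for the maximum resource variant, where the aim is to use up as many bins as possible; the item sizes and unit capacity are made up for the example.

```python
# Hedged sketch of First-Fit-Increasing (FFI): consider items in increasing
# size order and place each in the first bin where it fits, opening a new
# bin otherwise. In the maximum resource variant, using MORE bins is better.

def first_fit_increasing(items, capacity=1.0):
    """Return the bins (lists of item sizes) produced by FFI."""
    bins = []
    for size in sorted(items):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([size])
    return bins

if __name__ == "__main__":
    packing = first_fit_increasing([0.6, 0.3, 0.3, 0.5, 0.2])
    print(len(packing), packing)   # small items first, so 3 bins get used
```

Placing the small items first tends to spread load thinly across bins before the large items arrive, which is counterproductive for classical bin packing but desirable when the objective is to consume bins.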

  18. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs

  19. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  20. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  1. A Frequency Matching Method: Solving Inverse Problems by Use of Geologically Realistic Prior Information

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Cordua, Knud Skou

    2012-01-01

    The frequency matching method defines a closed form expression for a complex prior that quantifies the higher order statistics of a proposed solution model to an inverse problem. While existing solution methods for inverse problems are capable of sampling the solution space while taking into account arbitrarily complex a priori information defined by sampling algorithms, it is not possible to directly compute the maximum a posteriori model, as the prior probability of a solution model cannot be expressed. We demonstrate how the frequency matching method enables us to compute the maximum a posteriori solution model to an inverse problem by using a priori information based on multiple point statistics learned from training images. We demonstrate the applicability of the suggested method on a synthetic tomographic crosshole inverse problem.

  2. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences with a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...
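
The two schemes compared in the record can be sketched on a simple list of per-packet delays. This is an illustrative reconstruction under assumed parameters (a synthetic periodic trace, a period of 10, short trains), not the paper's measurement setup.

```python
# Hedged sketch of the two sampling schemes on a delay trace:
# - systematic sampling: every period-th packet after a random start offset;
# - cluster sampling: repeated "trains" of consecutive packets whose
#   starting points are chosen at random.

import random

def systematic_sample(delays, period, start=None):
    """Every period-th delay, beginning at a (possibly random) start offset."""
    if start is None:
        start = random.randrange(period)
    return delays[start::period]

def cluster_sample(delays, train_len, n_trains, seed=0):
    """Trains of consecutive samples taken at random starting points."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_trains):
        start = rng.randrange(len(delays) - train_len)
        out.extend(delays[start:start + train_len])
    return out

if __name__ == "__main__":
    trace = [1.0 + 0.1 * (i % 7) for i in range(1000)]   # synthetic delays [ms]
    s = systematic_sample(trace, 10, start=0)
    print(sum(s) / len(s))        # estimate of the mean delay
```

Systematic sampling keeps the inter-sample spacing constant, which is what makes the time- and frequency-domain analyses mentioned in the abstract possible; cluster sampling trades that property for cheaper burst capture.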

  3. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
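
To make the distinction concrete, here is a sketch (not from the article) of drawing an inherently positive quantity from a one-dimensional truncated normal distribution by plain rejection sampling; the mean and standard deviation are illustrative assumptions.

```python
# Hedged sketch: sample N(mu, sigma^2) conditioned on positivity by
# rejection, i.e. a one-dimensional truncated normal. Parameters are
# made up; the article's point concerns the multivariate case, where
# locating the pre-truncation parameters is the hard part.

import random

def truncated_normal_positive(mu, sigma, rng):
    """Draw from N(mu, sigma^2) conditioned on being > 0 (rejection)."""
    while True:
        x = rng.gauss(mu, sigma)
        if x > 0.0:
            return x

if __name__ == "__main__":
    rng = random.Random(42)
    draws = [truncated_normal_positive(2.0, 1.0, rng) for _ in range(10_000)]
    print(min(draws) > 0.0)          # every draw is inherently positive
    print(sum(draws) / len(draws))   # slightly above the pre-truncation mean
```

Note that truncation shifts the sample mean slightly above the pre-truncation mean, which is exactly why the truncated distribution's location and scale cannot simply be set equal to the target mean and covariance.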

  4. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then minimising the cost function maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  5. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  6. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  7. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  8. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  9. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  10. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  11. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  12. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  13. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  14. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density that is reachable in a finite time.

  15. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (the z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
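
The pole-to-peak correspondence can be checked numerically. The following is an illustrative example, not code from the paper: an AR(2) model with an assumed conjugate pole pair at radius 0.95 and normalized frequency 0.2, whose ME/AR spectrum on the unit circle peaks at essentially that frequency.

```python
# Hedged sketch: evaluate the ME/AR power spectrum S(f) = sigma^2 / |A(z)|^2
# with A(z) = 1 - a1*z^-1 - a2*z^-2 on the unit circle z = exp(i*2*pi*f),
# for an AR(2) process whose pole pair (radius r, angle 2*pi*f0) is assumed.

import cmath, math

def ar2_coeffs(r, f0):
    """AR(2) coefficients for a conjugate pole pair r*exp(+/- i*2*pi*f0)."""
    return 2.0 * r * math.cos(2.0 * math.pi * f0), -r * r

def ar_spectrum(a, f, sigma2=1.0):
    """Maximum-entropy/AR power spectrum at normalized frequency f."""
    z = cmath.exp(-2j * math.pi * f)
    denom = 1.0 - sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    return sigma2 / abs(denom) ** 2

if __name__ == "__main__":
    a = ar2_coeffs(r=0.95, f0=0.2)
    freqs = [i / 1000.0 for i in range(501)]     # 0 .. 0.5 (Nyquist)
    peak = max(freqs, key=lambda f: ar_spectrum(a, f))
    print(peak)    # very close to the assumed pole angle f0 = 0.2
```

Moving the pole radius toward the unit circle sharpens the peak without moving it much, illustrating the abstract's point that the pole configuration, not the spectral height, carries the frequency information.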

  16. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark, but the real normal bladder capacity of children is unknown. Our aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. This was an epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years attending Primary Care centres, with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum 24-hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive-steps method was used in a multivariate regression model. Twelve articles fulfilled the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors, are different, and should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as the main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  17. The Time Analysis and Frequency Distribution of Caesium-137 Fall-Out in Muscle Samples; Analyse en Fonction du Temps et Distribution des Frequences de Cesium 137 du aux Retombees de Cesium 137 Contenues dans des Echantillons de Tissu Musculaire; Analisis Temporal y Distribucion de Frecuencias del Cesio-137 Procedente de la Precipitacion Radiactiva en Muestras de Tejido Muscular

    Energy Technology Data Exchange (ETDEWEB)

    Ellett, W. H.; Brownell, G. L [Physics Research Laboratory, Massachusetts General Hospital, Boston 14, MA (United States)

    1964-11-15

    For low concentrations of artificial radioactivity in the body, a detrimental effect will be most likely in that fraction of the population having many times the average amount. A meaningful evaluation of the nuclear fall-out hazard can only be made if the frequency distribution of radioactivity in the population is known. Attempts to determine the shape of the distribution curve from Kulp's data on strontium-90 concentration in children's bones have met with limited success because of the small sample size and the lack of strontium-90-calcium equilibrium in bone. To overcome these limitations, we have measured the caesium-137 content of approximately 900 muscle samples. These tissue samples were removed during post-mortem operations from January 1959 to August 1963. The use of caesium-137 as a fission product monitor assures that all members of the group, regardless of their age, were essentially in equilibrium with the radioactive environment at the time of death. The period of investigation coincides with the first weapon test moratorium and the resumption of large-scale testing in the fall of 1961. Average caesium-137 in the samples was relatively constant throughout 1959, decreased by a factor of two during 1960, and remained relatively stable until the early summer of 1962. Since mid-1962 the average level of caesium-137 radioactivity in the sample population has steadily increased and by the summer of 1963 was four times greater than the 1962 minimum. Time-independent histograms of the data have been assembled by fitting a polynomial to the raw data (sample radioactivity as a function of date of death). The pooled data have been tested statistically against normal, log-normal, and gamma frequency distributions. Results indicate that the experimental distribution is definitely not Gaussian and is best fitted by a gamma distribution. By using the empirically derived gamma distribution it is possible to predict the fraction of the population having N times the average
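
A simple way to fit a gamma distribution to a sample, as the record describes doing for the pooled Cs-137 data, is the method of moments. The sketch below is illustrative only: it uses synthetic data, not the paper's measurements, and the paper's own fitting and testing procedure is not specified here.

```python
# Hedged sketch: method-of-moments fit of a gamma distribution
# (shape k = mean^2 / variance, scale theta = variance / mean),
# demonstrated on synthetic gamma-distributed data.

import random

def gamma_moments_fit(xs):
    """Return (shape, scale) of a gamma distribution matched to the sample."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean * mean / var, var / mean

if __name__ == "__main__":
    rng = random.Random(1)
    sample = [rng.gammavariate(3.0, 2.0) for _ in range(20_000)]
    k, theta = gamma_moments_fit(sample)
    print(k, theta)   # recovers roughly (3.0, 2.0)
```

A fitted shape parameter well below the large-k regime (where the gamma approaches a Gaussian) is consistent with the paper's finding that the observed distribution is definitely not Gaussian.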

  18. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio [...] is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error
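    A minimal sketch of the ML idea for a framed-slotted ALOHA read round, under the simplifying assumption of error-free detection (the paper's detection-error model is elided in this record); the frame size, observed counts, and search bound are all illustrative:

```python
import math

def slot_probs(n, L):
    """Per-slot outcome probabilities for n tags in a frame of L slots
    (independent-slot approximation)."""
    p0 = (1 - 1 / L) ** n                       # empty slot
    p1 = n * (1 / L) * (1 - 1 / L) ** (n - 1)   # exactly one tag
    return p0, p1, 1 - p0 - p1                  # collision

def ml_tag_estimate(c_empty, c_single, c_coll, L, n_max=1000):
    """Grid-search ML estimate of the tag count from the observed
    empty/singleton/collision slot counts of one reader session."""
    best_n, best_ll = 1, -math.inf
    for n in range(1, n_max + 1):
        p0, p1, pc = slot_probs(n, L)
        if min(p0, p1, pc) <= 0:
            continue  # outcome impossible under this n
        ll = (c_empty * math.log(p0) + c_single * math.log(p1)
              + c_coll * math.log(pc))
        if ll > best_ll:
            best_n, best_ll = n, ll
    return best_n

# Counts roughly matching n = 50 tags in a frame of L = 64 slots.
estimate = ml_tag_estimate(29, 23, 12, L=64)
```

With several independent sessions, the per-session log-likelihoods would simply be summed before the search.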

  19. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  20. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT)

  1. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  2. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
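    The single-constraint argument can be summarized as follows (my reconstruction of the standard Lagrange-multiplier step, not a quotation from the paper):

```latex
Maximize the Shannon entropy $S = -\sum_k p_k \ln p_k$ subject to
normalization and a fixed mean logarithm $\langle \ln k \rangle = \mu$.
The Lagrangian
\[
\mathcal{L} = -\sum_k p_k \ln p_k
  - \lambda_0\Big(\sum_k p_k - 1\Big)
  - \lambda_1\Big(\sum_k p_k \ln k - \mu\Big)
\]
gives $\partial \mathcal{L}/\partial p_k
  = -\ln p_k - 1 - \lambda_0 - \lambda_1 \ln k = 0$, hence
\[
p_k \propto e^{-\lambda_1 \ln k} = k^{-\lambda_1},
\]
i.e. a pure power law whose exponent is fixed by the value of $\mu$.
```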

  3. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  4. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
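    As a rough illustration (a plain likelihood comparison above a high threshold, not the authors' maximum entropy test), candidate tail models can be fitted and compared per sample; the spliced lognormal-body/Pareto-tail data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic data: lognormal body with a Pareto tail spliced onto the
# top few percentiles, the structure described in the abstract.
body = rng.lognormal(mean=0.0, sigma=1.0, size=9_500)
tail = (rng.pareto(a=1.5, size=500) + 1.0) * np.quantile(body, 0.95)
data = np.concatenate([body, tail])

# Fit both tail models to the exceedances over the 99th percentile
# and compare mean log-likelihoods.
u = np.quantile(data, 0.99)
excess = data[data > u]

b, loc_p, scale_p = stats.pareto.fit(excess, floc=0)
ll_pareto = stats.pareto.logpdf(excess, b, loc=loc_p, scale=scale_p).mean()

s, loc_l, scale_l = stats.lognorm.fit(excess, floc=0)
ll_lognorm = stats.lognorm.logpdf(excess, s, loc=loc_l, scale=scale_l).mean()
```

The maximum entropy test of the paper goes further, identifying the data-generating process even when it is neither lognormal nor Pareto.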

  5. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  6. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
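    Fitch's bottom-up pass referenced here is easy to state concretely; a minimal sketch on a nested-tuple tree (the tree shape and leaf states are illustrative):

```python
def fitch(tree, leaf_states):
    """Fitch's maximum parsimony: bottom-up pass returning the candidate
    ancestral state set at the root and the minimum number of changes.
    `tree` is a nested tuple of leaf names, e.g. (("a", "b"), ("c", "d")).
    """
    changes = 0

    def up(node):
        nonlocal changes
        if isinstance(node, str):                  # leaf
            return {leaf_states[node]}
        left, right = (up(child) for child in node)
        common = left & right
        if common:                                 # intersection rule
            return common
        changes += 1                               # union costs one change
        return left | right

    return up(tree), changes

# Toy example: three leaves share state "A", one has "G".
root_set, changes = fitch((("a", "b"), ("c", "d")),
                          {"a": "A", "b": "A", "c": "G", "d": "A"})
```

Restricting the leaf set simply means calling this on a subtree, which is the operation whose accuracy the paper analyzes.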

  7. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through
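    The on-line adaptation idea can be illustrated with a generic perturb-and-observe hill climb toward the maximum power point; this is a textbook sketch over a toy power curve, not the thesis's specific algorithm:

```python
def perturb_and_observe(power_of, omega0=1.0, step=0.05, iters=200):
    """Hill-climbing ('perturb and observe') search for the rotor speed
    that maximizes turbine output power: perturb the speed, observe the
    power change, and reverse direction whenever power drops."""
    omega, direction = omega0, 1.0
    p_prev = power_of(omega)
    for _ in range(iters):
        omega += direction * step
        p = power_of(omega)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return omega

def power_curve(w):
    """Toy turbine power curve with its maximum at w = 3 (arbitrary units)."""
    return -(w - 3.0) ** 2 + 9.0

omega_star = perturb_and_observe(power_curve)
```

In steady state the tracker oscillates within one step of the true optimum, which is the usual accuracy/settling trade-off of this family of controllers.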

  8. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be computed from theoretical magnitude-frequency distributions such as the Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
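    The TGR maximum likelihood step can be sketched in moment space, where the survival function is S(m) = (m/m_t)^(-β) exp((m_t - m)/m_c) above a completeness moment m_t (Kagan's parameterization); the data, grids, and completeness moment below are hypothetical:

```python
import numpy as np

def tgr_loglik(moments, beta, m_c, m_t):
    """Log-likelihood of seismic moments under the tapered
    Gutenberg-Richter distribution with corner moment m_c, for events
    above the completeness moment m_t."""
    x = np.asarray(moments)
    # Survival: S(m) = (m/m_t)^(-beta) * exp((m_t - m)/m_c)
    # Density:  f(m) = (beta/m + 1/m_c) * S(m)
    logf = (np.log(beta / x + 1.0 / m_c)
            - beta * (np.log(x) - np.log(m_t))
            + (m_t - x) / m_c)
    return logf.sum()

# Grid-search ML fit of (beta, m_c) on toy GR-like moment data.
rng = np.random.default_rng(2)
m_t = 1e17
sample = m_t * (rng.pareto(0.66, size=500) + 1.0)   # true beta ~ 0.66

betas = np.linspace(0.4, 1.0, 31)
corners = np.logspace(19, 22, 31)
ll = np.array([[tgr_loglik(sample, b, c, m_t) for c in corners]
               for b in betas])
beta_hat = betas[ll.max(axis=1).argmax()]
```

The paper's additional tectonic moment-rate constraint would enter as a penalty or fixed relation between β and m_c; it is omitted here.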

  9. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  10. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  11. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. The common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying Pontryagin's maximum principle. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  12. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The Discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, its error will lie very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) can not only retain excellent capabilities for generalizing and fitting but also exhibit lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
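    For reference, the DFT-based baseline the paper improves on can be sketched as a periodogram peak refined by interpolation between neighbouring bins; parabolic interpolation is used below as a common stand-in (the paper's LS-SVR interpolation is not reproduced), and the signal parameters are illustrative:

```python
import numpy as np

def dft_frequency_estimate(x, fs, nfft=None):
    """Single-sinusoid frequency estimate: periodogram peak, refined by
    parabolic interpolation on the log-magnitude of the bins adjacent
    to the maximum."""
    n = len(x)
    nfft = nfft or 4 * n                          # zero-padded DFT
    spec = np.abs(np.fft.rfft(x, nfft))
    k = int(np.argmax(spec[1:-1])) + 1            # interior peak bin
    a, b, c = np.log(spec[k - 1 : k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # fractional-bin offset
    return (k + delta) * fs / nfft

# Noisy test tone (hypothetical parameters).
fs, f0 = 1000.0, 123.4
t = np.arange(256) / fs
rng = np.random.default_rng(3)
x = np.cos(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
f_hat = dft_frequency_estimate(x, fs)
```

The cost/accuracy trade-off the abstract mentions is visible here: accuracy improves with `nfft`, but so does the FFT cost.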

  13. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    Science.gov (United States)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

    Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.

  14. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Full Text Available Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  15. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
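    The column-resampling scheme can be sketched generically: resample alignment columns with replacement, rerun inference on each replicate, and report how often each event reappears. The inference step below is a deliberately trivial placeholder; NEPAL's parsimony-based network inference is not reproduced here:

```python
import random
from collections import Counter

def bootstrap_support(alignment, infer_events, n_replicates=100, seed=0):
    """Nonparametric bootstrap over alignment columns: the support of an
    event is the fraction of replicates in which it is re-inferred."""
    rng = random.Random(seed)
    n_cols = len(alignment[0])
    tally = Counter()
    for _ in range(n_replicates):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]
        replicate = ["".join(seq[c] for c in cols) for seq in alignment]
        tally.update(set(infer_events(replicate)))
    return {event: count / n_replicates for event, count in tally.items()}

def toy_infer(alignment):
    """Placeholder inference: report an 'event' between the first two taxa
    when they agree on more than 80% of columns (illustrative only)."""
    a, b = alignment[0], alignment[1]
    agree = sum(x == y for x, y in zip(a, b)) / len(a)
    return [("HGT", 0, 1)] if agree > 0.8 else []

alignment = ["AAAAAAAAAA", "AAAAAAAAAT", "CCCCCCCCCC"]
support = bootstrap_support(alignment, toy_infer)
```

In the paper, the events are reticulation edges and the support values drive the stopping rule for how many events to accept.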

  16. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets. The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB.

  17. GHz band frequency hopping PLL-based frequency synthesizers

    Institute of Scientific and Technical Information of China (English)

    XU Yong; WANG Zhi-gong; GUAN Yu; XU Zhi-jun; QIAO Lu-feng

    2005-01-01

    In this paper we describe a fully integrated circuit containing all building blocks of a complete PLL-based synthesizer except for the low-pass filter (LPF). The frequency synthesizer is designed as a local oscillator for a frequency-hopping (FH) transceiver operating up to 1.5 GHz. The architecture of the voltage-controlled oscillator (VCO) is optimized for better performance: a phase noise of -111.85 dBc/Hz @ 1 MHz and a tuning range of 250 MHz are obtained at a centre frequency of 1.35 GHz. A novel dual-modulus prescaler (DMP) is designed to achieve very low jitter and lower power. The settling time of the PLL is 80 μs with a reference frequency of 400 kHz. This monolithic frequency synthesizer integrates all the main building blocks of the PLL except for the low-pass filter, has a maximum VCO output frequency of 1.5 GHz, and is fabricated in a 0.18 μm mixed-signal CMOS process. Low power dissipation, low phase noise, a large tuning range and a fast settling time are achieved in this design.

  18. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  19. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  20. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  1. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  2. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. An analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.

  3. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  4. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  5. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  6. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  7. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  8. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water. The total height of the chain was 11.5 cm. Replacing the actual cadmium absorber with a B-10 absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. A 139% increase in the maximum reactor operation time was therefore obtained for the modified core. This increase enhances the utilization of the MNSR for long-duration irradiation of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The estimation of probable maximum precipitation: the case of Catalonia.

    Science.gov (United States)

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
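The enveloping frequency factor curve described above follows the general Hershfield-type statistical approach, in which the PMP at a station is the mean of its annual maximum series plus an enveloping factor times its standard deviation. A minimal sketch of that computation (the station series and the factor `k_m` are hypothetical; the paper derives its own regional enveloping curve):

```python
import numpy as np

def pmp_frequency_factor(annual_max_series, k_m):
    """Hershfield-type PMP estimate: PMP = mean + k_m * std of the annual
    maximum daily rainfall series. k_m is the enveloping frequency factor,
    here simply a given constant."""
    x = np.asarray(annual_max_series, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

# Hypothetical station: annual maximum daily rainfall (mm)
series = [45, 60, 52, 80, 47, 95, 63, 58, 71, 49]
pmp = pmp_frequency_factor(series, k_m=15.0)
```

By construction the PMP envelope lies well above every observed annual maximum at the station.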

  10. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  11. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  12. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  13. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
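MLSE is conventionally implemented with the Viterbi algorithm over a channel with memory. The sketch below is a generic toy MLSE for a hypothetical two-tap real channel under Gaussian noise (where the branch metric reduces to squared error), not the paper's chirp-model demodulator:

```python
import numpy as np

def mlse_viterbi(y, symbols=(-1.0, 1.0), h=(1.0, 0.5)):
    """Toy MLSE for a 2-tap channel y[k] = h0*s[k] + h1*s[k-1] + noise.
    Trellis state = previous symbol; branch metric = squared error,
    which is the log-likelihood metric under Gaussian noise."""
    h0, h1 = h
    n_s = len(symbols)
    INF = float('inf')
    cost = [0.0] * n_s                 # best path cost ending in each state
    paths = [[] for _ in range(n_s)]
    for k, yk in enumerate(y):
        new_cost = [INF] * n_s
        new_paths = [None] * n_s
        for j, s in enumerate(symbols):        # current symbol -> next state j
            for i, sp in enumerate(symbols):   # previous state i
                prev = 0.0 if k == 0 else sp   # channel memory starts empty
                m = cost[i] + (yk - h0 * s - h1 * prev) ** 2
                if m < new_cost[j]:
                    new_cost[j] = m
                    new_paths[j] = paths[i] + [s]
        cost, paths = new_cost, new_paths
    return paths[int(np.argmin(cost))]

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=200)
y = s + 0.5 * np.concatenate(([0.0], s[:-1])) + 0.1 * rng.normal(size=200)
s_hat = mlse_viterbi(y)
```

At this noise level the sequence estimate recovers the transmitted symbols exactly; the paper's contribution is, in effect, a better statistical model for the transition metric `m`.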

  14. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

Full Text Available In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.
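As a toy instance of the ME construction the abstract builds on: under a single mean constraint on a finite support, the maximum entropy distribution has exponential-family form, with the Lagrange multiplier found numerically. A minimal sketch (the paper works with continuous densities and several moment constraints; this discrete case is only illustrative):

```python
import numpy as np

def maxent_discrete(support, target_mean):
    """Maximum-entropy pmf on a finite support subject to a mean constraint.
    The ME solution has the form p_i proportional to exp(-lam * x_i); the
    multiplier lam is found by bisection, since the implied mean is
    monotone decreasing in lam."""
    x = np.asarray(support, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * (x - x.mean()))  # centring for numerical stability
        p = w / w.sum()
        return (p * x).sum()

    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid            # mean too high: need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * (x - x.mean()))
    return w / w.sum()

# "Loaded die": faces 1..6 constrained to have mean 4.5
p = maxent_discrete(range(1, 7), target_mean=4.5)
```

Because the target mean exceeds the uniform mean of 3.5, the resulting pmf puts increasing weight on the larger faces.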

  15. Frequency, antimicrobial susceptibility and clonal distribution of methicillin-resistant Staphylococcus pseudintermedius in canine clinical samples submitted to a veterinary diagnostic laboratory in Italy: A 3-year retrospective investigation

    DEFF Research Database (Denmark)

    Ventrella, G.; Moodley, A.; Grandolfo, E.

    2017-01-01

    In the last decade there has been a rapid global spread of methicillin-resistant Staphylococcus pseudintermedius (MRSP) clones displaying multidrug resistance in dogs. We investigated prevalence, antimicrobial susceptibility and clonal distribution of MRSP isolated from clinical canine samples be...

  16. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all ..., and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.

  17. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

This presentation describes the essential boat sampling activities: optimization and qualification of the on-site boat sampling process; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting, and water clarity. The equipment used for boat sampling is also described. 7 pictures

  18. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.

  19. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    Science.gov (United States)

    Robertson, J. Gordon

    2017-08-01

Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect, and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for

  20. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
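The optimum described above can be sketched numerically: integrating a vulnerability curve gives the supply function E(ψ_leaf), which rises with the driving force but saturates as cavitation destroys conductivity; the saturation value is the maximum transpiration the soil-to-leaf pathway can sustain. A minimal sketch with a generic vulnerability curve and hypothetical parameter values (not the paper's database values):

```python
import numpy as np

k_max, psi_50 = 2.0, 1.5   # hypothetical conductivity and 50%-cavitation point
# psi below measures the soil-to-leaf water potential drop (in -MPa units)

def vulnerability(psi):
    """Xylem conductivity declining with tension due to cavitation."""
    return k_max / (1.0 + (psi / psi_50) ** 2)

def supply(psi_leaf, n=20000):
    """Transpiration sustained at a given leaf potential drop:
    E = integral of k(psi) d(psi) from 0 to psi_leaf (trapezoid rule)."""
    psi = np.linspace(0.0, psi_leaf, n)
    k = vulnerability(psi)
    return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(psi)))

# For this curve the supply saturates at k_max * psi_50 * pi / 2:
# the maximum transpiration rate in this toy model.
E_max = k_max * psi_50 * np.pi / 2.0
```

Making the leaf water potential ever more negative yields diminishing returns: `supply` approaches `E_max` but never exceeds it, which is the hydraulic limit the abstract refers to.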

  1. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.

  2. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  3. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its range of application. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
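The period-estimation step described above can be sketched as follows: take the envelope of the signal (here a rectified, smoothed envelope rather than a Hilbert envelope), autocorrelate it, and take the dominant lag within a plausible range as the iterative period. The synthetic signal and all parameters below are illustrative only:

```python
import numpy as np

def estimate_fault_period(x, lag_min, lag_max, smooth=9):
    """Estimate the impulse repetition period (in samples) from the
    autocorrelation of the signal envelope, the quantity IMCKD uses to
    update its iterative period."""
    # rectified + moving-average envelope (stand-in for a Hilbert envelope)
    env = np.convolve(np.abs(x), np.ones(smooth) / smooth, mode='same')
    env = env - env.mean()
    ac = np.correlate(env, env, mode='full')[len(env) - 1:]
    return lag_min + int(np.argmax(ac[lag_min:lag_max]))

# Synthetic bearing-like signal: a decaying resonance every 100 samples
n, period = 4000, 100
x = np.zeros(n)
for k in range(0, n, period):
    seg = np.arange(n - k)
    x[k:] += np.exp(-0.08 * seg) * np.sin(0.9 * seg)

est = estimate_fault_period(x, lag_min=60, lag_max=160)
```

The search window (`lag_min`, `lag_max`) plays the role of the rough prior knowledge that remains: a plausible range for the fault period, rather than its exact value.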

  4. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  5. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  6. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  7. Frequency position modulation using multi-spectral projections

    Science.gov (United States)

    Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory

    2012-10-01

In this paper we present an approach to harness multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension of MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate using commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which require a full Nyquist-rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates the ability to operate in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum likelihood based FPM demodulator and demonstrate its performance in interference-laden conditions.

  8. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral (PI) controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
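The maximum-current-searching idea can be illustrated with a simple perturb-and-observe loop on a toy converter curve. This is a bare hill-climbing sketch standing in for the paper's PI-controlled PWM duty-factor loop; the model function and all numbers are hypothetical:

```python
def converter_current(duty):
    """Toy DC-DC output current vs. duty cycle: concave with a single
    maximum at duty = 0.62 (a stand-in for the PV+SPE operating curve)."""
    return 10.0 - 40.0 * (duty - 0.62) ** 2

def track_mpp(duty=0.3, step=0.01, iters=200):
    """Perturb the duty cycle and keep stepping in whichever direction
    increases the observed output current."""
    i_prev = converter_current(duty)
    direction = 1.0
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        i_now = converter_current(duty)
        if i_now < i_prev:        # overshot the peak: reverse perturbation
            direction = -direction
        i_prev = i_now
    return duty

duty_mpp = track_mpp()
```

The loop settles into a small oscillation around the maximum-current duty cycle, which, per the abstract's argument, coincides with the panel's maximum power point.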

  9. Frequency noise in frequency swept fiber laser

    DEFF Research Database (Denmark)

    Pedersen, Anders Tegtmeier; Rottwitt, Karsten

    2013-01-01

This Letter presents a measurement of the spectral content of frequency shifted pulses generated by a lightwave synthesized frequency sweeper. We found that each pulse is shifted in frequency with very high accuracy. We also discovered that noise originating from light leaking through the acousto-optical modulators and from forward propagating Brillouin scattering appears in the spectrum. © 2013 Optical Society of America.

  10. Active Faraday optical frequency standard.

    Science.gov (United States)

    Zhuang, Wei; Chen, Jingbiao

    2014-11-01

We propose the mechanism of an active Faraday optical clock, and experimentally demonstrate an active Faraday optical frequency standard based on a narrow-bandwidth Faraday atomic filter, using velocity-selective optical pumping of cesium vapor. The center frequency of the active Faraday optical frequency standard is determined by the cesium 6²S₁/₂ F=4 to 6²P₃/₂ F′=4 and 5 crossover transition line. The optical heterodyne beat between two similar independent setups shows that the frequency linewidth reaches 281(23) Hz, which is 1.9×10⁴ times smaller than the natural linewidth of the cesium 852-nm transition line. The maximum emitted light power reaches 75 μW. The active Faraday optical frequency standard reported here has the advantages of narrow linewidth and reduced cavity pulling, and can readily be extended to other atomic transition lines of alkali and alkaline-earth metal atoms trapped in optical lattices at magic wavelengths, making it useful for a new generation of optical atomic clocks.

  11. Gamma radiation effects on the frequency of toxigenic fungus on sene (Cassia angustifolia) and green tea (Camelia sinensis) samples; Efeito da radiacao gama na frequencia de fungos toxigenicos em amostras de sene (Cassia angustifolia) e cha verde (Camellia sinensis)

    Energy Technology Data Exchange (ETDEWEB)

    Aquino, S.; Villavicencio, A.L.C.H. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Tecnologia das Radiacoes]. E-mail: siaq06@hotmail.com; Reis, T.A.; Zorzete, P.; Correa, B. [Universidade de Sao Paulo (USP), SP (Brazil). Inst. de Ciencias Biomedicas. Dept. de Microbiologia; Goncalez, E.; Rossi, M.H. [Instituto Biologico, Sao Paulo, SP (Brazil)

    2006-11-15

The levels of contamination and the effects of gamma radiation on the reduction of toxigenic filamentous fungi were analyzed in two types of medicinal plants. Aspergillus and Penicillium were the predominant genera, and 73.80% of the samples showed high levels of fungal contamination.

  12. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

Full Text Available The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.

  13. Percentiles of the null distribution of 2 maximum lod score tests.

    Science.gov (United States)

    Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R

    2004-01-01

We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction. Also considered is the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS) as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture pχ²(0) + (1 − p)χ²(v). The values of MMLS appear to fit the mixture 0.20χ²(0) + 0.80χ²(1.6). The mixture distribution 0.13χ²(0) + 0.87χ²(2.8) appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
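Critical values for such mixture null distributions follow directly: the point mass at zero contributes nothing to the upper tail, so only the χ² component matters. A sketch, assuming the standard convention that 2·ln(10)·lod is the χ²-scaled statistic (the mixture weights and degrees of freedom are those reported in the abstract):

```python
import math
from scipy.stats import chi2

def lod_critical(alpha, p0, df):
    """Critical lod score when 2*ln(10)*lod follows the mixture
    p0*chi2(0) + (1-p0)*chi2(df): the point mass at zero has no upper
    tail, so solve (1 - p0) * P(chi2_df > x) = alpha for x."""
    x = chi2.ppf(1.0 - alpha / (1.0 - p0), df)
    return x / (2.0 * math.log(10.0))

# Mixture fitted to MMLS: 0.20*chi2(0) + 0.80*chi2(1.6)
c_mmls = lod_critical(0.05, p0=0.20, df=1.6)
# Mixture fitted to LOD-M: 0.13*chi2(0) + 0.87*chi2(2.8)
c_lodm = lod_critical(0.05, p0=0.13, df=2.8)
```

As expected, maximizing over more model parameters (LOD-M) inflates the null distribution and so demands a larger critical value than MMLS at the same significance level.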

  14. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  15. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
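A toy sketch of the idea for a Bernoulli bandit (not the authors' algorithm verbatim): maintain a small ensemble of perturbed value estimates, sample one member uniformly each round, act greedily under it, and update every member with an independently perturbed copy of the observed reward. All parameter choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
true_means = np.array([0.1, 0.5, 0.9])    # Bernoulli arm means (toy problem)
K, n_arms, horizon = 10, 3, 2000

# Each ensemble member keeps its own perturbed running estimate per arm;
# random "prior" draws diversify the members initially.
counts = np.ones((K, n_arms))             # pseudo-count prior
sums = rng.normal(0.5, 0.5, (K, n_arms))

for t in range(horizon):
    m = rng.integers(K)                        # sample a model uniformly...
    a = int(np.argmax(sums[m] / counts[m]))    # ...and act greedily under it
    r = float(rng.random() < true_means[a])
    # update every member with an independently perturbed copy of the reward
    perturbed = r + rng.normal(0.0, 0.5, K)
    counts[:, a] += 1
    sums[:, a] += perturbed

best_arm = int(np.argmax((sums / counts).mean(axis=0)))
```

Disagreement among the perturbed members supplies the exploration that Thompson sampling would get from posterior sampling, without maintaining an explicit posterior.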

  16. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  17. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; DuPont, A; Kurzeja, R; Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.

  18. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.

  19. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can handle only small data sets. Moreover, most of these algorithms address only two-class classification, and so cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm produces acceptable results for hyperspectral data clustering.
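The alternation at the heart of such schemes (fix the labels, refit the model; fix the model, reassign the labels) can be shown with a deliberately simplified stand-in. The sketch below substitutes a centroid model for the SVM hyperplane, so it is k-means-style alternating optimization rather than the MMC margin objective; the data and parameters are made up for illustration.

```python
import random

def alternating_cluster(points, k=2, iters=20, seed=1):
    """Minimal alternating-optimization clustering (k-means style).

    MMC alternates between estimating labels and model (hyperplane)
    parameters; here a centroid model stands in for the SVM so the
    fix-labels / fit-model / reassign loop is visible with stdlib only.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Step 1: with the model fixed, choose the best label per point.
        labels = [min(range(k), key=lambda j: abs(p - centers[j]))
                  for p in points]
        # Step 2: with labels fixed, refit the model parameters.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centers, labels = alternating_cluster(data)
```

On well-separated 1-D data the alternation converges in a couple of passes; the non-convexity the abstract mentions shows up here too, as sensitivity to the initial centers.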

  20. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  1. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
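A standard way to carry out MP ancestral state inference on a fixed bifurcating tree is the bottom-up pass of Fitch's algorithm. The sketch below illustrates how ancestral state sets are computed (the tree and character states are made-up examples; this shows the inference machinery only, not the Charleston-Steel bound itself).

```python
def fitch_sets(tree, leaf_states):
    """Bottom-up pass of Fitch's algorithm on a fully bifurcating tree.

    `tree` maps each internal node to its (left, right) children; leaves
    appear only in `leaf_states`. Returns the candidate ancestral state
    set for every node; the root set is MP's estimate for the last
    common ancestor (unambiguous iff it contains a single state).
    """
    sets = {}

    def visit(node):
        if node in leaf_states:              # leaf: its observed state
            sets[node] = {leaf_states[node]}
            return sets[node]
        left, right = tree[node]
        a, b = visit(left), visit(right)
        # Intersection if non-empty (no substitution needed), else union.
        sets[node] = (a & b) or (a | b)
        return sets[node]

    root = next(n for n in tree
                if all(n not in kids for kids in tree.values()))
    visit(root)
    return sets

# Four taxa, three of which carry state 'a' at the site in question.
tree = {"root": ("u", "v"), "u": ("t1", "t2"), "v": ("t3", "t4")}
states = {"t1": "a", "t2": "a", "t3": "a", "t4": "c"}
root_set = fitch_sets(tree, states)["root"]
```

Here three of four taxa share state 'a', and MP unambiguously returns {'a'} at the root; the conjecture discussed in the abstract concerns how many such taxa are needed in general.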

  2. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  3. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  4. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  5. Frequency of intestinal parasites in samples of lettuce (Lactuca sativa) commercialized in Lavras, Minas Gerais State [Freqüência de enteroparasitas em amostras de alface (Lactuca sativa) comercializadas em Lavras, Minas Gerais]

    Directory of Open Access Journals (Sweden)

    Antônio Marcos Guimarães

    2003-10-01

    Full Text Available The aim of this study was to evaluate the parasitological contamination of samples of lettuce (Lactuca sativa) commercialized in Lavras city, Minas Gerais. The samples of lettuce showed low hygienic standards, indicated by the presence of parasites of animal or human origin and a high concentration of fecal coliforms.

  6. 40 CFR 141.803 - Coliform sampling.

    Science.gov (United States)

    2010-07-01

    ...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Aircraft Drinking Water Rule § 141.803 Coliform sampling. (a... aircraft water system, the sampling frequency must be determined by the disinfection and flushing frequency... disinfection and flushing frequency recommended by the aircraft water system manufacturer, when available...

  7. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...

  8. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  9. Multi-frequency excitation

    KAUST Repository

    Younis, Mohammad I.

    2016-01-01

    Embodiments of multi-frequency excitation are described. In various embodiments, a natural frequency of a device may be determined. In turn, a first voltage amplitude and first fixed frequency of a first source of excitation can be selected

  10. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract. Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  11. Underwater Sediment Sampling Research

    Science.gov (United States)

    2017-01-01

    impacted sediments was found to be directly related to the concentration of crude oil detected in the sediment pore waters. Applying this mathematical... The USCG R&D Center sought to develop a bench top system to determine the amount of total... scattered. The approach here is to sample the interstitial water between the grains of sand and attempt to determine the amount of oil in and on

  12. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel

    2016-11-01

    The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.

  13. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
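For a very small population, the stimulus-independent pairwise maximum entropy model underlying this approach can be fitted by brute force. The sketch below matches target mean firing rates and pairwise correlations by gradient ascent over all 2^n codewords (the target moments are made-up illustrative values; the stimulus-dependent fields that distinguish the SDME model are omitted).

```python
import itertools
import math

def fit_pairwise_maxent(target_means, target_corrs, n, steps=2000, lr=0.1):
    """Fit h_i and J_ij of P(x) ~ exp(sum h_i x_i + sum J_ij x_i x_j),
    x_i in {0, 1}, by matching <x_i> and <x_i x_j> via gradient ascent
    on the log-likelihood. Brute force over all 2^n codewords, so only
    viable for small n."""
    pairs = list(itertools.combinations(range(n), 2))
    h = [0.0] * n
    J = {p: 0.0 for p in pairs}
    states = list(itertools.product([0, 1], repeat=n))
    for _ in range(steps):
        weights = []
        for x in states:
            e = sum(h[i] * x[i] for i in range(n))
            e += sum(J[i, j] * x[i] * x[j] for i, j in pairs)
            weights.append(math.exp(e))
        z = sum(weights)
        probs = [w / z for w in weights]
        means = [sum(p * x[i] for p, x in zip(probs, states))
                 for i in range(n)]
        corrs = {(i, j): sum(p * x[i] * x[j] for p, x in zip(probs, states))
                 for i, j in pairs}
        # Moment-matching gradient: model moments move toward targets.
        for i in range(n):
            h[i] += lr * (target_means[i] - means[i])
        for p in pairs:
            J[p] += lr * (target_corrs[p] - corrs[p])
    return h, J, means, corrs

# Three cells; cells 0 and 1 fire together more often than chance.
h, J, means, corrs = fit_pairwise_maxent(
    [0.2, 0.2, 0.1], {(0, 1): 0.1, (0, 2): 0.02, (1, 2): 0.02}, n=3)
```

Because the (0, 1) correlation exceeds the independent prediction (0.2 x 0.2 = 0.04), the fitted coupling J[(0, 1)] comes out positive, which is the "dependencies between cells" signature the abstract describes.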

  14. Detection of Ochratoxin A in bread samples in Shahrekord city, Iran, 2011-2012

    Directory of Open Access Journals (Sweden)

    Mehran Erfani

    2013-12-01

    Results: Ochratoxin A was detected in 45 out of the 86 bread samples (52.3%). Levels of OTA in positive samples ranged between 0.19 and 10.37 ng/g, and the average contamination of all positive samples was 3.04 ng/g. The highest frequency of positive samples was related to machinery Taftoon (88.8%) and Lavash bread (81.8%). The most contaminated sample (5.39 ng/g) was found in the Iranian Lavash bread. Fifteen of the positive samples exceeded the maximum level of 5 ng/g set by European regulations for OTA in cereal and bread. Conclusion: The results of this study indicated that contamination levels of ochratoxin A were high in part of the samples (17.4%). Bread and cereals are considered to be the main and predominant ingredients of Iranian food; therefore, their contamination can have a long-term negative impact on people's health.

  15. The presence of enterovirus, adenovirus, and parvovirus B19 in myocardial tissue samples from autopsies: an evaluation of their frequencies in deceased individuals with myocarditis and in non-inflamed control hearts.

    Science.gov (United States)

    Nielsen, Trine Skov; Hansen, Jakob; Nielsen, Lars Peter; Baandrup, Ulrik Thorngren; Banner, Jytte

    2014-09-01

    Multiple viruses have been detected in cardiac tissue, but their role in causing myocarditis remains controversial. Viral diagnostics are increasingly used in forensic medicine, but the interpretation of the results can sometimes be challenging. In this study, we examined the prevalence of adenovirus, enterovirus, and parvovirus B19 (PVB) in myocardial autopsy samples from myocarditis related deaths and in non-inflamed control hearts in an effort to clarify their significance as the causes of myocarditis in a forensic material. We collected all autopsy cases diagnosed with myocarditis from 1992 to 2010. Eighty-four suicidal deaths with morphologically normal hearts served as controls. Polymerase chain reaction was used for the detection of the viral genomes (adenovirus, enterovirus, and PVB) in myocardial tissue specimens. The distinction between acute and persistent PVB infection was made by the serological determination of PVB-specific immunoglobulins M and G. PVB was detected in 33 of 112 (29 %) myocarditis cases and 37 of 84 (44 %) control cases. All of the samples were negative for the presence of adenovirus and enterovirus. Serological evidence of an acute PVB infection, determined by the presence of immunoglobulin M, was only present in one case. In the remaining cases, PVB was considered to be a bystander with no or limited association to myocardial inflammation. In this study, adenovirus, enterovirus, and PVB were found to be rare causes of myocarditis. The detection of PVB in myocardial autopsy samples most likely represents a persistent infection with no or limited association with myocardial inflammation. The forensic investigation of myocardial inflammation demands a thorough examination, including special attention to non-viral causes and requires a multidisciplinary approach.

  16. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models using maximum likelihood estimation because of its desirable asymptotic properties. In particular, the maximum likelihood estimator is consistent as the sample size increases to infinity, and is therefore asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
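For a two-component Gaussian mixture, the maximum likelihood fit is conventionally computed with the EM algorithm, which monotonically increases the likelihood. Below is a stdlib sketch on synthetic data; the component parameters and the crude split-the-sorted-data initialization are illustrative assumptions, not the paper's rubber-price data or method details.

```python
import math
import random

def em_two_gaussians(xs, iters=200):
    """Maximum likelihood fit of a two-component Gaussian mixture via EM.

    Crude initialization: split the sorted data in half. Consistency of
    the MLE as the sample size grows is the asymptotic property the
    abstract appeals to.
    """
    xs = sorted(xs)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        resp = []
        for x in xs:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk
    return w, mu, var

rng = random.Random(42)
data = ([rng.gauss(0.0, 1.0) for _ in range(400)]
        + [rng.gauss(6.0, 1.0) for _ in range(400)])
w, mu, var = em_two_gaussians(data)
```

With well-separated components the recovered means land close to the true values of 0 and 6, and the mixing weights close to 0.5 each.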

  18. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  19. A silicon pad shower maximum detector for a Shashlik calorimeter

    International Nuclear Information System (INIS)

    Alvsvaag, S.J.; Maeland, O.A.; Klovning, A.

    1995-01-01

    The new luminosity monitor of the DELPHI detector, STIC (Small angle TIle Calorimeter), was built using a Shashlik technique. This technique does not provide longitudinal sampling of the showers, which limits the measurement of the direction of the incident particles and the e-π separation. For these reasons STIC was equipped with a Silicon Pad Shower Maximum Detector (SPSMD). In order to match the silicon detectors to the Shashlik readout by wavelength shifter (WLS) fibers, the silicon wafers had to be drilled with a precision better than 10 μm without damaging the active area of the detectors. This paper describes the SPSMD with emphasis on the fabrication techniques and on the components used. Some preliminary results of the detector performance from data taken with a 45 GeV electron beam at CERN are presented. (orig.)

  20. Exact sampling from conditional Boolean models with applications to maximum likelihood inference

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Zwet, van E.W.

    2001-01-01

    We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate
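Coupling from the past is easiest to see on a textbook monotone chain rather than on the conditional Boolean model itself. The sketch below draws an exact sample from the (uniform) stationary distribution of a reflecting random walk on {0, ..., m} by running coupled top and bottom chains from increasingly distant starting times; this is a generic Propp-Wilson illustration, not the authors' construction for the bombing model.

```python
import random

def cftp_walk(m, rng):
    """Exact (perfect) sampling via Propp-Wilson coupling from the past.

    The chain is a reflecting random walk on {0, ..., m}; its stationary
    distribution is uniform. Top (m) and bottom (0) chains are driven by
    the SAME randomness from time -T to 0; if they coalesce, the common
    state at time 0 is an exact stationary draw, otherwise T is doubled
    and the old updates are reused (the reuse is essential to CFTP).
    """
    updates = []   # updates[k] is the random update at time -(k+1)
    T = 1
    while True:
        while len(updates) < T:
            updates.append(rng.random())
        lo, hi = 0, m
        for u in reversed(updates[:T]):   # apply time -T first, ..., -1 last
            step = -1 if u < 0.5 else 1
            lo = min(max(lo + step, 0), m)
            hi = min(max(hi + step, 0), m)
        if lo == hi:                      # coalesced: exact sample
            return lo
        T *= 2

rng = random.Random(11)
draws = [cftp_walk(4, rng) for _ in range(500)]
counts = [draws.count(s) for s in range(5)]
```

Because the update is monotone, any chain started between the top and bottom chains stays sandwiched between them, so coalescence of the extremes certifies coalescence of all starting states at once.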

  1. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  2. Resonant magnetic pumping at very low frequency

    International Nuclear Information System (INIS)

    Canobbio, Ernesto

    1978-01-01

    We propose to exploit for plasma heating purposes the very low frequency limit of the Alfven wave resonance condition, which reduces essentially to safety factor q=m/n, a rational number. It is shown that a substantial fraction of the total RF-energy can be absorbed by the plasma. The lowest possible frequency value is determined by the maximum tolerable width of the RF-magnetic islands which develop near the singular surface. The obvious interest of the proposed scheme is the low frequency value (f ≤ 10 kHz) which allows the RF-coils to be protected by stainless steel or even to be put outside the liner

  3. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  4. Influence of the matrix composition in the Emission spectroscopic analysis of solutions with continuous nebulization of the sample and excitation of the spectrum in a high-frequency inductively coupled plasma discharge

    International Nuclear Information System (INIS)

    Zil'bershtein, K.I.; Kartasheva, M.A.; Mushkovich, G.N.

    1986-01-01

    Numerous investigations have shown that the emission spectroscopic analysis of solutions with the use of a high-frequency inductively coupled plasma discharge is characterized by the absence of an influence or the presence of a small influence of the matrix elements on the amplitudes of the analytical signals of the elements being determined and, consequently, on the results of an analysis. The influences of the first type include the influences of easily ionized elements, which, as we know, are very significant in spectroscopic analysis with the use of traditional sources for the excitation of the spectra (arcs, sparks, and flames). The influences of the second type include, in particular, the influences associated with changes in the viscosity and surface tension of the solutions as the concentration of the matrix elements in the solutions is increased. In order to obtain correct results from an analysis in the case of spectral interference, special methods for treating the spectra are used for the purpose of taking into account the background and the instances of superposition and establishing the true ('pure') intensity of the analytical lines. Results are shown for the determination of impurities in aqueous solutions of sodium molybdate single crystals (the concentration of Na2Mo2O7 in the solution was 1 mg/ml). When the solutions being analyzed are diluted to a concentration of Na2Mo2O7 equal to 1 mg/ml, good agreement was guaranteed between the results of the determination of vanadium (and other impurities) obtained with the use of reference solutions not containing the matrix elements and the results obtained with the use of reference solutions containing the matrix elements in the same concentrations as in the solutions being analyzed (1 mg/ml).

  5. Analysis of monazite samples

    International Nuclear Information System (INIS)

    Kartiwa Sumadi; Yayah Rohayati

    1996-01-01

    The 'monazit' analytical program has been set up for routine rare earth element analysis of monazite and xenotime mineral samples. The total relative error of the analysis is very low, less than 2.50%, and the reproducibility of the counting statistics and the stability of the instrument were excellent. The precision and accuracy of the analytical program are very good, with maximum relative percentages of 5.22% and 1.61%, respectively. The mineral compositions of the 30 monazite samples have also been calculated from their chemical constituents, and the results were compared to the grain-counting microscopic analysis

  6. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  7. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    This paper is concerned with modifications of the maximum likelihood, moment, and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators outperform the traditional maximum likelihood, moment, and percentile estimators with respect to bias, mean square error, and total deviation.
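
    The Monte Carlo comparison described above can be sketched for the unmodified maximum likelihood estimators of the power function distribution, whose density is f(x) = (a/b)(x/b)^(a-1) on (0, b); the parameter values and replication counts below are illustrative only, not those of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameter values and sample sizes
    a_true, b_true, n, reps = 3.0, 2.0, 50, 2000

    def mle(x):
        """Standard (unmodified) maximum likelihood estimators."""
        b_hat = x.max()
        a_hat = x.size / np.sum(np.log(b_hat / x))
        return a_hat, b_hat

    # Monte Carlo: draw samples by inverse-CDF (x = b * U**(1/a)) and
    # record the error of the shape estimator in each replication
    errors = []
    for _ in range(reps):
        x = b_true * rng.random(n) ** (1.0 / a_true)
        a_hat, _ = mle(x)
        errors.append(a_hat - a_true)

    bias_a = float(np.mean(errors))   # the MLE of the shape is biased upward
    ```

    Running such a loop for each candidate estimator and comparing the resulting bias and mean square error is exactly the kind of sampling-behavior study the abstract refers to.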

  8. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
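
    Irreversibility of a Markov chain, as discussed above, is quantified by its entropy production rate. A minimal sketch for a hand-picked cyclic three-state chain (the transition matrix is invented for illustration and is not an inferred maximum entropy model; two-state chains are always reversible, so three states is the smallest interesting case):

    ```python
    import numpy as np

    # A cyclic three-state chain with a strong preferred direction
    P = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()

    # Entropy production rate: sum over ordered pairs of the probability
    # flux times the log-ratio of forward and backward fluxes
    ep = sum(pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
             for i in range(3) for j in range(3) if i != j)
    ```

    For this doubly stochastic matrix the stationary distribution is uniform and the entropy production reduces analytically to 0.7 ln 8 per step; a reversible (detailed-balance) chain would give exactly zero.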

  9. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, for example by using the well-known Bott-Smith rules. These rules involve only the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between the structural index and the depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax is easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained with the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of the method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is also applicable to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimated maximum depth agrees with the seismic information.

  10. Using Maximum Entropy to Find Patterns in Genomes

    Science.gov (United States)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
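
    The principle described above, drawing maximally unbiased coding sequences under amino acid and GC constraints, can be sketched as exponential tilting of synonymous-codon choices: weight each synonymous codon by exp(lam * GC) and tune lam so the expected GC content hits the target. Everything below (the mini codon table, the protein string, the target GC) is a hypothetical toy, not the authors' tool:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy synonymous-codon table for three amino acids (a hypothetical
    # subset of the real genetic code, for illustration only)
    codon_table = {
        "K": ["AAA", "AAG"],
        "G": ["GGA", "GGC", "GGG", "GGT"],
        "F": ["TTC", "TTT"],
    }

    def gc(codon):
        return sum(base in "GC" for base in codon) / 3.0

    protein = "KGFKGFGG"   # amino acid sequence to back-translate
    target_gc = 0.5

    # Expected GC content of the tilted (maximum-entropy) codon choice
    def expected_gc(lam):
        total = 0.0
        for aa in protein:
            cods = codon_table[aa]
            w = np.exp([lam * gc(c) for c in cods])
            w /= w.sum()
            total += sum(wi * gc(c) for wi, c in zip(w, cods))
        return total / len(protein)

    # Tune the tilting parameter by bisection (expected_gc is monotone)
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if expected_gc(mid) < target_gc else (lo, mid)
    lam = (lo + hi) / 2.0

    # Sample one random coding sequence from the tilted distribution
    seq = ""
    for aa in protein:
        cods = codon_table[aa]
        w = np.exp([lam * gc(c) for c in cods])
        seq += str(rng.choice(cods, p=w / w.sum()))
    ```

    Per codon position the tilted distribution is the maximum entropy distribution among those whose expected GC matches the constraint, which is the sense in which the sampled sequences are "maximally unbiased".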

  11. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  12. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  13. Y-STR frequency surveying method

    DEFF Research Database (Denmark)

    Willuweit, Sascha; Caliebe, Amke; Andersen, Mikkel Meyer

    2011-01-01

    Reasonable formalized methods to estimate the frequencies of DNA profiles generated from lineage markers have been proposed in past years and discussed in the forensic community. Recently, collections of population data on the frequencies of variations in Y-chromosomal STR profiles have... reached a new quality with the establishment of the comprehensive, neatly quality-controlled reference database YHRD. Grounded on such unrivalled empirical material from hundreds of population studies, the core assumption of the Haplotype Frequency Surveying Method originally described 10 years ago can... be tested and improved. Here we provide new approaches to calculate the parameters used in the frequency surveying method: a maximum likelihood estimation of the regression parameters (r1, r2, s1 and s2) and a revised Frequency Surveying framework with variable binning and a database preprocessing to take...

  14. Self-reported sleep disturbances due to railway noise: exposure-response relationships for nighttime equivalent and maximum noise levels.

    Science.gov (United States)

    Aasvang, Gunn Marit; Moum, Torbjorn; Engdahl, Bo

    2008-07-01

    The objective of the present survey was to study self-reported sleep disturbances due to railway noise with respect to the nighttime equivalent noise level (L(p,A,eq,night)) and the maximum noise level (L(p,A,max)). A sample of 1349 people in and around Oslo, Norway, exposed to railway noise was studied in a cross-sectional survey to obtain data on sleep disturbances, sleep problems due to noise, and personal characteristics including noise sensitivity. Individual noise exposure levels were determined outside the bedroom facade, at the most-exposed facade, and inside the respondents' bedrooms. The exposure-response relationships were analyzed using logistic regression models, controlling for possible modifying factors including the number of noise events (train pass-by frequency). L(p,A,eq,night) and L(p,A,max) were significantly correlated, and the proportion of reported noise-induced sleep problems increased as both L(p,A,eq,night) and L(p,A,max) increased. Noise sensitivity, type of bedroom window, and pass-by frequency were significant factors affecting noise-induced sleep disturbances, in addition to the noise exposure level. Because about half of the study population did not use a bedroom on the most-exposed side of the house, the exposure-response curve obtained using noise levels for the most-exposed facade underestimated noise-induced sleep disturbance for those who actually have their bedroom at the most-exposed facade.
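
    An exposure-response relationship of the kind analyzed above can be estimated by logistic regression. A sketch on synthetic data with invented coefficients (not the survey's estimates), fitting logit(p) = b0 + b1 * L_night by Newton-Raphson:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic survey: nighttime noise level (dB) and a 0/1 indicator of
    # reported sleep disturbance; the coefficients are invented
    n = 500
    level = rng.uniform(35.0, 75.0, n)
    true_b0, true_b1 = -9.0, 0.15
    p = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * level)))
    y = (rng.random(n) < p).astype(float)

    # Newton-Raphson (equivalently IRLS) for the logistic model
    X = np.column_stack([np.ones(n), level])
    beta = np.zeros(2)
    for _ in range(25):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = mu * (1.0 - mu)                      # IRLS weights
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))

    b0_hat, b1_hat = beta
    ```

    A positive fitted slope b1_hat reproduces the qualitative finding that the proportion of sleep problems rises with the noise level; in a real analysis, covariates such as noise sensitivity and window type would be added as further columns of X.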

  15. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    The paper deals with the problem of improving the maximum sample rate of the analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
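
    The reconstruction step described above can be sketched with a generic compressive sampling recovery: take few random-instant samples of a signal that is sparse in a cosine dictionary, then recover the active frequencies with orthogonal matching pursuit (one common CS solver). The grid sizes and sparsity level are illustrative; the paper's actual procedure and parameters are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, M, K = 256, 80, 2     # fine time grid, random samples taken, sparsity

    # Sparse signal: K active cosine frequencies on the N-point grid
    true_bins = sorted(rng.choice(np.arange(1, N // 2), size=K,
                                  replace=False).tolist())
    t = np.arange(N)
    x = sum(np.cos(2 * np.pi * f * t / N) for f in true_bins)

    # Random sampling instants provided by the high-resolution time-basis
    idx = np.sort(rng.choice(N, size=M, replace=False))
    y = x[idx]

    # Dictionary of candidate cosines restricted to the sampled instants
    freqs = np.arange(1, N // 2)
    A = np.cos(2 * np.pi * np.outer(idx, freqs) / N)
    A_unit = A / np.linalg.norm(A, axis=0)

    # Orthogonal matching pursuit: greedily pick best-matching atoms
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A_unit.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    recovered = sorted(int(freqs[j]) for j in support)
    ```

    Once the support and coefficients are known, evaluating the same cosine atoms on the full N-point grid yields the reconstructed signal at the time-basis resolution, which is how the effective sample rate can exceed the ADC's own rate.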

  16. Multi-frequency excitation

    KAUST Repository

    Younis, Mohammad I.

    2016-03-10

    Embodiments of multi-frequency excitation are described. In various embodiments, a natural frequency of a device may be determined. In turn, a first voltage amplitude and first fixed frequency of a first source of excitation can be selected for the device based on the natural frequency. Additionally, a second voltage amplitude of a second source of excitation can be selected for the device, and the first and second sources of excitation can be applied to the device. After applying the first and second sources of excitation, a frequency of the second source of excitation can be swept. Using the methods of multi-frequency excitation described herein, new operating frequencies, operating frequency ranges, resonance frequencies, resonance frequency ranges, and/or resonance responses can be achieved for devices and systems.

  17. Frequency selectivity at very low centre frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Marquardt, Torsten

    2010-01-01

    measurements based on OAE suppression techniques and notched-noise masking data psychophysically measured for centre frequencies in the range 50-125 Hz, this study examines how individual differences in frequency selectivity, as well as in masking, may occur at very low CFs due to individual differences...

  18. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study surveys the problems associated with techniques and strategies of soil sampling. Keeping in mind the well-defined objectives of a sampling campaign, the aim was to highlight the most important aspect, the representativeness of samples, as a function of the available resources. Particular emphasis was given to the techniques, and especially to a description of the many types of samplers in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  19. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our... sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that —on the whole— our sampling technique compares favourably with other methods, especially in the case of exploratory research...

  20. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  1. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  2. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  3. Analysis of Time and Frequency Domain Pace Algorithms for OFDM with Virtual Subcarriers

    DEFF Research Database (Denmark)

    Rom, Christian; Manchón, Carles Navarro; Deneire, Luc

    2007-01-01

    This paper studies common linear frequency direction pilot-symbol aided channel estimation algorithms for orthogonal frequency division multiplexing in a UTRA long term evolution context. Three deterministic algorithms are analyzed: the maximum likelihood (ML) approach, the noise reduction algori...

  4. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    Science.gov (United States)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observations are statistically stationary, and a parametric method considering the parameters of a probability distribution is applied. A parametric method requires a sufficiently large sample of reliable data; in Korea, however, the snowfall record needs to be supplemented, because climate change is reducing both the number of snowfall observation days and the mean maximum daily snowfall depth. In this study, we conducted a frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For the 58 meteorological stations distributed evenly across Korea, the probabilistic snowfall depth was estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreases at most stations, and the rates of change at most stations were consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods allow frequency analysis of snowfall depth from an insufficient observed sample, an approach that can also be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
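
    The non-parametric bootstrap used above can be sketched as resampling an annual-maximum series with replacement and reading off an empirical return level each time, which also yields a confidence interval. The snowfall values below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical annual-maximum daily snowfall depths (cm) for one station
    sample = np.array([12., 18., 25., 9., 31., 22., 15., 27., 40., 11.,
                       19., 24., 8., 33., 17., 21., 14., 29., 36., 10.])

    T = 20                      # return period in years
    p = 1.0 - 1.0 / T           # non-exceedance probability

    # Point estimate: empirical quantile of the observed series
    estimate = float(np.quantile(sample, p))

    # Bootstrap: resample with replacement, recompute the quantile each time
    B = 5000
    boot = np.array([np.quantile(rng.choice(sample, size=sample.size,
                                            replace=True), p)
                     for _ in range(B)])
    ci_low, ci_high = np.quantile(boot, [0.025, 0.975])
    ```

    The spread of the bootstrap distribution makes explicit how uncertain a return level estimated from a short record is, which is the motivation for resampling methods when observations are scarce.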

  5. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
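
    The stochastic approximation idea above can be illustrated with a Robbins-Monro recursion on a toy model where the exact MLE is known, so convergence can be checked: a Gaussian location parameter, whose MLE is the sample mean. The summary statistic, step-size schedule, and iteration count are illustrative choices, not the paper's tuning guidelines:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Observed data from a Gaussian with unknown location theta
    theta_true = 2.5
    data = rng.normal(theta_true, 1.0, size=200)
    s_obs = data.mean()                     # observed summary statistic

    def simulated_summary(theta, n=200):
        """Draw a fresh data set under theta and return the same summary."""
        return rng.normal(theta, 1.0, size=n).mean()

    # Robbins-Monro: move theta along a simulated ascent direction with
    # decreasing gains until the simulated summary matches the observed one
    theta = 0.0
    for k in range(1, 3001):
        theta += (s_obs - simulated_summary(theta)) / k**0.7
    ```

    Each step needs only one simulation at the current parameter value, which is why this style of recursion avoids sampling widely from low-likelihood regions the way rejection-based ABC does.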

  6. Estimate of annual daily maximum rainfall and intense rain equation for the Formiga municipality, MG, Brazil

    Directory of Open Access Journals (Sweden)

    Giovana Mara Rodrigues Borges

    2016-11-01

    Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual daily maximum rainfall as well as intense-rain models for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order to identify, by exceedance probability, the statistical distribution that best describes the data. A daily rainfall disaggregation methodology was used for the intense-rain model studies, adjusted with Intensity-Duration-Frequency (IDF) and Exponential models. The study found that the Gumbel model adhered best to the observed frequencies, as indicated by the Chi-squared test, and that the Exponential model best conforms to the observed data for predicting intense rains.
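
    A Gumbel fit of annual daily-maximum rainfall, the distribution selected above, can be sketched with method-of-moments parameters and the corresponding return-level quantile. The rainfall series below is invented, not the Formiga data:

    ```python
    import numpy as np

    # Hypothetical annual daily-maximum rainfall series (mm)
    annual_max = np.array([55., 72., 48., 90., 63., 81., 58., 104., 69., 77.,
                           51., 95., 66., 73., 60., 88., 70., 59., 83., 75.])

    # Gumbel parameters by the method of moments:
    #   scale alpha = sqrt(6)*s/pi,  location u = mean - 0.5772*alpha
    s = annual_max.std(ddof=1)
    alpha = np.sqrt(6.0) * s / np.pi
    u = annual_max.mean() - 0.5772 * alpha

    def gumbel_quantile(T):
        """Rainfall depth (mm) with return period T years."""
        p = 1.0 - 1.0 / T               # annual non-exceedance probability
        return u - alpha * np.log(-np.log(p))

    x100 = gumbel_quantile(100.0)       # 100-year daily rainfall estimate
    ```

    Quantiles like x100, disaggregated to sub-daily durations, are the raw material from which IDF curves of the kind discussed in the abstract are built.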

  7. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and preparation is seldom as simple as dissolving the sample in a given solvent. Even that step alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other interactions do not affect extraction: a large number of components will likely not dissolve and are therefore eliminated by a simple filtration. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  8. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  9. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  10. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  11. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  12. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  13. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  14. Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh

    Science.gov (United States)

    Mortuza, M. R.; Demissie, Y.; Li, H. Y.

    2014-12-01

    The increased frequency of extreme precipitation, especially events with multiday durations, is responsible for recent urban floods and the associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimates of the frequency of occurrence of such extreme precipitation events are thus important for developing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to the 1-day, 2-day, and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools information from climatologically similar sites to make reliable quantile estimates, provided the pooling group is homogeneous and of reasonable size. We used the region-of-influence (ROI) approach, together with a homogeneity measure based on L-moments, to identify the homogeneous pooling group for each site. Five 3-parameter distributions (Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using the Bayesian Model Averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures, as well as to explore spatio-temporal variations of extreme precipitation and the associated risk.
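
    Regional frequency analysis with L-moments starts from sample L-moments computed via probability-weighted moments; the L-CV and L-skewness ratios derived from them drive both the homogeneity test and the distribution choice. A minimal sketch (the data vector is illustrative):

    ```python
    import numpy as np

    def sample_lmoments(x):
        """First three sample L-moments via probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) / (n - 1) * x) / n
        b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
        l1 = b0                          # L-location (the mean)
        l2 = 2 * b1 - b0                 # L-scale
        l3 = 6 * b2 - 6 * b1 + b0       # third L-moment
        return l1, l2, l3

    # Illustrative annual-maximum series for one site
    x = [23., 41., 18., 65., 30., 52., 27., 90., 35., 44.]
    l1, l2, l3 = sample_lmoments(x)
    lcv, lskew = l2 / l1, l3 / l2        # L-moment ratios
    ```

    In a full regional analysis, the L-moment ratios of all sites in a pooling group are compared (e.g., via Hosking and Wallis's heterogeneity statistic) and the regional average ratios are matched against the theoretical curves of the candidate three-parameter distributions.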

  15. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  16. The Influence of Creatine Monohydrate on Strength and Endurance After Doing Physical Exercise With Maximum Intensity

    Directory of Open Access Journals (Sweden)

    Asrofi Shicas Nabawi

    2017-11-01

    The purpose of this study was: (1) to analyze the effect of creatine monohydrate on strength and on endurance after physical exercise of maximum intensity; (2) to analyze the effect of no creatine monohydrate on strength and on endurance after physical exercise of maximum intensity; (3) to analyze the difference between administering creatine and non-creatine on strength and endurance after exercise of maximum intensity. This was a quantitative study using a quasi-experimental research method. The design was a pretest and posttest control group design, and the data were analyzed using a paired-sample t-test. Data were collected with a leg muscle strength test (back and leg dynamometer), a 1-minute sit-up test, a 30-second push-up test, and a VO2max test (Cosmed Quark CPET) during the pretest and posttest. The data were then analyzed using SPSS 22.0. The results showed: (1) there was an influence of creatine administration on strength after exercise of maximum intensity; (2) there was an influence of creatine administration on endurance after exercise of maximum intensity; (3) there was an influence of non-creatine on strength after exercise of maximum intensity; (4) there was an influence of non-creatine on endurance after exercise of maximum intensity; (5) there was a significant difference between the creatine and non-creatine conditions, with the creatine group showing the larger delta in increased strength and endurance after exercise of maximum intensity. Based on the above analysis, it can be concluded that strength and endurance increased in each group after the training program.

  17. AUTOMATIC FREQUENCY CONTROL SYSTEM

    Science.gov (United States)

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.

  18. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly as arising in the mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical branches as well as in other scientific fields such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  19. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  20. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    Science.gov (United States)

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in exercise testing of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sampling of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied with the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the 220-age formula versus the Sheffield table. The maximum heart rate is similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should therefore be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
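
    The age-predicted maximum heart rate discussed in this record is a one-line formula; a minimal sketch in Python (the sample age is illustrative, not taken from the study):

    ```python
    def predicted_max_hr(age_years):
        """Astrand's age-predicted maximum heart rate, in beats per minute."""
        return 220 - age_years

    # e.g. for a 40-year-old individual
    print(predicted_max_hr(40))
    ```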

  1. Searching for chaos on low frequency

    OpenAIRE

    Nicolas Wesner

    2004-01-01

    A new method for detecting low-dimensional chaos in small sample sets is presented. The method is applied to low-frequency (annual and monthly) financial data, for which few observations are available.

  2. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized free of aliasing. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one.
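
    The separated ranges of admissible sample rates that bandpass sampling theory singles out can be enumerated directly; a minimal sketch in Python (the 60-80 MHz band is an illustrative choice, not taken from the paper):

    ```python
    import math

    def admissible_sample_rates(f_low, f_high):
        """Alias-free sample-rate ranges (Hz) for a bandpass signal occupying
        [f_low, f_high], per classical bandpass sampling theory:
        2*f_high/n <= fs <= 2*f_low/(n-1) for integer n up to f_high/bandwidth."""
        bandwidth = f_high - f_low
        n_max = math.floor(f_high / bandwidth)   # highest usable integer zone
        ranges = []
        for n in range(1, n_max + 1):
            fs_min = 2.0 * f_high / n
            fs_max = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
            ranges.append((fs_min, fs_max))
        return ranges

    # Example: a 20 MHz-wide band centered at 70 MHz (60-80 MHz)
    for fs_min, fs_max in admissible_sample_rates(60e6, 80e6):
        print(fs_min, fs_max)
    ```

    Note that the lowest admissible range can sit far below the carrier frequency, which is the point the record makes about choosing a rate within a DAS's maximum sample rate.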

  3. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.

  4. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
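
    The link between the most uniform enzyme-state distribution and maximal Shannon information entropy can be illustrated numerically; a minimal sketch (the four-state distributions are illustrative, not taken from the paper):

    ```python
    import math

    def shannon_entropy(p):
        """Shannon information entropy H = -sum p_i ln p_i (natural log)."""
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]   # most uniform distribution of 4 states
    skewed  = [0.70, 0.10, 0.10, 0.10]   # a less uniform alternative

    print(shannon_entropy(uniform))      # ln(4), the maximum for 4 states
    print(shannon_entropy(skewed))       # strictly smaller
    ```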

  5. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  6. Static electromagnetic frequency changers

    CERN Document Server

    Rozhanskii, L L

    1963-01-01

    Static Electromagnetic Frequency Changers is about the theory, design, construction, and applications of static electromagnetic frequency changers, devices used for multiplication or division of alternating-current frequency. It was originally published in Russian. This book is organized into five chapters. The first three chapters introduce the readers to the principles of operation, the construction, and the potential applications of static electromagnetic frequency changers and to the principles of their design. The two concluding chapters use some hitherto unpublished work

  7. Effect of Solar Radiation on Viscoelastic Properties of Bovine Leather: Temperature and Frequency Scans

    Science.gov (United States)

    Nalyanya, Kallen Mulilo; Rop, Ronald K.; Onyuka, Arthur S.

    2017-04-01

    This work presents both analytical and experimental results on the effect of unfiltered natural solar radiation on the thermal and dynamic mechanical properties of Boran bovine leather at both the pickling and tanning stages of preparation. Samples of appropriate dimensions cut from both pickled and tanned pieces of leather were exposed to unfiltered natural solar radiation for time intervals ranging from 0 h (non-irradiated) to 24 h. The temperature of the dynamic mechanical analyzer was equilibrated at 30°C and increased to 240°C at a heating rate of 5°C·min^-1, while its oscillation frequency varied from 0.1 Hz to 100 Hz. With the help of thermal analysis (TA) control software, which analyzes and generates parameter means over the temperature/frequency range, graphs were created from the means in Microsoft Excel 2013. The viscoelastic properties showed linear frequency dependence from 0.1 Hz to 30 Hz, followed by negligible frequency dependence above 30 Hz. Storage modulus (E') and shear stress (σ) increased with frequency, while loss modulus (E''), complex viscosity (η*) and dynamic shear viscosity (η) decreased linearly with frequency. The effect of solar radiation was evident as the properties increased initially from 0 h to 6 h of irradiation, followed by a steady decline to a minimum at 18 h, before a drastic increase to a maximum at 24 h. Hence, the tanning industry can consider a duration of 24 h for sun-drying of leather to enhance the mechanical properties and hence the quality of the leather. At frequencies higher than 30 Hz, the dynamic mechanical properties are independent of frequency. The frequency of 30 Hz was observed to be a critical value in the behavior of the mechanical properties of bovine hide.

  8. Eastern Frequency Response Study

    Energy Technology Data Exchange (ETDEWEB)

    Miller, N.W.; Shao, M.; Pajic, S.; D' Aquila, R.

    2013-05-01

    This study was specifically designed to investigate the frequency response of the Eastern Interconnection that results from large loss-of-generation events of the type targeted by the North American Electric Reliability Corp. Standard BAL-003 Frequency Response and Frequency Bias Setting (NERC 2012a), under possible future system conditions with high levels of wind generation.

  9. DDC Descriptor Frequencies.

    Science.gov (United States)

    Klingbiel, Paul H.; Jacobs, Charles R.

    This report summarizes the frequency of use of the 7144 descriptors used for indexing technical reports in the Defense Documentation Center (DDC) collection. The descriptors are arranged alphabetically in the first section and by frequency in the second section. The frequency data cover about 427,000 AD documents spanning the interval from March…

  10. Measuring Coupling of Rhythmical Time Series Using Cross Sample Entropy and Cross Recurrence Quantification Analysis

    Directory of Open Access Journals (Sweden)

    John McCamley

    2017-01-01

    Full Text Available The aim of this investigation was to compare and contrast the use of cross sample entropy (xSE) and cross recurrence quantification analysis (cRQA) measures for the assessment of coupling of rhythmical patterns. Measures were assessed using simulated signals with regular, chaotic, and random fluctuations in frequency, amplitude, and a combination of both. Biological data were studied as models of normal and abnormal locomotor-respiratory coupling. Nine signal types were generated for seven frequency ratios. Fifteen patients with COPD (abnormal coupling) and twenty-one healthy controls (normal coupling) walked on a treadmill at three speeds while breathing and walking were recorded. xSE and the cRQA measures of percent determinism, maximum line, mean line, and entropy were quantified for both the simulated and experimental data. In the simulated data, xSE, percent determinism, and entropy were influenced by the frequency manipulation. The 1:1 frequency ratio was different from the other frequency ratios for almost all measures and/or manipulations. The patients with COPD used a 2:3 ratio more often, and xSE, percent determinism, maximum line, mean line, and cRQA entropy were able to discriminate between the groups. Analysis of the effects of walking speed indicated that all measures were able to discriminate between speeds.
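
    Cross sample entropy can be sketched in a few lines: count cross-series template matches of length m and m+1 and take the negative log of their ratio. The following is a simplified illustration only — the template length m, tolerance r, and the test signals are assumptions, not the study's settings:

    ```python
    import numpy as np

    def cross_sample_entropy(x, y, m=2, r=0.2):
        """Sketch of xSE(m, r) = -ln(A/B), where B and A count cross-series
        template matches of length m and m+1 (Chebyshev distance <= tol,
        with tol = r times the pooled standard deviation)."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        tol = r * np.std(np.concatenate([x, y]))
        n = min(len(x), len(y))

        def count_matches(k):
            # all length-k templates from each series
            tx = np.array([x[i:i + k] for i in range(n - k)])
            ty = np.array([y[i:i + k] for i in range(n - k)])
            # Chebyshev distance between every cross-series template pair
            d = np.max(np.abs(tx[:, None, :] - ty[None, :, :]), axis=2)
            return np.sum(d <= tol)

        b = count_matches(m)
        a = count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8 * np.pi, 400)
    coupled = cross_sample_entropy(np.sin(t), np.sin(t + 0.1))
    uncoupled = cross_sample_entropy(np.sin(t), rng.standard_normal(400))
    print(coupled, uncoupled)   # tightly coupled signals yield lower xSE
    ```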

  11. Fungic microflora of Panicum maximum and Stylosanthes spp. commercial seed / Microflora fúngica de sementes comerciais de Panicum maximum e Stylosanthes spp.

    Directory of Open Access Journals (Sweden)

    Larissa Rodrigues Fabris

    2010-09-01

    Full Text Available The sanitary quality of 26 lots of commercial seeds of tropical forages, produced in different regions (2004-05 and 2005-06), was analyzed. The lots were composed of seeds of Panicum maximum ('Massai', 'Mombaça' and 'Tanzânia') and stylo ('Estilosantes Campo Grande' - ECG). Additionally, seeds of two lots of P. maximum intended for export were analyzed. The blotter test was used, at 20ºC under alternating light and darkness in a 12 h photoperiod, for seven days. The genera Aspergillus, Cladosporium and Rhizopus were the secondary or saprophytic fungi (FSS) with the greatest frequency in the P. maximum lots. In general, there was low incidence of these fungi in the seeds. In relation to pathogenic fungi (FP), a high frequency of lots contaminated by the genera Bipolaris, Curvularia, Fusarium and Phoma was detected. Generally, there was high incidence of FP in P. maximum seeds. The occurrence of Phoma sp. was high: 81% of the lots showed an incidence above 50%. In 'ECG' seeds, FSS (genera Aspergillus, Cladosporium and Penicillium) and FP (genera Bipolaris, Curvularia, Fusarium and Phoma) were detected, usually at low incidence. FSS and FP were associated with the P. maximum seeds intended for export, with significant incidence in some cases. The results indicated that there was a limiting factor in all producer regions regarding the sanitary quality of the seeds.

  12. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper a method for modelling word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on the frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows finding an optimal fitting and filtering algorithm for subsequent lexicographic analysis and verification of frequency trend models.

  13. Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Bo-Hee Choi

    2016-01-01

    Full Text Available This paper presents a new design method of asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator; it is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relay is conducted through both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the position of the relays, and maximum efficiency is then obtained at the optimum placement of the relays. The capacitances of the second to nth resonators and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.

  14. Different Frequencies between Power and Efficiency in Wireless Power Transfer

    OpenAIRE

    Muhammad Afnan, Habibi; Hodaka, Ichijo

    2017-01-01

    Wireless Power Transfer (WPT) has been recognized as a common power transfer method because it transfers electric power from source to load without any cable. One of the physical principles of WPT is the law of electromagnetic induction, and the WPT system is driven by an alternating-current power source at a specific frequency. The frequency that provides maximum gain between voltages or currents is called the resonance frequency. On the other hand, some studies about WPT said that resonance fr...

  15. Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions

    NARCIS (Netherlands)

    Herman, P.M.J.; Heip, C.H.R.

    1999-01-01

    In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available

  16. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  17. Nonmonotonic low frequency losses in HTSCs

    International Nuclear Information System (INIS)

    Castro, H; Gerber, A; Milner, A

    2007-01-01

    A calorimetric technique has been used in order to study ac-field dissipation in ceramic BSCCO samples at low frequencies between 0.05 and 250 Hz, at temperatures from 65 to 90 K. In contrast to previous studies, where ac losses have been reported with a linear dependence on magnetic field frequency, we find a nonmonotonic function presenting various maxima. Frequencies corresponding to local maxima of dissipation depend on the temperature and the amplitude of the ac magnetic field. Flux creep is argued to be responsible for this behaviour. A simple model connecting the characteristic vortex relaxation times (flux creep) and the location of dissipation maxima versus frequency is proposed

  18. Installation of the MAXIMUM microscope at the ALS

    International Nuclear Information System (INIS)

    Ng, W.; Perera, R.C.C.; Underwood, J.H.; Singh, S.; Solak, H.; Cerrina, F.

    1995-10-01

    The MAXIMUM scanning x-ray microscope, developed at the Synchrotron Radiation Center (SRC) at the University of Wisconsin, Madison, was implemented on the Advanced Light Source in August of 1995. The microscope's initial operation at SRC successfully demonstrated the use of a multilayer-coated Schwarzschild objective for focusing 130 eV x-rays to a spot size of better than 0.1 micron with an electron energy resolution of 250 meV. The performance of the microscope was severely limited because of the relatively low brightness of SRC, which limits the available flux at the focus of the microscope. The high brightness of the ALS is expected to increase the usable flux at the sample by a factor of 1,000. The authors report on the installation of the microscope on bending magnet beamline 6.3.2 at the ALS and the initial measurement of optical performance on the new source; preliminary experiments with the surface chemistry of HF-etched Si are also described

  19. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  20. Magnetoacoustic measurements on steel samples at low magnetizing frequencies

    Czech Academy of Sciences Publication Activity Database

    Perevertov, Oleksiy; Stupakov, Alexandr

    2015-01-01

    Roč. 66, č. 7 (2015), s. 58-61 ISSN 1335-3632 R&D Projects: GA ČR GA13-18993S; GA ČR GB14-36566G Institutional support: RVO:68378271 Keywords : magneto-acoustic emission * surface magnetic field * steel * magnetic hysteresis Subject RIV: JB - Sensors, Measurment, Regulation Impact factor: 0.407, year: 2015

  1. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
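
    The tracking loop described above is commonly realized as a perturb-and-observe algorithm; a minimal sketch under an assumed toy power-versus-duty-cycle curve (the curve, step size, and maximum at duty = 0.6 are illustrative, not taken from the paper):

    ```python
    def pv_power(duty):
        """Toy photovoltaic power curve vs. converter duty cycle; stands in
        for the measured V*I product (assumed single maximum at duty = 0.6)."""
        return max(0.0, 100.0 - 500.0 * (duty - 0.6) ** 2)

    def perturb_and_observe(duty=0.3, step=0.01, iterations=200):
        """Minimal perturb-and-observe MPPT loop: keep stepping the duty cycle
        in the direction that last increased the measured power."""
        power = pv_power(duty)
        direction = 1
        for _ in range(iterations):
            duty += direction * step
            new_power = pv_power(duty)
            if new_power < power:        # power dropped: reverse perturbation
                direction = -direction
            power = new_power
        return duty, power

    duty, power = perturb_and_observe()
    print(duty, power)   # settles into a small oscillation around duty = 0.6
    ```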

  2. PATTERNS OF THE MAXIMUM RAINFALL AMOUNTS REGISTERED IN 24 HOURS WITHIN THE OLTENIA PLAIN

    Directory of Open Access Journals (Sweden)

    ALINA VLĂDUŢ

    2012-03-01

    Full Text Available Patterns of the maximum rainfall amounts registered in 24 hours within the Oltenia Plain. The present study aims at rendering the main features of the maximum rainfall amounts registered in 24 h within the Oltenia Plain. We used 30-year time series (1980-2009) for seven meteorological stations. Generally, the maximum amounts in 24 h display the same pattern as the monthly mean amounts, namely higher values in the interval May-October. In terms of mean values, the highest amounts are registered in the western and northern extremities of the plain. The maximum values generally exceed 70 mm at all meteorological stations: D.T. Severin, 224 mm, July 1999; Slatina, 104.8 mm, August 2002; Caracal, 92.2 mm, July 1991; Bechet, 80.8 mm, July 2006; Craiova, 77.6 mm, April 2003. During the cold season, a greater uniformity was noticed all over the plain, due to the cyclonic origin of rainfall, compared to the warm season, when thermal convection is quite active and triggers local showers. In order to better emphasize the peculiarities of this parameter, we have calculated the frequency in different value classes (eight classes), as well as the probability of occurrence of different amounts. Thus, the highest frequency (25-35%) is held by the first two classes of values (0-10 mm; 10.1-20 mm). The lowest frequency is registered for amounts of more than 100 mm, which generally display a probability of occurrence of less than 1%, and only in the western and eastern extremities of the plain.
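
    The class-frequency analysis described above amounts to a histogram over fixed value classes; a minimal sketch in Python (the rainfall amounts and the exact class boundaries are invented illustrative values, not the study's data):

    ```python
    import numpy as np

    # Hypothetical 24-h maximum rainfall amounts (mm) for one station
    amounts = np.array([4.2, 8.1, 12.5, 15.0, 18.3, 22.7, 27.9, 33.4,
                        41.0, 47.5, 55.2, 63.8, 72.1, 80.8, 95.3, 104.8])

    # Eight assumed classes: 0-10, 10-20, ..., 60-70, and >70 mm
    edges = [0, 10, 20, 30, 40, 50, 60, 70, np.inf]
    counts, _ = np.histogram(amounts, bins=edges)
    freq_percent = 100 * counts / counts.sum()

    for lo, hi, f in zip(edges[:-1], edges[1:], freq_percent):
        print(f"{lo}-{hi} mm: {f:.1f} %")
    ```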

  3. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories.

  4. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  5. Comparison of maximum viscosity and viscometric methods for identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscometric and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speed and irradiation dose. The trend was similar for maximum viscosity. Regression coefficients and expressions of viscosity and maximum viscosity with increasing irradiation dose were 0.9823 (y = 335.02e^(-0.3366x)) at 120 rpm and 0.9939 (y = -42.544x + 730.26). This trend in viscosity was similar for all stirring speeds. The parameter A, B and C values showed a dose-dependent relation and were a better parameter for detecting irradiation treatment than maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. Therefore, the authors think that the maximum viscosity method can be proposed as one of the new methods to detect irradiation treatment of sweet potato starch.
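
    An exponential dose-response regression like the one reported above (y = 335.02e^(-0.3366x)) can be obtained by a log-linear least-squares fit; a minimal sketch in Python, where the dose grid is illustrative and the sample points are generated from the reported expression purely to demonstrate the fitting step:

    ```python
    import numpy as np

    # Points generated from the reported regression y = 335.02*exp(-0.3366*x)
    dose = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
    viscosity = 335.02 * np.exp(-0.3366 * dose)

    # Fitting a line to log(y) recovers the exponential parameters a and b
    b, log_a = np.polyfit(dose, np.log(viscosity), 1)
    a = np.exp(log_a)
    print(a, b)   # recovers 335.02 and -0.3366
    ```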

  6. Analysis on the time and frequency domains of the acceleration in front crawl stroke.

    Science.gov (United States)

    Gil, Joaquín Madera; Moreno, Luis-Millán González; Mahiques, Juan Benavent; Muñoz, Víctor Tella

    2012-05-01

    Swimming involves accelerations and decelerations of the swimmer's body. Thus, the main objective of this study is to make a temporal and frequency analysis of the acceleration in front crawl swimming with regard to gender and performance. The sample was composed of 31 male swimmers (15 high-level and 16 low-level) and 20 female swimmers (11 high-level and 9 low-level). The acceleration was registered from the third complete cycle during eight seconds in a 25-meter maximum velocity test. A position transducer (200 Hz) was used to collect the data, synchronized to an aquatic camera (25 Hz). The acceleration in the temporal domain (root mean square, minimum and maximum of the acceleration) and in the frequency domain (power peak, power peak frequency and spectral area) was calculated with Fourier analysis, as well as the velocity and the distribution of the spectra according to whether they present one or more main peaks (type 1 and type 2). A one-way ANOVA was used to establish differences between gender and performance. Results show differences between genders in all the temporal-domain variables (p<0.05) and only the spectral area (SA) in the frequency domain (p<0.05). Regarding performance, only the root mean square (RMS) showed differences in the male swimmers (p<0.05), and in the higher-level swimmers the maximum (Max) and the power peak (PP) of the acceleration showed differences between genders (p<0.05). These results confirm the importance of knowing the RMS to determine the efficiency of swimmers regarding gender and performance level.
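
    The time- and frequency-domain measures named above (RMS, minimum/maximum, power peak, power peak frequency, spectral area) can all be computed from an acceleration record with an FFT; a minimal sketch on a synthetic signal (the 1.5 Hz stroke frequency and signal shape are illustrative, not the study's data):

    ```python
    import numpy as np

    def acceleration_measures(signal, fs):
        """Time-domain (RMS, min, max) and frequency-domain (power peak,
        power peak frequency, spectral area) measures of an acceleration record."""
        signal = np.asarray(signal, dtype=float)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        peak_idx = np.argmax(power[1:]) + 1        # skip the DC bin
        return {
            "rms": float(np.sqrt(np.mean(signal ** 2))),
            "min": float(signal.min()),
            "max": float(signal.max()),
            "power_peak": float(power[peak_idx]),
            "power_peak_freq": float(freqs[peak_idx]),
            "spectral_area": float(np.sum(power) * (freqs[1] - freqs[0])),
        }

    # Synthetic stroke-like acceleration: 1.5 Hz fundamental plus a harmonic,
    # sampled at 200 Hz over 8 s (matching the record's transducer and duration)
    fs = 200.0
    t = np.arange(0, 8, 1 / fs)
    accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)
    m = acceleration_measures(accel, fs)
    print(m["rms"], m["power_peak_freq"])
    ```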

  7. Occupational Exposure Assessment of Tehran Metro Drivers to Extremely Low Frequency Magnetic Fields

    Directory of Open Access Journals (Sweden)

    mohammad reza Monazzam

    2016-03-01

    Full Text Available Introduction: Occupational exposure to extremely low frequency magnetic fields (ELF-MFs) is an integral part of the train-driving task and raises concern about driving jobs. The present study investigated the occupational exposure of Tehran metro train drivers to extremely low frequency magnetic fields. Methods: To measure driver exposure, a random sample of AC- and DC-type trains was selected from each line, and measurements were made according to IEEE Std 644-1994 using a triple-axis TES-394 meter. Measured exposures were then compared with national occupational exposure limit guidelines. Results: The maximum and minimum mean exposures were found in AC external-city trains (1.2±1.5 μT) and DC internal-city trains (0.31±0.2 μT), respectively. The maximum and minimum exposures in the AC trains of line 5 were 9 μT and 0.08 μT, respectively; in the internal-city lines, the corresponding values in AC trains were 5.4 μT and 0.08 μT. Conclusions: Exposure did not exceed the national or international occupational exposure limit guidelines in any of the exposure scenarios or train types. However, this alone should not be taken as the basis of safety in these fields.

  8. Wideband 4-diode sampling circuit

    Science.gov (United States)

    Wojtulewicz, Andrzej; Radtke, Maciej

    2016-09-01

    The objective of this work was to develop a wide-band sampling circuit. The device should be able to collect samples of a very fast signal applied to its input, amplify them and prepare them for further processing. The study emphasizes the method of sampling-pulse shaping. The use of an ultrafast pulse generator allows sampling of signals with a wide frequency spectrum, reaching several gigahertz. The device uses a pulse transformer to prepare symmetrical pulses; their final shape is formed with the help of a step recovery diode, two coplanar strips and a Schottky diode. The resulting device can be used in a sampling oscilloscope as well as in other measurement systems.

  9. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    Science.gov (United States)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
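
    The central idea, recovering a sparse vibration frequency above the uniform-sampling Nyquist limit from randomly spaced pulses, can be illustrated in a few lines. The frequencies, mean rate and jitter range below are assumptions for the demo, not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

f0 = 30.0          # vibration frequency (Hz), above the mean sampling rate
mean_dt = 1 / 20   # mean pulse interval -> 20 Hz average rate (assumed)

# Additive random sampling: each interval is the mean plus uniform jitter,
# so sample times are a cumulative sum of random increments.
dt = mean_dt * (0.5 + rng.uniform(size=400))   # intervals in [0.5, 1.5]*mean_dt
t = np.cumsum(dt)
x = np.sin(2 * np.pi * f0 * t)

# Direct periodogram on the non-uniform grid: correlate the samples with
# sine/cosine probes at each candidate frequency.
cand = np.arange(1.0, 50.0, 0.1)
power = [np.dot(x, np.cos(2 * np.pi * f * t)) ** 2
         + np.dot(x, np.sin(2 * np.pi * f * t)) ** 2
         for f in cand]
f_est = cand[int(np.argmax(power))]
print(f_est)   # close to 30 Hz, despite the 10 Hz uniform-sampling limit
```

    With uniform 20 Hz sampling, a 30 Hz tone would alias onto 10 Hz; the random intervals smear the alias energy across the spectrum, so the true peak dominates the periodogram. The same effect underlies Lomb-Scargle analysis of unevenly sampled data.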

  10. LOW FREQUENCY DAMPER

    Directory of Open Access Journals (Sweden)

    Radu BOGATEANU

    2009-09-01

    Full Text Available The low frequency damper is an autonomous piece of equipment for damping vibrations in the 1-20 Hz range. Its autonomy enables the equipment to be installed in various mechanical systems without requiring special hydraulic installations. The low frequency damper was designed for damping the low frequency oscillations occurring in the control circuits of the upgraded IAR-99 Aircraft. It is a novelty in the aerospace field, with applicability in several areas, as it can be built in an appropriate range of dimensions meeting the requirements of different beneficiaries. Along these lines, a version able to damp an extended frequency range was produced for damping oscillations in the pipes of nuclear power plants; this damper, tested in INCAS laboratories, matched the requirements of the beneficiary. The low frequency damper is patented: patent no. 114583C1/2000 is held by INCAS.

  11. Frequency Hopping Transceiver Multiplexer

    Science.gov (United States)

    1983-03-01


  12. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  13. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  14. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  15. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  16. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable power source, since its peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm ...
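
    The approach named in the title, using particle swarm optimization to locate the maximum power point, can be sketched against a toy P-V curve. Both the curve and the swarm parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(v):
    """Toy P-V curve of a panel (simplified diode-style I-V, illustrative)."""
    i = 5.0 * (1 - np.exp((v - 40.0) / 4.0))
    return np.clip(v * i, 0, None)

# Minimal particle swarm maximizing output power over the voltage range.
n, w, c1, c2 = 10, 0.6, 1.5, 1.5        # swarm size, inertia, pulls
pos = rng.uniform(0, 40, n)             # particle voltages
vel = np.zeros(n)
pbest, pbest_val = pos.copy(), pv_power(pos)
for _ in range(60):
    g = pbest[pbest_val.argmax()]       # global best voltage so far
    vel = w * vel + c1 * rng.uniform(size=n) * (pbest - pos) \
                  + c2 * rng.uniform(size=n) * (g - pos)
    pos = np.clip(pos + vel, 0, 40)
    val = pv_power(pos)
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

v_mpp = pbest[pbest_val.argmax()]
print(round(v_mpp, 1), round(pbest_val.max(), 1))   # voltage and power at MPP
```

    Unlike perturb-and-observe tracking, the swarm explores the whole voltage range, which is why PSO-based trackers are often proposed for curves with multiple local peaks (e.g. under partial shading).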

  17. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    By constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
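
    A common concrete reading of entropy-based soft clustering is that memberships follow a Gibbs (maximum-entropy) distribution over squared distances to the centers. The sketch below follows that reading on synthetic data; it is an illustration of the general technique, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D blobs (synthetic data for illustration).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

def maxent_cluster(X, k=2, beta=5.0, iters=50):
    """Entropy-regularized soft clustering: memberships are the
    maximum-entropy (Gibbs) distribution over squared distances."""
    centers = X[[0, -1]].copy()   # one seed point per blob, for a stable demo
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        u = np.exp(-beta * d2)
        u /= u.sum(axis=1, keepdims=True)        # soft memberships
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return centers, u

centers, u = maxent_cluster(X)
print(np.round(np.sort(centers[:, 0]), 1))   # center x-coords near 0 and 3
```

    As beta grows, the memberships harden toward 0/1 and the update approaches hard C-means, which is the "soft generalization" relationship the abstract mentions; annealing beta upward is the usual deterministic-annealing variant.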

  18. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high-resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor-of-three improvement in resolution. 7 refs., 4 figs.
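
    The deconvolution task described above, recovering sharp spectral lines from data blurred by a known resolution function, can be illustrated with a simple positivity-preserving iteration. The sketch uses Richardson-Lucy in place of the paper's full maximum-entropy analysis, and all signal shapes are assumed:

```python
import numpy as np

# Simulated spectrum: two sharp lines blurred by a Gaussian instrument
# resolution function (all channel positions and widths illustrative).
truth = np.zeros(200)
truth[70], truth[130] = 1.0, 0.6
kern = np.exp(-0.5 * (np.arange(-30, 31) / 8.0) ** 2)
kern /= kern.sum()
data = np.convolve(truth, kern, mode="same") + 1e-6

# Richardson-Lucy iteration: a multiplicative, positivity-preserving
# deconvolution, used here as a stand-in for the maximum-entropy analysis.
f = np.full(200, data.mean())
for _ in range(200):
    blur = np.convolve(f, kern, mode="same")
    f *= np.convolve(data / np.maximum(blur, 1e-12), kern[::-1], mode="same")

print(int(np.argmax(f)))   # strongest recovered line near channel 70
```

    Like the maximum-entropy method, this iteration keeps the estimate non-negative and sharpens features well below the instrument resolution; the full MEM additionally selects the smoothest (highest-entropy) spectrum consistent with the data and its error bars.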

  19. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  20. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...