WorldWideScience

Sample records for maximum sampling frequency

  1. Effect of Training Frequency on Maximum Expiratory Pressure

    Science.gov (United States)

    Anand, Supraja; El-Bashiti, Nour; Sapienza, Christine

    2012-01-01

    Purpose: To determine the effects of expiratory muscle strength training (EMST) frequency on maximum expiratory pressure (MEP). Method: We assigned 12 healthy participants to 2 groups of training frequency (3 days per week and 5 days per week). They completed a 4-week training program on an EMST trainer (Aspire Products, LLC). MEP was the primary…

  2. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  3. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
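
    As a rough, hedged illustration of the sampling considerations in this abstract (not the authors' code; the 4 Hz maximum tail-beat frequency, the 2 s smoothing window and the synthetic signal below are assumptions), the following Python sketch computes ODBA by removing a running-mean static component from each axis and applies the stated rule of thumb that the sampling frequency should be at least twice the maximum tail-beat frequency:

      import numpy as np

      def odba(acc_xyz: np.ndarray, fs: float, window_s: float = 2.0) -> np.ndarray:
          """Overall dynamic body acceleration: remove a running-mean 'static' component
          from each axis, then sum the absolute dynamic components."""
          n = max(1, int(window_s * fs))
          kernel = np.ones(n) / n
          static = np.column_stack([np.convolve(acc_xyz[:, i], kernel, mode="same") for i in range(3)])
          return np.abs(acc_xyz - static).sum(axis=1)

      # Hypothetical example: tail beats up to 4 Hz superimposed on gravity along one axis.
      max_tailbeat_hz = 4.0
      fs = 2 * max_tailbeat_hz + 2.0          # Nyquist-style rule of thumb, with a small margin
      t = np.arange(0, 60, 1.0 / fs)
      acc = np.column_stack([0.3 * np.sin(2 * np.pi * max_tailbeat_hz * t),
                             0.1 * np.sin(2 * np.pi * 1.0 * t),
                             1.0 + 0.05 * np.random.default_rng(0).standard_normal(t.size)])
      print(f"sampling at {fs:.1f} Hz, mean ODBA = {odba(acc, fs).mean():.3f} g")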

  4. Sampling frequency affects ActiGraph activity counts

    DEFF Research Database (Denmark)

    Brønd, Jan Christian; Arvidsson, Daniel

    that is normally performed at frequencies higher than 2.5 Hz. With the ActiGraph model GT3X one has the option to select a sample frequency from 30 to 100 Hz. This study investigated the effect of the sampling frequency on the output of the bandpass filter. Methods: A synthetic frequency sweep of 0-15 Hz was generated...... in Matlab and sampled at frequencies of 30-100 Hz. Also, acceleration signals during indoor walking and running were sampled at 30 Hz using the ActiGraph GT3X and resampled in Matlab to frequencies of 40-100 Hz. All data was processed with the ActiLife software. Results: Acceleration frequencies between 5......-15 Hz escaped the bandpass filter when sampled at 40, 50, 70, 80 and 100 Hz, while this was not the case when sampled at 30, 60 and 90 Hz. During the ambulatory activities this artifact resulted in different activity count output from the ActiLife software with different sampling frequency...
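
    The record above concerns a filtering artifact specific to the ActiLife processing chain; the minimal Python sketch below is only a generic illustration of why the choice of sampling frequency matters (a made-up 25 Hz tone, not the GT3X pipeline): a component above the Nyquist frequency folds down to a lower apparent frequency.

      import numpy as np

      f_true = 25.0                       # Hz, e.g. a high-frequency vibration component
      for fs in (30.0, 100.0):
          t = np.arange(0, 10, 1.0 / fs)
          x = np.sin(2 * np.pi * f_true * t)
          spec = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
          print(f"fs = {fs:5.1f} Hz -> dominant component at {freqs[spec.argmax()]:.2f} Hz")
      # At fs = 30 Hz the 25 Hz tone folds down to |25 - 30| = 5 Hz (an alias);
      # at fs = 100 Hz it appears at its true frequency.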

  5. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  6. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  7. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  8. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal...... be extended to various other cases including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set....

  9. Geodesic acoustic eigenmode for tokamak equilibrium with maximum of local GAM frequency

    Energy Technology Data Exchange (ETDEWEB)

    Lakhin, V.P. [NRC “Kurchatov Institute”, Moscow (Russian Federation); Sorokina, E.A., E-mail: sorokina.ekaterina@gmail.com [NRC “Kurchatov Institute”, Moscow (Russian Federation); Peoples' Friendship University of Russia, Moscow (Russian Federation)

    2014-01-24

    The geodesic acoustic eigenmode for tokamak equilibrium with the maximum of local GAM frequency is found analytically in the frame of MHD model. The analysis is based on the asymptotic matching technique.

  10. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175

  11. The effect of electric field maximum on the Rabi flopping and generated higher frequency spectra

    International Nuclear Information System (INIS)

    Niu Yueping; Cui Ni; Xiang Yang; Li Ruxin; Gong Shangqing; Xu Zhizhan

    2008-01-01

    We investigate the effect of the electric field maximum on the Rabi flopping and the generated higher frequency spectra properties by solving Maxwell-Bloch equations without invoking any standard approximations. It is found that the maximum of the electric field will lead to carrier-wave Rabi flopping (CWRF) through reversion dynamics which will be more evident when the applied field enters the sub-one-cycle regime. Therefore, under the interaction of sub-one-cycle pulses, the Rabi flopping follows the transient electric field tightly through the oscillation and reversion dynamics, which is in contrast to the conventional envelope Rabi flopping. Complete or incomplete population inversion can be realized through the control of the carrier-envelope phase (CEP). Furthermore, the generated higher frequency spectra will be changed from distinct to continuous or irregular with the variation of the CEP. Our results demonstrate that due to the evident maximum behavior of the electric field, pulses with different CEP give rise to different CWRFs, and then different degree of interferences lead to different higher frequency spectral features.

  12. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account the electric field energies represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  13. Implications of Microwave Holography Using Minimum Required Frequency Samples for Weakly- and Strongly-Scattering Indications

    Science.gov (United States)

    Fallahpour, M.; Case, J. T.; Kharkovsky, S.; Zoughi, R.

    2010-01-01

    Microwave imaging techniques, an integral component of nondestructive testing and evaluation (NDTE), have received significant attention in the past decade. These techniques have included the implementation of synthetic aperture focusing (SAF) algorithms for obtaining high spatial resolution images. The next important step in these developments is the implementation of 3-D holographic imaging algorithms. These are well-known imaging techniques requiring swept-frequency (i.e., wideband) data and, unlike SAF, which is a single-frequency technique, they are not easily performed on a real-time basis. This is because a significant number of data points (in the frequency domain) must be obtained within the frequency band of interest. This not only makes for a complex imaging system design, it also significantly increases the image-production time. Consequently, in an attempt to reduce the measurement time and system complexity, an investigation was conducted to determine the minimum required number of frequency samples needed to image a specific object while preserving a desired maximum measurement range and range resolution. To this end the 3-D holographic algorithm was modified to use properly interpolated frequency data. Measurements of the complex reflection coefficient for several samples were conducted using a swept-frequency approach. Subsequently, holographic images were generated using data containing a relatively large number of frequency samples and were compared with images generated from the reduced data sets. Quantitative metrics such as average, contrast, and signal-to-noise ratio were used to evaluate the quality of images generated using reduced data sets. Furthermore, this approach was applied to both weakly- and strongly-scattering indications. This paper presents the methods used and the results of this investigation.

  14. A software sampling frequency adaptive algorithm for reducing spectral leakage

    Institute of Scientific and Technical Information of China (English)

    PAN Li-dong; WANG Fei

    2006-01-01

    Spectral leakage caused by synchronous error in a nonsynchronous sampling system is an important cause of reduced accuracy in spectral analysis and harmonic measurement. This paper presents a software sampling frequency adaptive algorithm that obtains the actual signal frequency more accurately, adjusts the sampling interval based on the frequency calculated by the software algorithm, and modifies the sampling frequency adaptively. It can reduce the synchronous error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. As the simulations show, this algorithm has high precision, and it can be a practical method in power system harmonic analysis since it can be implemented easily.
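
    A minimal sketch of the general idea described in this abstract (not the authors' algorithm; the Hann window, the parabolic-interpolation refinement and the 50.3 Hz test signal are assumptions): estimate the actual signal frequency from the spectrum, then choose a sampling interval so that the record spans an integer number of periods, which is what suppresses the leakage.

      import numpy as np

      def estimate_frequency(x, fs):
          """Coarse FFT peak plus parabolic interpolation on the log-magnitude spectrum."""
          spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
          k = int(np.argmax(spec[1:]) + 1)                 # peak bin, skipping DC
          a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
          delta = 0.5 * (a - c) / (a - 2 * b + c)          # fractional-bin correction
          return (k + delta) * fs / x.size

      def synchronous_interval(f_est, n_samples, n_periods):
          """Sampling interval that makes n_samples cover exactly n_periods of f_est."""
          return n_periods / (f_est * n_samples)

      fs = 5000.0
      t = np.arange(1024) / fs
      x = np.sin(2 * np.pi * 50.3 * t)                     # power-system-like signal at 50.3 Hz
      f_hat = estimate_frequency(x, fs)
      dt_new = synchronous_interval(f_hat, 1024, 10)       # 1024 points spanning 10 periods
      print(f_hat, 1.0 / dt_new)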

  15. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  16. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  17. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    Science.gov (United States)

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency f_saa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above f_saa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than f_saa. A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using
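
    The usability criterion described above can be written as a one-line check. The sketch below is a hedged illustration (the function and variable names are invented and the synthetic trace is not a real accelerogram): compute the Fourier acceleration spectrum (FAS), then compare its peak with the FAS at the anti-aliasing corner frequency f_saa; a ratio above roughly 10 suggests PSA above f_saa is little affected by the recording process.

      import numpy as np

      def fas_ratio(acc, dt, f_saa):
          """Ratio of the peak Fourier acceleration spectrum to the FAS at f_saa."""
          fas = np.abs(np.fft.rfft(acc)) * dt              # Fourier amplitude spectrum
          freqs = np.fft.rfftfreq(acc.size, d=dt)
          return fas.max() / np.interp(f_saa, freqs, fas)

      # Illustrative synthetic trace: a decaying 2 Hz pulse sampled at 200 samples per second.
      dt = 0.005
      t = np.arange(0, 40, dt)
      acc = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.05 * t)
      print(fas_ratio(acc, dt, f_saa=25.0))                # >> 10 here, so PSA above 25 Hz is usable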

  18. Measurement of the Maximum Frequency of Electroglottographic Fluctuations in the Expiration Phase of Volitional Cough as a Functional Test for Cough Efficiency.

    Science.gov (United States)

    Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-10-01

    The hypotheses of the present study were that the maximum frequency of fluctuation of electroglottographic (EGG) signals in the expiration phase of volitional cough (VC) reflects the cough efficiency and that this EGG parameter is affected by impaired laryngeal closure, expiratory effort strength, and gender. For 20 normal healthy adults and 20 patients diagnosed with unilateral vocal fold paralysis (UVFP), each participant was fitted with EGG electrodes on the neck, had a transnasal laryngo-fiberscope inserted, and was asked to perform weak/strong VC tasks while EGG signals and a high-speed digital image of the larynx were recorded. The maximum frequency was calculated in the EGG fluctuation region coinciding with vigorous vocal fold vibration in the laryngeal HSDIs. In addition, each participant underwent spirometry for measurement of three aerodynamic parameters, including peak expiratory air flow (PEAF), during weak/strong VC tasks. Significant differences were found for both maximum EGG frequency and PEAF between the healthy and UVFP groups and between the weak and strong VC tasks. Among the three cough aerodynamic parameters, PEAF showed the highest positive correlation with the maximum EGG frequency. The correlation coefficients between the maximum EGG frequency and PEAF recorded simultaneously were 0.574 for the whole group, and 0.782/0.717/0.823/0.688 for the male/female/male-healthy/male-UVFP subgroups, respectively. Consequently, the maximum EGG frequency measured in the expiration phase of VC was shown to reflect the velocity of expiratory airflow to some extent and was suggested to be affected by vocal fold physical properties, glottal closure condition, and the expiratory function.

  19. Radio Frequency Transistors Using Aligned Semiconducting Carbon Nanotubes with Current-Gain Cutoff Frequency and Maximum Oscillation Frequency Simultaneously Greater than 70 GHz.

    Science.gov (United States)

    Cao, Yu; Brady, Gerald J; Gui, Hui; Rutherglen, Chris; Arnold, Michael S; Zhou, Chongwu

    2016-07-26

    In this paper, we report record radio frequency (RF) performance of carbon nanotube transistors based on combined use of a self-aligned T-shape gate structure, and well-aligned, high-semiconducting-purity, high-density polyfluorene-sorted semiconducting carbon nanotubes, which were deposited using dose-controlled, floating evaporative self-assembly method. These transistors show outstanding direct current (DC) performance with on-current density of 350 μA/μm, transconductance as high as 310 μS/μm, and superior current saturation with normalized output resistance greater than 100 kΩ·μm. These transistors create a record as carbon nanotube RF transistors that demonstrate both the current-gain cutoff frequency (ft) and the maximum oscillation frequency (fmax) greater than 70 GHz. Furthermore, these transistors exhibit good linearity performance with 1 dB gain compression point (P1dB) of 14 dBm and input third-order intercept point (IIP3) of 22 dBm. Our study advances state-of-the-art of carbon nanotube RF electronics, which have the potential to be made flexible and may find broad applications for signal amplification, wireless communication, and wearable/flexible electronics.

  20. Evaluating Annual Maximum and Partial Duration Series for Estimating Frequency of Small Magnitude Floods

    Directory of Open Access Journals (Sweden)

    Fazlul Karim

    2017-06-01

    Understanding the nature of frequent floods is important for characterising channel morphology, riparian and aquatic habitat, and informing river restoration efforts. This paper presents results from an analysis on frequency estimates of low magnitude floods using the annual maximum and partial series data compared to actual flood series. Five frequency distribution models were fitted to data from 24 gauging stations in the Great Barrier Reef (GBR) lagoon catchments in north-eastern Australia. Based on the goodness of fit test, Generalised Extreme Value, Generalised Pareto and Log Pearson Type 3 models were used to estimate flood frequencies across the study region. Results suggest frequency estimates based on a partial series are better, compared to an annual series, for small to medium floods, while both methods produce similar results for large floods. Although both methods converge at a higher recurrence interval, the convergence recurrence interval varies between catchments. Results also suggest frequency estimates vary slightly between two or more partial series, depending on flood threshold, and the differences are large for the catchments that experience less frequent floods. While a partial series produces better frequency estimates, it can underestimate or overestimate the frequency if the flood threshold differs largely compared to bankfull discharge. These results have significant implications in calculating the dependency of floodplain ecosystems on the frequency of flooding and their subsequent management.
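
    As a hedged illustration of the annual-maximum-series branch of such an analysis (the synthetic data and parameter values below are made up; this is not the paper's dataset or code), a Generalised Extreme Value distribution can be fitted to an annual maximum flood series and quantiles read off for chosen recurrence intervals:

      from scipy.stats import genextreme

      # Illustrative 40-year annual-maximum series drawn from an assumed GEV distribution.
      annual_maxima = genextreme.rvs(c=-0.1, loc=500.0, scale=200.0, size=40, random_state=1)

      shape, loc, scale = genextreme.fit(annual_maxima)    # maximum-likelihood fit
      for ari in (2, 5, 10, 20):
          q = genextreme.ppf(1.0 - 1.0 / ari, shape, loc=loc, scale=scale)
          print(f"{ari:>3}-year flood estimate: {q:8.1f} (same units as the input series)")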

  1. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  2. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng

    2016-11-14

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.

  3. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  4. Effects of different strength training frequencies on maximum strength, body composition and functional capacity in healthy older individuals.

    Science.gov (United States)

    Turpela, Mari; Häkkinen, Keijo; Haff, Guy Gregory; Walker, Simon

    2017-11-01

    There is controversy in the literature regarding the dose-response relationship of strength training in healthy older participants. The present study determined training frequency effects on maximum strength, muscle mass and functional capacity over 6 months following an initial 3-month preparatory strength training period. One-hundred and six 64-75 year old volunteers were randomly assigned to one of four groups; performing strength training one (EX1), two (EX2), or three (EX3) times per week and a non-training control (CON) group. Whole-body strength training was performed using 2-5 sets and 4-12 repetitions per exercise and 7-9 exercises per session. Before and after the intervention, maximum dynamic leg press (1-RM) and isometric knee extensor and plantarflexor strength, body composition and quadriceps cross-sectional area, as well as functional capacity (maximum 7.5 m forward and backward walking speed, timed-up-and-go test, loaded 10-stair climb test) were measured. All experimental groups increased leg press 1-RM more than CON (EX1: 3±8%, EX2: 6±6%, EX3: 10±8%, CON: -3±6%, P < 0.05). However, higher training frequency did not induce greater benefit to maximum walking speed (i.e. functional capacity) despite a clear dose-response in dynamic 1-RM strength, at least when predominantly using machine weight-training. It appears that beneficial functional capacity improvements can be achieved through low frequency training (i.e. 1-2 times per week) in previously untrained healthy older participants. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  6. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola

    2016-01-01

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected

  7. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.

  8. Photovoltaic High-Frequency Pulse Charger for Lead-Acid Battery under Maximum Power Point Tracking

    Directory of Open Access Journals (Sweden)

    Hung-I. Hsieh

    2013-01-01

    A photovoltaic pulse charger (PV-PC) using a high-frequency pulse train for charging a lead-acid battery (LAB) is proposed not only to explore the charging behavior with maximum power point tracking (MPPT) but also to delay sulfating crystallization on the electrode pores of the LAB to prolong the battery life, which is achieved by a brief pulse break between adjacent pulses that refreshes the discharging of the LAB. Maximum energy transfer between the PV module and a boost current converter (BCC) is modeled to maximize the charging energy for the LAB under different solar insolation. A duty control, guided by a power-increment-aided incremental-conductance MPPT (PI-INC MPPT), is implemented in the BCC, which operates at the maximum power point (MPP) against the random insolation. A 250 W PV-PC system for charging a four-in-series LAB (48 Vdc) is examined. The charging behavior of the PV-PC system is studied in comparison with that of a CC-CV charger. Four scenarios of charging status of the PV-PC system under different solar insolation changes are investigated and compared with that using INC MPPT.
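
    For readers unfamiliar with the baseline of the PI-INC MPPT mentioned above, here is a generic textbook-style incremental-conductance step in Python (an assumption-laden sketch, not the authors' power-increment-aided controller; the step size, duty limits and sample measurements are illustrative). At the MPP, dI/dV = -I/V; the boost-converter duty cycle is nudged toward that condition.

      # Generic incremental-conductance (INC) MPPT duty-cycle update; illustrative only.
      def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
          """One INC iteration for a boost converter: raising the duty cycle lowers the
          PV operating voltage, so move duty opposite to the sign of dP/dV."""
          dv, di = v - v_prev, i - i_prev
          if dv == 0:
              if di > 0:
                  duty -= step               # irradiance rose: raise the PV voltage
              elif di < 0:
                  duty += step
          else:
              if di / dv > -i / v:           # dP/dV > 0: operating left of the MPP
                  duty -= step
              elif di / dv < -i / v:         # dP/dV < 0: operating right of the MPP
                  duty += step
          return min(max(duty, 0.0), 0.95)

      # Hypothetical measurement pair: voltage fell slightly while current rose.
      print(inc_mppt_step(v=28.4, i=7.9, v_prev=28.6, i_prev=7.8, duty=0.42))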

  9. MalHaploFreq: A computer programme for estimating malaria haplotype frequencies from blood samples

    Directory of Open Access Journals (Sweden)

    Smith Thomas A

    2008-07-01

    Background: Molecular markers, particularly those associated with drug resistance, are important surveillance tools that can inform policy choice. People infected with falciparum malaria often contain several genetically-distinct clones of the parasite; genotyping the patients' blood reveals whether or not the marker is present (i.e. its prevalence), but does not reveal its frequency. For example, a person with four malaria clones may contain both mutant and wildtype forms of a marker, but it is not possible to distinguish the relative frequencies of the mutant and wildtype, i.e. 1:3, 2:2 or 3:1. Methods: An appropriate method for obtaining frequencies from prevalence data is Maximum Likelihood analysis. A computer programme has been developed that allows the frequency of markers, and haplotypes defined by up to three codons, to be estimated from blood phenotype data. Results: The programme has been fully documented [see Additional File 1] and provided with a user-friendly interface suitable for large scale analyses. It returns accurate frequencies and 95% confidence intervals from simulated data sets and has been extensively tested on field data sets. Conclusion: The programme is included [see Additional File 2] and/or may be freely downloaded from [1]. It can then be used to extract molecular marker and haplotype frequencies from their prevalence in human blood samples. This should enhance the use of frequency data to inform antimalarial drug policy choice.
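
    The core statistical idea can be sketched in a few lines (this is an illustration of the prevalence-to-frequency likelihood, not the MalHaploFreq programme itself; the per-patient multiplicities of infection and detection outcomes below are invented): if a patient carries k clones and the mutant allele has population frequency p, the mutant is detected with probability 1 - (1 - p)^k, and maximising the resulting likelihood over patients converts prevalence into an estimated frequency.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def neg_log_likelihood(p, moi, mutant_detected):
          """Negative log-likelihood of allele frequency p given per-patient MOI and detection."""
          detect_prob = 1.0 - (1.0 - p) ** moi
          ll = np.where(mutant_detected, np.log(detect_prob), moi * np.log(1.0 - p))
          return -np.sum(ll)

      # Illustrative data: multiplicity of infection and whether the marker was seen in each sample.
      moi = np.array([1, 2, 3, 4, 2, 1, 3, 2])
      mutant_detected = np.array([True, True, False, True, False, True, True, False])

      res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded",
                            args=(moi, mutant_detected))
      print(f"maximum-likelihood frequency estimate: {res.x:.3f}")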

  10. Measures of maximum magnetic field in 3 GHz radio frequency superconducting cavities; Measurements of the maximum accelerating gradient in superconducting cavities under pulsed operation at 3 GHz (translated from French)

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Catherine [Paris-11 Univ., 91 Orsay (France)

    2000-01-19

    Theoretical models have shown that the maximum magnetic field in radio frequency superconducting cavities is the superheating field H_sh. For niobium, H_sh is 25-30% higher than the thermodynamical field H_c: H_sh lies within (240-274) mT. However, the maximum magnetic field observed so far is in the range of H_c,max = 152 mT for the best 1.3 GHz Nb cavities. This field is lower than the critical field H_c1 above which the superconductor breaks up into divided normal and superconducting zones (H_c1 <= H_c). Thermal instabilities are responsible for this low value. In order to reach H_sh before thermal breakdown, high power short pulses are used. The cavity then needs to be strongly over-coupled. The dedicated test bed has been built through a collaboration between the Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genoa, and the Service d'Etudes et Realisation d'Accelerateurs (SERA) of the Laboratoire de l'Accelerateur Lineaire (LAL). Measurements of the maximum magnetic field, H_rf,max, on INFN cavities give lower results than the theoretical predictions and are in agreement with previous results. The superheating magnetic field is linked to the magnetic penetration depth. This superconducting characteristic length can be used to determine the quality of niobium through the ratio between the resistivity measured at 300 K and at 4.2 K in the normal conducting state (RRR). Results have been compared to previous ones and agree well. They show that the RRR measured on cavities is superficial and lower than the RRR measured on samples, which concerns the volume. (author)

  11. Impact of sampling frequency in the analysis of tropospheric ozone observations

    Directory of Open Access Journals (Sweden)

    M. Saunois

    2012-08-01

    Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a typical frequency of 4 to 12 profiles a month. Here we quantify the uncertainty introduced by low frequency sampling in the determination of means and trends. To do this, the high frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profiles over airports, such as Frankfurt, have been subsampled at two typical ozone sonde frequencies of 4 and 12 profiles per month. We found the lowest sampling uncertainty on seasonal means at 700 hPa over Frankfurt, with around 5% for a frequency of 12 profiles per month and 10% for a 4-profile-a-month frequency. However, the uncertainty can reach up to 15 and 29% at the lowest altitude levels. As a consequence, the sampling uncertainty at the lowest frequency could be higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the 95% confidence limit on the seasonal mean derived from the subsample created is similar to the sampling uncertainty and suggest using it as an estimate of the sampling uncertainty. Similar results are found at six other Northern Hemisphere sites. We show that the sampling substantially impacts on the inter-annual variability and the trend derived over the period 1998–2008, both in magnitude and in sign, throughout the troposphere. Also, a tropical case is discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008. For this site, we found that the sampling uncertainty in the free troposphere is around 8 and 12% at 12 and 4 profiles a month, respectively.

  12. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, the combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, limited data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  13. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class

  14. Frequency Mixing Magnetic Detection Scanner for Imaging Magnetic Particles in Planar Samples.

    Science.gov (United States)

    Hong, Hyobong; Lim, Eul-Gyoon; Jeong, Jae-Chan; Chang, Jiho; Shin, Sung-Woong; Krause, Hans-Joachim

    2016-06-09

    The setup of a planar Frequency Mixing Magnetic Detection (p-FMMD) scanner for performing Magnetic Particles Imaging (MPI) of flat samples is presented. It consists of two magnetic measurement heads on both sides of the sample mounted on the legs of a u-shaped support. The sample is locally exposed to a magnetic excitation field consisting of two distinct frequencies, a stronger component at about 77 kHz and a weaker field at 61 Hz. The nonlinear magnetization characteristics of superparamagnetic particles give rise to the generation of intermodulation products. A selected sum-frequency component of the high and low frequency magnetic field incident on the magnetically nonlinear particles is recorded by a demodulation electronics. In contrast to a conventional MPI scanner, p-FMMD does not require the application of a strong magnetic field to the whole sample because mixing of the two frequencies occurs locally. Thus, the lateral dimensions of the sample are just limited by the scanning range and the supports. However, the sample height determines the spatial resolution. In the current setup it is limited to 2 mm. As examples, we present two 20 mm × 25 mm p-FMMD images acquired from samples with 1 µm diameter maghemite particles in silanol matrix and with 50 nm magnetite particles in aminosilane matrix. The results show that the novel MPI scanner can be applied for analysis of thin biological samples and for medical diagnostic purposes.

  15. Frequency Response of the Sample Vibration Mode in Scanning Probe Acoustic Microscope

    International Nuclear Information System (INIS)

    Ya-Jun, Zhao; Qian, Cheng; Meng-Lu, Qian

    2010-01-01

    Based on the interaction mechanism between tip and sample in the contact mode of a scanning probe acoustic microscope (SPAM), an active mass of the sample is introduced in the mass-spring model. The tip motion and frequency response of the sample vibration mode in the SPAM are calculated by the Lagrange equation with dissipation function. For the silicon tip and glass assemblage in the SPAM the frequency response is simulated and it is in agreement with the experimental result. The living myoblast cells on the glass slide are imaged at resonance frequencies of the SPAM system, which are 20 kHz, 30 kHz and 120 kHz. It is shown that good contrast of SPAM images could be obtained when the system is operated at the resonance frequencies of the system in the high- and low-frequency regions

  16. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.

  17. Dental anthropology of a Brazilian sample: Frequency of nonmetric traits.

    Science.gov (United States)

    Tinoco, Rachel Lima Ribeiro; Lima, Laíse Nascimento Correia; Delwing, Fábio; Francesquini, Luiz; Daruge, Eduardo

    2016-01-01

    Dental elements are valuable tools in the study of ancient populations and species, and key features for human identification; within the field of dental anthropology, nonmetric traits, standardized by ASUDAS, are closely related to ancestry. This study aimed to analyze the frequency of six nonmetric traits in a sample from Southeast Brazil, composed of 130 dental casts from individuals aged between 18 and 30, without foreign parents or grandparents. A single examiner observed the presence or absence of shoveling, Carabelli's cusp, fifth cusp, 3-cusped UM2, sixth cusp, and 4-cusped LM2. The frequencies obtained were different from those reported by other studies for Amerindian and South American samples, and related to European and sub-Saharan frequencies, showing the influence of these groups on the current Brazilian population. Sexual dimorphism was found in the frequencies of Carabelli's cusp, 3-cusped UM2, and sixth cusp. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper, the combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, limited data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of uncertainty parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of a system's probability of failure under vague environment conditions. Two numerical examples are investigated to demonstrate the proposed method.

  19. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  20. [Polish guidelines of 2001 for maximum admissible intensities in high frequency EMF versus European Union recommendations].

    Science.gov (United States)

    Aniołczyk, Halina

    2003-01-01

    In 1999, a draft of amendments to maximum admissible intensities (MAI) of electromagnetic fields (0 Hz-300 GHz) was prepared by Professor H. Korniewicz of the Central Institute for Labour Protection, Warsaw, in cooperation with the Nofer Institute of Occupational Medicine, Łódź (radio- and microwaves) and the Military Institute of Hygiene and Epidemiology, Warsaw (pulse radiation). Before 2000, the development of the national MAI guidelines for the frequency range of 0.1 MHz-300 GHz was based on the knowledge of biological and health effects of EMF exposure available at the turn of the 1960s. A current basis for establishing the international MAI standards is the well-documented thermal effect, measured by the value of the specific absorption rate (SAR), whereas the effects of resonant absorption determine the nature of the functional dependency on EMF frequency. The Russian standards, already thoroughly analyzed, still take so-called non-thermal effects and the conception of energetic load for a work-shift with its progressive averaging (see hazardous zone in the Polish guidelines) as a basis for setting maximum admissible intensities. The World Health Organization recommends a harmonization of the EMF protection guidelines existing in different countries with the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), and its position is supported by the European Union.

  1. Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

    This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of 1) the types and magnitudes of gases that could be present in the project cask and 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.

  2. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC) with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled data methods. This is the first book on ILC to combine frequency domain and sampled data methodologies. The frequency domain design methods offer ILC users insights into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC controlled robotic system. The experimental results show that the machines can work in much h...

  3. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope, then – besides the frequency – the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated for different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
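
    A hedged sketch of the approach described in the abstract (not the authors' DGSSA code; the noise level, iteration count and test frequency are assumptions): bracket the frequency with the FFT peak bin and its two neighbours, then refine it by a golden-section search that minimises the least-squares error of a three-parameter sine fit, with amplitude, phase and dc offset handled by linear least squares at each trial frequency.

      import numpy as np

      def sine_fit_error(f, x, fs):
          """Least-squares residual of a three-parameter sine fit at trial frequency f."""
          n = np.arange(x.size)
          A = np.column_stack([np.cos(2 * np.pi * f * n / fs),
                               np.sin(2 * np.pi * f * n / fs),
                               np.ones_like(n)])
          resid = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
          return float(resid @ resid)

      def estimate_frequency(x, fs, iters=60):
          spec = np.abs(np.fft.rfft(x))
          k = int(np.argmax(spec[1:]) + 1)                          # peak bin (skip DC)
          a, b = (k - 1) * fs / x.size, (k + 1) * fs / x.size        # bracket from 3 FFT samples
          g = (np.sqrt(5) - 1) / 2
          for _ in range(iters):                                     # golden-section search
              c, d = b - g * (b - a), a + g * (b - a)
              if sine_fit_error(c, x, fs) < sine_fit_error(d, x, fs):
                  b = d
              else:
                  a = c
          return 0.5 * (a + b)

      fs, f_true = 1000.0, 123.456
      x = np.sin(2 * np.pi * f_true * np.arange(512) / fs) \
          + 0.1 * np.random.default_rng(0).standard_normal(512)
      print(estimate_frequency(x, fs))                               # close to 123.456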

  4. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfy specific design criteria. The new design method will graphically show how the discrete...

  5. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and analog-to-digital converter (ADC) is capable of estimating SFOs accurately. The proposed technique can also compensate SFO effects within a small residual SFO caused by the deviation of SFO estimation and a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.

  6. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    Full Text Available When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of the application of MIDAS regression.

  7. Gate-Recessed AlGaN/GaN MOSHEMTs with the Maximum Oscillation Frequency Exceeding 120 GHz on Sapphire Substrates

    International Nuclear Information System (INIS)

    Kong Xin; Wei Ke; Liu Guo-Guo; Liu Xin-Yu

    2012-01-01

    Gate-recessed AlGaN/GaN metal-oxide-semiconductor high electron mobility transistors (MOSHEMTs) on sapphire substrates are fabricated. The devices, with a gate length of 160 nm and a gate periphery of 2 × 75 μm, exhibit a two-orders-of-magnitude reduction in gate leakage current and enhanced off-state breakdown characteristics compared with conventional HEMTs. Furthermore, the extrinsic transconductance of the MOSHEMT is 237.2 mS/mm, only 7% lower than that of the Schottky-gate HEMT. An extrinsic current gain cutoff frequency fT of 65 GHz and a maximum oscillation frequency fmax of 123 GHz are deduced from RF small-signal measurements. The high fmax demonstrates that gate-recessed MOSHEMTs have great potential for millimeter-wave frequencies. (cross-disciplinary physics and related areas of science and technology)

  8. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    [Figure residue: panels showing the Wigner distribution, the L-Wigner distribution and the corresponding auto-correlation functions.] Although bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  9. Effect of Sampling Frequency for Real-Time Tablet Coating Monitoring Using Near Infrared Spectroscopy.

    Science.gov (United States)

    Igne, Benoît; Arai, Hiroaki; Drennen, James K; Anderson, Carl A

    2016-09-01

    While the sampling of pharmaceutical products typically follows well-defined protocols, the parameterization of spectroscopic methods and their associated sampling frequency is not standard. Whereas for blending the sampling frequency is limited by the nature of the process, in other processes, such as tablet film coating, practitioners must determine the best approach to collecting spectral data. The present article studied how sampling practices affected the interpretation of the results provided by a near-infrared spectroscopy method for the monitoring of tablet moisture and coating weight gain during a pan-coating experiment. Several coating runs were monitored with different sampling frequencies (with or without co-adds, also known as sub-samples) and with spectral averaging corresponding to processing cycles (1 to 15 pan rotations). Beyond integrating the sensor into the equipment, the present work demonstrated that it is necessary to have a good sense of the underlying phenomena that have the potential to affect the quality of the signal. The effects of co-adds and averaging were significant with respect to the quality of the spectral data. However, the type of output obtained from a sampling method dictated the type of information that one can gain on the dynamics of a process. Thus, different sampling frequencies may be needed at different stages of process development. © The Author(s) 2016.

  10. Draft evaluation of the frequency for gas sampling for the high burnup confirmatory data project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-26

    This report fulfills the M3 milestone M3FT-15SN0802041, “Draft Evaluation of the Frequency for Gas Sampling for the High Burn-up Storage Demonstration Project”, under Work Package FT-15SN080204, “ST Field Demonstration Support – SNL”. This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of (1) the types and magnitudes of gases that could be present in the project cask and (2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations. Gas sampling will provide information on the presence of residual water (and byproducts associated with its reactions and decomposition) and on breach of cladding, which could inform the decision of when to open the project cask.

  11. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    Science.gov (United States)

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important for obtaining sufficient information in temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, whereas the traditional 4-sampling regime explains considerably less. With increasing sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentage values of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
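
    A brief illustration of the two interpolation schemes named in this record, using scipy; the waveform below is a synthetic stand-in rather than data from the cited ECG databases, and the sampling rates are assumptions for the example.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Synthetic stand-in for a low-sampling-frequency ECG segment (illustrative values)
fs_low, fs_high = 128, 512                    # Hz: original and upsampled rates (assumed)
t_low = np.arange(0, 1.0, 1.0 / fs_low)
ecg_low = np.sin(2 * np.pi * 1.2 * t_low) + 0.3 * np.sin(2 * np.pi * 15.0 * t_low)

t_high = np.arange(0, t_low[-1], 1.0 / fs_high)

# Piecewise cubic Hermite interpolation (PCHIP): shape preserving, no overshoot
ecg_pchip = PchipInterpolator(t_low, ecg_low)(t_high)

# Piecewise cubic spline interpolation (SPLINE): smoother, but may overshoot
ecg_spline = CubicSpline(t_low, ecg_low)(t_high)

print(ecg_pchip.shape, ecg_spline.shape)      # both on the denser time grid
```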

  13. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    Science.gov (United States)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20 % at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth well-sampled symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in an Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for

  14. The T-lock: automated compensation of radio-frequency induced sample heating

    International Nuclear Information System (INIS)

    Hiller, Sebastian; Arthanari, Haribabu; Wagner, Gerhard

    2009-01-01

    Modern high-field NMR spectrometers can stabilize the nominal sample temperature at a precision of less than 0.1 K. However, the actual sample temperature may differ from the nominal value by several degrees because the sample heating caused by high-power radio frequency pulses is not readily detected by the temperature sensors. Without correction, transfer of chemical shifts between different experiments causes problems in the data analysis. In principle, the temperature differences can be corrected by manual procedures but this is cumbersome and not fully reliable. Here, we introduce the concept of a 'T-lock', which automatically maintains the sample at the same reference temperature over the course of different NMR experiments. The T-lock works by continuously measuring the resonance frequency of a suitable spin and simultaneously adjusting the temperature control, thus locking the sample temperature at the reference value. For three different nuclei, 13C, 17O and 31P in the compounds alanine, water, and phosphate, respectively, the T-lock accuracy was found to be <0.1 K. The use of dummy scan periods with variable lengths allows a reliable establishment of the thermal equilibrium before the acquisition of an experiment starts.

  15. Distortions in frequency spectra of signals associated with sampling-pulse shapes

    International Nuclear Information System (INIS)

    Njau, E.C.

    1983-04-01

    A method developed earlier by the author [IC/82/44; IC/82/45] is used to investigate distortions introduced into frequency spectra of signals by the shapes of the sampling pulses involved. Conditions are established under which the use of trapezoid or exponentially-edged pulses to digitize signals can make the frequency spectra of the resultant data samples devoid of the main features of the signals. This observation does not, however, apply in any way to cosinusoidally-edged pulses or to pulses with cosine-squared edges. Since parts of the Earth's surface and atmosphere receive direct solar energy in discrete samples (i.e. only from sunrise to sunset) we have extended the technique and attempted to develop a theory that explains the observed solar terrestrial relationships. A very good agreement is obtained between the theory and previous long-term and short-term observations. (author)

  16. A new algorithm for a high-modulation frequency and high-speed digital lock-in amplifier

    International Nuclear Information System (INIS)

    Jiang, G L; Yang, H; Li, R; Kong, P

    2016-01-01

    To increase the maximum modulation frequency of the digital lock-in amplifier in an online system, we propose a new algorithm using a square wave reference whose frequency is an odd sub-multiple of the modulation frequency, based on the odd harmonic components of the square wave reference. The sampling frequency is four times the modulation frequency to ensure the orthogonality of the reference sequences. Only additions and subtractions are used to implement the phase-sensitive detection, which speeds up the computation in the lock-in. Furthermore, the maximum modulation frequency of the lock-in is enhanced considerably. The feasibility of this new algorithm is tested by simulation and experiments. (paper)
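
    A simplified software illustration of square-wave phase-sensitive detection, the building block behind such lock-in algorithms: with ±1 references the multiply-accumulate reduces to signed additions. The odd-sub-multiple reference and the exact four-samples-per-cycle scheme of the record are not reproduced here, and the scaling below assumes a pure tone at the modulation frequency; all rates are illustrative.

```python
import numpy as np

def square_wave_lockin(x, fs, f_mod):
    """Phase-sensitive detection with +/-1 square-wave references: the weights
    turn multiplications into additions and subtractions."""
    t = np.arange(len(x)) / fs
    ref_i = np.sign(np.cos(2 * np.pi * f_mod * t))     # in-phase reference
    ref_q = np.sign(np.sin(2 * np.pi * f_mod * t))     # quadrature reference
    i_out = np.mean(x * ref_i)
    q_out = np.mean(x * ref_q)
    # The square wave's fundamental Fourier coefficient is 4/pi, hence the pi/2 scaling
    amplitude = (np.pi / 2) * np.hypot(i_out, q_out)
    phase = np.arctan2(-q_out, i_out)
    return amplitude, phase

fs, f_mod, n = 40_000.0, 1_000.0, 4_000            # illustrative rates, 100 full cycles
t = np.arange(n) / fs
x = 0.02 * np.cos(2 * np.pi * f_mod * t + 0.3) + 0.1 * np.random.default_rng(1).standard_normal(n)
print(square_wave_lockin(x, fs, f_mod))            # roughly (0.02, 0.3)
```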

  17. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified at an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return

  19. Variable Sampling Composite Observer Based Frequency Locked Loop and its Application in Grid Connected System

    Directory of Open Access Journals (Sweden)

    ARUN, K.

    2016-05-01

    Full Text Available A modified digital signal processing procedure is described for the on-line estimation of the DC, fundamental and harmonic components of a periodic signal. A frequency locked loop (FLL) incorporated within the parallel structure of observers is proposed to accommodate a wide range of frequency drift. The frequency error generated under drifting frequencies is used to change the sampling frequency of the composite observer, so that the number of samples per cycle of the periodic waveform remains constant. A standard coupled oscillator with automatic gain control is used as a numerically controlled oscillator (NCO) to generate the enabling pulses for the digital observer. The NCO gives an integer multiple of the fundamental frequency, making it suitable for power quality applications. Another observer, with DC and second harmonic blocks in the feedback path, acts as a filter and reduces the double-frequency content. A systematic study of the FLL is carried out and a method is proposed to design the controller. The performance of the FLL is validated through simulation and experimental studies. To illustrate applications of the new FLL, the estimation of individual harmonics from a nonlinear load and the design of a variable sampling resonant controller for a single-phase grid-connected inverter are presented.
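
    A minimal sketch of the variable-sampling idea in this record, i.e. recomputing the sampling frequency from the latest fundamental-frequency estimate so that the number of samples per cycle stays constant; the frequency limits and samples-per-cycle value are assumptions for illustration, not taken from the paper.

```python
def next_sampling_frequency(f_estimated_hz, samples_per_cycle=64,
                            f_min_hz=45.0, f_max_hz=55.0):
    """Return the new sampling frequency that keeps samples_per_cycle constant
    for the estimated fundamental frequency (clamped to an assumed drift range)."""
    f_clamped = min(max(f_estimated_hz, f_min_hz), f_max_hz)
    return samples_per_cycle * f_clamped

# Example: the grid frequency drifts from 50 Hz to 49.3 Hz
print(next_sampling_frequency(50.0))    # 3200.0 Hz
print(next_sampling_frequency(49.3))    # 3155.2 Hz
```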

  20. Effect of current on the maximum possible reward.

    Science.gov (United States)

    Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S

    1991-12-01

    Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.

  1. Estimating an appropriate sampling frequency for monitoring ground water well contamination

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    1994-01-01

    Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time

  2. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  3. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this kind of study is important for health workforce planners to know if they want to apply this method to target groups that are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, the precision continued to improve, but the gain from the same additional number of GPs became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
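
    A small numeric sketch of the trade-off discussed in this record, using a simple two-level variance model (between-GP variance plus within-GP measurement variance); the variance components and the z-value are invented for illustration and are not estimated from the study's data.

```python
import numpy as np

def ci_half_width(n_gps, n_measurements, var_between=36.0, var_within=100.0, z=1.96):
    """Half-width of a normal-approximation CI for mean weekly working hours under a
    two-level model; both variance components (hours^2) are purely illustrative."""
    var_of_mean = var_between / n_gps + var_within / (n_gps * n_measurements)
    return z * np.sqrt(var_of_mean)

for n_gps in (50, 100, 300):
    for n_meas in (56, 168):        # e.g. one SMS per 3-h slot vs. one per hour, for a week
        print(n_gps, n_meas, round(ci_half_width(n_gps, n_meas), 2))
```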

  4. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier period, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.

  5. The Importance of Pressure Sampling Frequency in Models for Determination of Critical Wave Loadings on Monolithic Structures

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Meinert, Palle

    2008-01-01

    This paper discusses the influence of wave load sampling frequency on the calculated sliding distance in an overall stability analysis of a monolithic caisson. It is demonstrated by a specific example of caisson design that for this kind of analysis the sampling frequency in a small scale model could be as low as 100 Hz in model scale. However, for the design of structural elements like the wave wall on the top of a caisson the wave load sampling frequency must be much higher, in the order of 1000 Hz in the model. Elastic-plastic deformations of foundation and structure were not included in the analysis.

  6. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  7. Reducing the sampling frequency of groundwater monitoring wells

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.M.; Ridley, M.N. [Lawrence Livermore National Lab., CA (United States); Tuckfield, R.C.; Anderson, R.A. [Westinghouse, Savannah River Co., Aiken, SC (United States)

    1996-01-01

    As part of a joint LLNL/SRTC project, a methodology for selecting sampling frequencies is evolving that introduces statistical thinking and cost effectiveness into the sampling schedule selection practices now commonly employed on environmental projects. Our current emphasis is on descriptive rather than inferential statistics. Environmental monitoring data are inherently messy, being plagued by such problems as extremely high variability and left-censoring. As a result, real data often fail to meet the assumptions required for the appropriate application of many statistical methods. Rather than abandon the quantitative approach in these cases, however, the methodology employs simple statistical techniques to bring a measure of objectivity and reproducibility to the process. The techniques are applied within the framework of decision logic, which interprets the numerical results from the standpoint of chemistry-related professional judgment and the regulatory context. This paper presents the methodology's basic concepts together with early implementation results, showing the estimated cost savings. 6 refs., 3 figs.

  8. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates

  9. Dual frequency echo data acquisition system for sea-floor classification

    Digital Repository Service at National Institute of Oceanography (India)

    Navelkar, G.S.; Desai, R.G.P.; Chakraborty, B.

    An echo data acquisition system is designed to digitize the echo signal from a single-beam shipboard echo-sounder for use in sea-floor classification studies, using a 12-bit analog-to-digital (A/D) card with a maximum sampling frequency of 1 MHz. Both 33...

  10. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement is proposed, using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, a compressed sampling method that uses a random demodulator was adopted, which greatly decreases the sampling rate. In addition, four switches were used to replace the multiplier in the random demodulator. As a result, not only can the sampling rate be much smaller than the signal excitation frequency, but the circuit's structure is also simpler and its power consumption lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
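
    A numerical sketch of the random-demodulator principle under simplifying assumptions (an idealized integrate-and-dump stage on a discrete Nyquist-rate grid rather than the switch-based hardware of the record): a frequency-sparse signal is chipped by a pseudo-random ±1 sequence, block-summed down to the low rate, and its sparse spectrum is recovered with orthogonal matching pursuit. The sizes and the active bin are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Frequency-sparse test signal on a Nyquist-rate grid (illustrative values)
N, M, k_true = 256, 64, 17                   # grid length, low-rate samples, active bin
s = np.zeros(N, dtype=complex)
s[k_true] = s[N - k_true] = 1.0              # conjugate-symmetric spectrum => real signal
F_inv = np.fft.ifft(np.eye(N), axis=0)       # inverse-DFT basis (columns = complex exponentials)
x = (F_inv @ s).real

# Random demodulator: chip by +/-1, then integrate-and-dump down to M samples
d = rng.choice([-1.0, 1.0], size=N)
R = np.kron(np.eye(M), np.ones(N // M))      # block-sum (integrate-and-dump) operator, M x N
A = R @ np.diag(d) @ F_inv                   # effective measurement matrix
y = R @ (d * x)                              # the M low-rate measurements

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy recovery of a sparse coefficient vector."""
    y = y.astype(complex)
    residual, support, coef = y, [], None
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1], dtype=complex)
    s_hat[support] = coef
    return s_hat

s_hat = omp(A, y, n_nonzero=2)
print(sorted(np.argsort(np.abs(s_hat))[-2:]))   # expected: bins 17 and 239 (= N - 17)
```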

  11. Using high-frequency sampling to detect effects of atmospheric pollutants on stream chemistry

    Science.gov (United States)

    Stephen D. Sebestyen; James B. Shanley; Elizabeth W. Boyer

    2009-01-01

    We combined information from long-term (weekly over many years) and short-term (high-frequency during rainfall and snowmelt events) stream water sampling efforts to understand how atmospheric deposition affects stream chemistry. Water samples were collected at the Sleepers River Research Watershed, VT, a temperate upland forest site that receives elevated atmospheric...

  12. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    Science.gov (United States)

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at a unique frequency, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Each reconstructed ion current is then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration, encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
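
    A toy numeric sketch of the encoding idea: each sample contributes to the summed ion current at its assigned pulsing frequency, and the magnitude of the Fourier component at that frequency reports the sample's relative concentration. All names, frequencies and concentrations below are invented for illustration.

```python
import numpy as np

fs, duration = 10.0, 120.0                           # Hz and seconds (illustrative)
t = np.arange(0, duration, 1.0 / fs)
assigned_freq = {"sample_A": 0.5, "sample_B": 0.8}   # Hz, unique per sample
concentration = {"sample_A": 2.0, "sample_B": 0.7}   # arbitrary units

# Summed reconstructed ion current: each sample pulsed at its own frequency
ion_current = sum(c * 0.5 * (1.0 + np.cos(2 * np.pi * assigned_freq[name] * t))
                  for name, c in concentration.items())

# Spectral deconvolution: the peak height at each assigned frequency encodes concentration
spectrum = 2.0 * np.abs(np.fft.rfft(ion_current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
for name, f in assigned_freq.items():
    k = np.argmin(np.abs(freqs - f))
    print(name, round(spectrum[k], 2))               # ~0.5 x concentration with this modulation depth
```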

  13. An extension of command shaping methods for controlling residual vibration using frequency sampling

    Science.gov (United States)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems, such as robots, whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency sampling provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.
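
    For reference, the classic two-impulse zero-vibration (ZV) shaper that the impulse-shaping literature builds on can be computed directly from an estimated natural frequency and damping ratio. This is a standard textbook construction, not the frequency-sampling extension itself, and the mode parameters below are illustrative.

```python
import numpy as np

def zv_shaper(f_n_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural frequency
    f_n_hz (Hz) and damping ratio zeta: returns impulse amplitudes and times."""
    wd = 2 * np.pi * f_n_hz * np.sqrt(1.0 - zeta ** 2)     # damped natural frequency (rad/s)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amplitudes = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])                    # second impulse half a damped period later
    return amplitudes, times

print(zv_shaper(f_n_hz=2.0, zeta=0.05))
```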

  14. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  15. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N₀ approaches infinity (regardless of the relative sizes of N₀ and Nᵢ, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  16. A Frequency Matching Method: Solving Inverse Problems by Use of Geologically Realistic Prior Information

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Cordua, Knud Skou

    2012-01-01

    The frequency matching method defines a closed form expression for a complex prior that quantifies the higher order statistics of a proposed solution model to an inverse problem. While existing solution methods to inverse problems are capable of sampling the solution space while taking into account arbitrarily complex a priori information defined by sample algorithms, it is not possible to directly compute the maximum a posteriori model, as the prior probability of a solution model cannot be expressed. We demonstrate how the frequency matching method enables us to compute the maximum a posteriori solution model to an inverse problem by using a priori information based on multiple point statistics learned from training images. We demonstrate the applicability of the suggested method on a synthetic tomographic crosshole inverse problem.

  17. Variable frequency iteration MPPT for resonant power converters

    Science.gov (United States)

    Zhang, Qian; Bataresh, Issa; Chen, Lin

    2015-06-30

    A method of maximum power point tracking (MPPT) uses an MPPT algorithm to determine a switching frequency for a resonant power converter, including initializing by setting an initial boundary frequency range that is divided into initial frequency sub-ranges bounded by initial frequencies including an initial center frequency and first and second initial bounding frequencies. A first iteration includes measuring initial powers at the initial frequencies to determine a maximum power initial frequency that is used to set a first reduced frequency search range centered on or bounded by the maximum power initial frequency and including at least a first additional bounding frequency. A second iteration includes calculating first and second center frequencies by averaging adjacent frequency values in the first reduced frequency search range and measuring second power values at the first and second center frequencies. The switching frequency is determined from the measured power values, including the second power values.
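
    A schematic sketch of the iterative narrowing described in this record; measure_power is a hypothetical callback standing in for a converter power measurement at a given switching frequency, and the bounds, iteration count and toy power curve are invented for the example.

```python
def mppt_frequency_search(measure_power, f_low, f_high, n_iterations=4):
    """Iteratively narrow the switching-frequency range around the maximum
    measured power and return the final center frequency."""
    for _ in range(n_iterations):
        f_center = 0.5 * (f_low + f_high)
        best = max((f_low, f_center, f_high), key=measure_power)
        half_span = 0.25 * (f_high - f_low)          # halve the search range around the best point
        f_low, f_high = best - half_span, best + half_span
    return 0.5 * (f_low + f_high)

def toy_converter_power(f_hz):
    """Stand-in for the real measurement: power peaks near 95 kHz."""
    return 100.0 - 1e-8 * (f_hz - 95_000.0) ** 2

print(round(mppt_frequency_search(toy_converter_power, 80_000.0, 120_000.0)))   # ~95000
```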

  18. Evaluation of maximum voided volume in Korean children by use of a 48-h frequency volume chart.

    Science.gov (United States)

    Kim, Sun-Ouck; Kim, Kyung Do; Kim, Young Sig; Kim, Jun Mo; Moon, Du Geon; Park, Sungchan; Lee, Sang Don; Chung, Jae Min; Cho, Won Yeol

    2012-08-01

    Study Type - Diagnostic (validating cohort). Level of Evidence 2a. What's known on the subject? and What does the study add? The relationship between the maximum voided volume and age followed a linear curve. The formula presented, bladder capacity (mL) = 12 × [age (years) + 11], is thought to be a reasonable one for Korean children. Korean children have a smaller bladder capacity than that reported in previous Western studies. • To develop practical guidelines for the prediction of normal bladder capacity in Korean children, measured by a frequency volume chart (FVC), as maximum voided volume (MVV) is an important factor in the diagnosis of children with abnormal voiding function. • In all, 298 children, aged 3-13 years, with no history of voiding disorders volunteered for the study. The MVV was determined in 219 subjects by use of a completely recorded FVC. • Linear regression analysis was used to define the exact relationship between age and bladder capacity. An approximate formula related age to bladder capacity as follows: bladder capacity (mL) = 12 × [age (years) + 11]. • The relationship between the MVV measured by an FVC and age (3-13 years) in Korean children followed a linear curve. • When applied to normal voiding patterns, the formula presented appears to be a reasonable one for Korean children. © 2011 BJU INTERNATIONAL.

  19. FREQUENCY OF ANEUPLOID SPERMATOZOA STUDIED BY MULTICOLOR FISH IN SERIAL SEMEN SAMPLES

    Science.gov (United States)

    Frequency of aneuploid spermatozoa studied by multicolor FISH in serial semen samples. M. Vozdova (1), S. D. Perreault (2), O. Rezacova (1), D. Zudova (1), Z. Zudova (3), S. G. Selevan (4), J. Rubes (1,5). (1) Veterinary Research Institute, Brno, Czech Republic; (2) U.S. Environmental Protection A...

  20. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in the test analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.

  1. Fiber optics frequency comb enabled linear optical sampling with operation wavelength range extension.

    Science.gov (United States)

    Liao, Ruolin; Wu, Zhichao; Fu, Songnian; Zhu, Shengnan; Yu, Zhe; Tang, Ming; Liu, Deming

    2018-02-01

    Although the linear optical sampling (LOS) technique is powerful enough to characterize various advanced modulation formats with high symbol rates, the central wavelength of the pulsed local oscillator (LO) needs to be carefully set according to that of the signal under test, due to the coherent mixing operation. Here, we experimentally demonstrate wideband LOS enabled by a fiber optics frequency comb (FOFC). Meanwhile, when the broadband FOFC acts as the pulsed LO, we propose a scheme to mitigate the enhanced sampling error arising from the non-ideal response of a balanced photodetector. Finally, precise characterizations of arbitrary 128 Gbps PDM-QPSK wavelength channels from 1550 to 1570 nm are successfully achieved when a 101.3 MHz frequency-spaced comb with a 3 dB spectral power ripple of 20 nm is used.

  2. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.

  3. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  4. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  5. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  6. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  7. Measuring saccade peak velocity using a low-frequency sampling rate of 50 Hz.

    Science.gov (United States)

    Wierts, Roel; Janssen, Maurice J A; Kingma, Herman

    2008-12-01

    During the last decades, small head-mounted video eye trackers have been developed in order to record eye movements. Real-time systems, with a low sampling frequency of 50/60 Hz, are used in clinical vestibular practice, but are generally considered unsuited for measuring fast eye movements. In this paper, it is shown that saccadic eye movements with an amplitude of at least 5 degrees can, to a good approximation, be considered band-limited up to a frequency of 25-30 Hz. Using the Nyquist theorem to reconstruct saccadic eye movement signals at higher temporal resolution, it is shown that accurate values for saccade peak velocities, recorded at 50 Hz, can be obtained, but saccade peak accelerations and decelerations cannot. In conclusion, video eye trackers sampling at 50/60 Hz are appropriate for detecting the clinically relevant saccade peak velocities, in contrast to what has been stated up till now.
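
    A small sketch of the reconstruction idea under the stated band-limit assumption: 50 Hz position samples are resampled to a finer grid with FFT-based (sinc-like) interpolation before differentiating to obtain the peak velocity. The sigmoid saccade-like trace and all parameter values are synthetic stand-ins, not the paper's recordings.

```python
import numpy as np
from scipy.signal import resample

fs_low, upsample = 50, 16                      # Hz and reconstruction factor (assumed)
t_low = np.arange(0, 0.6, 1.0 / fs_low)

# Synthetic 10-degree saccade-like position trace (sigmoid), purely illustrative
amplitude_deg, t0, tau = 10.0, 0.3, 0.015
position = amplitude_deg / (1.0 + np.exp(-(t_low - t0) / tau))

# FFT-based resampling to a finer grid, then numerical differentiation
position_high = resample(position, len(position) * upsample)
fs_high = fs_low * upsample
velocity = np.gradient(position_high, 1.0 / fs_high)

# Ignore the array edges, where the periodic FFT resampling introduces ringing
core = slice(len(velocity) // 4, 3 * len(velocity) // 4)
print(round(np.max(velocity[core]), 1), "deg/s")   # compare with amplitude/(4*tau) ~ 167 deg/s
```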

  8. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion
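
    The basic building block that this framework extends is the Lomb-Scargle periodogram for irregularly sampled data, available in scipy; the sketch below shows only that step (the WOSA averaging, CARMA noise model and significance testing of the record are not reproduced), with illustrative parameter values.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)

# Irregular sampling times and a noisy periodic signal (illustrative parameters)
t = np.sort(rng.uniform(0.0, 100.0, size=400))
f_true = 0.17                                    # Hz
x = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(t.size)
x -= x.mean()                                    # work with zero-mean data

freqs = np.linspace(0.01, 1.0, 2000)             # trial frequencies in Hz
pgram = lombscargle(t, x, 2 * np.pi * freqs)     # scipy expects angular frequencies
print(freqs[np.argmax(pgram)])                   # close to f_true
```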

  9. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transversely accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate the nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula can be used to accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. Therefore, it can guide the selection of appropriate parameters to generate Airy beams with long nondiffracting propagation distance, which have potential application in the fields of laser weapons or optical communications.

  10. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    Science.gov (United States)

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

    Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2 days, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3--N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error 'RMSE' (i.e. accuracy and precision). The GM method resulted in relatively high RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at monthly sampling frequency, yet RC highly overestimated the loads for parameters that showed a dilution effect, such as NO3--N and SIN. The FW and RE methods gave similar results, and there was no substantial improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) using a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3--N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise
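
    For illustration only, two of the five estimators (GM and FW) can be written down in a few lines; the concentrations, flows and annual flow volume below are invented, and the exact formula conventions may differ from those used in the study:

```python
import numpy as np

def annual_load_gm(conc_samples, annual_flow_volume):
    """Global mean (GM) estimator: mean sampled concentration times annual flow volume."""
    return np.mean(conc_samples) * annual_flow_volume

def annual_load_fw(conc_samples, flow_at_samples, annual_flow_volume):
    """Flow-weighted (FW) estimator: discharge-weighted mean concentration
    times annual flow volume."""
    c_fw = np.sum(conc_samples * flow_at_samples) / np.sum(flow_at_samples)
    return c_fw * annual_flow_volume

# Invented monthly grab samples: concentration in g/m^3, flow at sampling time in m^3/s
conc = np.array([1.2, 1.0, 0.9, 0.8, 0.7, 0.9, 1.1, 1.4, 1.6, 1.5, 1.3, 1.2])
flow = np.array([40.0, 35, 30, 22, 15, 10, 8, 12, 25, 45, 55, 50])
annual_volume = 9.5e8   # m^3/year, assumed known from continuous flow gauging

print("GM load: %.2e g/year" % annual_load_gm(conc, annual_volume))
print("FW load: %.2e g/year" % annual_load_fw(conc, flow, annual_volume))
```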

  11. Note: Radio frequency surface impedance characterization system for superconducting samples at 7.5 GHz.

    Science.gov (United States)

    Xiao, B P; Reece, C E; Phillips, H L; Geng, R L; Wang, H; Marhauser, F; Kelley, M J

    2011-05-01

    A radio frequency (RF) surface impedance characterization (SIC) system that uses a novel sapphire-loaded niobium cavity operating at 7.5 GHz has been developed as a tool to measure the RF surface impedance of flat superconducting material samples. The SIC system can presently make direct calorimetric RF surface impedance measurements on the central 0.8 cm(2) area of 5 cm diameter disk samples from 2 to 20 K exposed to RF magnetic fields up to 14 mT. To illustrate system utility, we present first measurement results for a bulk niobium sample.

  12. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  13. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized without aliasing. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. How, then, should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the agreement of the obtained central frequency with the expected one.
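
    The admissible sample-rate ranges referred to above follow from standard bandpass sampling theory; a small sketch (not the paper's automatic selection method, which additionally accounts for clock instability, resolution and guard bands) is:

```python
import math

def bandpass_sampling_ranges(f_low, f_high):
    """Return the admissible (zone, fs_min, fs_max) ranges for alias-free uniform
    sampling of a bandpass signal occupying [f_low, f_high] (Hz)."""
    bandwidth = f_high - f_low
    ranges = []
    n_max = math.floor(f_high / bandwidth)
    for n in range(1, n_max + 1):
        fs_min = 2.0 * f_high / n
        fs_max = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if fs_min <= fs_max:
            ranges.append((n, fs_min, fs_max))
    return ranges

# Example: 5 MHz wide signal centred at 70 MHz
for n, lo, hi in bandpass_sampling_ranges(67.5e6, 72.5e6):
    print(f"zone {n}: {lo/1e6:.2f} MHz <= fs <= {hi/1e6:.2f} MHz")
```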

  14. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed in addition. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
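
    A simplified Monte Carlo sketch of the "worst case" search idea, reduced to a single treatment-control comparison with a naive pooled z-test (the multiarm selection aspect of the paper is not modeled); the stage-1 size and candidate stage-2 sizes are arbitrary:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha = 0.025
z_crit = norm.ppf(1 - alpha)
n1 = 50                                                  # first-stage size per arm
candidate_n2 = np.array([10, 25, 50, 100, 200, 400])     # allowed second-stage sizes

def worst_case_n2(z1):
    """Pick the second-stage size that maximises the conditional probability of
    rejecting H0 when the naive pooled z-test ignores the adaptation."""
    thresh = (z_crit * np.sqrt(n1 + candidate_n2) - np.sqrt(n1) * z1) / np.sqrt(candidate_n2)
    return candidate_n2[np.argmax(norm.sf(thresh))]

n_sim, rejections = 100_000, 0
for _ in range(n_sim):
    z1 = rng.standard_normal()                           # stage-1 z-statistic under H0
    n2 = worst_case_n2(z1)
    z2 = rng.standard_normal()                           # stage-2 z-statistic under H0
    z_naive = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
    rejections += z_naive > z_crit
print("naive type 1 error: %.4f (nominal %.3f)" % (rejections / n_sim, alpha))
```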

  15. On the Berry-Esséen bound of frequency polygons for ϕ-mixing samples.

    Science.gov (United States)

    Huang, Gan-Ji; Xing, Guodong

    2017-01-01

    Under some mild assumptions, the Berry-Esséen bound of frequency polygons for ϕ -mixing samples is presented. By the bound derived, we obtain the corresponding convergence rate of uniformly asymptotic normality, which is nearly [Formula: see text] under the given conditions.

  16. Frequency dependence of p-mode frequency shifts induced by magnetic activity in Kepler solar-like stars

    Science.gov (United States)

    Salabert, D.; Régulo, C.; Pérez Hernández, F.; García, R. A.

    2018-04-01

    The variations of the frequencies of the low-degree acoustic oscillations in the Sun induced by magnetic activity show a dependence on radial order. The frequency shifts are observed to increase towards higher-order modes to reach a maximum of about 0.8 μHz over the 11-yr solar cycle. A comparable frequency dependence is also measured in two other main sequence solar-like stars, the F-star HD 49933, and the young 1 Gyr-old solar analog KIC 10644253, although with different amplitudes of the shifts of about 2 μHz and 0.5 μHz, respectively. Our objective here is to extend this analysis to stars with different masses, metallicities, and evolutionary stages. From an initial set of 87 Kepler solar-like oscillating stars with known individual p-mode frequencies, we identify five stars showing frequency shifts that can be considered reliable using selection criteria based on Monte Carlo simulations and on the photospheric magnetic activity proxy Sph. The frequency dependence of the frequency shifts of four of these stars could be measured for the l = 0 and l = 1 modes individually. Given the quality of the data, the results could indicate that a physical source of perturbation different from that in the Sun is dominating in this sample of solar-like stars.

  17. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

    Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little is known about the factors that predict response to biofeedback. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.

  18. Implementation of PLL and FLL trackers for signals with high harmonic content and low sampling frequency

    DEFF Research Database (Denmark)

    Mathe, Laszlo; Iov, Florin; Sera, Dezso

    2014-01-01

    The accurate tracking of phase, frequency, and amplitude of different frequency components from a measured signal is an essential requirement in much digitally controlled equipment. The accurate and robust tracking of a frequency component from a complex signal has been applied successfully, for example, in grid-connected inverters, sensorless motor control for rotor position estimation, and grid voltage monitoring for ac-dc converters. Usually, the design of such trackers is done in the continuous time domain. The discretization introduces errors which change the performance, especially when the input signal is rich in harmonics and the sampling frequency is close to the tracked frequency component. In this paper, different discretization methods and implementation issues, such as Tustin and Backward-Forward Euler, are discussed and compared. A special case is analyzed, in which the input signal is rich in harmonics and the sampling frequency is low.
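
    As a hedged illustration of why the discretization method matters when the sampling frequency is only a few times the tracked component, the sketch below discretizes a generic second-order band-pass resonator (a stand-in for a SOGI-style tracker, not the authors' implementation) with the Tustin and backward-difference methods and reports where the discrete-time gain actually peaks:

```python
import numpy as np
from scipy import signal

f0, fs = 50.0, 250.0          # tracked component at 50 Hz, sampled at only 250 Hz
w0 = 2 * np.pi * f0
k = 0.5                       # illustrative damping/gain factor

# Continuous-time band-pass resonator H(s) = k*w0*s / (s^2 + k*w0*s + w0^2)
num = [k * w0, 0.0]
den = [1.0, k * w0, w0 ** 2]

for method in ("bilinear", "backward_diff"):
    numd, dend, _ = signal.cont2discrete((num, den), 1.0 / fs, method=method)
    b = np.atleast_1d(np.squeeze(numd))
    w, h = signal.freqz(b, np.squeeze(dend), worN=8192, fs=fs)
    peak = w[np.argmax(np.abs(h))]
    print(f"{method:13s}: discrete-time gain peaks at {peak:.1f} Hz (target {f0:.0f} Hz)")
```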

  19. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  20. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  1. Acoustic Imaging Frequency Dynamics of Ferroelectric Domains by Atomic Force Microscopy

    International Nuclear Information System (INIS)

    Kun-Yu, Zhao; Hua-Rong, Zeng; Hong-Zhang, Song; Sen-Xing, Hui; Guo-Rong, Li; Qing-Rui, Yin; Shimamura, Kiyoshi; Kannan, Chinna Venkadasamy; Villora, Encarnacion Antonia Garcia; Takekawa, Shunji; Kitamura, Kenji

    2008-01-01

    We report the acoustic imaging frequency dynamics of ferroelectric domains by low-frequency acoustic probe microscopy based on a commercial atomic force microscope. It is found that ferroelectric domains can be visualized at frequencies down to 0.5 kHz by AFM-based acoustic microscopy. The frequency-dependent acoustic signal revealed a strong acoustic response in the frequency range from 7 kHz to 10 kHz, reaching a maximum at 8.1 kHz. The acoustic contrast mechanism can be ascribed to the different elastic responses of ferroelectric microstructures to local elastic stress fields, which are induced by the acoustic wave transmitted through the sample when the piezoelectric transducer vibrates and excites the acoustic wave under ac electric fields via the normal piezoelectric effect. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  2. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.

  3. Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech

    2015-01-01

    to overcome the above-mentioned issues. The high-speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sinusoidal tone of known frequency and phase. The proposed algorithm utilizes vibration signals...
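
    A rough sketch of the correlation idea on synthetic data (treating the phase as unknown and using a quadrature pair, which differs slightly from the known-phase formulation above); the tooth count and frequencies are assumptions:

```python
import numpy as np

def estimate_gearmesh_frequency(vib, fs, candidate_freqs):
    """Estimate the dominant gear-mesh frequency by correlating the vibration
    signal with unit sinusoids of candidate frequencies (a sine/cosine pair
    makes the score insensitive to the unknown phase)."""
    t = np.arange(len(vib)) / fs
    vib = vib - vib.mean()
    scores = []
    for f in candidate_freqs:
        ref_sin, ref_cos = np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)
        scores.append(np.hypot(np.corrcoef(vib, ref_sin)[0, 1],
                               np.corrcoef(vib, ref_cos)[0, 1]))
    return candidate_freqs[int(np.argmax(scores))]

# Hypothetical signal: gear-mesh tone at 612 Hz buried in noise, sampled at 10 kHz
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
vib = np.sin(2 * np.pi * 612 * t + 0.7) + 0.8 * np.random.default_rng(2).standard_normal(t.size)
f_mesh = estimate_gearmesh_frequency(vib, fs, np.arange(580.0, 640.0, 1.0))
teeth = 32                                   # assumed tooth count of the last-stage pinion
print("mesh frequency %.0f Hz -> shaft speed %.1f Hz" % (f_mesh, f_mesh / teeth))
```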

  4. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  5. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  6. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

    In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, ... and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.

  7. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
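
    A minimal perfect-foresight sketch of the arbitrage-only linear program (ignoring the regulation market, discharge losses and degradation, with invented prices and device parameters), using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 24-hour price profile ($/MWh) and a 10 MWh / 5 MW storage device
price = np.array([22, 20, 19, 18, 18, 21, 28, 35, 40, 38, 33, 30,
                  29, 28, 30, 34, 42, 55, 60, 48, 38, 32, 27, 24], dtype=float)
T, e_max, p_max, eta, soc0 = len(price), 10.0, 5.0, 0.9, 5.0

L = np.tril(np.ones((T, T)))                      # cumulative-sum operator
# Decision vector x = [charge_1..charge_T, discharge_1..discharge_T] in MWh per hour
c_obj = np.concatenate([price, -price])           # minimise cost = -revenue
A_ub = np.block([[ eta * L, -L],                  # state of charge <= e_max
                 [-eta * L,  L]])                 # state of charge >= 0
b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])
bounds = [(0, p_max)] * (2 * T)                   # hourly power limits

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("maximum arbitrage revenue: $%.0f" % -res.fun)
```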

  8. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/CL2 Plasma

    International Nuclear Information System (INIS)

    Upadhyay, Janardan; Phillips, Larry; Valente, Anne-Marie

    2011-01-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk Niobium (Nb) removal rate and the surface roughness after the plasma etchings are equal to or better than wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve the purpose of diagnostic ports for the measurement of the plasma parameters and for the holding of the Nb sample to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside of this cavity. The niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  9. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/CL2 Plasma

    Energy Technology Data Exchange (ETDEWEB)

    Janardan Upadhyay, Larry Phillips, Anne-Marie Valente

    2011-09-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk Niobium (Nb) removal rate and the surface roughness after the plasma etchings are equal to or better than wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve the purpose of diagnostic ports for the measurement of the plasma parameters and for the holding of the Nb sample to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside of this cavity. The niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  10. Influence of modulation frequency in rubidium cell frequency standards

    Science.gov (United States)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.

  11. Frequency, Antimicrobial Resistance and Genetic Diversity of Klebsiella pneumoniae in Food Samples.

    Directory of Open Access Journals (Sweden)

    Yumei Guo

    Full Text Available This study aimed to assess the frequency of Klebsiella pneumoniae in food samples and to detect antibiotic resistance phenotypes, antimicrobial resistance genes and the molecular subtypes of the recovered isolates. A total of 998 food samples were collected, and 99 (9.9%) K. pneumoniae strains were isolated; the frequencies were 8.2% (4/49) in fresh raw seafood, 13.8% (26/188) in fresh raw chicken, 11.4% (34/297) in frozen raw food and 7.5% (35/464) in cooked food samples. Antimicrobial resistance was observed against 16 antimicrobials. The highest resistance rate was observed for ampicillin (92.3%), followed by tetracycline (31.3%), trimethoprim-sulfamethoxazole (18.2%), and chloramphenicol (10.1%). Two K. pneumoniae strains were identified as extended-spectrum β-lactamase (ESBL) strains; one strain had three beta-lactamase genes (blaSHV, blaCTX-M-1, and blaCTX-M-10) and one had only the blaSHV gene. Nineteen multidrug-resistant (MDR) strains were detected; the percentage of MDR strains in fresh raw chicken samples was significantly higher than in other sample types (P<0.05). Six of the 18 trimethoprim-sulfamethoxazole-resistant strains carried the folate pathway inhibitor gene (dhfr). Four isolates were screened by PCR for quinolone resistance genes; aac(6')-Ib-cr, qnrB, qnrA and qnrS were detected. In addition, gyrA gene mutations such as T247A (Ser83Ile), C248T (Ser83Phe), and A260C (Asp87Ala) and a parC C240T (Ser80Ile) mutation were identified. Five isolates were screened for aminoglycoside resistance genes; aacA4, aacC2, and aadA1 were detected. Pulsed-field gel electrophoresis-based subtyping identified 91 different patterns. Our results indicate that food, especially fresh raw chicken, is a reservoir of antimicrobial-resistant K. pneumoniae, and the potential health risks posed by such strains should not be underestimated. Our results demonstrated high prevalence, antibiotic resistance rate and genetic diversity of K. pneumoniae in food in China. Improved

  12. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  13. Flux pinning characteristics in cylindrical niobium samples used for superconducting radio frequency cavity fabrication

    Science.gov (United States)

    Dhavale, Asavari S.; Dhakal, Pashupati; Polyanskii, Anatolii A.; Ciovati, Gianluigi

    2012-06-01

    We present the results from DC magnetization and penetration depth measurements of cylindrical bulk large-grain (LG) and fine-grain (FG) niobium samples used for the fabrication of superconducting radio frequency (SRF) cavities. The surface treatment consisted of electropolishing and low-temperature baking as they are typically applied to SRF cavities. The magnetization data are analyzed using a modified critical state model. The critical current density Jc and pinning force Fp are calculated from the magnetization data and their temperature dependence and field dependence are presented. The LG samples have lower critical current density and pinning force density compared to FG samples, favorable to lower flux trapping efficiency. This effect may explain the lower values of residual resistance often observed in LG cavities than FG cavities.

  14. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, due to the aliasing effect, some artifacts are produced in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, a proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid the aliasing, but in return the computational cost will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool checking constant timing resolution behavior of a given timing pick-off method regardless of the source location change. Lastly, the performance comparison for several digital timing methods is also shown.

  15. Releasable activity and maximum permissible leakage rate within a transport cask of Tehran Research Reactor fuel samples

    Directory of Open Access Journals (Sweden)

    Rezaeian Mahdi

    2015-01-01

    Full Text Available Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.

  16. Frequency position modulation using multi-spectral projections

    Science.gov (United States)

    Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory

    2012-10-01

    In this paper we present an approach to harness multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension to MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate using commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which require a full Nyquist rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates the ability to operate in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum likelihood based FPM demodulator and demonstrate its performance in interference-laden conditions.

  17. Surface Characterization of Nb Samples Electro-polished Together With Real Superconducting Radio-frequency Accelerator Cavities

    International Nuclear Information System (INIS)

    Zhao, Xin; Geng, Rong-Li; Tyagi, P.V.; Hayano, Hitoshi; Kato, Shigeki; Nishiwaki, Michiru; Saeki, Takayuki; Sawabe, Motoaki

    2010-01-01

    We report the results of surface characterizations of niobium (Nb) samples electropolished together with a single cell superconducting radio-frequency accelerator cavity. These witness samples were located in three regions of the cavity, namely at the equator, the iris and the beam-pipe. Auger electron spectroscopy (AES) was utilized to probe the chemical composition of the topmost four atomic layers. Scanning electron microscopy with energy dispersive X-ray for elemental analysis (SEM/EDX) was used to observe the surface topography and chemical composition at the micrometer scale. A few atomic layers of sulfur (S) were found covering the samples non-uniformly. Niobium oxide granules with a sharp geometry were observed on every sample. Some Nb-O granules appeared to also contain sulfur.

  18. Maximum power gains of radio-frequency-driven two-energy-component tokamak reactors

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1974-11-01

    Two-energy-component fusion reactors in which the suprathermal component (D) is produced by harmonic cyclotron "runaway" of resonant ions are considered. In one ideal case, the fast hydromagnetic wave at ω = 2ω_cD produces an energy distribution f(W) ≈ constant (up to W_max) that includes all deuterons, which then thermalize and react with the cold tritons. In another ideal case, f(W) ≈ constant is maintained by the fast wave at ω = ω_cD. If one neglects (1) direct rf input to the bulk-plasma electrons and tritons, and (2) the fact that many deuterons are not resonantly accelerated, then the maximum ideal power gain is about 0.85 Q_m in the first case and 1.05 Q_m in the second case, where Q_m is the maximum fusion gain in the beam-injection scheme (e.g., Q_m = 1.9 at T_e = 10 keV). Because of nonideal effects, the cyclotron runaway phenomenon may find its most practical use in the heating of 50:50 D-T plasmas to ignition. (auth)

  19. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  20. GHz band frequency hopping PLL-based frequency synthesizers

    Institute of Scientific and Technical Information of China (English)

    XU Yong; WANG Zhi-gong; GUAN Yu; XU Zhi-jun; QIAO Lu-feng

    2005-01-01

    In this paper we describe a fully integrated circuit containing all building blocks of a complete PLL-based synthesizer except for the low-pass filter (LPF). The frequency synthesizer is designed as a local oscillator for a frequency hopping (FH) transceiver operating up to 1.5 GHz. The architecture of the voltage-controlled oscillator (VCO) is optimized for better performance: a phase noise of -111.85 dBc/Hz at 1 MHz offset and a tuning range of 250 MHz are achieved at a centre frequency of 1.35 GHz. A novel dual-modulus prescaler (DMP) is designed to achieve very low jitter and low power. The settling time of the PLL is 80 μs with a reference frequency of 400 kHz. This monolithic frequency synthesizer integrates all main building blocks of the PLL except for the low-pass filter, with a maximum VCO output frequency of 1.5 GHz, and is fabricated in a 0.18 μm mixed-signal CMOS process. Low power dissipation, low phase noise, a large tuning range and a fast settling time are achieved in this design.

  1. Flux pinning characteristics in cylindrical niobium samples used for superconducting radio frequency cavity fabrication

    International Nuclear Information System (INIS)

    Dhavale, Asavari S; Dhakal, Pashupati; Ciovati, Gianluigi; Polyanskii, Anatolii A

    2012-01-01

    We present the results from DC magnetization and penetration depth measurements of cylindrical bulk large-grain (LG) and fine-grain (FG) niobium samples used for the fabrication of superconducting radio frequency (SRF) cavities. The surface treatment consisted of electropolishing and low-temperature baking as they are typically applied to SRF cavities. The magnetization data are analyzed using a modified critical state model. The critical current density Jc and pinning force Fp are calculated from the magnetization data and their temperature dependence and field dependence are presented. The LG samples have lower critical current density and pinning force density compared to FG samples, favorable to lower flux trapping efficiency. This effect may explain the lower values of residual resistance often observed in LG cavities than FG cavities. (paper)

  2. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  3. A New High Frequency Injection Method Based on Duty Cycle Shifting without Maximum Voltage Magnitude Loss

    DEFF Research Database (Denmark)

    Wang, Dong; Lu, Kaiyuan; Rasmussen, Peter Omand

    2015-01-01

    The conventional high frequency signal injection method is to superimpose a high frequency voltage signal onto the commanded stator voltage before space vector modulation. Therefore, the magnitude of the voltage available for machine torque production is limited. In this paper, a new high frequency injection method, in which the high frequency signal is generated by shifting the duty cycle between two neighboring switching periods, is proposed. This method allows injecting a high frequency signal at half of the switching frequency without the necessity to sacrifice the machine fundamental voltage amplitude. This may be utilized to develop new position estimation algorithms without involving the inductance in the medium to high speed range. As an application example, a developed inductance-independent position estimation algorithm using the proposed high frequency injection method is applied to drive...

  4. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  5. Maximum Available Accuracy of FM-CW Radars

    Directory of Open Access Journals (Sweden)

    V. Ricny

    2009-12-01

    Full Text Available This article deals with the principles and, above all, with the maximum available measuring accuracy analysis of FM-CW (Frequency Modulated Continuous Wave) radars, which are usually employed for distance and velocity measurements of moving objects in road traffic, as well as in air traffic and in other applications. These radars often form an important part of the active safety equipment of high-end cars – the so-called anticollision systems. They usually work in the frequency bands of mm waves (24, 35, 77 GHz). Functional principles, and analyses of the factors that dominantly influence the distance measurement accuracy of this equipment, especially in the modulation and demodulation parts, are presented in the paper.
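
    For a sawtooth-modulated FM-CW radar, the target distance follows from the measured beat frequency; a small sketch with assumed sweep parameters (not taken from the article) is:

```python
C = 299_792_458.0          # speed of light, m/s

def fmcw_range(f_beat, sweep_bw, sweep_time):
    """Target range from the measured beat frequency of a sawtooth FM-CW radar:
    R = c * f_beat * T_sweep / (2 * B)."""
    return C * f_beat * sweep_time / (2.0 * sweep_bw)

def fmcw_range_resolution(sweep_bw):
    """Best-case range resolution set by the sweep bandwidth alone: dR = c / (2 * B)."""
    return C / (2.0 * sweep_bw)

# Example: automotive-style radar sweeping 300 MHz in 1 ms (assumed parameters)
print("range: %.1f m" % fmcw_range(f_beat=40e3, sweep_bw=300e6, sweep_time=1e-3))
print("resolution: %.2f m" % fmcw_range_resolution(300e6))
```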

  6. Frequency of single nucleotide polymorphisms of some immune response genes in a population sample from São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Léa Campos de Oliveira

    2011-09-01

    Full Text Available Objective: To present the frequency of single nucleotide polymorphisms of a few immune response genes in a population sample from São Paulo City (SP), Brazil. Methods: Data on allele frequencies of known polymorphisms of innate and acquired immunity genes were presented, the majority with proven impact on gene function. Data were gathered from a sample of healthy individuals, non-HLA identical siblings of bone marrow transplant recipients from the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, obtained between 1998 and 2005. The number of samples varied for each single nucleotide polymorphism analyzed by polymerase chain reaction followed by restriction enzyme cleavage. Results: Allele and genotype distribution of 41 different gene polymorphisms, mostly cytokines, but also including other immune response genes, were presented. Conclusion: We believe that the data presented here can be of great value for case-control studies, to define which polymorphisms are present in biologically relevant frequencies and to assess targets for therapeutic intervention in polygenic diseases with a component of immune and inflammatory responses.

  7. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  8. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  9. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    Science.gov (United States)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important inputs for responding to natural disaster damage and for setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data usually assumes that the observations are statistically stationary, and a parametric method based on the parameters of a probability distribution is applied. A parametric method requires a sufficiently large set of reliable data; in Korea, however, the number of days with snowfall observations and the mean maximum daily snowfall depth are decreasing due to climate change, so the available snowfall data are insufficient. In this study, we conducted the frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth obtained by frequency analysis decreases at most stations, and the rates of change at most stations were found to be consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods can perform frequency analysis of snowfall depth from insufficient observed samples, and the approach can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
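
    A minimal sketch of the plain bootstrap part of such an analysis (the SIR algorithm and the parametric comparison are not reproduced), estimating a 20-year return level from invented annual-maximum snowfall depths:

```python
import numpy as np

def bootstrap_return_level(annual_maxima, return_period, n_boot=10_000, seed=0):
    """Non-parametric bootstrap estimate of the T-year return level (the
    1 - 1/T empirical quantile of annual maxima) with a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    q = 1.0 - 1.0 / return_period
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(annual_maxima, size=annual_maxima.size, replace=True)
        estimates[i] = np.quantile(resample, q)
    point = np.quantile(annual_maxima, q)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return point, (lo, hi)

# Hypothetical annual maximum daily snowfall depths (cm) at one station
snow = np.array([12.5, 8.0, 21.3, 5.4, 17.8, 9.9, 30.2, 14.1, 7.3, 11.0,
                 19.6, 25.4, 6.1, 13.7, 10.2, 16.4, 22.9, 9.0, 18.3, 27.5])
level, ci = bootstrap_return_level(snow, return_period=20)
print("20-year return level: %.1f cm, 95%% interval (%.1f, %.1f) cm" % (level, *ci))
```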

  10. High frequency monitoring of pesticides in runoff water to improve understanding of their transport and environmental impacts.

    Science.gov (United States)

    Lefrancq, Marie; Jadas-Hécart, Alain; La Jeunesse, Isabelle; Landry, David; Payraudeau, Sylvain

    2017-06-01

    Rainfall-induced peaks in pesticide concentrations can occur rapidly. Low frequency sampling may therefore largely underestimate maximum pesticide concentrations and fluxes. Detailed storm-based sampling of pesticide concentrations in runoff water to better predict pesticide sources, transport pathways and toxicity within headwater catchments is lacking. High frequency monitoring (2 min) of seven pesticides (Dimetomorph, Fluopicolide, Glyphosate, Iprovalicarb, Tebuconazole, Tetraconazole and Triadimenol) and one degradation product (AMPA) was assessed for 20 runoff events from 2009 to 2012 at the outlet of a vineyard catchment in the Layon catchment in France. The maximum pesticide concentrations were 387 μg/L. Samples from all of the runoff events exceeded the legal limit of 0.1 μg/L for at least one pesticide (European directive 2013/39/EC). High resolution sampling used to detect the peak pesticide levels revealed that Toxic Units (TU) for algae, invertebrates and fish often exceeded the European Uniform principles (25%). The point and average (time or discharge-weighted) concentrations indicated up to a 30- or 4-fold underestimation of the TU obtained when measuring the maximum concentrations, respectively. This highlights the important role of sampling methods for assessing peak exposure. High resolution sampling combined with concentration-discharge hysteresis analyses revealed that clockwise responses were predominant (52%), indicating that Hortonian runoff is the prevailing surface runoff trigger mechanism in the study catchment. The hysteresis patterns for suspended solids and pesticides were highly dynamic and storm- and chemical-dependent. Intense rainfall events induced stronger C-Q hysteresis (magnitude). This study provides new insights into the complexity of pesticide dynamics in runoff water and highlights the ability of hysteresis analysis to improve understanding of pesticide supply and transport. Copyright © 2017 Elsevier B.V. All

  11. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  12. Experimental Determination of Operating and Maximum Power Transfer Efficiencies at Resonant Frequency in a Wireless Power Transfer System using PP Network Topology with Top Coupling

    Science.gov (United States)

    Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.

    2017-08-01

    A two-port network model for a wireless power transfer system taking into account the distributed capacitances using PP network topology with top coupling is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified with an experiment consisting of a high power home light load of 230 V, 100 W and is tested for two forced resonant frequencies namely, 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.

  13. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) could reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CCs used in the design of an SFCL can be determined.

  14. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADC included in actual 8- and 32-bits microcontrollers highlight the possibility of achieving effective sample rate up to 50 times higher than that of the original ADC sample rate.
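
    A minimal sketch of the reconstruction step is given below: a signal that is sparse in a transform domain is observed only at random sampling instants generated from a fine time base, and recovered with a simple iterative soft-thresholding (ISTA) solver on a partial DCT dictionary. The signal, sampling instants and solver settings are illustrative assumptions, not the acquisition strategy of the cited paper.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)

N, M = 512, 64
D = idct(np.eye(N), norm="ortho", axis=0)   # inverse-DCT dictionary: x = D @ c

# Signal that is exactly 3-sparse in the DCT domain.
c_true = np.zeros(N)
c_true[[13, 41, 120]] = [1.0, -0.7, 0.4]
x = D @ c_true

idx = np.sort(rng.choice(N, size=M, replace=False))   # random sampling instants
y = x[idx]                                            # the only acquired samples
A = D[idx, :]                                         # sensing matrix

# ISTA: minimize 0.5*||y - A c||^2 + lam*||c||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2
c = np.zeros(N)
for _ in range(1000):
    z = c - A.T @ (A @ c - y) / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

x_rec = D @ c   # reconstructed signal on the full fine time base
print("relative reconstruction error:",
      np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```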

  15. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach; and second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that for small sample sizes the chance of separation occurring with the MLE method is higher than with Firth's approach. For larger sample sizes, the probability decreases and is relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes, whereas for larger sample sizes the RMSEs are not much different. This indicates that Firth's estimators outperform the MLE estimator.
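
    The sketch below, using simulated data, shows how ordinary probit maximum likelihood behaves when a covariate completely separates the response; it relies on statsmodels' Probit class and only illustrates the separation phenomenon discussed above, not the paper's simulation design (Firth-type penalization is not shown).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

n = 30
x = rng.normal(size=n)
# Complete separation: the response is fully determined by the sign of x.
y_separated = (x > 0).astype(int)
# Overlapping case: response generated from a true probit model.
y_overlap = (x + rng.normal(scale=1.5, size=n) > 0).astype(int)

for label, y in [("separated", y_separated), ("overlapping", y_overlap)]:
    model = sm.Probit(y, sm.add_constant(x))
    try:
        res = model.fit(disp=False, maxiter=200)
        print(label, "slope estimate:", res.params[1])
    except Exception as err:  # non-convergence / perfect separation is expected
        print(label, "failed:", err)
```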

  16. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
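
    As a loose illustration of channels sharing one fundamental frequency while having their own amplitudes, phases and noise, the sketch below approximates a maximum-likelihood cost by harmonic summation: for each candidate fundamental it sums, across channels, the periodogram power at the first few harmonics. The synthetic signals and the harmonic-summation shortcut are assumptions for illustration only; they are not the estimator derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur, f0_true = 8000.0, 0.1, 220.0
n = int(fs * dur)
t = np.arange(n) / fs

# Two channels: same fundamental, different amplitudes, phases and noise levels.
channels = []
for amps, noise in [([1.0, 0.6, 0.3], 0.05), ([0.4, 0.8, 0.2], 0.2)]:
    x = sum(a * np.cos(2 * np.pi * (k + 1) * f0_true * t + rng.uniform(0, 2 * np.pi))
            for k, a in enumerate(amps))
    channels.append(x + noise * rng.normal(size=n))

n_harm = 3
candidates = np.arange(80.0, 400.0, 0.5)
freqs = np.fft.rfftfreq(n, 1 / fs)
spectra = [np.abs(np.fft.rfft(x)) ** 2 / n for x in channels]

def cost(f0):
    # Sum periodogram power at the harmonics of f0 over all channels.
    bins = [int(np.argmin(np.abs(freqs - (k + 1) * f0))) for k in range(n_harm)]
    return sum(spec[bins].sum() for spec in spectra)

scores = [cost(f0) for f0 in candidates]
print("estimated f0:", candidates[int(np.argmax(scores))], "Hz")
```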

  17. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x = 224) for radiotracking data and 16-130 km2 (x = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
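
    For readers unfamiliar with the minimum convex polygon (MCP) estimator used above, the sketch below computes an MCP area from a set of relocation points with scipy's convex hull and shows how the estimate grows as more locations are added; the coordinates are simulated, not bear data.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)

# Simulated relocations (km) drawn from a bivariate normal "home range".
locations = rng.normal(loc=0.0, scale=3.0, size=(400, 2))

for n in (15, 60, 400):  # e.g. a small aerial VHF sample vs. larger GPS samples
    hull = ConvexHull(locations[:n])
    # For 2-D points, ConvexHull.volume is the enclosed area.
    print(f"MCP area from {n:3d} locations: {hull.volume:6.1f} km^2")
```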

  18. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    Science.gov (United States)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
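
    The subsampling exercise described above can be mimicked with a short script: given an hourly "truth" series, subsample it at increasing intervals and compare the resulting daily and annual means with the full-resolution values. The synthetic series with 8, 12 and 24 h periodicities is an assumption used only to illustrate the metrics; it is not GEOS-5 Nature Run output.

```python
import numpy as np

rng = np.random.default_rng(11)

hours = np.arange(0, 365 * 24)  # one year of hourly "truth"
truth = (0.6
         + 0.05 * np.sin(2 * np.pi * (hours - 9) / 24)
         + 0.03 * np.sin(2 * np.pi * (hours - 2) / 12)
         + 0.02 * np.sin(2 * np.pi * hours / 8)
         + 0.02 * rng.normal(size=hours.size))

days = hours.size // 24
daily_truth = truth.reshape(days, 24).mean(axis=1)

for step in (1, 4, 6, 8, 12, 24):
    # Daily means estimated from samples taken every `step` hours.
    daily_sub = truth.reshape(days, 24)[:, ::step].mean(axis=1)
    mean_err = abs(daily_sub.mean() - daily_truth.mean())
    corr = np.corrcoef(daily_truth, daily_sub)[0, 1]
    print(f"every {step:2d} h: |annual mean error| = {mean_err:.4f}, "
          f"corr of daily means = {corr:.3f}")
```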

  19. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  20. High frequency of parvovirus B19 DNA in bone marrow samples from rheumatic patients

    DEFF Research Database (Denmark)

    Lundqvist, Anders; Isa, Adiba; Tolfvenstam, Thomas

    2005-01-01

    BACKGROUND: Human parvovirus B19 (B19) polymerase chain reaction (PCR) is now a routine analysis and serves as a diagnostic marker as well as a complement or alternative to B19 serology. The clinical significance of a positive B19 DNA finding is however dependent on the type of tissue or body fluid...... analysed and of the immune status of the patient. OBJECTIVES: To analyse the clinical significance of B19 DNA positivity in bone marrow samples from rheumatic patients. STUDY DESIGN: Parvovirus B19 DNA was analysed in paired bone marrow and serum samples by nested PCR technique. Serum was also analysed...... negative group. A high frequency of parvovirus B19 DNA was thus detected in bone marrow samples in rheumatic patients. The clinical data does not support a direct association between B19 PCR positivity and rheumatic disease manifestation. Therefore, the clinical significance of B19 DNA positivity in bone...

  1. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
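
    The extended Lomb-Scargle periodogram and scalogram defined above are beyond a short snippet, but the basic idea of frequency analysis on an irregular time axis can be illustrated with scipy's classical Lomb-Scargle periodogram. The irregular sampling times and the 41 kyr test cycle are arbitrary choices, and this is a much simpler tool than the WAVEPAL machinery described in the abstract.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# Irregularly sampled "palaeoclimate" record: ages in kyr with uneven spacing.
t = np.sort(rng.uniform(0.0, 500.0, size=180))
x = np.sin(2 * np.pi * t / 41.0) + 0.5 * rng.normal(size=t.size)  # 41 kyr cycle + noise
x -= x.mean()

periods = np.linspace(10.0, 150.0, 2000)   # candidate periods (kyr)
ang_freqs = 2 * np.pi / periods            # lombscargle expects angular frequencies
power = lombscargle(t, x, ang_freqs, normalize=True)

print("best period: %.1f kyr" % periods[int(np.argmax(power))])
```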

  2. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
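
    In the notation of the abstract, the projected density can be written compactly; the identity below is the standard PDF projection formula with a reference density g, stated here as a hedged summary rather than quoted from the review.

```latex
% PDF projection: lift a feature density p_z back to the data space.
% g is a reference density on x, and g_z is the density it induces on z = T(x).
\[
  \hat{p}(x) \;=\; g(x)\,\frac{p_z\!\big(T(x)\big)}{g_z\!\big(T(x)\big)},
  \qquad z = T(x).
\]
% Among all densities consistent with p_z, the maximum-entropy choice corresponds
% to selecting g as the maximum-entropy reference distribution.
```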

  3. A One ppm NDIR Methane Gas Sensor with Single Frequency Filter Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Binqing Jiang

    2012-09-01

    Full Text Available A non-dispersive infrared (NDIR methane gas sensor prototype has achieved a minimum detection limit of 1 parts per million by volume (ppm. The central idea of the design of the sensor is to decrease the detection limit by increasing the signal to noise ratio (SNR of the system. In order to decrease the noise level, a single frequency filter algorithm based on fast Fourier transform (FFT is adopted for signal processing. Through simulation and experiment, it is found that the full width at half maximum (FWHM of the filter narrows with the extension of sampling period and the increase of lamp modulation frequency, and at some optimum sampling period and modulation frequency, the filtered signal maintains a noise to signal ratio of below 1/10,000. The sensor prototype provides the key techniques for a hand-held methane detector that has a low cost and a high resolution. Such a detector may facilitate the detection of leakage of city natural gas pipelines buried underground, the monitoring of landfill gas, the monitoring of air quality and so on.
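
    A minimal sketch of a single-frequency (lock-in style) filter built on the FFT is shown below: the amplitude at the lamp modulation frequency is read from one FFT bin, and broadband noise is suppressed more strongly as the sampling period grows. The sampling rate, modulation frequency, amplitude and noise level are illustrative values, not the sensor's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

fs = 1000.0         # sampling rate (Hz), assumed
f_mod = 5.0         # lamp modulation frequency (Hz), assumed
amplitude = 0.01    # detector signal amplitude, arbitrary units

for duration in (1.0, 10.0, 60.0):         # longer records -> narrower filter
    n = int(fs * duration)
    t = np.arange(n) / fs
    signal = amplitude * np.sin(2 * np.pi * f_mod * t) + 0.5 * rng.normal(size=n)

    spectrum = np.fft.rfft(signal)
    k = int(round(f_mod * duration))       # bin index of the modulation frequency
    est = 2 * np.abs(spectrum[k]) / n      # single-frequency amplitude estimate
    print(f"T = {duration:5.1f} s: estimated amplitude = {est:.4f}")
```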

  4. Automatic frequency control system for driving a linear accelerator

    International Nuclear Information System (INIS)

    Helgesson, A.L.

    1976-01-01

    An automatic frequency control system is described for maintaining the drive frequency applied to a linear accelerator to produce maximum particle output from the accelerator. The particle output amplitude is measured and the frequency of the radio frequency source powering the linear accelerator is adjusted to maximize particle output amplitude

  5. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
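
    To make the objective concrete, the sketch below maximizes a regularized correntropy criterion for a linear predictor by simple gradient ascent: a Gaussian kernel compares each prediction with its label, so grossly mislabeled points get exponentially small influence. The kernel width, regularization weight, learning rate and synthetic noisy-label data are all assumptions; the paper's actual alternating optimization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearly separable data with a few flipped (noisy) labels in {-1, +1}.
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
flip = rng.choice(n, size=20, replace=False)
y[flip] *= -1

sigma, lam, lr = 1.0, 0.01, 0.1
w = np.zeros(d)
for _ in range(300):
    err = y - X @ w
    # Correntropy weights: samples with large errors contribute exponentially less.
    k = np.exp(-err**2 / (2 * sigma**2))
    grad = (X * (k * err)[:, None]).sum(axis=0) / (n * sigma**2) - 2 * lam * w
    w += lr * grad

acc = np.mean(np.sign(X @ w) == np.sign(X[:, 0] + 0.5 * X[:, 1]))
print("weights:", w, "accuracy on clean labels:", acc)
```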

  7. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    Full Text Available This paper assesses the performance of the landslide susceptibility analysis using frequency ratio (FR with an iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in Yongin area, Korea. Iterative random sampling was run ten times in total and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences repeatedly ten times. The ten landslide susceptibility maps were obtained from the integration of causative factors that assigned FR scores. The landslide susceptibility maps were validated by using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%. And the landslide susceptibility accuracy of the FR method is higher than 89%. Moreover, the ten times iterative FR modeling may contribute to a better understanding of a regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis and also be extensively used to other areas.
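
    The frequency ratio score itself is simple to compute: for each class of a causative factor it is the share of landslide cells in that class divided by the share of all cells in that class. The toy raster below is invented purely to show the arithmetic; it is not the Yongin data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy grid: slope classes 0-3 for every cell, and a binary landslide mask.
slope_class = rng.integers(0, 4, size=10_000)
landslide = (rng.random(10_000) < 0.02 * (slope_class + 1)).astype(int)  # steeper -> more slides

fr = {}
for c in np.unique(slope_class):
    in_class = slope_class == c
    pct_slides = landslide[in_class].sum() / landslide.sum()   # share of landslide cells
    pct_area = in_class.sum() / slope_class.size               # share of total cells
    fr[c] = pct_slides / pct_area
    print(f"class {c}: FR = {fr[c]:.2f}")

# Susceptibility index of each cell = sum of FR scores over its factor classes
# (only one factor here, so it is just the class FR).
susceptibility = np.vectorize(fr.get)(slope_class)
```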

  8. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  9. Identification of homogeneous regions for rainfall regional frequency analysis considering typhoon event in South Korea

    Science.gov (United States)

    Heo, J. H.; Ahn, H.; Kjeldsen, T. R.

    2017-12-01

    South Korea is prone to large, and often disastrous, rainfall events caused by a mixture of monsoon and typhoon rainfall phenomena. However, traditionally, regional frequency analysis models did not consider this mixture of phenomena when fitting probability distributions, potentially underestimating the risk posed by the more extreme typhoon events. Using long-term observed records of extreme rainfall from 56 sites combined with detailed information on the timing and spatial impact of past typhoons from the Korea Meteorological Administration (KMA), this study developed and tested a new mixture model for frequency analysis of two different phenomena; events occurring regularly every year (monsoon) and events only occurring in some years (typhoon). The available annual maximum 24 hour rainfall data were divided into two sub-samples corresponding to years where the annual maximum is from either (1) a typhoon event, or (2) a non-typhoon event. Then, three-parameter GEV distribution was fitted to each sub-sample along with a weighting parameter characterizing the proportion of historical events associated with typhoon events. Spatial patterns of model parameters were analyzed and showed that typhoon events are less commonly associated with annual maximum rainfall in the North-West part of the country (Seoul area), and more prevalent in the southern and eastern parts of the country, leading to the formation of two distinct typhoon regions: (1) North-West; and (2) Southern and Eastern. Using a leave-one-out procedure, a new regional frequency model was tested and compared to a more traditional index flood method. The results showed that the impact of typhoon on design events might previously have been underestimated in the Seoul area. This suggests that the use of the mixture model should be preferred where the typhoon phenomena is less frequent, and thus can have a significant effect on the rainfall-frequency curve. This research was supported by a grant(2017-MPSS31
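
    A stripped-down version of the two-population idea can be sketched with scipy: fit separate GEV distributions to typhoon-year and non-typhoon-year annual maxima and mix their CDFs using the observed proportion of typhoon years. The simulated rainfall values are placeholders, and the real study's regional (index-flood style) estimation is not reproduced.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)

# Simulated annual maximum 24 h rainfall (mm), split by the cause of the maximum.
typhoon = genextreme.rvs(c=-0.2, loc=180, scale=60, size=25, random_state=rng)
non_typhoon = genextreme.rvs(c=0.0, loc=110, scale=30, size=35, random_state=rng)

w = typhoon.size / (typhoon.size + non_typhoon.size)   # proportion of typhoon years
fit_t = genextreme.fit(typhoon)
fit_n = genextreme.fit(non_typhoon)

def mixture_cdf(x):
    return w * genextreme.cdf(x, *fit_t) + (1 - w) * genextreme.cdf(x, *fit_n)

# 100-year design rainfall: solve F(x) = 1 - 1/100 on a grid.
grid = np.linspace(50, 1500, 10_000)
x100 = np.interp(1 - 1 / 100, mixture_cdf(grid), grid)
print(f"100-yr rainfall from the mixture model: {x100:.0f} mm")
```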

  10. Occupational Exposure Assessment of Tehran Metro Drivers to Extremely Low Frequency Magnetic Fields

    Directory of Open Access Journals (Sweden)

    mohammad reza Monazzam

    2016-03-01

    Full Text Available Introduction: Occupational exposure to Extremely Low Frequency Magnetic Fields (ELF-MFs) in train drivers is an integral part of the driving task and creates concern about driving jobs. The present study was designed to investigate the occupational exposure of Tehran train drivers to extremely low frequency magnetic fields. Methods: In order to measure the drivers' exposure, a random sample of AC and DC type trains was selected from each line, and measurements were made according to IEEE Std 644-1994 using a triple-axis TES-394 device. The drivers' exposures were then compared with national occupational exposure limit guidelines. Results: The maximum and minimum mean exposures were found in AC external city trains (1.2±1.5 μT) and DC internal city trains (0.31±0.2 μT), respectively. The maximum and minimum exposures were 9 μT and 0.08 μT in AC trains of line 5, respectively. In the internal train line, the maximum and minimum values were 5.4 μT and 0.08 μT in AC trains. Conclusions: In none of the exposure scenarios in the different trains did the exposure exceed the national or international occupational exposure limit guidelines. However, this should not be the sole basis for concluding that these fields are safe.

  11. A Comparative Frequency Analysis of Maximum Daily Rainfall for a SE Asian Region under Current and Future Climate Conditions

    Directory of Open Access Journals (Sweden)

    Velautham Daksiya

    2017-01-01

    Full Text Available The impact of changing climate on the frequency of daily rainfall extremes in Jakarta, Indonesia, is analysed and quantified. The study used three different models to assess the changes in rainfall characteristics. The first method involves the use of the weather generator LARS-WG to quantify changes between historical and future daily rainfall maxima. The second approach consists of statistically downscaling general circulation model (GCM) output based on historical empirical relationships between GCM output and station rainfall. Lastly, the study employed recent statistically downscaled global gridded rainfall projections to characterize the impact of climate change on rainfall structure. Both annual and seasonal rainfall extremes are studied. The results show significant changes in annual maximum daily rainfall, with an average increase as high as 20% in the 100-year return period daily rainfall. The uncertainty arising from the use of different GCMs was found to be much larger than the uncertainty from the emission scenarios. Furthermore, the annual and wet-season analyses exhibit similar behaviors, with increased future rainfall, but the dry season is not consistent across the models. The GCM uncertainty is larger in the dry season than for the annual and wet-season analyses.

  12. Population genetics inference for longitudinally-sampled mutants under strong selection.

    Science.gov (United States)

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
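
    For readers who want to see the discrete model the authors avoid approximating, the sketch below simulates mutant-allele frequencies under the Wright-Fisher model with selection by binomial sampling each generation; the population size, selection coefficients and initial frequency are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(6)

def wright_fisher(n_e, s, p0, generations, replicates, rng):
    """Binomial Wright-Fisher updates with selection coefficient s (diploid)."""
    p = np.full(replicates, p0)
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        p = rng.binomial(2 * n_e, p_sel) / (2 * n_e)   # drift step
    return p

for s in (0.0, 0.1, 0.5):  # strong selection is where diffusion approximations struggle
    p_final = wright_fisher(n_e=1000, s=s, p0=0.05, generations=50,
                            replicates=5000, rng=rng)
    print(f"s = {s:3.1f}: mean final frequency = {p_final.mean():.3f}, "
          f"P(lost) = {(p_final == 0).mean():.3f}")
```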

  13. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...

  14. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
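
    A two-component normal mixture of the kind described above can be fitted by maximum likelihood (via the EM algorithm) with scikit-learn; the generated bivariate data are synthetic stand-ins for the stock-market and rubber price series used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)

# Synthetic bivariate data: two latent regimes with opposite dependence.
n = 500
z = rng.random(n) < 0.6
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(2.0, 0.5, n))
y = np.where(z, -0.8 * x + rng.normal(0, 0.5, n), 0.3 * x + rng.normal(0, 0.5, n))
data = np.column_stack([x, y])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(data)  # EM maximizes the mixture log-likelihood

print("mixing weights:", gmm.weights_)
print("component means:\n", gmm.means_)
print("log-likelihood per sample:", gmm.score(data))
```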

  15. Frequency of Aggressive Behaviors in a Nationally Representative Sample of Iranian Children and Adolescents: The CASPIAN-IV Study.

    Science.gov (United States)

    Sadinejad, Morteza; Bahreynian, Maryam; Motlagh, Mohammad-Esmaeil; Qorbani, Mostafa; Movahhed, Mohsen; Ardalan, Gelayol; Heshmat, Ramin; Kelishadi, Roya

    2015-01-01

    This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. This nationwide study was performed on a multi-stage sample of 6-18-year-old students living in 30 provinces of Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying and being bullied, in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). In total, physical fighting was more prevalent among boys than girls (48% vs. 31%), and bullying other classmates also had a higher frequency among boys compared with girls (29% vs. 25%). Physical fighting was more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was more common among urban students (27% vs. 26%, P = 0.69). Although the frequency of aggressive behaviors in this study was lower than in many other populations, these findings still emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression prevention programming are recommended.

  16. Evaluation of the Problem Behavior Frequency Scale-Teacher Report Form for Assessing Behavior in a Sample of Urban Adolescents.

    Science.gov (United States)

    Farrell, Albert D; Goncy, Elizabeth A; Sullivan, Terri N; Thompson, Erin L

    2018-02-01

    This study evaluated the structure and validity of the Problem Behavior Frequency Scale-Teacher Report Form (PBFS-TR) for assessing students' frequency of specific forms of aggression and victimization, and positive behavior. Analyses were conducted on two waves of data from 727 students from two urban middle schools (Sample 1) who were rated by their teachers on the PBFS-TR and the Social Skills Improvement System (SSIS), and on data collected from 1,740 students from three urban middle schools (Sample 2) for whom data on both the teacher and student report version of the PBFS were obtained. Confirmatory factor analyses supported first-order factors representing 3 forms of aggression (physical, verbal, and relational), 3 forms of victimization (physical, verbal and relational), and 2 forms of positive behavior (prosocial behavior and effective nonviolent behavior), and higher-order factors representing aggression, victimization, and positive behavior. Strong measurement invariance was established over gender, grade, intervention condition, and time. Support for convergent validity was found based on correlations between corresponding scales on the PBFS-TR and teacher ratings on the SSIS in Sample 1. Significant correlations were also found between teacher ratings on the PBFS-TR and student ratings of their behavior on the Problem Behavior Frequency Scale-Adolescent Report (PBFS-AR) and a measure of nonviolent behavioral intentions in Sample 2. Overall the findings provided support for the PBFS-TR and suggested that teachers can provide useful data on students' aggressive and prosocial behavior and victimization experiences within the school setting. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Analysis on the time and frequency domains of the acceleration in front crawl stroke.

    Science.gov (United States)

    Gil, Joaquín Madera; Moreno, Luis-Millán González; Mahiques, Juan Benavent; Muñoz, Víctor Tella

    2012-05-01

    Swimming involves accelerations and decelerations of the swimmer's body. Thus, the main objective of this study is to carry out a temporal and frequency-domain analysis of the acceleration in front crawl swimming with regard to gender and performance. The sample was composed of 31 male swimmers (15 high-level and 16 low-level) and 20 female swimmers (11 high-level and 9 low-level). The acceleration was registered from the third complete cycle during eight seconds in a 25-meter maximum velocity test. A position transducer (200 Hz) was used to collect the data, and it was synchronized to an aquatic camera (25 Hz). The acceleration in the temporal (root mean square, minimum and maximum of the acceleration) and frequency (power peak, power peak frequency and spectral area) domains was calculated with Fourier analysis, as well as the velocity and the distribution of the spectra according to whether they present one or more main peaks (type 1 and type 2). A one-way ANOVA was used to establish differences between gender and performance. Results show differences between genders in all the temporal domain variables (p<0.05) and only in the Spectral Area (SA) in the frequency domain (p<0.05). Between gender and performance, only the Root Mean Square (RMS) showed differences with performance in the male swimmers (p<0.05), and in the higher-level swimmers, the Maximum (Max) and the Power Peak (PP) of the acceleration showed differences between both genders (p<0.05). These results confirm the importance of knowing the RMS to determine the efficiency of the swimmers regarding gender and performance level.
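
    The time- and frequency-domain quantities named above (RMS, minimum/maximum acceleration, power peak, power-peak frequency, spectral area) can all be obtained from a short FFT-based routine; the 200 Hz synthetic stroke signal below merely stands in for a real acceleration record.

```python
import numpy as np

rng = np.random.default_rng(10)

fs = 200.0                       # sampling rate of the position transducer (Hz)
t = np.arange(0, 8.0, 1 / fs)    # 8 s analysis window
stroke_freq = 0.9                # assumed stroke (cycle) frequency, Hz
acc = (1.5 * np.sin(2 * np.pi * stroke_freq * t)
       + 0.6 * np.sin(2 * np.pi * 2 * stroke_freq * t)
       + 0.2 * rng.normal(size=t.size))

# Time domain
rms = np.sqrt(np.mean(acc ** 2))
acc_min, acc_max = acc.min(), acc.max()

# Frequency domain (one-sided power spectrum)
spectrum = np.fft.rfft(acc - acc.mean())
freqs = np.fft.rfftfreq(acc.size, 1 / fs)
power = (np.abs(spectrum) ** 2) / acc.size
peak_idx = int(np.argmax(power))
spectral_area = np.trapz(power, freqs)

print(f"RMS {rms:.2f}, min {acc_min:.2f}, max {acc_max:.2f} m/s^2")
print(f"power peak {power[peak_idx]:.2f} at {freqs[peak_idx]:.2f} Hz, "
      f"spectral area {spectral_area:.2f}")
```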

  18. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method...... was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum which implied that the waveform tried to excite all frequencies including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate

  19. Frequency Up-Converted Low Frequency Vibration Energy Harvester Using Trampoline Effect

    International Nuclear Information System (INIS)

    Ju, S; Chae, S H; Choi, Y; Jun, S; Park, S M; Lee, S; Ji, C-H; Lee, H W

    2013-01-01

    This paper presents a non-resonant vibration energy harvester based on magnetoelectric transduction mechanism and mechanical frequency up-conversion using trampoline effect. The harvester utilizes a freely movable spherical permanent magnet which bounces off the aluminum springs integrated at both ends of the cavity, achieving frequency up-conversion from low frequency input vibration. Moreover, bonding method of magnetoelectric laminate composite has been optimized to provide higher strain to piezoelectric material and thus obtain a higher output voltage. A proof-of-concept energy harvesting device has been fabricated and tested. Maximum open-circuit voltage of 11.2V has been obtained and output power of 0.57μW has been achieved for a 50kΩ load, when the fabricated energy harvester was hand-shaken

  20. Frequency Up-Converted Low Frequency Vibration Energy Harvester Using Trampoline Effect

    Science.gov (United States)

    Ju, S.; Chae, S. H.; Choi, Y.; Jun, S.; Park, S. M.; Lee, S.; Lee, H. W.; Ji, C.-H.

    2013-12-01

    This paper presents a non-resonant vibration energy harvester based on magnetoelectric transduction mechanism and mechanical frequency up-conversion using trampoline effect. The harvester utilizes a freely movable spherical permanent magnet which bounces off the aluminum springs integrated at both ends of the cavity, achieving frequency up-conversion from low frequency input vibration. Moreover, bonding method of magnetoelectric laminate composite has been optimized to provide higher strain to piezoelectric material and thus obtain a higher output voltage. A proof-of-concept energy harvesting device has been fabricated and tested. Maximum open-circuit voltage of 11.2V has been obtained and output power of 0.57μW has been achieved for a 50kΩ load, when the fabricated energy harvester was hand-shaken.

  1. High-frequency, long-duration water sampling in acid mine drainage studies: a short review of current methods and recent advances in automated water samplers

    Science.gov (United States)

    Chapin, Thomas

    2015-01-01

    Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.

  2. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.

  3. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners

  4. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In particular, it is consistent as the sample size increases to infinity and asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results indicate a negative relationship between rubber price and exchange rate for all selected countries.

  5. Investigation on maximum transition temperature of phonon mediated superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)]

    1989-05-01

    Three model effective phonon spectra are proposed to obtain plots of Tc-ω and λ-ω. It can be concluded that there is no maximum limit of Tc in phonon-mediated superconductivity for reasonable values of λ. The importance of the high-frequency LO phonon is also emphasized. Some discussions on high Tc are given.

  6. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    International Nuclear Information System (INIS)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S.

    2010-01-01

    The performance of superconducting radio frequency niobium cavities is sometimes limited by contaminants present on the cavity surface. In recent years extensive research has been done to enhance the cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcoholic rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized. Further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, a small quantity of sulfur and fluorine was also found in secondary ion mass spectroscopy (SIMS) analysis. In this article, the authors present the analyses of surface contaminants for a series of EPed niobium samples located at various positions of a single cell niobium cavity followed by ultrapure water rinsing, as well as their endeavor to understand the aging effect of the EP acid solution in terms of the contaminants present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  7. Enhancing the Frequency Adaptability of Periodic Current Controllers with a Fixed Sampling Rate for Grid-Connected Power Converters

    DEFF Research Database (Denmark)

    Yang, Yongheng; Zhou, Keliang; Blaabjerg, Frede

    2016-01-01

    Grid-connected power converters should employ advanced current controllers, e.g., Proportional Resonant (PR) and Repetitive Controllers (RC), in order to produce high-quality feed-in currents that are required to be synchronized with the grid. The synchronization is actually to detect the instantaneous grid information (e.g., frequency and phase of the grid voltage) for the current control, which is commonly performed by a Phase-Locked-Loop (PLL) system. Hence, harmonics and deviations in the estimated frequency by the PLL could lead to current tracking performance degradation, especially...... of the resonant controllers and by approximating the fractional delay using a Lagrange interpolating polynomial for the RC, respectively, the frequency-variation-immunity of these periodic current controllers with a fixed sampling rate is improved. Experiments on a single-phase grid-connected system are presented......
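
    The fractional-delay approximation mentioned for the repetitive controller can be illustrated with the standard Lagrange-interpolation FIR coefficients: for a desired delay of d samples, an order-n polynomial gives h_k = prod over i != k of (d - i)/(k - i). The sampling rate and drifted grid frequency below are invented for the example and are not taken from the paper.

```python
import numpy as np

def lagrange_fd_coeffs(d, order):
    """FIR coefficients approximating a fractional delay of d samples."""
    h = np.ones(order + 1)
    for i in range(order + 1):
        for j in range(order + 1):
            if j != i:
                h[i] *= (d - j) / (i - j)
    return h

# Example: a 50 Hz grid sampled at 10 kHz gives a 200-sample fundamental period;
# if the measured grid frequency drifts to 49.3 Hz, the period becomes ~202.84
# samples, so the repetitive controller needs an extra fractional delay.
fs, f_grid = 10_000.0, 49.3            # assumed sampling rate and drifted frequency
period = fs / f_grid
frac = period - np.floor(period)
h = lagrange_fd_coeffs(frac, order=3)
print(f"period = {period:.2f} samples, fractional part = {frac:.2f}")
print("3rd-order Lagrange fractional-delay coefficients:", np.round(h, 4))
```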

  8. Fungic microflora of Panicum maximum and Stylosanthes spp. commercial seed / Microflora fúngica de sementes comerciais de Panicum maximum e Stylosanthes spp.

    Directory of Open Access Journals (Sweden)

    Larissa Rodrigues Fabris

    2010-09-01

    Full Text Available The sanitary quality of 26 lots of commercial seeds of tropical forages, produced in different regions (2004-05 and 2005-06), was analyzed. The lots were composed of seeds of Panicum maximum ('Massai', 'Mombaça' and 'Tanzânia') and stylo ('Estilosantes Campo Grande' - ECG). Additionally, seeds of two lots of P. maximum intended for exportation were analyzed. The blotter test was used, at 20ºC under alternating light and darkness in a 12 h photoperiod, for seven days. The genera Aspergillus, Cladosporium and Rhizopus comprised the secondary or saprophytic fungi (FSS) with the greatest frequency in the P. maximum lots. In general, there was a low incidence of these fungi in the seeds. In relation to pathogenic fungi (FP), a high frequency of lots contaminated by the genera Bipolaris, Curvularia, Fusarium and Phoma was detected. Generally, there was a high incidence of FP in P. maximum seeds. The occurrence of Phoma sp. was high, since 81% of the lots showed an incidence above 50%. In 'ECG' seeds, FSS (genera Aspergillus, Cladosporium, and Penicillium) and FP (genera Bipolaris, Curvularia, Fusarium and Phoma) were detected, usually at low incidence. FSS and FP were associated with the P. maximum seeds for exportation, with significant incidence in some cases. The results indicated that there was a limiting factor in all producer regions regarding the sanitary quality of the seeds. Commercial seeds of tropical forages, belonging to 26 lots produced in different regions (2004-05 and 2005-06 seasons), were evaluated for sanitary quality. Seeds of Panicum maximum cultivars (Massai, Mombaça and Tanzânia) and of stylo (Estilosantes Campo Grande - ECG) were analyzed. Additionally, the sanitary quality of two lots of P. maximum seeds intended for exportation was evaluated. For this, the seeds were submitted to the blotter test in gerboxes, which were incubated at 20ºC with a 12 h photoperiod for seven days. The saprophytic fungi or

  9. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
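
    One common way to realize the adaptive weighted fusion step mentioned above is to weight each selected sub-model inversely to its validation error and normalize the weights; the sketch below does that for a handful of dummy sub-model predictions. It is a generic illustration, not the paper's specific branch-and-bound plus fusion procedure.

```python
import numpy as np

rng = np.random.default_rng(13)

y_true = rng.normal(size=100)                     # validation targets
# Predictions from three candidate sub-models with different error levels.
preds = [y_true + rng.normal(scale=s, size=100) for s in (0.2, 0.5, 1.0)]

# Adaptive weights: inverse mean-squared error, normalized to sum to one.
mse = np.array([np.mean((p - y_true) ** 2) for p in preds])
weights = (1 / mse) / (1 / mse).sum()

fused = sum(w * p for w, p in zip(weights, preds))
print("weights:", np.round(weights, 3))
print("fused MSE:", np.mean((fused - y_true) ** 2), "best single MSE:", mse.min())
```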

  10. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  11. EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON ...

    African Journals Online (AJOL)

    EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON OUTPUT OF ... the use of Ordinary Least Square (OLS) estimation technique was used in analyzing ... frequency of cutting that would produce maximum output of the vegetable as ...

  12. Characteristic of selected frequency luminescence for samples collected in deserts north to Beijing

    International Nuclear Information System (INIS)

    Li Dongxu; Wei Mingjian; Wang Junping; Pan Baolin; Zhao Shiyuan; Liu Zhaowen

    2009-01-01

    Surface sand samples were collected at eight sites in the Horqin and Otindag deserts located north of Beijing. A BG2003 luminescence spectrograph was used to analyze the emitted photons, and characteristic spectra of the selected-frequency luminescence were obtained. High intensities of emitted photons were found under thermal stimulation from 85 °C to 135 °C and from 350 °C to 400 °C; these belong to the traps at 4.13 eV (300 nm), 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm), while the photons emitted under green-laser stimulation belong to the traps at 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm). The sand samples from the eight sites respond to increases in a given radiation dose at each wavelength, so the characteristic spectrum can provide a radiation dosimetry basis for dating. The spectra also show definite district-specific characteristics. (authors)

  13. Elastic-plastic response characteristics during frequency nonstationary waves

    International Nuclear Information System (INIS)

    Miyama, T.; Kanda, J.; Iwasaki, R.; Sunohara, H.

    1987-01-01

    The purpose of this paper is to study the fundamental effects of frequency nonstationarity on inelastic responses. First, the inelastic response characteristics are examined by applying stationary waves. Then, a simple representation of nonstationary characteristics is considered for general nonstationary input. The effects of frequency nonstationarity on the response are summarized for inelastic systems. The inelastic response characteristics under white noise and a simple frequency-nonstationary wave were investigated, and the conclusions can be summarized as follows. 1) The maximum response values for both the BL model and the OO model correspond fairly well with those estimated from the energy constant law, even when R is small. For the OO model, the maximum displacement response forms a unique curve except for very small R. 2) The plastic deformation for the BL model is affected by wide frequency components as R decreases. The plastic deformation for the OO model can be determined from the last stiffness. 3) The inelastic response of the BL model is considerably affected by the frequency nonstationarity of the input motion, while the response of the OO model is less affected by the nonstationarity. (orig./HP)

  14. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  15. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
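    The deterministic bound attributed to McGarr (2014) above is simple enough to sketch directly. The snippet below assumes SI units and converts moment to moment magnitude with the standard Mw = (log10 M0 - 9.1)/1.5 relation, which the abstract does not state; the numbers in the example are purely illustrative.

```python
import math

def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    """Upper bound on seismic moment (N*m) and moment magnitude for
    fluid-injection-induced earthquakes per McGarr (2014): M0_max = G * dV."""
    m0_max = shear_modulus_pa * injected_volume_m3
    mw_max = (math.log10(m0_max) - 9.1) / 1.5   # standard moment-magnitude scale
    return m0_max, mw_max

# Example: G = 30 GPa, net injected volume = 100,000 m^3
print(mcgarr_max_magnitude(3.0e10, 1.0e5))      # ~(3e15 N*m, Mw ~ 4.3)
```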

  16. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    Energy Technology Data Exchange (ETDEWEB)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S. [GUAS, Tsukuba, Ibaraki 305-0801 (Japan); KEK, Tsukuba, Ibaraki 305-0801 (Japan); KAKEN Inc., Hokota, Ibaraki 311-1416 (Japan); GUAS, Tsukuba, Ibaraki 305-0801 (Japan) and KEK, Tsukuba, Ibaraki 305-0801 (Japan)

    2010-07-15

    The performance of superconducting radio frequency niobium cavities is sometimes limited by contaminations present on the cavity surface. In recent years extensive research has been done to enhance the cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcoholic rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb{sub 2}O{sub 5}) along with carbon; in addition, small quantities of sulfur and fluorine were also found in secondary ion mass spectroscopy (SIMS) analysis. In this article, the authors present the analyses of surface contaminations for a series of EPed niobium samples located at various positions of a single cell niobium cavity followed by ultrapure water rinsing, as well as their endeavor to understand the aging effect of the EP acid solution in terms of the contaminations present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  17. Single crystal growth and nonlinear optical properties of Nd3+ doped STGS crystal for self-frequency-doubling application

    Science.gov (United States)

    Chen, Feifei; Wang, Lijuan; Wang, Xinle; Cheng, Xiufeng; Yu, Fapeng; Wang, Zhengping; Zhao, Xian

    2017-11-01

    The self-frequency-doubling crystal is an important kind of multi-functional crystal material. In this work, Nd3+ doped Sr3TaGa3Si2O14 (Nd:STGS) single crystals were successfully grown by the Czochralski pulling method, and the nonlinear and laser-frequency-doubling properties of the Nd:STGS crystals were studied. Continuous-wave lasing at 1064 nm was demonstrated along different physical axes, where the maximum output power of 295 mW was obtained for the Z-cut sample, much higher than for the Y-cut (242 mW) and X-cut (217 mW) samples. Based on the measured refractive indexes, the phase matching directions were discussed and determined for type I (42.5°, 30°) and type II (69.5°, 0°) crystal cuts. As expected, self-frequency-doubled green laser emission at 529 nm was achieved, with output powers of around 16 mW and 12 mW for the type I and type II configurations, respectively.

  18. Measuring Coupling of Rhythmical Time Series Using Cross Sample Entropy and Cross Recurrence Quantification Analysis

    Directory of Open Access Journals (Sweden)

    John McCamley

    2017-01-01

    Full Text Available The aim of this investigation was to compare and contrast the use of cross sample entropy (xSE) and cross recurrence quantification analysis (cRQA) measures for the assessment of coupling of rhythmical patterns. Measures were assessed using simulated signals with regular, chaotic, and random fluctuations in frequency, amplitude, and a combination of both. Biological data were studied as models of normal and abnormal locomotor-respiratory coupling. Nine signal types were generated for seven frequency ratios. Fifteen patients with COPD (abnormal coupling) and twenty-one healthy controls (normal coupling) walked on a treadmill at three speeds while breathing and walking were recorded. xSE and the cRQA measures of percent determinism, maximum line, mean line, and entropy were quantified for both the simulated and experimental data. In the simulated data, xSE, percent determinism, and entropy were influenced by the frequency manipulation. The 1 : 1 frequency ratio was different from the other frequency ratios for almost all measures and/or manipulations. The patients with COPD used a 2 : 3 ratio more often, and xSE, percent determinism, maximum line, mean line, and cRQA entropy were able to discriminate between the groups. Analysis of the effects of walking speed indicated that all measures were able to discriminate between speeds.
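    Cross sample entropy as used above can be sketched compactly from its usual template-matching definition (embedding dimension m, Chebyshev tolerance r on z-scored series); the parameter values and toy signals below are illustrative, not those of the study.

```python
import numpy as np

def cross_sample_entropy(u, v, m=2, r=0.2):
    """xSE = -ln(A/B), where B and A count template matches of length m and
    m+1 between the two (z-scored) series under a Chebyshev tolerance r."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)

    def match_count(length):
        n = min(len(u), len(v)) - m          # same number of templates for both lengths
        count = 0
        for i in range(n):
            for j in range(n):
                if np.max(np.abs(u[i:i + length] - v[j:j + length])) <= r:
                    count += 1
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

# Example: two weakly shifted, noisy oscillations (coupled rhythms)
t = np.arange(300)
x = np.sin(2 * np.pi * t / 30) + 0.05 * np.random.default_rng(4).standard_normal(300)
y = np.sin(2 * np.pi * t / 30 + 0.1) + 0.05 * np.random.default_rng(5).standard_normal(300)
print(cross_sample_entropy(x, y))
```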

  19. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    Science.gov (United States)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.

  20. Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power

    DEFF Research Database (Denmark)

    Yang, Yongheng; Wang, Huai; Blaabjerg, Frede

    2014-01-01

    Grid operation experiences have revealed the necessity to limit the maximum feed-in power from PV inverter systems under a high penetration scenario in order to avoid voltage and frequency instability issues. A Constant Power Generation (CPG) control method has been proposed at the inverter level. The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It limits the maximum feed-in power to the electric grid and also improves the utilization of PV inverters. As a further study, this paper investigates the reliability performance of ... devices, allowing a quantitative prediction of the power device lifetime. A study case on a 3 kW single-phase PV inverter has demonstrated the advantages of the CPG control in terms of improved reliability.
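    The constant power generation idea summarized above amounts to clipping the power reference at the feed-in limit while otherwise tracking the maximum power point. A minimal sketch, with an illustrative limit value rather than any figure from the paper:

```python
def cpg_reference_power(p_mpp_w, p_limit_w=2000.0):
    """Inverter power reference under Constant Power Generation: track the
    maximum power point unless it exceeds the feed-in limit, then clip."""
    return min(p_mpp_w, p_limit_w)

# Example over a few instants of fluctuating available PV power (W)
for p_avail in (500.0, 1800.0, 2600.0, 3000.0):
    print(p_avail, "->", cpg_reference_power(p_avail))
```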

  1. Frequency of aggressive behaviors in a nationally representative sample of Iranian children and adolescents: The CASPIAN-IV study

    Directory of Open Access Journals (Sweden)

    Morteza Sadinejad

    2015-01-01

    Full Text Available Background: This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. Methods: This nationwide study was performed on a multi-stage sample of 6-18-year-old students living in 30 provinces of Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying and being bullied, in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. Results: In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). In total, physical fighting was more prevalent among boys than girls (48% vs. 31%, P < 0.001). The two other behaviors, being bullied and bullying classmates, were also more frequent among boys than girls (29% vs. 25%, P < 0.001 for being bullied; 20% vs. 14%, P < 0.001 for bullying others). Physical fighting was more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was more common among urban students (27% vs. 26%, P = 0.69). Conclusions: Although the frequency of aggressive behaviors in this study was lower than in many other populations, these findings emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression prevention programming are recommended.

  2. Improved MIMO radar GMTI via cyclic-shift transmission of orthogonal frequency division signals

    Science.gov (United States)

    Li, Fuyou; He, Feng; Dong, Zhen; Wu, Manqing

    2018-05-01

    Minimum detectable velocity (MDV) and maximum detectable velocity are both important in ground moving target indication (GMTI) systems. A smaller MDV can be achieved through a longer baseline via multiple-input multiple-output (MIMO) radar. The maximum detectable velocity is determined by the blind velocities associated with the carrier frequencies, and blind velocities can be mitigated by orthogonal frequency division signals. However, the scattering echoes from different carrier frequencies are independent, which is not helpful for improving MDV performance. An improved cyclic-shift transmission is applied in the MIMO GMTI system in this paper. MDV performance is improved due to the longer baseline, and maximum detectable velocity performance is improved due to the mitigation of blind velocities via multiple carrier frequencies. The signal model for this mode is established; the principle of mitigating blind velocities with orthogonal frequency division signals is presented; the performance of different MIMO GMTI waveforms is analysed; and the performance of different array configurations is analysed. Simulation results based on space-time-frequency adaptive processing prove that the proposed method is a valid way to improve GMTI performance.
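    The blind velocities mentioned above follow from Doppler aliasing onto multiples of the pulse repetition frequency, v_blind = k·λ·PRF/2, which is why operating on several carrier frequencies shifts the blind-velocity set. A short numeric illustration with made-up radar parameters:

```python
def blind_velocities(carrier_freq_hz, prf_hz, k_max=3, c=3.0e8):
    """Radial velocities at which the Doppler shift aliases onto multiples of
    the PRF, so a moving target is indistinguishable from stationary clutter:
    v_blind = k * lambda * PRF / 2."""
    wavelength = c / carrier_freq_hz
    return [k * wavelength * prf_hz / 2.0 for k in range(1, k_max + 1)]

# Two carrier frequencies -> different blind-velocity sets, so a target blind
# at one frequency generally remains visible at the other.
print(blind_velocities(10.0e9, 2000.0))   # illustrative X-band parameters
print(blind_velocities(9.6e9, 2000.0))
```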

  3. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Science.gov (United States)

    Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.

    2016-03-01

    Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG
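    The perturb-and-observe logic discussed above can be sketched in a few lines: perturb the operating point, observe the power change, and keep or reverse the perturbation direction accordingly. The source model, step size and names below are illustrative assumptions, not the paper's implementation.

```python
def perturb_and_observe(measure_power, v_start=1.0, dv=0.05, n_steps=100):
    """Classic P&O MPPT: perturb the operating voltage and keep moving in the
    direction that increased power; reverse the direction otherwise."""
    v, p_prev, direction = v_start, measure_power(v_start), +1
    for _ in range(n_steps):
        v += direction * dv                 # perturb
        p = measure_power(v)                # observe
        if p < p_prev:                      # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

# Example with a toy source whose power peaks at 2.5 V
mpp_v = perturb_and_observe(lambda v: -(v - 2.5) ** 2 + 10.0)
print(mpp_v)
```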

  4. PATTERNS OF THE MAXIMUM RAINFALL AMOUNTS REGISTERED IN 24 HOURS WITHIN THE OLTENIA PLAIN

    Directory of Open Access Journals (Sweden)

    ALINA VLĂDUŢ

    2012-03-01

    Full Text Available Patterns of the maximum rainfall amounts registered in 24 hours within the Oltenia Plain. The present study aims at rendering the main features of the maximum rainfall amounts registered in 24 h within the Oltenia Plain. We used 30-year time series (1980-2009) for seven meteorological stations. Generally, the maximum amounts in 24 h display the same pattern as the monthly mean amounts, namely higher values in the interval May-October. In terms of mean values, the highest amounts are registered in the western and northern extremity of the plain. The maximum values generally exceed 70 mm at all meteorological stations: D.T. Severin, 224 mm, July 1999; Slatina, 104.8 mm, August 2002; Caracal, 92.2 mm, July 1991; Bechet, 80.8 mm, July 2006; Craiova, 77.6 mm, April 2003. During the cold season, a greater uniformity was noticed all over the plain, owing to the cyclonic origin of rainfall, compared to the warm season, when thermal convection is quite active and triggers local showers. In order to better emphasize the peculiarities of this parameter, we have calculated the frequency in different value classes (eight classes), as well as the probability of appearance of different amounts. It resulted that the highest frequency (25-35%) is held by the first two classes of values (0-10 mm; 10.1-20 mm). The lowest frequency is registered for amounts of more than 100 mm, which generally display a probability of occurrence of less than 1%, and only in the western and eastern extremities of the plain.

  5. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  6. Y-STR frequency surveying method

    DEFF Research Database (Denmark)

    Willuweit, Sascha; Caliebe, Amke; Andersen, Mikkel Meyer

    2011-01-01

    Reasonable formalized methods to estimate the frequencies of DNA profiles generated from lineage markers have been proposed in past years and discussed in the forensic community. Recently, collections of population data on the frequencies of variations in Y chromosomal STR profiles have reached a new quality with the establishment of the comprehensive, carefully quality-controlled reference database YHRD. Grounded in such unrivalled empirical material from hundreds of population studies, the core assumptions of the Haplotype Frequency Surveying Method, originally described 10 years ago, can be tested and improved. Here we provide new approaches to calculate the parameters used in the frequency surveying method: a maximum likelihood estimation of the regression parameters (r1, r2, s1 and s2) and a revised Frequency Surveying framework with variable binning and a database preprocessing to take...

  7. Evolution of concentration-discharge relations revealed by high frequency diurnal sampling of stream water during spring snowmelt

    Science.gov (United States)

    Olshansky, Y.; White, A. M.; Thompson, M.; Moravec, B. G.; McIntosh, J. C.; Chorover, J.

    2017-12-01

    Concentration-discharge (C-Q) relations contain potentially important information on critical zone (CZ) processes including weathering reactions, water flow paths and nutrient export. To examine the C-Q relations in a small (3.3 km2) headwater catchment - La Jara Creek, located in the Jemez River Basin Critical Zone Observatory - daily diurnal stream water samples were collected during the 2017 spring snowmelt from two flumes located at the outlets of La Jara Creek and of a high-elevation zero-order basin within the catchment. Previous studies from this site (McIntosh et al., 2017) suggested that high frequency sampling was needed to improve our interpretation of C-Q relations. The dense sampling covered two ascending and two descending limbs of the snowmelt hydrograph, from March 1 to May 15, 2017. While Na showed an inverse correlation (dilution) with discharge, most other solutes (K, Mg, Fe, Al, dissolved organic carbon) exhibited positive (concentration) or chemostatic trends (Ca, Mn, Si, dissolved inorganic carbon and dissolved nitrogen). Hysteresis in the C-Q relation was most pronounced for bio-cycled cations (K, Mg) and for Fe, which exhibited concentration during the first ascending limb followed by a chemostatic trend. A pulsed increase in Si concentration immediately after the first ascending limb in both flumes suggests mixing of deep groundwater with surface water. A continual increase in Ge/Si followed by a rapid decrease after the second rising limb may suggest a fast transition from soil water to ground water dominating the stream flow. Fourier transform infrared spectroscopy of selected samples across the hydrograph demonstrated pronounced changes in dissolved organic matter molecular composition with the advancement of the spring snowmelt. X-ray micro-spectroscopy of colloidal material isolated from the collected water samples indicated a significant role for organic matter in the transport of inorganic colloids. Analyses of high

  8. Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Bo-Hee Choi

    2016-01-01

    Full Text Available This paper presents a new design method of asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator. The maximum PTE for relay resonators is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relays is conducted through both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitances are optimally designed by the GA for each relay position, and the maximum efficiency is then obtained at the optimum placement of the relays. The capacitances of the second to the nth resonator and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.

  9. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  10. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
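    The classical Hershfield frequency-factor estimate that the revised method builds on can be sketched as the mean plus a frequency factor times the standard deviation of the annual-maximum series; the factor K_m below is a user-supplied assumption (Hershfield's original work used values up to about 15), and the revision by Lan et al. (2017) alters how that factor is obtained.

```python
import numpy as np

def hershfield_pmp(annual_max_24h_mm, k_m=15.0):
    """Hershfield-type PMP estimate: PMP = mean + K_m * standard deviation of
    the annual maximum series (units follow the input series)."""
    x = np.asarray(annual_max_24h_mm, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

# Example with a hypothetical 30-year record of annual 24 h maxima (mm)
record = np.random.default_rng(0).gamma(shape=4.0, scale=20.0, size=30)
print(hershfield_pmp(record))
```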

  11. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which several important distributions stem. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢0 law. This paper deals with amplitude data, so the 𝒢A0 distribution will be used. The literature reports that techniques for obtaining estimates of the parameters of the 𝒢A0 distribution (maximum likelihood, based on moments, and based on order statistics) require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternated optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data are successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  12. Maximum generation power evaluation of variable frequency offshore wind farms when connected to a single power converter

    Energy Technology Data Exchange (ETDEWEB)

    Gomis-Bellmunt, Oriol; Sumper, Andreas [Centre d' Innovacio Tecnologica en Convertidors Estatics i Accionaments (CITCEA-UPC), Universitat Politecnica de Catalunya UPC, Av. Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); IREC Catalonia Institute for Energy Research, Barcelona (Spain); Junyent-Ferre, Adria; Galceran-Arellano, Samuel [Centre d' Innovacio Tecnologica en Convertidors Estatics i Accionaments (CITCEA-UPC), Universitat Politecnica de Catalunya UPC, Av. Diagonal, 647, Pl. 2, 08028 Barcelona (Spain)

    2010-10-15

    The paper deals with the evaluation of the power generated by variable and constant frequency offshore wind farms connected to a single large power converter. A methodology to analyze different wind speed scenarios and system electrical frequencies is presented and applied to a case study, where it is shown that the variable frequency wind farm concept (VF) with a single power converter obtains 92% of the total available power obtained with individual power converters in each wind turbine (PC). The PC scheme needs multiple power converters, implying drawbacks in terms of cost, maintenance and reliability. The VF scheme is also compared to a constant frequency scheme (CF), and it is shown that a significant power increase of more than 20% can be obtained with VF. The case study considers a wind farm composed of four wind turbines based on synchronous generators. (author)

  13. High frequency electromagnetic reflection loss performance of substituted Sr-hexaferrite nanoparticles/SWCNTs/epoxy nanocomposite

    Energy Technology Data Exchange (ETDEWEB)

    Gordani, Gholam Reza, E-mail: gordani@gmail.com [Materials Engineering Department, Malek Ashtar University of Technology, Shahin Shahr (Iran, Islamic Republic of); Ghasemi, Ali [Materials Engineering Department, Malek Ashtar University of Technology, Shahin Shahr (Iran, Islamic Republic of); Saidi, Ali [Department of Materials Science and Engineering, Isfahan University of Technology, Isfahan (Iran, Islamic Republic of)

    2015-10-01

    In this study, the electromagnetic properties of a novel nanocomposite material made of substituted Sr-hexaferrite nanoparticles and different percentages of single-walled carbon nanotubes have been studied. The structural, magnetic and electromagnetic properties of the samples were studied as a function of the volume percentage of SWCNTs by X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, vibrating sample magnetometry and vector network analysis. Suitable crystallinity of the hexaferrite nanoparticles was confirmed by the XRD patterns. TEM and FESEM micrographs showed good homogeneity and a high level of dispersion of the SWCNTs and Sr-hexaferrite nanoparticles in the nanocomposite samples. The VSM results showed that with an increasing amount of CNTs (0–6 vol%), the saturation magnetization decreased, reaching 11 emu/g for the nanocomposite sample containing 6 vol% of SWCNTs. The vector network analysis results show that the maximum reflection loss was −36.4 dB at a frequency of 11 GHz, with an absorption bandwidth of more than 4 GHz (<−20 dB). The results indicate that this nanocomposite material with an appropriate amount of SWCNTs holds great promise for microwave device applications. - Highlights: • We investigate the high frequency properties of Sr-hexaferrite/SWCNTs composites. • The saturation magnetization of the nanocomposites decreases in the presence of SWCNTs. • The ferrite/CNTs nanocomposite sample covers the whole X-band frequency range (8–12 GHz). • The ferrite/CNTs nanocomposite can be used as a potential magnetic loss material. • The nanocomposite containing 4 vol% of CNTs shows greater than 99% reflection loss.

  14. Detection of Ochratoxin A in bread samples in Shahrekord city, Iran, 2011-2012

    Directory of Open Access Journals (Sweden)

    Mehran Erfani

    2013-12-01

    Results: Ochratoxin A was detected in 45 out of the 86 bread samples (52.3%). Levels of OTA in positive samples ranged between 0.19 and 10.37 ng/g, and the average contamination of all positive samples was 3.04 ng/g. The highest frequency of positive samples was found in machine-made Taftoon (88.8%) and Lavash bread (81.8%). The most contaminated sample (5.39 ng/g) was found in the Iranian Lavash bread. Fifteen of the positive samples exceeded the maximum level of 5 ng/g set by European regulations for OTA in cereal and bread. Conclusion: The results of this study indicated that contamination levels of ochratoxin A were high in part of the samples (17.4%). Bread and cereals are considered to be the main and predominant ingredients of Iranian food; therefore, their contamination can have a long-term negative impact on people's health.

  15. Resonant difference-frequency atomic force ultrasonic microscope

    Science.gov (United States)

    Cantrell, John H. (Inventor); Cantrell, Sean A. (Inventor)

    2010-01-01

    A scanning probe microscope and methodology called resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM), employs an ultrasonic wave launched from the bottom of a sample while the cantilever of an atomic force microscope, driven at a frequency differing from the ultrasonic frequency by one of the contact resonance frequencies of the cantilever, engages the sample top surface. The nonlinear mixing of the oscillating cantilever and the ultrasonic wave in the region defined by the cantilever tip-sample surface interaction force generates difference-frequency oscillations at the cantilever contact resonance. The resonance-enhanced difference-frequency signals are used to create images of nanoscale near-surface and subsurface features.

  16. Frequency dependent characteristics of solar impulsive radio bursts

    International Nuclear Information System (INIS)

    Das, T.K.; Das Gupta, M.K.

    1983-01-01

    An investigation was made of the impulsive radio bursts observed in the frequency range 0.245 to 35 GHz. The important results obtained are: (i) Simple type 1 bursts with intensities 0 to 10 f.u. and simple type 2 bursts with intensities 10 to 500 f.u. are predominant in the frequency ranges 1.415 to 4.995 GHz and 4.995 to 8.8 GHz, respectively; (ii) With maxima around 2.7 GHz and 4 GHz for the first and second types respectively, the durations of the radio bursts decrease gradually towards both lower and higher frequencies; (iii) As regards occurrences, the first type dominates in the southern solar hemisphere, peaking around 8.8 GHz, whereas the second type favours the north with no well-defined maximum at any frequency; (iv) Both types prefer the eastern hemisphere, the peak occurrences being around 8.8 GHz and 5 GHz for the two types, respectively; (v) The spectra of impulsive radio bursts are generally of the inverted-U type, with the maximum emission intensity between 5 and 15 GHz. (author)

  17. Examination into the maximum rotational frequency for an in-plane switched active waveplate device

    International Nuclear Information System (INIS)

    Davidson, A J; Elston, S J; Raynes, E P

    2005-01-01

    An examination of an active waveplate device using a one-dimensional model, giving numerical and analytical results, is presented. The model calculates the director and twist configuration by minimizing the free energy of the system with simple homeotropic boundary conditions. The effect of varying the in-plane electric field in both magnitude and direction is examined, and it is shown that the twist through the cell is constant in time as the field is rotated. As the electric field is rotated, the director field lags behind by an angle which increases as the frequency of the electric field rotation increases. When this angle reaches approximately π/4, the director field no longer follows the electric field in a uniform way. Using mathematical analysis it is shown that the conditions under which the director profile fails to follow the rotating electric field depend on the frequency of electric field rotation, the magnitude of the electric field, the dielectric anisotropy and the viscosity of the liquid crystal.

  18. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, its error lies very close to the Cramer-Rao lower bound (CRLB), which is dependent on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only keeps excellent capabilities for generalizing and fitting but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
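    For comparison with the LS-SVR interpolator proposed above, the usual two-step DFT approach can be sketched as a coarse peak search followed by parabolic (three-point) interpolation of the magnitude spectrum; this is a generic refinement, not the paper's method, and the signal parameters are illustrative.

```python
import numpy as np

def estimate_frequency(x, fs, n_fft=4096):
    """Coarse DFT peak search refined by parabolic (three-point) interpolation."""
    spectrum = np.abs(np.fft.rfft(x, n_fft))
    k = int(np.argmax(spectrum[1:-1])) + 1          # coarse bin (skip DC and the edge)
    a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)         # fractional-bin correction
    return (k + delta) * fs / n_fft

# Example: a 1 kHz-sampled sinusoid at 123.4 Hz in white noise
fs, n = 1000.0, 512
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.4 * t) + 0.1 * np.random.default_rng(1).standard_normal(n)
print(estimate_frequency(x, fs))
```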

  19. Zoonotic species of the genus Arcobacter in poultry from different regions of Costa Rica: frequency of isolation and comparison of two types of sampling

    International Nuclear Information System (INIS)

    Valverde Bogantes, Esteban

    2014-01-01

    The presence of the zoonotic species of Arcobacter is evaluated in laying hens, broilers, ducks and geese of Costa Rica. The frequency of isolation of the genus Arcobacter is determined in samples of poultry using culture methods and molecular techniques. The performance of cloacal swab sampling and fecal collection from poultry is compared for the isolation of Arcobacter. The isolation frequencies of Arcobacter species in poultry indicate a potential public health problem in Costa Rica. Poultry are identified as sources of contamination and dispersion of the bacteria [es

  20. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameters. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5], using the standard Proctor compaction method, carried out an ...

  1. Frequency and antimicrobial susceptibility of acinetobacter species isolated from blood samples of paediatric patients

    International Nuclear Information System (INIS)

    Javed, A.; Zafar, A.; Ejaz, H.; Zubair, M.

    2012-01-01

    Objective: Acinetobacter species is a major nosocomial pathogen causing serious infections in immunocompromised and hospitalized patients. The aim of this study was to determine the frequency and antimicrobial susceptibility pattern of Acinetobacter species in blood samples of paediatric patients. Methodology: This cross-sectional observational study was conducted from January to October, 2011 at The Children's Hospital and Institute of Child Health, Lahore. A total of 12,032 blood samples were analysed during the study period. Antimicrobial susceptibility of the Acinetobacter species was determined by the Kirby-Bauer disc diffusion method. Results: Growth was observed in 1,141 blood cultures, out of which 46 (4.0%) were Acinetobacter species. The gender distribution of Acinetobacter species was 29 (63.0%) in males and 17 (37.0%) in females. A good antimicrobial susceptibility pattern of Acinetobacter species was seen with sulbactam-cefoperazone (93.0%) and with imipenem and meropenem (82.6%), whereas susceptibility to ... (30.4%) was poor. Conclusion: The results of the present study show a high rate of resistance of Acinetobacter species to cephalosporins in nosocomial infections. Sulbactam-cefoperazone, the carbapenems and piperacillin-tazobactam showed effective antimicrobial susceptibility against Acinetobacter species. (author)

  2. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
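    The moment-matching gradient ascent underlying such maximum entropy fits can be sketched for a pairwise (Ising) model; for brevity the sketch below computes model expectations by exact enumeration over a handful of units instead of the Gibbs sampling and rectification discussed in the paper, and all names and data are illustrative.

```python
import numpy as np
from itertools import product

def fit_ising(data, lr=0.1, n_iter=500):
    """Gradient ascent on the log-likelihood of a pairwise Ising model:
    adjust fields h and couplings J until model means <s_i> and pairwise
    correlations <s_i s_j> match those of the data."""
    n = data.shape[1]
    states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    h, J = np.zeros(n), np.zeros((n, n))
    emp_m = data.mean(0)
    emp_c = data.T @ data / len(data)
    for _ in range(n_iter):
        logw = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(logw - logw.max())
        p /= p.sum()                                   # exact model distribution
        mod_m = p @ states
        mod_c = states.T @ (states * p[:, None])
        h += lr * (emp_m - mod_m)                      # match the means
        J += lr * (emp_c - mod_c)                      # match the correlations
        np.fill_diagonal(J, 0.0)
    return h, J

# Example: 4 binary (+/-1) units, 1000 samples drawn uniformly at random
data = np.random.default_rng(0).choice([-1.0, 1.0], size=(1000, 4))
h, J = fit_ising(data)
```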

  3. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
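    The pole-frequency correspondence described above can be made concrete by fitting an AR model through the Yule-Walker equations and reading frequencies off the pole angles; the AR order and toy signal below are illustrative assumptions.

```python
import numpy as np

def ar_pole_frequencies(x, order, fs=1.0):
    """Fit an AR(order) model via the Yule-Walker equations and return the
    frequencies (Hz) implied by the angles of its z-plane poles."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)   # autocovariance
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                      # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))               # 1 - a1 z^-1 - ... = 0
    return np.abs(np.angle(poles)) * fs / (2 * np.pi), poles

# Example: a damped sinusoid at 0.1 of the sampling rate
n = np.arange(400)
x = np.exp(-0.005 * n) * np.cos(2 * np.pi * 0.1 * n)
freqs, poles = ar_pole_frequencies(x, order=2)
print(freqs)
```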

  4. THE IMPACT OF FREQUENCY STANDARDS ON COHERENCE IN VLBI AT THE HIGHEST FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Rioja, M.; Dodson, R. [ICRAR, University of Western Australia, Perth (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hartnett, J. [School of Physics, University of Western Australia, Perth (Australia); Tingay, S., E-mail: maria.rioja@icrar.org [ICRAR, Curtin University, Perth (Australia)

    2012-10-01

    We have carried out full imaging simulation studies to explore the impact of frequency standards in millimeter and submillimeter very long baseline interferometry (VLBI), focusing on the coherence time and sensitivity. In particular, we compare the performance of the H-maser, traditionally used in VLBI, to that of ultra-stable cryocooled sapphire oscillators over a range of observing frequencies, weather conditions, and analysis strategies. Our simulations show that at the highest frequencies, the losses induced by H-maser instabilities are comparable to those from high-quality tropospheric conditions. We find significant benefits in replacing H-masers with cryocooled sapphire oscillator based frequency references in VLBI observations at frequencies above 175 GHz in sites which have the best weather conditions; at 350 GHz we estimate a 20%-40% increase in sensitivity over that obtained when the sites have H-masers, for coherence losses of 20%-10%, respectively. Maximum benefits are to be expected by using co-located Water Vapor Radiometers for atmospheric correction. In this case, we estimate a 60%-120% increase in sensitivity over the H-maser at 350 GHz.

  5. Effects of diurnal emission patterns and sampling frequency on precision of measurement methods for daily ammonia emissions from animal houses

    NARCIS (Netherlands)

    Estelles, F.; Calvet, S.; Ogink, N.W.M.

    2010-01-01

    Ammonia concentrations and airflow rates are the main parameters needed to determine ammonia emissions from animal houses. It is possible to classify their measurement methods into two main groups according to the sampling frequency: semi-continuous and daily average measurements. In the first

  6. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    Science.gov (United States)

    Villarini, Gabriele; Smith, James A.; Serinaldi, Francesco; Bales, Jerad; Bates, Paul D.; Krajewski, Witold F.

    2009-08-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km²) show that the maximum flow with a 0.01 annual exceedance probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m³ s⁻¹ km⁻² to a maximum of 5.1 m³ s⁻¹ km⁻². An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m³ s⁻¹ km⁻²). Under nonstationary conditions, alternative definitions of return period should be adopted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m³ s⁻¹ km⁻² ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades.

  7. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
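    The time-arrival-difference idea behind this work can be sketched without the proposed maximum likelihood window: estimate the delay from the plain cross-correlation peak and convert it to a leak position between the two sensors. Sensor spacing, wave speed and the synthetic signals below are illustrative.

```python
import numpy as np

def estimate_delay(x1, x2, fs):
    """Delay (s) of the signal at sensor 1 relative to sensor 2, taken from the
    cross-correlation peak (positive if the signal arrives later at sensor 1)."""
    xc = np.correlate(x1 - x1.mean(), x2 - x2.mean(), mode='full')
    lag = int(np.argmax(xc)) - (len(x2) - 1)
    return lag / fs

def leak_position(delay_s, sensor_spacing_m, wave_speed_m_s):
    """Distance (m) of the leak from sensor 1 for sensors spaced along a pipe."""
    return 0.5 * (sensor_spacing_m + wave_speed_m_s * delay_s)

# Example: leak noise arrives 2 ms earlier at sensor 1; c = 1200 m/s, D = 300 m
fs = 10_000.0
noise = np.random.default_rng(2).standard_normal(4000)
x1, x2 = noise[20:3020], noise[0:3000]       # sensor 2 lags sensor 1 by 20 samples
d = estimate_delay(x1, x2, fs)               # ~ -0.002 s
print(d, leak_position(d, 300.0, 1200.0))    # leak slightly closer to sensor 1
```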

  8. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  9. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is an agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark, but the real normal child's bladder capacity is unknown. To offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. Epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years with no urologic abnormality, attending Primary Care centres, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum of 24 hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors and are different. Nocturnal and daytime maximum voided volumes should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  10. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  11. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
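    For a sense of how such a prior regularizes the estimate when samples are scarce, the sketch below uses the closely related conjugate inverse Wishart prior (not the power inverse Wishart of the paper), for which the MAP estimate has a simple closed form under a known zero mean; all parameter values are illustrative.

```python
import numpy as np

def map_covariance(X, psi, nu):
    """MAP covariance under an inverse-Wishart(psi, nu) prior with known zero
    mean: the posterior is IW(psi + S, nu + n), whose mode is
    (psi + S) / (nu + n + p + 1)."""
    n, p = X.shape
    S = X.T @ X                                   # scatter matrix of the samples
    return (psi + S) / (nu + n + p + 1)

# Example: p = 10 dimensions but only n = 5 samples, weak spherical prior
rng = np.random.default_rng(3)
X = rng.standard_normal((5, 10))
sigma_map = map_covariance(X, psi=np.eye(10), nu=12)   # nu > p + 1 for a proper mode
```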

  12. Quantitative Assessment of Detection Frequency for the INL Ambient Air Monitoring Network

    Energy Technology Data Exchange (ETDEWEB)

    Sondrup, A. Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rood, Arthur S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-11-01

    Emission Standards for Hazardous Air Pollutants maximum exposed individual location (i.e., Frenchman’s Cabin) was no more than 0.1 mrem yr–1 (i.e., 1% of the 10 mrem yr–1 standard). Detection frequencies were calculated separately for the onsite and offsite monitoring network. As expected, detection frequencies were generally less for the offsite sampling network compared to the onsite network. Overall, the monitoring network is very effective at detecting the potential releases of Cs-137 or Sr-90 from all sources/facilities using either the ESER or BEA MDAs. The network was less effective at detecting releases of Pu-239. Maximum detection frequencies for Pu-239 using ESER MDAs ranged from 27.4 to 100% for onsite samplers and 3 to 80% for offsite samplers. Using BEA MDAs, the maximum detection frequencies for Pu-239 ranged from 2.1 to 100% for onsite samplers and 0 to 5.9% for offsite samplers. The only release that was not detected by any of the samplers under any conditions was a release of Pu-239 from the Idaho Nuclear Technology and Engineering Center main stack (CPP-708). The methodology described in this report could be used to improve sampler placement and detection frequency, provided clear performance objectives are defined.

  13. Robust Frequency and Voltage Stability Control Strategy for Standalone AC/DC Hybrid Microgrid

    Directory of Open Access Journals (Sweden)

    Furqan Asghar

    2017-05-01

    Full Text Available The microgrid (MG) concept is attracting considerable attention as a solution to energy deficiencies, especially in remote areas, but the intermittent nature of renewable sources and varying loads cause many control problems and thereby affect the quality of power within a microgrid operating in standalone mode. This might cause large frequency and voltage deviations in the system due to unpredictable output power fluctuations. Furthermore, without any main grid support, it is more complex to control and manage the system. In the past, droop control and various other coordination control strategies have been presented to stabilize the microgrid frequency and voltages, but in order to utilize the available resources up to their maximum capacity in a positive way, new and robust control mechanisms are required. In this paper, a standalone microgrid is presented, which integrates renewable energy-based distributed generations and local loads. A fuzzy logic-based intelligent control technique is proposed to maintain the frequency and DC (direct current)-link voltage stability for sudden changes in load or generation power. Also, from a frequency control perspective, a battery energy storage system (BESS) is suggested as a replacement for a synchronous generator to stabilize the nominal system frequency, as a synchronous generator is unable to operate at its maximum efficiency while being controlled for stabilization purposes. Likewise, a super capacitor (SC) and the BESS are used to stabilize DC bus voltages even though the maximum possible energy is being extracted from renewable generation sources using maximum power point tracking. This newly proposed control method proves to be effective by reducing transient time, minimizing frequency deviations, maintaining voltages while maximum power point tracking is working and preventing generators from exceeding their power ratings during disturbances. However, due to the BESS limited capacity, load switching

  14. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates for example time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling are evaluated with respect to accuracy and precision. Packet delay traces have been ...
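
    The two designs can be compared directly on a delay trace. A minimal sketch (not the paper's implementation) using a synthetic trace; the sampling interval, train length and number of trains are arbitrary choices:

```python
# Compare systematic sampling and cluster sampling as estimators of mean packet delay.
import numpy as np

rng = np.random.default_rng(1)
delays = rng.gamma(shape=2.0, scale=5.0, size=100_000)   # synthetic per-packet delays (ms)
true_mean = delays.mean()

# Systematic sampling: every k-th packet after a random start.
k = 100
start = rng.integers(k)
systematic = delays[start::k]

# Cluster sampling: trains of consecutive packets at random starting points.
train_len, n_trains = 20, 50
starts = rng.integers(0, delays.size - train_len, size=n_trains)
clusters = np.concatenate([delays[s:s + train_len] for s in starts])

print(f"true mean       : {true_mean:.2f} ms")
print(f"systematic mean : {systematic.mean():.2f} ms  (n={systematic.size})")
print(f"cluster mean    : {clusters.mean():.2f} ms  (n={clusters.size})")
```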

  15. Identification of hydrologic and geochemical pathways using high frequency sampling, REE aqueous sampling and soil characterization at Koiliaris Critical Zone Observatory, Crete

    Energy Technology Data Exchange (ETDEWEB)

    Moraetis, Daniel, E-mail: moraetis@mred.tuc.gr [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece); Stamati, Fotini; Kotronakis, Manolis; Fragia, Tasoula; Paranychnianakis, Nikolaos; Nikolaidis, Nikolaos P. [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece)

    2011-06-15

    Highlights: > Identification of hydrological and geochemical pathways within a complex watershed. > Water increased N-NO₃ concentration and E.C. values during flash flood events. > Soil degradation and impact on water infiltration within the Koiliaris watershed. > Analysis of Rare Earth Elements in water bodies for identification of karstic water. - Abstract: Koiliaris River watershed is a Critical Zone Observatory that represents severely degraded soils due to intensive agricultural activities and biophysical factors. It has typical Mediterranean soils under the imminent threat of desertification which is expected to intensify due to projected climate change. High frequency hydro-chemical monitoring with targeted sampling for Rare Earth Elements (REE) analysis of different water bodies and geochemical characterization of soils were used for the identification of hydrologic and geochemical pathways. The high frequency monitoring of water chemical data highlighted the chemical alterations of water in Koiliaris River during flash flood events. Soil physical and chemical characterization surveys were used to identify erodibility patterns within the watershed and the influence of soils on surface and ground water chemistry. The methodology presented can be used to identify the impacts of degraded soils to surface and ground water quality as well as in the design of methods to minimize the impacts of land use practices.

  16. A generic statistical methodology to predict the maximum pit depth of a localized corrosion process

    International Nuclear Information System (INIS)

    Jarrah, A.; Bigerelle, M.; Guillemot, G.; Najjar, D.; Iost, A.; Nianga, J.-M.

    2011-01-01

    Highlights: → We propose a methodology to predict the maximum pit depth in a corrosion process. → Generalized Lambda Distribution and the Computer Based Bootstrap Method are combined. → GLD fit a large variety of distributions both in their central and tail regions. → Minimum thickness preventing perforation can be estimated with a safety margin. → Considering its applications, this new approach can help to size industrial pieces. - Abstract: This paper outlines a new methodology to predict accurately the maximum pit depth related to a localized corrosion process. It combines two statistical methods: the Generalized Lambda Distribution (GLD), to determine a model of distribution fitting with the experimental frequency distribution of depths, and the Computer Based Bootstrap Method (CBBM), to generate simulated distributions equivalent to the experimental one. In comparison with conventionally established statistical methods that are restricted to the use of inferred distributions constrained by specific mathematical assumptions, the major advantage of the methodology presented in this paper is that both the GLD and the CBBM enable a statistical treatment of the experimental data without making any preconceived choice neither on the unknown theoretical parent underlying distribution of pit depth which characterizes the global corrosion phenomenon nor on the unknown associated theoretical extreme value distribution which characterizes the deepest pits. Considering an experimental distribution of depths of pits produced on an aluminium sample, estimations of maximum pit depth using a GLD model are compared to similar estimations based on usual Gumbel and Generalized Extreme Value (GEV) methods proposed in the corrosion engineering literature. The GLD approach is shown having smaller bias and dispersion in the estimation of the maximum pit depth than the Gumbel approach both for its realization and mean. This leads to comparing the GLD approach to the GEV one
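
    As a rough illustration of the two ingredients, the sketch below applies a nonparametric bootstrap (in the spirit of the CBBM) to a set of simulated pit depths and compares it with the conventional Gumbel extrapolation mentioned above. scipy has no Generalized Lambda Distribution fitter, so that step is omitted; all depths and sample sizes are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pit_depths = rng.weibull(1.5, size=500) * 40.0      # hypothetical pit depths (um)

# Bootstrap the deepest pit expected in a repeat sample of the same size.
boot_max = np.array([rng.choice(pit_depths, size=pit_depths.size, replace=True).max()
                     for _ in range(2000)])
print("bootstrap max depth: mean %.1f um, 95%% CI [%.1f, %.1f]" %
      (boot_max.mean(), *np.percentile(boot_max, [2.5, 97.5])))

# Conventional approach: fit a Gumbel law to block maxima and extrapolate.
blocks = pit_depths.reshape(50, 10).max(axis=1)      # maxima of 10-pit blocks
loc, scale = stats.gumbel_r.fit(blocks)
print("Gumbel 99th-percentile block maximum: %.1f um" %
      stats.gumbel_r.ppf(0.99, loc, scale))
```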

  17. Effect of misspecification of gene frequency on the two-point LOD score.

    Science.gov (United States)

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models, were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.

  18. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested by regarding another source of information, which is simulation result of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  19. High-frequency signal paths in the TMR-86.1 experimental tomography apparatus

    International Nuclear Information System (INIS)

    Obrcian, J.; Jellus, V.; Weis, J.; Frollo, I.

    1990-01-01

    The NMR-based TMR-86.1 tomography apparatus, developed at the Institute of Measurement and Measuring Instrumentation, Slovak Academy of Sciences in Bratislava, Czechoslovakia, enables imaging of the inner structure of biological samples and human limbs no more than 110 mm in diameter, using a measuring matrix containing at most 128x128 elements. The imaged matrix can possess a maximum of 256x256 image elements with 256 brightness steps. The signal paths of the high-frequency excitation-imaging complex of the apparatus are described. Some functional blocks of the apparatus can be used without substantial modifications for the imaging of larger objects such as the human body. From the point of view of the high-frequency pulses for nonselective excitation (so-called 180deg-pulses), the excitation pulse power will have to be increased to at least 1 kW. (author). 5 figs, 7 refs

  20. Estimate of annual daily maximum rainfall and intense rain equation for the Formiga municipality, MG, Brazil

    Directory of Open Access Journals (Sweden)

    Giovana Mara Rodrigues Borges

    2016-11-01

    Full Text Available Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual daily maximum rainfall as well as models of heavy rain for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order to identify the statistical distribution that best describes their exceedance probabilities. Daily rainfall disaggregation methodology was used for the intense rain model studies and adjusted with Intensity-Duration-Frequency (IDF) and Exponential models. The study found that the Gumbel model adhered best to the observed frequencies, as indicated by the Chi-squared test, and that the exponential model best conforms to the observed data for predicting intense rains.
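
    A minimal sketch of the underlying frequency analysis, assuming a Gumbel fit to a synthetic series of annual daily maxima (the Formiga data themselves are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
annual_max_mm = stats.gumbel_r.rvs(loc=70, scale=20, size=40, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max_mm)
for T in (2, 10, 25, 50, 100):                       # return periods in years
    x_T = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)  # exceeded on average once in T years
    print(f"T = {T:3d} yr : {x_T:6.1f} mm/day")
```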

  1. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper the method for modelling of word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows finding optimal fitting and filtering algorithm for subsequent lexicographic analysis and verification of frequency trend models

  2. Signature of a possible relationship between the maximum CME speed index and the critical frequencies of the F1 and F2 ionospheric layers: Data analysis for a mid-latitude ionospheric station during the solar cycles 23 and 24

    Science.gov (United States)

    Kilcik, Ali; Ozguc, Atila; Yiǧit, Erdal; Yurchyshyn, Vasyl; Donmez, Burcin

    2018-06-01

    We analyze temporal variations of two solar indices, the monthly mean Maximum CME Speed Index (MCMESI) and the International Sunspot Number (ISSN) as well as the monthly median ionospheric critical frequencies (foF1, and foF2) for the time period of 1996-2013, which covers the entire solar cycle 23 and the ascending branch of the cycle 24. We found that the maximum of foF1 and foF2 occurred respectively during the first and second maximum of the ISSN solar activity index in the solar cycle 23. We compared these data sets by using the cross-correlation and hysteresis analysis and found that both foF1 and foF2 show higher correlation with ISSN than the MCMESI during the investigated time period, but when significance levels are considered correlation coefficients between the same indices become comparable. Cross-correlation analysis showed that the agreement between these data sets (solar indices and ionospheric critical frequencies) is better pronounced during the ascending phases of solar cycles, while they display significant deviations during the descending phase. We conclude that there exists a signature of a possible relationship between MCMESI and foF1 and foF2, which means that MCMESI could be used as a possible indicator of solar and geomagnetic activity, even though other investigations are needed.
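
    The cross-correlation step can be sketched in a few lines; the two monthly series below are synthetic stand-ins for the MCMESI and foF2 records, with an artificial two-month lag built in:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 216                                   # 18 years of monthly values
solar = np.sin(2 * np.pi * np.arange(n) / 132) + 0.2 * rng.standard_normal(n)
fof2 = np.roll(solar, 2) + 0.3 * rng.standard_normal(n)   # responds ~2 months later

def xcorr(a, b, max_lag=12):
    # r(lag) = corr(a[t], b[t + lag]); positive lag -> b responds after a
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            out[lag] = np.mean(a[: n - lag] * b[lag:])
        else:
            out[lag] = np.mean(a[-lag:] * b[: n + lag])
    return out

cc = xcorr(solar, fof2)
best = max(cc, key=cc.get)
print(f"best lag = {best} months, r = {cc[best]:.2f}")
```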

  3. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  4. Flaw-size measurement in a weld sample by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.

    1975-01-01

    An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique uses a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each of the flaws. From the two spectra, the size and orientation of the flaw are determined by the use of an analytic model proposed earlier. (auth)

  5. Active Faraday optical frequency standard.

    Science.gov (United States)

    Zhuang, Wei; Chen, Jingbiao

    2014-11-01

    We propose the mechanism of an active Faraday optical clock, and experimentally demonstrate an active Faraday optical frequency standard based on a narrow bandwidth Faraday atomic filter by the method of velocity-selective optical pumping of cesium vapor. The center frequency of the active Faraday optical frequency standard is determined by the cesium 6 (2)S(1/2) F=4 to 6 (2)P(3/2) F'=4 and 5 crossover transition line. The optical heterodyne beat between two similar independent setups shows that the frequency linewidth reaches 281(23) Hz, which is 1.9×10⁴ times smaller than the natural linewidth of the cesium 852-nm transition line. The maximum emitted light power reaches 75 μW. The active Faraday optical frequency standard reported here has advantages of narrow linewidth and reduced cavity pulling, which can readily be extended to other atomic transition lines of alkali and alkaline-earth metal atoms trapped in optical lattices at magic wavelengths, making it useful for a new generation of optical atomic clocks.

  6. Radio frequency system of the RIKEN ring cyclotron

    International Nuclear Information System (INIS)

    Fujisawa, T.; Ogiwara, K.; Kohara, S.; Oikawa, Y.; Yokoyama, I.; Nagase, M.; Takeshita, I.; Chiba, Y.; Kumata, Y.

    1987-01-01

    The radio-frequency (RF) system of the RIKEN ring cyclotron (K = 540) is required to work in a frequency range of 20 to 45 MHz and to generate a maximum acceleration voltage of 250 kV. A new movable box type variable frequency resonator was designed for that purpose. The final amplifier is capable of delivering 300 kW. The resonators and the amplifiers have been installed at RIKEN and their performance has been studied. The result shows that the movable box type resonator and the power amplifier system satisfy the design aim. (author)

  7. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    Science.gov (United States)

    Villarini, G.; Smith, J.A.; Serinaldi, F.; Bales, J.; Bates, P.D.; Krajewski, W.F.

    2009-01-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km²) show that the maximum flow with a 0.01-annual probability (corresponding to 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m³ s⁻¹ km⁻² to a maximum of 5.1 m³ s⁻¹ km⁻². An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m³ s⁻¹ km⁻²). Under nonstationary conditions, alternative definitions of return period should be adapted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m³ s⁻¹ km⁻² ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades. © 2009 Elsevier Ltd.

  8. 40 CFR 141.803 - Coliform sampling.

    Science.gov (United States)

    2010-07-01

    ...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Aircraft Drinking Water Rule § 141.803 Coliform sampling. (a... aircraft water system, the sampling frequency must be determined by the disinfection and flushing frequency... disinfection and flushing frequency recommended by the aircraft water system manufacturer, when available...

  9. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

    Full Text Available The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.

  10. Percentiles of the null distribution of 2 maximum lod score tests.

    Science.gov (United States)

    Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R

    2004-01-01

    We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction. Also considered is the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS) as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture p·χ²(0) + (1 − p)·χ²(ν). The values of MMLS appear to fit the mixture 0.20·χ²(0) + 0.80·χ²(1.6). The mixture distribution 0.13·χ²(0) + 0.87·χ²(2.8) appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
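
    A critical value follows directly from such a mixture: only the non-degenerate component contributes to the upper tail, so one solves (1 − p)·P(χ²(ν) > c) = α and converts to the LOD scale via χ² = 2·ln(10)·LOD. A minimal sketch using the mixture weights quoted above (this is an illustration of the arithmetic, not the authors' tabulated percentiles):

```python
import numpy as np
from scipy import stats, optimize

def mixture_critical_value(p_zero, dof, alpha=0.05):
    # Solve (1 - p_zero) * P(chi2_dof > c) = alpha for c.
    f = lambda c: (1 - p_zero) * stats.chi2.sf(c, dof) - alpha
    return optimize.brentq(f, 1e-9, 100.0)

for label, p0, dof in [("MMLS", 0.20, 1.6), ("LOD-M", 0.13, 2.8)]:
    c = mixture_critical_value(p0, dof)
    print(f"{label}: chi2 critical value {c:.2f} -> LOD threshold {c / (2 * np.log(10)):.2f}")
```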

  11. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
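
    A minimal sketch of the mp(T) idea, assuming a tapered Gutenberg-Richter rate in seismic moment and Poisson occurrence in time; the rate, β and corner magnitude below are illustrative placeholders, not the Cascadia estimates derived in the study:

```python
import numpy as np
from scipy import optimize

def moment(m):                       # seismic moment (N*m) from moment magnitude
    return 10 ** (1.5 * m + 9.05)

def rate_above(m, lam_t=0.05, m_t=7.0, beta=0.65, m_corner=9.0):
    # Tapered Gutenberg-Richter: annual rate of events with magnitude >= m.
    Mt, Mc, M = moment(m_t), moment(m_corner), moment(m)
    return lam_t * (Mt / M) ** beta * np.exp((Mt - M) / Mc)

def probable_max_magnitude(T_years, prob=0.5):
    # Median of the maximum magnitude in T years: P(at least one event >= m) = prob.
    f = lambda m: 1.0 - np.exp(-rate_above(m) * T_years) - prob
    return optimize.brentq(f, 7.0, 10.5)

for T in (100, 500, 10_000):
    print(f"T = {T:6d} yr : mp = {probable_max_magnitude(T):.2f}")
```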

  12. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works ...

  13. Lower bounds on the periodic Hamming correlations of frequency hopping sequences with low hit zone

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, several periodic Hamming correlation lower bounds for frequency hopping sequences with low hit zone, with respect to the size p of the frequency slot set, the sequence length L, the family size M, low hit zone LH ( or no hit zone NH ), the maximum periodic Hamming autocorrelation sidelobe Ha and the maximum periodic Hamming crosscorrelation Hc, are established. It is shown that the new bounds include the known Lempel-Greenberger bounds, T.S. Seay bounds and Peng-Fan bounds for the conventional frequency hopping sequences as special cases.

  14. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  15. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  16. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through
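
    As a baseline for the kind of on-line adaptation described above, a simple hill-climbing (perturb-and-observe) tracker can be sketched as follows; the power-coefficient curve and all numbers are illustrative and this is not the thesis's control algorithm:

```python
import numpy as np

rho, R, v_wind = 1.225, 1.5, 8.0                    # air density, blade radius, wind speed

def turbine_power(omega):
    lam = omega * R / v_wind                        # tip-speed ratio
    cp = 0.44 * np.sin(np.pi * (lam - 1.0) / 12.0)  # crude Cp(lambda) shape, peak near lam ~ 7
    cp = max(cp, 0.0)
    return 0.5 * rho * np.pi * R**2 * v_wind**3 * cp

omega, step, p_prev = 20.0, 0.5, 0.0
for _ in range(200):                                # perturb & observe iterations
    p = turbine_power(omega)
    if p < p_prev:                                  # power dropped -> reverse direction
        step = -step
    p_prev = p
    omega += step

print(f"operating speed ~ {omega:.1f} rad/s, power ~ {p_prev / 1e3:.2f} kW")
```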

  17. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    Science.gov (United States)

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation or for brevity 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized
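
    A minimal sketch of one of the listed approaches, a Bulletin 17B-style method-of-moments fit of the Pearson type III distribution to log-transformed peaks, extrapolated to very low AEPs; the peak-flow series is synthetic, not one of the studied streamgages, and regional skew weighting and the EMA refinements are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
peaks_cfs = stats.lognorm.rvs(s=0.5, scale=20_000, size=90, random_state=rng)

logq = np.log10(peaks_cfs)
mean, std, skew = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

for aep in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    K = stats.pearson3.ppf(1 - aep, skew)        # frequency factor for this station skew
    q = 10 ** (mean + K * std)
    print(f"AEP {aep:.0e} : {q:12,.0f} cfs")
```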

  18. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  19. Nanohertz frequency determination for the gravity probe B high frequency superconducting quantum interference device signal.

    Science.gov (United States)

    Salomon, M; Conklin, J W; Kozaczuk, J; Berberian, J E; Keiser, G M; Silbergleit, A S; Worden, P; Santiago, D I

    2011-12-01

    In this paper, we present a method to measure the frequency and the frequency change rate of a digital signal. This method consists of three consecutive algorithms: frequency interpolation, phase differencing, and a third algorithm specifically designed and tested by the authors. The succession of these three algorithms allowed a 5 parts in 10¹⁰ resolution in frequency determination. The algorithm developed by the authors can be applied to a sampled scalar signal such that a model linking the harmonics of its main frequency to the underlying physical phenomenon is available. This method was developed in the framework of the gravity probe B (GP-B) mission. It was applied to the high frequency (HF) component of GP-B's superconducting quantum interference device signal, whose main frequency f(z) is close to the spin frequency of the gyroscopes used in the experiment. A 30 nHz resolution in signal frequency and a 0.1 pHz/s resolution in its decay rate were achieved out of a succession of 1.86 s-long stretches of signal sampled at 2200 Hz. This paper describes the underlying theory of the frequency measurement method as well as its application to GP-B's HF science signal.
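
    The frequency-interpolation step can be illustrated with a standard quadratic (parabolic) interpolation around the FFT magnitude peak, which recovers a sinusoid's frequency well below the bin spacing; the settings below are illustrative and are not the GP-B processing parameters:

```python
import numpy as np

fs, n = 2200.0, 4096                      # sampling rate (Hz), number of samples
f_true = 513.37
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t) * np.hanning(n)

spec = np.abs(np.fft.rfft(x))
k = int(np.argmax(spec))
a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset from quadratic fit
f_est = (k + delta) * fs / n

print(f"bin resolution {fs/n:.3f} Hz, estimate {f_est:.4f} Hz, "
      f"error {abs(f_est - f_true) * 1e3:.2f} mHz")
```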

  20. Resonant magnetic pumping at very low frequency

    International Nuclear Information System (INIS)

    Canobbio, Ernesto

    1978-01-01

    We propose to exploit for plasma heating purposes the very low frequency limit of the Alfven wave resonance condition, which reduces essentially to safety factor q=m/n, a rational number. It is shown that a substantial fraction of the total RF-energy can be absorbed by the plasma. The lowest possible frequency value is determined by the maximum tolerable width of the RF-magnetic islands which develop near the singular surface. The obvious interest of the proposed scheme is the low frequency value (f<=10 KHz) which allows the RF-coils to be protected by stainless steel or even to be put outside the liner

  1. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.

  2. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.

  3. Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh

    Science.gov (United States)

    Mortuza, M. R.; Demissie, Y.; Li, H. Y.

    2014-12-01

    Increased frequency of extreme precipitations, especially those with multiday durations, are responsible for recent urban floods and associated significant losses of lives and infrastructures in Bangladesh. Reliable and routinely updated estimation of the frequency of occurrence of such extreme precipitation events are thus important for developing up-to-date hydraulic structures and stormwater drainage system that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified associated uncertainties. Regional frequency analysis based on L-moments is applied on 1-day, 2-day and 5-day annual maximum precipitation series due to its advantages over at-site estimation. The regional frequency approach pools the information from climatologically similar sites to make reliable estimates of quantiles given that the pooling group is homogeneous and of reasonable size. We have used Region of influence (ROI) approach along with homogeneity measure based on L-moments to identify the homogenous pooling groups for each site. Five 3-parameter distributions (i.e., Generalized Logistic, Generalized Extreme value, Generalized Normal, Pearson Type Three, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and historical data are quantified using the Bayesian Model Averaging and Balanced Bootstrap approaches respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as in exploring spatio-temporal variations of extreme precipitation and associated risk.
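
    The L-moment statistics that drive the homogeneity screening and distribution selection can be computed directly from the sorted sample via probability-weighted moments; a minimal sketch with a synthetic annual-maximum series (the formulas are the standard unbiased estimators, not code from the study):

```python
import numpy as np

def sample_lmoments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2          # mean, L-scale, L-CV (tau), L-skew (tau3)

rng = np.random.default_rng(5)
annual_max = rng.gumbel(loc=80, scale=25, size=45)   # synthetic 1-day annual maxima (mm)
l1, l2, t, t3 = sample_lmoments(annual_max)
print(f"l1={l1:.1f}  l2={l2:.1f}  L-CV={t:.3f}  L-skew={t3:.3f}")
```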

  4. Polymorphism discovery and allele frequency estimation using high-throughput DNA sequencing of target-enriched pooled DNA samples

    Directory of Open Access Journals (Sweden)

    Mullen Michael P

    2012-01-01

    Full Text Available Abstract Background The central role of the somatotrophic axis in animal post-natal growth, development and fertility is well established. Therefore, the identification of genetic variants affecting quantitative traits within this axis is an attractive goal. However, large sample numbers are a pre-requisite for the identification of genetic variants underlying complex traits and although technologies are improving rapidly, high-throughput sequencing of large numbers of complete individual genomes remains prohibitively expensive. Therefore using a pooled DNA approach coupled with target enrichment and high-throughput sequencing, the aim of this study was to identify polymorphisms and estimate allele frequency differences across 83 candidate genes of the somatotrophic axis, in 150 Holstein-Friesian dairy bulls divided into two groups divergent for genetic merit for fertility. Results In total, 4,135 SNPs and 893 indels were identified during the resequencing of the 83 candidate genes. Nineteen percent (n = 952) of variants were located within 5' and 3' UTRs. Seventy-two percent (n = 3,612) were intronic and 9% (n = 464) were exonic, including 65 indels and 236 SNPs resulting in non-synonymous substitutions (NSS). Significant (P ® MassARRAY. No significant differences (P > 0.1) were observed between the two methods for any of the 43 SNPs across both pools (i.e., 86 tests in total). Conclusions The results of the current study support previous findings of the use of DNA sample pooling and high-throughput sequencing as a viable strategy for polymorphism discovery and allele frequency estimation. Using this approach we have characterised the genetic variation within genes of the somatotrophic axis and related pathways, central to mammalian post-natal growth and development and subsequent lactogenesis and fertility. We have identified a large number of variants segregating at significantly different frequencies between cattle groups divergent for calving
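
    A minimal sketch of the basic pooled-sequencing arithmetic: a binomial allele-frequency estimate per pool and a two-proportion z-test between pools. The read counts are invented, and the sketch ignores the extra variance contributed by unequal individual DNA amounts within a pool:

```python
import math

def pool_freq(alt_reads, total_reads):
    p = alt_reads / total_reads
    se = math.sqrt(p * (1 - p) / total_reads)
    return p, se

# hypothetical counts at one SNP in the two divergent-fertility pools
p1, se1 = pool_freq(alt_reads=420, total_reads=1500)
p2, se2 = pool_freq(alt_reads=310, total_reads=1400)

z = (p1 - p2) / math.sqrt(se1**2 + se2**2)
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal tail probability
print(f"pool 1: {p1:.3f}+/-{se1:.3f}  pool 2: {p2:.3f}+/-{se2:.3f}  z={z:.2f}  p={p_value:.3g}")
```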

  5. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  6. High-frequency harmonic imaging of the eye

    Science.gov (United States)

    Silverman, Ronald H.; Coleman, D. Jackson; Ketterling, Jeffrey A.; Lizzi, Frederic L.

    2005-04-01

    Purpose: Harmonic imaging has become a well-established technique for ultrasonic imaging at fundamental frequencies of 10 MHz or less. Ophthalmology has benefited from the use of fundamentals of 20 MHz to 50 MHz. Our aim was to explore the ability to generate harmonics for this frequency range, and to generate harmonic images of the eye. Methods: The presence of harmonics was determined in both water and bovine vitreous propagation media by pulse/echo and hydrophone at a series of increasing excitation pulse intensities and frequencies. Hydrophone measurements were made at the focal point and in the near- and far-fields of 20 MHz and 40 MHz transducers. Harmonic images of the anterior segment of the rabbit eye were obtained by a combination of analog filtering and digital post-processing. Results: Harmonics were generated nearly identically in both water and vitreous. Hydrophone measurements showed the maximum second harmonic to be -5 dB relative to the 35 MHz fundamental at the focus, while in pulse/echo the maximum harmonic amplitude was -15dB relative to the fundamental. Harmonics were absent in the near-field, but present in the far-field. Harmonic images of the eye showed improved resolution. Conclusion: Harmonics can be readily generated at very high frequencies, and at power levels compliant with FDA guidelines for ophthalmology. This technique may yield further improvements to the already impressive resolutions obtainable in this frequency range. Improved imaging of the macular region, in particular, may provide significant improvements in diagnosis of retinal disease.

  7. Accurate frequency measurements on gyrotrons using a ''gyro-radiometer''

    International Nuclear Information System (INIS)

    Rebuffi, L.

    1986-08-01

    Using a heterodyne system, called the ''Gyro-radiometer'', accurate frequency measurements have been carried out on VARIAN 60 GHz gyrotrons. Changing the principal tuning parameters of a gyrotron, we have detected frequency variations up to 100 MHz, ∼ 40 MHz frequency jumps and smaller jumps (∼ 10 MHz) when mismatches in the transmission line were present. A FWHM bandwidth of 300 kHz, parasitic frequencies and frequency drift during 100 msec pulses have also been observed. An efficient method to find a stable, high-power, long-pulse working point of a gyrotron loaded by a transmission line has been derived. In general, for any power value it is possible to find stable working conditions by tuning the principal parameters of the tube at a maximum of the emitted frequency

  8. Variation in the human lymphocyte sister chromatid exchange frequency as a function of time: results of daily and twice-weekly sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, J.D.; Christensen, M.L.; Strout, C.L.; McGee, K.A.; Carrano, A.V.

    1987-01-01

    The variation in lymphocyte sister chromatid exchange (SCE) frequency was investigated in healthy nonsmokers who were not taking any medication. Two separate studies were undertaken. In the first, blood was drawn from four women twice a week for 8 weeks. These donors recorded the onset and termination of menstruation and times of illness. In the second study, blood was obtained from two women and two men for 5 consecutive days on two separate occasions initiated 14 days apart. Analysis of the mean SCE frequencies in each study indicated that significant temporal variation occurred in each donor, and that more variation occurred in the longer study. Some of the variation was found to be associated with the menstrual cycle. In the daily study, most of the variation appeared to be random, but occasional day-to-day changes occurred that were greater than those expected by chance. To determine how well a single SCE sample estimated the pooled mean for each donor in each study, the authors calculated the number of samples that encompassed that donor's pooled mean within 1 or more standard errors. For both studies, about 75% of the samples encompassed the pooled mean within 2 standard errors. An analysis of high-frequency cells (HFCs) was also undertaken. The results for each study indicate that the proportion of HFCs, compared with the use of Fisher's Exact test, is significantly more constant than the means, which were compared by using the t-test. These results coupled with our previous work suggest that HFC analysis may be the method of choice when analyzing data from human population studies.

  9. Ultra high frequency induction welding of powder metal compacts

    Energy Technology Data Exchange (ETDEWEB)

    Cavdar, U.; Gulsahin, I.

    2014-10-01

    The application of iron-based Powder Metal (PM) compacts in Ultra High Frequency Induction Welding (UHFIW) was reviewed. These PM compacts are used to produce cogs. This study investigates methods of joining PM materials with UHFIW and their applicability in industrial practice. The maximum stress and maximum strain of the welded PM compacts were determined by three-point bending and strength tests. The microhardness and microstructure of the induction-welded compacts were also determined. (Author)

  10. Ultra high frequency induction welding of powder metal compacts

    International Nuclear Information System (INIS)

    Cavdar, U.; Gulsahin, I.

    2014-01-01

    The application of iron-based Powder Metal (PM) compacts in Ultra High Frequency Induction Welding (UHFIW) was reviewed. These PM compacts are used to produce cogs. This study investigates methods of joining PM materials with UHFIW and their applicability in industrial practice. The maximum stress and maximum strain of the welded PM compacts were determined by three-point bending and strength tests. The microhardness and microstructure of the induction-welded compacts were also determined. (Author)

  11. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially ...

  12. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    Science.gov (United States)

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of the 2D data significantly improved the detectable resolution by its ability to analyze without the use of ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling technique) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gap in the sparse sampling, and CRAFT processing extracts the information without loss due to any severe apodization. NUS and CRAFT are thus complementary techniques to improve intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals the direct observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.
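
    The parameterization behind such a frequency/amplitude/decay/phase table can be illustrated by least-squares fitting of a single exponentially damped cosine to a synthetic FID; CRAFT itself uses a different decimation scheme, so this is only a sketch of the signal model, not of the algorithm:

```python
import numpy as np
from scipy.optimize import curve_fit

def fid(t, amp, freq, decay, phase):
    # single exponentially damped cosine
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

rng = np.random.default_rng(2)
t = np.arange(0, 1.0, 1e-3)                            # 1 s acquisition, 1 kHz sampling
y = fid(t, 1.0, 37.5, 3.0, 0.4) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(fid, t, y, p0=[1.0, 37.0, 2.0, 0.0])
amp, freq, decay, phase = popt
print(f"amp={amp:.3f}  freq={freq:.3f} Hz  decay={decay:.3f} 1/s  phase={phase:.3f} rad")
```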

  13. Development of Radio Frequency Antenna Radiation Simulation Software

    International Nuclear Information System (INIS)

    Mohamad Idris Taib; Rozaimah Abd Rahim; Noor Ezati Shuib; Wan Saffiey Wan Abdullah

    2014-01-01

    Antennas are widely used nationwide for radio frequency propagation, especially for communication systems. Radio frequency radiation is the non-ionizing part of the electromagnetic spectrum from 10 kHz to 300 GHz. Exposure of human beings to this radiation nevertheless carries a radiation hazard risk. This software was under development using LabVIEW for radio frequency exposure calculation. In the first phase of this development, the software is intended to calculate the possible maximum exposure for quick base station assessment, using prediction methods. This software can also be used for educational purposes. Some results of this software are compared with the commercial IXUS software and the freeware NEC software. (author)
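
    A minimal sketch of the kind of prediction such a tool performs for quick base-station assessment: far-field power density S = P·G/(4πd²) compared against an assumed public reference level (an ICNIRP-style f/200 W/m² value for 400-2000 MHz is used here; the limit actually implemented in the software is not stated above, and transmitter power, gain and distances are made up):

```python
import math

def power_density(p_tx_w, gain_dbi, distance_m):
    g = 10 ** (gain_dbi / 10)
    return p_tx_w * g / (4 * math.pi * distance_m ** 2)     # W/m^2

f_mhz = 900.0
limit = f_mhz / 200.0                                        # assumed public limit, W/m^2
for d in (1, 5, 10, 30):
    s = power_density(p_tx_w=20.0, gain_dbi=17.0, distance_m=d)
    print(f"d = {d:2d} m : S = {s:7.3f} W/m^2  ({100 * s / limit:5.1f}% of assumed limit)")
```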

  14. Different Frequencies between Power and Efficiency in Wireless Power Transfer

    OpenAIRE

    Muhammad Afnan, Habibi; Hodaka, Ichijo

    2017-01-01

    Wireless Power Transfer (WPT) has been recognized as a common power transfer method because it transfers electric power from the source to the load without any cable. One of the physical principles of WPT is the law of electromagnetic induction, and a WPT system is driven by an alternating current power source at a specific frequency. The frequency that provides the maximum gain between voltages or currents is called the resonance frequency. On the other hand, some studies about WPT said that resonance fr...
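
    The resonance arithmetic referred to above can be sketched as follows; the component values and coupling coefficient are made up, and the split-mode formula assumes two identical magnetically coupled resonators:

```python
import math

L = 24e-6          # coil inductance (H)
C = 100e-9         # tuning capacitance (F)
k = 0.15           # coupling coefficient

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # LC resonance of each coil
f_low = f0 / math.sqrt(1 + k)               # lower split mode of the coupled pair
f_high = f0 / math.sqrt(1 - k)              # upper split mode of the coupled pair

print(f"f0 = {f0/1e3:.1f} kHz, split modes at {f_low/1e3:.1f} kHz and {f_high/1e3:.1f} kHz")
```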

  15. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
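
    The weighting that distinguishes these schemes can be made explicit; the following is a standard statement of size-biased sampling, not a quotation from the paper. For a density $f(x;\theta)$ with finite raw moment $\mu'_\alpha$, the size-biased density of order $\alpha$ is

$$ f_{\alpha}(x;\theta) \;=\; \frac{x^{\alpha}\, f(x;\theta)}{\mu'_{\alpha}}, \qquad \mu'_{\alpha} = \int_{0}^{\infty} x^{\alpha} f(x;\theta)\,dx, $$

    with $\alpha = 1$ for length-biased and $\alpha = 2$ for area-biased sampling. For a Weibull density with shape $\gamma$ and scale $\beta$, $\mu'_{\alpha} = \beta^{\alpha}\,\Gamma(1 + \alpha/\gamma)$, so the size-biased likelihood is available in closed form and can be maximized numerically.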

  16. Executive control resources and frequency of fatty food consumption: findings from an age-stratified community sample.

    Science.gov (United States)

    Hall, Peter A

    2012-03-01

    Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span.

  17. External-cavity high-power dual-wavelength tapered amplifier with tunable THz frequency difference

    DEFF Research Database (Denmark)

    Chi, Mingjun; Jensen, Ole Bjarlin; Petersen, Paul Michael

    2012-01-01

    A tunable 800 nm high-power dual-wavelength diode laser system with double-Littrow external-cavity feedback is demonstrated. The two wavelengths can be tuned individually, and the frequency difference of the two wavelengths is tunable from 0.5 to 5.0 THz. A maximum output power of 1.54 W is achieved with a frequency difference of 0.86 THz, the output power is higher than 1.3 W in the 5.0 THz range of frequency difference, and the amplified spontaneous emission intensity is more than 20 dB suppressed in the range of frequency difference. The beam quality factor M² is 1.22±0.15 at an output...

  18. Tunable Twin Matching Frequency (fm1/fm2) Behavior of Ni1-xZnxFe2O4/NBR Composites over 2-12.4 GHz: A Strategic Material System for Stealth Applications

    Science.gov (United States)

    Saini, Lokesh; Patra, Manoj Kumar; Jani, Raj Kumar; Gupta, Goutam Kumar; Dixit, Ambesh; Vadera, Sampat Raj

    2017-03-01

    The gel-to-carbonate precipitate route has been used for the synthesis of Ni1-xZnxFe2O4 (x = 0, 0.25, 0.5 and 0.75) bulk inverse spinel ferrite powder samples. The optimal zinc (50%) substitution has shown the maximum saturation magnetic moment and resulted in the maximum magnetic loss tangent (tanδm) > -1.2 over the entire 2-10 GHz frequency range, with an optimum value of ~-1.75 at 6 GHz. Ni0.5Zn0.5Fe2O4-Acrylo-Nitrile Butadiene Rubber (NBR) composite samples are prepared at different weight percentages (wt%) of ferrite loading fractions in rubber for microwave absorption evaluation. The 80 wt% loaded Ni0.5Zn0.5Fe2O4/NBR composite (FMAR80) sample has shown two reflection loss (RL) peaks, at 5 and 10 GHz. Interestingly, a single peak at 10 GHz for 3.25 mm thickness can be scaled down to 5 GHz by increasing the thickness up to 4.6 mm. The onset of such twin matching frequencies in the FMAR80 composite sample is attributed to spin resonance relaxation at ~5 GHz (fm1) and destructive interference at the λm/4 matched thickness near ~10 GHz (fm2) in these composite systems. These studies suggest the potential of tuning the twin frequencies in Ni0.5Zn0.5Fe2O4/NBR composite samples for possible microwave absorption applications.

  19. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical to accurately determining maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment, so that errors in solute load calculations can be taken into account by landscape managers and sampling strategies can be optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  20. Implication of the first decision on visual information-sampling in the spatial frequency domain in pulmonary nodule recognition

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan

    2010-02-01

    Aim: To investigate the impact on visual sampling strategy and pulmonary nodule recognition of image-based properties of background locations in dwelled regions where the first overt decision was made. Background: Recent studies in mammography show that the first overt decision (TP or FP) has an influence on further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background following decision sites and the correctness of the first decision has been reported. Methods: Subjects with different levels of radiological experience were eye-tracked during detection of pulmonary nodules from PA chest radiographs. The number of outcomes and the overall quality of performance are analysed in terms of the cases where correct or incorrect decisions were made. JAFROC methodology is applied. The spatial frequency properties of selected local backgrounds related to certain decisions were studied. ANOVA was used to compare the logarithmic values of energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TP as a first decision and the JAFROC score (r = 0.74). The number of FP as a first decision was negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles depend on the correctness of the first decision.

  1. Analysis of Time and Frequency Domain PACE Algorithms for OFDM with Virtual Subcarriers

    DEFF Research Database (Denmark)

    Rom, Christian; Manchón, Carles Navarro; Deneire, Luc

    2007-01-01

    This paper studies common linear frequency direction pilot-symbol aided channel estimation algorithms for orthogonal frequency division multiplexing in a UTRA long term evolution context. Three deterministic algorithms are analyzed: the maximum likelihood (ML) approach, the noise reduction algori...

  2. On the frequency and field linewidth conversion of ferromagnetic resonance spectra

    International Nuclear Information System (INIS)

    Wei, Yajun; Svedlindh, Peter; Liang Chin, Shin

    2015-01-01

    Both frequency swept and field swept ferromagnetic resonance measurements have been carried out for a number of different samples with negligible, moderate and significant extrinsic frequency independent linewidth contribution to analyze the correlation between the experimentally measured frequency and field linewidths. Contrary to the belief commonly held by many researchers, it is found that the frequency and field linewidth conversion relation does not hold for all cases. Instead it holds only for samples with negligible frequency independent linewidth contributions. For samples with non-negligible frequency independent linewidth contribution, the field linewidth values converted from the measured frequency linewidth are larger than the experimentally measured field linewidth. A close examination of the literature reveals that previously reported results support our findings, with successful conversions related to samples with negligible frequency independent linewidth contributions and unsuccessful conversions related to samples with significant frequency independent linewidth. The findings are important in providing guidance in ferromagnetic resonance linewidth conversions. (paper)

  3. Regional analysis of annual maximum rainfall using TL-moments method

    Science.gov (United States)

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd

    2011-06-01

    Information related to distributions of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the choice of probability distribution for modeling regional data. In this study, a novel approach to regional frequency analysis using L-moments is revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed, and the results from both methods are compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and the Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the TL-moments method was more efficient for lower quantile estimation compared with L-moments.
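
    As a rough illustration of the quantities underlying both procedures described above, the following Python sketch computes sample L-moments (the untrimmed building blocks; TL-moments additionally trim the extreme order statistics) from an annual-maximum series. The estimator used is one standard probability-weighted-moment formulation, and the example data are made up.

```python
import numpy as np

def sample_l_moments(data):
    """Sample L-moments l1..l4 from probability-weighted moments.

    Uses b_r = (1/n) * sum_j [(j-1)...(j-r)] / [(n-1)...(n-r)] * x_(j)
    on the ordered sample x_(1) <= ... <= x_(n).
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)

    def b(r):
        # weights vanish for j <= r because one factor (j - k) is zero
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (j - k) / (n - k)
        return np.mean(w * x)

    b0, b1, b2, b3 = (b(r) for r in range(4))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

# Hypothetical annual-maximum daily rainfall series (mm); illustrative values only.
annual_max = [78.0, 95.2, 110.4, 64.3, 88.1, 132.9, 101.7, 71.5, 120.0, 84.6]
print(sample_l_moments(annual_max))
```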

  4. High frequency characterization of Galfenol minor flux density loops

    Directory of Open Access Journals (Sweden)

    Ling Weng

    2017-05-01

    Full Text Available This paper presents the first measurement of ring-shaped Galfenol's high-frequency-dependent minor flux density loops. The frequencies of the applied AC magnetic field are 1 kHz, 5 kHz, 10 kHz, 50 kHz, 100 kHz, 200 kHz, 300 kHz and 500 kHz. The measurements show that the cycle area between the flux density and magnetic field curves increases with increasing frequency. High-frequency-dependent characteristics, including coercivity, specific power loss, residual induction, and maximum relative permeability, are discussed. Minor loops for different maximum inductions are also measured and discussed at the same frequency of 100 kHz. Minor loops with the same maximum induction of 0.05 T for frequencies of 50, 100, 200, 300 and 400 kHz are measured, and the specific power losses are discussed.

  5. Implementation of linear filters for iterative penalized maximum likelihood SPECT reconstruction

    International Nuclear Information System (INIS)

    Liang, Z.

    1991-01-01

    This paper reports on six low-pass linear filters, applied in frequency space, implemented for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The filters implemented were the Shepp-Logan filter, the Butterworth filter, the Gaussian filter, the Hann filter, the Parzen filter, and the Lagrange filter. The low-pass filtering was applied in frequency space to the projection data for the initial estimate and to the difference of the projection data and reprojected data for higher-order approximations. The projection data were acquired experimentally from a chest phantom consisting of non-uniform attenuating media. All the filters could effectively remove the noise and edge artifacts associated with the ML approach if the frequency cutoff was properly chosen. Improved performance of the Parzen and Lagrange filters relative to the others was observed. The best image, judged by its profiles in terms of noise smoothing, edge sharpening, and contrast, was the one obtained with the Parzen filter. However, the Lagrange filter has the potential to take the characteristics of the detector response function into account.
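
    For readers unfamiliar with what applying a low-pass filter "in frequency space" means in this context, the sketch below shows one plausible way to apply a Butterworth window to a 1-D projection profile with NumPy. The cutoff, filter order, and test profile are arbitrary choices for illustration, not parameters from the study.

```python
import numpy as np

def butterworth_lowpass(profile, cutoff=0.25, order=4):
    """Low-pass filter a 1-D projection profile in frequency space.

    cutoff is the normalized cutoff frequency (cycles/sample, Nyquist = 0.5);
    order controls the roll-off steepness of the Butterworth window.
    """
    n = len(profile)
    freqs = np.fft.rfftfreq(n)                 # 0 ... 0.5 cycles/sample
    window = 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(profile) * window, n)

# Noisy synthetic projection profile (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 128)
profile = np.exp(-x**2 / 0.1) + 0.05 * rng.standard_normal(x.size)
smoothed = butterworth_lowpass(profile, cutoff=0.2, order=4)
```

    In a scheme like the one described above, such a window would be applied to the projection data for the initial estimate and to the projection-minus-reprojection differences at later iterations; the choice of cutoff trades noise suppression against edge sharpness.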

  6. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    Science.gov (United States)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks the symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. The dependence of the b-value on the rupture-area versus slip scaling exponent is derived. This relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
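
    To make the "sampling with constraints" idea concrete, here is a hedged sketch of von Neumann acceptance-rejection sampling of magnitudes from a truncated Gutenberg-Richter (exponential) frequency-magnitude law; the b-value and magnitude range are illustrative, and this is not the author's actual model of slip-area scaling.

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=4.0, m_max=8.0, seed=0):
    """Acceptance-rejection sampling from a truncated Gutenberg-Richter law.

    Target density (up to normalization): p(m) ~ 10**(-b * m) on [m_min, m_max].
    Proposals are uniform on [m_min, m_max]; the acceptance probability
    p(m) / p(m_min) is what breaks the symmetry between small and large events.
    """
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        m = rng.uniform(m_min, m_max)
        accept_prob = 10.0 ** (-b * (m - m_min))   # = p(m) / max(p)
        if rng.uniform() < accept_prob:
            samples.append(m)
    return np.array(samples)

mags = sample_gr_magnitudes(10_000, b=1.0)
```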

  7. 7 CFR 58.643 - Frequency of sampling.

    Science.gov (United States)

    2010-01-01

    ... AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT (CONTINUED) GRADING AND INSPECTION ... each type of mix, and for the finished frozen product one sample from each flavor made. (b) Composition ...

  8. Frequency, stability and differentiation of self-reported school fear and truancy in a community sample

    Directory of Open Access Journals (Sweden)

    Metzke Christa

    2008-07-01

    Full Text Available Abstract Background Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data of adolescents. Methods Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two time points, at average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comparisons included a control group without indicators of school fear or truancy. The three groups were compared across questionnaires measuring emotional and behavioural problems, life-events, self-related cognitions, perceived parental behaviour, and perceived school environment. Results The frequency of self-reported school fear decreased over time (6.9 vs. 3.6%) whereas there was an increase in truancy (5.0 vs. 18.4%). Subjects with school fear displayed a pattern of associated internalizing problems and truants were characterized by associated delinquent behaviour. Among other associated psychosocial features, the distress coming from the perceived school environment in students with school fear is most noteworthy. Conclusion These findings from a community study show that school fear and truancy are frequent and display different developmental trajectories. Furthermore, previous results based on smaller and selected clinical samples are corroborated, indicating that the two groups display distinct types of school-related behaviour.

  9. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.

  10. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/ W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened (0 approximately quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  11. Psychophysical basis for maximum pushing and pulling forces: A review and recommendations.

    Science.gov (United States)

    Garg, Arun; Waters, Thomas; Kapellusch, Jay; Karwowski, Waldemar

    2014-03-01

    The objective of this paper was to perform a comprehensive review of psychophysically determined maximum acceptable pushing and pulling forces. Factors affecting pushing and pulling forces are identified and discussed. Recent studies show a significant decrease (compared to previous studies) in maximum acceptable forces for males but not for females when pushing and pulling on a treadmill. A comparison of pushing and pulling forces measured using a high inertia cart with those measured on a treadmill shows that the pushing and pulling forces using high inertia cart are higher for males but are about the same for females. It is concluded that the recommendations of Snook and Ciriello (1991) for pushing and pulling forces are still valid and provide reasonable recommendations for ergonomics practitioners. Regression equations as a function of handle height, frequency of exertion and pushing/pulling distance are provided to estimate maximum initial and sustained forces for pushing and pulling acceptable to 75% male and female workers. At present it is not clear whether pushing or pulling should be favored. Similarly, it is not clear what handle heights would be optimal for pushing and pulling. Epidemiological studies are needed to determine relationships between psychophysically determined maximum acceptable pushing and pulling forces and risk of musculoskeletal injuries, in particular to low back and shoulders.

  12. Wind power limit calculation basedon frequency deviation using Matlab

    International Nuclear Information System (INIS)

    Santos Fuentefria, Ariel; Salgado Duarte, Yorlandis; MejutoFarray, Davis

    2017-01-01

    The use of wind energy for electricity production is a technology that has been promoted in recent years as an alternative in the face of environmental deterioration and the scarcity of fossil fuels. When wind power generation is integrated into electrical power systems, frequency stability problems may arise, mainly because of the stochastic nature of the wind and the dispatchers' inability to control the wind power output. In this work, the frequency deviation is analysed as wind power generation rises in an isolated electrical power system. The analysis is carried out computationally with an algorithm implemented in Matlab, which allowed several simulations to be run in order to obtain the frequency behaviour for different load and wind power conditions. In addition, the wind power limit was determined for minimum, medium and maximum load. The results show that the highest wind power values are obtained under the maximum load condition, whereas the minimum load condition limits the introduction of wind power into the system. (author)

  13. Quantification of Uncertainty in the Flood Frequency Analysis

    Science.gov (United States)

    Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.

    2017-12-01

    Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, selection of the distribution, and estimation of the distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify the uncertainty in the form of a prediction interval, as an alternative to the deterministic approach. The framework developed in the present study to include uncertainty in the FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out the FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the recent extreme flood event that occurred during the year 2013. In addition, the efficacy of the proposed method was further verified using standard bootstrap-based sampling approaches, and it was found that the proposed method is more reliable in modeling extreme floods than the bootstrap methods.
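
    As background to the quantile-uncertainty question, the following sketch fits a GEV distribution to an annual-maximum flow series and builds a simple percentile bootstrap interval for a return-period quantile. It uses scipy's genextreme distribution and made-up data, and is only a baseline of the kind the study compares its multi-objective approach against, not the study's method.

```python
import numpy as np
from scipy.stats import genextreme

def gev_quantile_interval(ann_max, return_period=100, n_boot=1000, seed=0):
    """Fit a GEV to annual maxima and bootstrap the T-year flood quantile."""
    rng = np.random.default_rng(seed)
    ann_max = np.asarray(ann_max, dtype=float)
    p = 1.0 - 1.0 / return_period            # non-exceedance probability

    c, loc, scale = genextreme.fit(ann_max)  # ML fit (scipy's shape convention)
    point = genextreme.ppf(p, c, loc=loc, scale=scale)

    boots = []
    for _ in range(n_boot):
        resampled = rng.choice(ann_max, size=ann_max.size, replace=True)
        cb, lb, sb = genextreme.fit(resampled)
        boots.append(genextreme.ppf(p, cb, loc=lb, scale=sb))
    lower, upper = np.percentile(boots, [2.5, 97.5])
    return point, (lower, upper)

# Illustrative annual maximum flows (m^3/s); not the Bow River record.
flows = [310, 420, 515, 280, 640, 390, 470, 720, 350, 560,
         300, 610, 455, 380, 525, 690, 410, 330, 580, 495]
print(gev_quantile_interval(flows))
```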

  14. Tunable Twin Matching Frequency (fm1/fm2) Behavior of Ni1−xZnxFe2O4/NBR Composites over 2–12.4 GHz: A Strategic Material System for Stealth Applications

    Science.gov (United States)

    Saini, Lokesh; Patra, Manoj Kumar; Jani, Raj Kumar; Gupta, Goutam Kumar; Dixit, Ambesh; Vadera, Sampat Raj

    2017-01-01

    The gel-to-carbonate precipitate route has been used for the synthesis of Ni1−xZnxFe2O4 (x = 0, 0.25, 0.5 and 0.75) bulk inverse spinel ferrite powder samples. The optimal zinc (50%) substitution has shown the maximum saturation magnetic moment and resulted in the maximum magnetic loss tangent (tanδm) > −1.2 over the entire 2–10 GHz frequency range, with an optimum value of ~−1.75 at 6 GHz. Ni0.5Zn0.5Fe2O4-Acrylo-Nitrile Butadiene Rubber (NBR) composite samples are prepared at different weight percentages (wt%) of ferrite loading fractions in rubber for microwave absorption evaluation. The 80 wt% loaded Ni0.5Zn0.5Fe2O4/NBR composite (FMAR80) sample has shown two reflection loss (RL) peaks, at 5 and 10 GHz. Interestingly, a single peak at 10 GHz for 3.25 mm thickness can be scaled down to 5 GHz by increasing the thickness up to 4.6 mm. The onset of such twin matching frequencies in the FMAR80 composite sample is attributed to spin resonance relaxation at ~5 GHz (fm1) and destructive interference at the λm/4 matched thickness near ~10 GHz (fm2) in these composite systems. These studies suggest the potential of tuning the twin frequencies in Ni0.5Zn0.5Fe2O4/NBR composite samples for possible microwave absorption applications. PMID:28294151

  15. Judging Criterion of Controlled Structures with Closely Spaced Natural Frequencies

    International Nuclear Information System (INIS)

    Xie Faxiang; Sun Limin

    2010-01-01

    Structures with closely spaced natural frequencies are widespread in civil engineering; however, the criterion for judging the density of closely spaced frequencies is in dispute. This paper suggests a judging criterion for structures with closely spaced natural frequencies based on the analysis of a controlled 2-DOF structure. The analysis results indicate that the optimal control gain of the structure with velocity feedback depends on the frequency density parameter of the structure, and that the maximum attainable additional modal damping ratio is 1.72 times the frequency density parameter when state feedback is applied. Based on a brief review of previous research, a judging criterion relating the minimum frequency density parameter to the required modal damping ratio is proposed.

  16. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  17. Decomposition of methane to hydrogen using nanosecond pulsed plasma reactor with different active volumes, voltages and frequencies

    International Nuclear Information System (INIS)

    Khalifeh, Omid; Mosallanejad, Amin; Taghvaei, Hamed; Rahimpour, Mohammad Reza; Shariati, Alireza

    2016-01-01

    Highlights: • CH4 conversion into H2 is investigated in a nanosecond pulsed DBD reactor. • The absence of CO and CO2 in the product gas is highly favorable. • Effects of external electrode length, applied voltage and frequency are examined. • The maximum efficiency of 7.23% is achieved at an electrode length of 15 cm. • The maximum CH4 conversion of 87.2% is obtained at a discharge power of 268.92 W. - Abstract: In this paper, the conversion of methane into hydrogen is investigated experimentally in a nanosecond pulsed DBD reactor. In order to achieve pure hydrogen production with minimum power consumption, the effects of several operating parameters, including external electrode length, applied voltage and pulse repetition frequency, have been evaluated. Results show that although higher CH4 conversion and H2 concentration can be obtained at longer electrode lengths, higher applied voltages and higher pulse repetition frequencies, these parameters should be optimized for efficient hydrogen production. The maximum CH4 conversion of 87.2% and the maximum hydrogen percentage of 80% are obtained at an external electrode length, discharge power, voltage and frequency of 15 cm, 268.92 W, 12 kV and 10 kHz, respectively. However, the maximum efficiency of 7.23% is achieved at an external electrode length of 15 cm, applied voltage of 6 kV, pulse repetition frequency of 0.9 kHz and discharge power of 4 W. Furthermore, at this condition, due to the low temperature of the discharge zone, very little solid carbon was observed on the inner electrode surface of the reactor.

  18. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with the cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates used were 0.05, as recommended for studies with insects. The fitting of the frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and from there on the best fit was the Poisson distribution (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
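
    The decision lines quoted above lend themselves to a direct implementation. The sketch below evaluates the Phase I and Phase II boundaries against a running count of attacked squares; the interpretation (below S0 = stop, no control; above S1 = stop, control; between = keep sampling) is the usual one for sequential plans and is stated here as an assumption rather than taken verbatim from the paper.

```python
def sequential_decision(n_samples, cumulative_attacked, phase="I"):
    """Decision rule for the boll weevil sequential sampling plan.

    n_samples            -- number of plants inspected so far (N)
    cumulative_attacked  -- running total of squares with feeding/oviposition punctures
    phase                -- "I" (<= 85 DAE) or "II" (> 85 DAE)
    """
    if phase == "I":
        s0 = -5.1743 + 0.5730 * n_samples   # lower boundary
        s1 = 5.1743 + 0.5730 * n_samples    # upper boundary
    else:
        s0 = -4.2479 + 0.5771 * n_samples
        s1 = 4.2479 + 0.5771 * n_samples

    if cumulative_attacked <= s0:
        return "stop sampling: infestation below threshold"
    if cumulative_attacked >= s1:
        return "stop sampling: threshold exceeded, control recommended"
    return "continue sampling"

print(sequential_decision(n_samples=20, cumulative_attacked=9, phase="I"))
```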

  19. High frequency electromagnetic reflection loss performance of substituted Sr-hexaferrite nanoparticles/SWCNTs/epoxy nanocomposite

    Science.gov (United States)

    Gordani, Gholam Reza; Ghasemi, Ali; saidi, Ali

    2015-10-01

    In this study, the electromagnetic properties of a novel nanocomposite material made of substituted Sr-hexaferrite nanoparticles and different percentages of single-walled carbon nanotubes (SWCNTs) have been studied. The structural, magnetic and electromagnetic properties of the samples were studied as a function of the volume percentage of SWCNTs by X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, vibrating sample magnetometry and vector network analysis. Good crystallinity of the hexaferrite nanoparticles was confirmed by the XRD patterns. TEM and FESEM micrographs showed good homogeneity and a high level of dispersion of the SWCNTs and Sr-hexaferrite nanoparticles in the nanocomposite samples. The VSM results show that with an increasing amount of CNTs (0-6 vol%), the saturation magnetization decreased to 11 emu/g for the nanocomposite sample containing 6 vol% of SWCNTs. The vector network analysis results show that the maximum reflection loss was -36.4 dB at a frequency of 11 GHz, with an absorption bandwidth of more than 4 GHz. Nanocomposite materials with an appropriate amount of SWCNTs hold great promise for microwave device applications.

  20. Versatile mid-infrared frequency-comb referenced sub-Doppler spectrometer

    Science.gov (United States)

    Gambetta, A.; Vicentini, E.; Coluccelli, N.; Wang, Y.; Fernandez, T. T.; Maddaloni, P.; De Natale, P.; Castrillo, A.; Gianfrani, L.; Laporta, P.; Galzerano, G.

    2018-04-01

    We present a mid-IR high-precision spectrometer capable of performing accurate Doppler-free measurements with absolute calibration of the optical axis and high signal-to-noise ratio. The system is based on a widely tunable mid-IR offset-free frequency comb and a Quantum-Cascade-Laser (QCL). The QCL emission frequency is offset locked to one of the comb teeth to provide absolute-frequency calibration, spectral-narrowing, and accurate fine frequency tuning. Both the comb repetition frequency and QCL-comb offset frequency can be modulated to provide, respectively, slow- and fast-frequency-calibrated scanning capabilities. The characterisation of the spectrometer is demonstrated by recording sub-Doppler saturated absorption features of the CHF3 molecule at around 8.6 μm with a maximum signal-to-noise ratio of ~7 × 10^3 in 10 s integration time, frequency-resolution of 160 kHz, and accuracy of less than 10 kHz.

  1. Versatile mid-infrared frequency-comb referenced sub-Doppler spectrometer

    Directory of Open Access Journals (Sweden)

    A. Gambetta

    2018-04-01

    Full Text Available We present a mid-IR high-precision spectrometer capable of performing accurate Doppler-free measurements with absolute calibration of the optical axis and high signal-to-noise ratio. The system is based on a widely tunable mid-IR offset-free frequency comb and a Quantum-Cascade-Laser (QCL). The QCL emission frequency is offset locked to one of the comb teeth to provide absolute-frequency calibration, spectral-narrowing, and accurate fine frequency tuning. Both the comb repetition frequency and QCL-comb offset frequency can be modulated to provide, respectively, slow- and fast-frequency-calibrated scanning capabilities. The characterisation of the spectrometer is demonstrated by recording sub-Doppler saturated absorption features of the CHF3 molecule at around 8.6 μm with a maximum signal-to-noise ratio of ∼7 × 10^3 in 10 s integration time, frequency-resolution of 160 kHz, and accuracy of less than 10 kHz.

  2. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  3. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  4. High precision synchronization of time and frequency and its applications

    International Nuclear Information System (INIS)

    Wang Lijun

    2014-01-01

    We discuss the concept and methods for remote synchronization of time and frequency. We discuss a recent experiment that demonstrated time and frequency synchronization via a commercial fiber network, reaching accuracy of 7 × 10^-15 /s, 5 × 10^-19 /day, and a maximum time uncertainty of less than 50 femtoseconds. We discuss synchronization methods applicable to different topologies and their important scientific applications. (authors)

  5. Downsampling Non-Uniformly Sampled Data

    Directory of Open Access Journals (Sweden)

    Fredrik Gustafsson

    2007-10-01

    Full Text Available Decimating a uniformly sampled signal by a factor D involves low-pass antialias filtering with normalized cutoff frequency 1/D followed by picking out every Dth sample. Alternatively, decimation can be done in the frequency domain using the fast Fourier transform (FFT) algorithm, after zero-padding the signal and truncating the FFT. We outline three approaches to decimating non-uniformly sampled signals, which are all based on interpolation. The interpolation is done in different domains, and the inter-sample behavior does not need to be known. The first approach interpolates the signal to a uniform sampling, after which standard decimation can be applied. The second interpolates a continuous-time convolution integral that implements the antialias filter, after which every Dth sample can be picked out. The third, frequency-domain approach computes an approximate Fourier transform, after which truncation and an IFFT give the desired result. Simulations indicate that the second approach is particularly useful. A thorough analysis is therefore performed for this case, using the assumption that the non-uniformly distributed sampling instants are generated by a stochastic process.
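
    A minimal sketch of the first approach described above (interpolate to a uniform grid, then low-pass filter and pick every Dth sample) is given below in Python; the linear interpolation, moving-average antialias filter, and test signal are simplifications chosen for brevity, not the paper's exact algorithms.

```python
import numpy as np

def decimate_nonuniform(t, x, fs, D):
    """Decimate a non-uniformly sampled signal (t, x) by a factor D.

    1. Interpolate onto a uniform grid with sampling rate fs.
    2. Apply a crude antialias low-pass (moving average of length D).
    3. Keep every D-th sample.
    """
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    x_uniform = np.interp(t_uniform, t, x)            # step 1: resample
    kernel = np.ones(D) / D                           # step 2: antialias filter
    x_filtered = np.convolve(x_uniform, kernel, mode="same")
    return t_uniform[::D], x_filtered[::D]            # step 3: downsample

# Non-uniform sampling instants (e.g. from an event-driven sensor), illustrative only.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, 500))
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)
t_d, x_d = decimate_nonuniform(t, x, fs=20.0, D=4)
```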

  6. A Dictionary of Basic Pashto Frequency List I, Project Description and Samples, and Frequency List II.

    Science.gov (United States)

    Heston, Wilma

    The three-volume set of materials describes and presents the results to date of a federally-funded project to develop Pashto-English and English-Pashto dictionaries. The goal was to produce a list of 12,000 basic Pashto words for English-speaking users. Words were selected based on frequency in various kinds of oral and written materials, and were…

  7. Tangent hyperbolic circular frequency diverse array radars

    Directory of Open Access Journals (Sweden)

    Sarah Saeed

    2016-03-01

    Full Text Available Frequency diverse array (FDA) with uniform frequency offset (UFO) has been in the spotlight of research for the past few years. Not much attention has been devoted to non-UFOs in FDA. This study investigates a tangent hyperbolic (TH) function for the frequency offset selection scheme in circular FDAs (CFDAs). Investigation reveals a three-dimensional single-maximum beampattern, which promises to enhance system detection capability and signal-to-interference-plus-noise ratio. Furthermore, by utilising the versatility of the TH function, a highly configurable array system is achieved, where beampatterns of three different configurations of FDA can be generated just by adjusting a single function parameter. This study further examines the utility of the proposed TH-CFDA in some practical radar scenarios.

  8. Dual frequency modulation with two cantilevers in series: a possible means to rapidly acquire tip–sample interaction force curves with dynamic AFM

    International Nuclear Information System (INIS)

    Solares, Santiago D; Chawla, Gaurav

    2008-01-01

    One common application of atomic force microscopy (AFM) is the acquisition of tip–sample interaction force curves. However, this can be a slow process when the user is interested in studying non-uniform samples, because existing contact- and dynamic-mode methods require that the measurement be performed at one fixed surface point at a time. This paper proposes an AFM method based on dual frequency modulation using two cantilevers in series, which could be used to measure the tip–sample interaction force curves and topography of the entire sample with a single surface scan, in a time that is comparable to the time needed to collect a topographic image with current AFM imaging modes. Numerical simulation results are provided along with recommended parameters to characterize tip–sample interactions resembling those of conventional silicon tips and carbon nanotube tips tapping on silicon surfaces

  9. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
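
    One of the penalties discussed above amounts to shrinking the genetic correlation matrix towards the phenotypic one. The fragment below is a hedged, stand-alone illustration of that idea as a simple linear shrinkage with a tuning factor; the actual penalized maximum likelihood machinery in the paper operates on the likelihood itself and is considerably more involved.

```python
import numpy as np

def shrink_correlation(r_genetic, r_phenotypic, tau):
    """Linear shrinkage of a genetic correlation matrix towards the phenotypic one.

    tau = 0 returns the unpenalized estimate, tau = 1 returns the phenotypic matrix.
    """
    r_genetic = np.asarray(r_genetic, dtype=float)
    r_phenotypic = np.asarray(r_phenotypic, dtype=float)
    shrunk = (1.0 - tau) * r_genetic + tau * r_phenotypic
    np.fill_diagonal(shrunk, 1.0)   # keep unit diagonal of a correlation matrix
    return shrunk

# Illustrative 3-trait correlation matrices (values are made up).
rg = np.array([[1.0, 0.8, -0.3],
               [0.8, 1.0, 0.5],
               [-0.3, 0.5, 1.0]])
rp = np.array([[1.0, 0.4, -0.1],
               [0.4, 1.0, 0.2],
               [-0.1, 0.2, 1.0]])
print(shrink_correlation(rg, rp, tau=0.3))
```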

  10. The non-equilibrium response of a superconductor to pair-breaking radiation measured over a broad frequency band

    Energy Technology Data Exchange (ETDEWEB)

    Visser, P. J. de, E-mail: p.j.devisser@tudelft.nl [Kavli Institute of NanoScience, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft (Netherlands); Yates, S. J. C. [SRON Netherlands Institute for Space Research, Landleven 12, 9747AD Groningen (Netherlands); Guruswamy, T.; Goldie, D. J.; Withington, S. [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Neto, A.; Llombart, N. [Faculty of Electrical Engineering, Mathematics and Computer Science, Terahertz Sensing Group, Delft University of Technology, Mekelweg 4, 2628 CD Delft (Netherlands); Baryshev, A. M. [SRON Netherlands Institute for Space Research, Landleven 12, 9747AD Groningen (Netherlands); Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen (Netherlands); Klapwijk, T. M. [Kavli Institute of NanoScience, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft (Netherlands); Physics Department, Moscow State Pedagogical University, Moscow 119991 (Russian Federation); Baselmans, J. J. A. [SRON Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA Utrecht (Netherlands); Faculty of Electrical Engineering, Mathematics and Computer Science, Terahertz Sensing Group, Delft University of Technology, Mekelweg 4, 2628 CD Delft (Netherlands)

    2015-06-22

    We have measured the absorption of terahertz radiation in a BCS superconductor over a broad range of frequencies from 200 GHz to 1.1 THz, using a broadband antenna-lens system and a tantalum microwave resonator. From low frequencies, the response of the resonator rises rapidly to a maximum at the gap edge of the superconductor. From there on, the response drops to half the maximum response at twice the pair-breaking energy. At higher frequencies, the response rises again due to trapping of pair-breaking phonons in the superconductor. In practice, this is a measurement of the frequency dependence of the quasiparticle creation efficiency due to pair-breaking in a superconductor. The efficiency, calculated from the different non-equilibrium quasiparticle distribution functions at each frequency, is in agreement with the measurements.

  11. The non-equilibrium response of a superconductor to pair-breaking radiation measured over a broad frequency band

    International Nuclear Information System (INIS)

    Visser, P. J. de; Yates, S. J. C.; Guruswamy, T.; Goldie, D. J.; Withington, S.; Neto, A.; Llombart, N.; Baryshev, A. M.; Klapwijk, T. M.; Baselmans, J. J. A.

    2015-01-01

    We have measured the absorption of terahertz radiation in a BCS superconductor over a broad range of frequencies from 200 GHz to 1.1 THz, using a broadband antenna-lens system and a tantalum microwave resonator. From low frequencies, the response of the resonator rises rapidly to a maximum at the gap edge of the superconductor. From there on, the response drops to half the maximum response at twice the pair-breaking energy. At higher frequencies, the response rises again due to trapping of pair-breaking phonons in the superconductor. In practice, this is a measurement of the frequency dependence of the quasiparticle creation efficiency due to pair-breaking in a superconductor. The efficiency, calculated from the different non-equilibrium quasiparticle distribution functions at each frequency, is in agreement with the measurements

  12. Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru

    Science.gov (United States)

    Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven

    2017-08-01

    The present study investigates the application of the index flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by common floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. Then, the application of RFA-LM procedure allowed us to consider the TL as a single, hydrologically homogeneous region, in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness-of-fit, L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RG in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM. Therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.

  13. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  14. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Systems and methods for self-synchronized digital sampling

    Science.gov (United States)

    Samson, Jr., John R. (Inventor)

    2008-01-01

    Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.
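
    To illustrate the basic idea of deriving the sampling control signal from the machine's rotational frequency, the sketch below computes a sampling rate that always yields a fixed number of samples per revolution; the class name and the samples-per-revolution figure are hypothetical and not taken from the patent.

```python
class RotationSynchronizer:
    """Derive an ADC sampling-control frequency from a rotational frequency signal.

    Keeping samples_per_rev constant means the sample stream stays phase-locked
    to the shaft even as the machine speeds up or slows down.
    """

    def __init__(self, samples_per_rev=64):
        self.samples_per_rev = samples_per_rev

    def sampling_frequency(self, rotation_hz):
        return self.samples_per_rev * rotation_hz

sync = RotationSynchronizer(samples_per_rev=64)
for rpm in (600, 1800, 3600):
    fs = sync.sampling_frequency(rpm / 60.0)   # rotational frequency in Hz
    print(f"{rpm:5d} rpm -> sample at {fs:7.1f} Hz")
```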

  16. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice are harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML...

  17. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  18. A simple frequency-scaling rule for animal communication

    Science.gov (United States)

    Fletcher, Neville H.

    2004-05-01

    Different animals use widely different frequencies for sound communication, and it is reasonable to assume that evolution has adapted these frequencies to give greatest conspecific communication distance for a given vocal effort. Acoustic analysis shows that the optimal communication frequency is inversely proportional to about the 0.4 power of the animal's body mass. Comparison with observational data indicates that this prediction is well supported in practice. For animals of a given class, for example mammals, the maximum communication distance varies about as the 0.6 power of the animal's mass. There is, however, a wide spread of observed results because of the different emphasis placed upon vocal effort in the evolution of different animal species.
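
    The two power laws quoted above are easy to play with numerically. The sketch below scales the optimal call frequency and maximum communication distance from a hypothetical 10 kg reference animal using the exponents -0.4 and 0.6; the reference frequency and distance are invented for illustration and carry no empirical weight.

```python
def scaled_call(mass_kg, ref_mass_kg=10.0, ref_freq_hz=1000.0, ref_dist_m=100.0):
    """Scale optimal call frequency (~ M^-0.4) and range (~ M^0.6) from a reference animal."""
    ratio = mass_kg / ref_mass_kg
    freq = ref_freq_hz * ratio ** -0.4
    dist = ref_dist_m * ratio ** 0.6
    return freq, dist

for m in (0.01, 1.0, 100.0, 5000.0):   # mouse-ish to elephant-ish body masses, kg
    f, d = scaled_call(m)
    print(f"{m:7.2f} kg -> ~{f:8.1f} Hz, relative range ~{d:8.1f} m")
```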

  19. Narrow-band 1, 2, 3, 4, 8, 16 and 24 cycles/360o angular frequency filters

    Directory of Open Access Journals (Sweden)

    Simas M.L.B.

    2002-01-01

    Full Text Available We measured human frequency response functions for seven angular frequency filters whose test frequencies were centered at 1, 2, 3, 4, 8, 16 or 24 cycles/360º using a supra-threshold summation method. The seven functions of 17 experimental conditions each were measured nine times for five observers. For the arbitrarily selected filter phases, the maximum summation effect occurred at test frequency for filters at 1, 2, 3, 4 and 8 cycles/360º. For both 16 and 24 cycles/360º test frequencies, maximum summation occurred at the lower harmonics. These results allow us to conclude that there are narrow-band angular frequency filters operating somehow in the human visual system either through summation or inhibition of specific frequency ranges. Furthermore, as a general result, it appears that addition of higher angular frequencies to lower ones disturbs low angular frequency perception (i.e., 1, 2, 3 and 4 cycles/360º), whereas addition of lower harmonics to higher ones seems to improve detection of high angular frequency harmonics (i.e., 8, 16 and 24 cycles/360º). Finally, we discuss the possible involvement of coupled radial and angular frequency filters in face perception using an example where narrow-band low angular frequency filters could have a major role.

  20. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  1. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  2. Differences in Orgasm Frequency Among Gay, Lesbian, Bisexual, and Heterosexual Men and Women in a U.S. National Sample.

    Science.gov (United States)

    Frederick, David A; John, H Kate St; Garcia, Justin R; Lloyd, Elisabeth A

    2018-01-01

    There is a notable gap between heterosexual men and women in frequency of orgasm during sex. Little is known, however, about sexual orientation differences in orgasm frequency. We examined how over 30 different traits or behaviors were associated with frequency of orgasm when sexually intimate during the past month. We analyzed a large US sample of adults (N = 52,588) who identified as heterosexual men (n = 26,032), gay men (n = 452), bisexual men (n = 550), lesbian women (n = 340), bisexual women (n = 1112), and heterosexual women (n = 24,102). Heterosexual men were most likely to say they usually-always orgasmed when sexually intimate (95%), followed by gay men (89%), bisexual men (88%), lesbian women (86%), bisexual women (66%), and heterosexual women (65%). Compared to women who orgasmed less frequently, women who orgasmed more frequently were more likely to: receive more oral sex, have longer duration of last sex, be more satisfied with their relationship, ask for what they want in bed, praise their partner for something they did in bed, call/email to tease about doing something sexual, wear sexy lingerie, try new sexual positions, anal stimulation, act out fantasies, incorporate sexy talk, and express love during sex. Women were more likely to orgasm if their last sexual encounter included deep kissing, manual genital stimulation, and/or oral sex in addition to vaginal intercourse. We consider sociocultural and evolutionary explanations for these orgasm gaps. The results suggest a variety of behaviors couples can try to increase orgasm frequency.

  3. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life and livelihood in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution obtained through Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space that plain Monte Carlo methods face. This approach also accounts for more of the uncertainty in parameter estimation, which then yields better predictions of maximum river flow in Sabah.
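As an illustration of the approach described above, the sketch below runs a random-walk Metropolis-Hastings sampler over the three GEV parameters for a synthetic series of annual maxima. It is a minimal sketch, not the authors' implementation: the data are simulated, the priors are flat, and the proposal scales are arbitrary; note that scipy's genextreme uses the shape convention c = -xi.

```python
# Minimal Metropolis-Hastings sketch for GEV parameters (synthetic data, flat priors).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic "annual maximum flow" series (xi = 0.1, mu = 100, sigma = 25).
flows = genextreme.rvs(c=-0.1, loc=100.0, scale=25.0, size=30, random_state=rng)

def log_posterior(theta):
    mu, log_sigma, xi = theta                       # flat priors on mu, log(sigma), xi
    ll = genextreme.logpdf(flows, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
    return ll if np.isfinite(ll) else -np.inf

theta = np.array([flows.mean(), np.log(flows.std()), 0.0])
samples, lp = [], log_posterior(theta)
for _ in range(20000):
    prop = theta + rng.normal(scale=[2.0, 0.05, 0.05])   # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:              # Metropolis-Hastings accept step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                           # discard burn-in
print("posterior means (mu, sigma, xi):",
      post[:, 0].mean(), np.exp(post[:, 1]).mean(), post[:, 2].mean())
```

Posterior summaries (means or credible intervals) of these parameters would then feed the maximum-flow prediction.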

  4. Comparison of maximum viscosity and viscometric methods for identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscometric and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speed and irradiation dose. The trend was similar for maximum viscosity. The regression coefficients and fitted expressions for viscosity and maximum viscosity as a function of irradiation dose were 0.9823 (y = 335.02e^(-0.3366x)) at 120 rpm and 0.9939 (y = -42.544x + 730.26), respectively. This trend in viscosity was similar for all stirring speeds. The parameter A, B and C values showed a dose-dependent relation and were a better parameter for detecting irradiation treatment than the maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. Therefore, the authors think that the maximum viscosity method can be proposed as one of the new methods to detect the irradiation treatment of sweet potato starch.
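The dose-response expressions quoted above are simple one-variable regressions, and the same kind of fit is easy to reproduce. The snippet below fits an exponential decay y = a·exp(-b·x) to a made-up viscosity-versus-dose series with scipy; the data points, units and starting values are illustrative, not the paper's measurements.

```python
# Illustrative exponential dose-response fit (hypothetical data, not the paper's).
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0])               # kGy (hypothetical)
viscosity = np.array([338.0, 240.0, 146.0, 62.0, 11.0])   # arbitrary units (hypothetical)

def model(x, a, b):
    return a * np.exp(-b * x)                              # y = a * exp(-b * x)

(a, b), _ = curve_fit(model, dose, viscosity, p0=(300.0, 0.3))
r = np.corrcoef(viscosity, model(dose, a, b))[0, 1]
print(f"fit: y = {a:.2f} * exp(-{b:.4f} x), r = {r:.4f}")
# An unknown sample's dose could then be read back as x = -ln(y/a)/b.
```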

  5. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Full Text Available Abstract Background Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/, and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  6. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
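The bootstrap procedure described in this abstract is generic: resample alignment columns with replacement, redo the inference on each pseudo-replicate, and report the fraction of replicates in which each event reappears. A minimal sketch is given below; `infer_events` is a hypothetical placeholder for a maximum-parsimony network inference (it is not NEPAL's API), and the alignment is assumed to be a dict of equal-length sequences.

```python
# Generic nonparametric bootstrap support for inferred events (schematic only).
import random
from collections import Counter

def bootstrap_support(alignment, infer_events, n_replicates=100, seed=0):
    """alignment: dict taxon -> sequence string (all the same length);
    infer_events: callable returning the set of events inferred from an alignment."""
    rng = random.Random(seed)
    length = len(next(iter(alignment.values())))
    counts = Counter()
    for _ in range(n_replicates):
        cols = [rng.randrange(length) for _ in range(length)]   # resample columns with replacement
        resampled = {taxon: "".join(seq[c] for c in cols) for taxon, seq in alignment.items()}
        counts.update(infer_events(resampled))                  # tally each inferred event
    return {event: n / n_replicates for event, n in counts.items()}  # support in [0, 1]
```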

  7. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
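The paper's two-stage sliding-DFT scheme with orthogonal filters is more elaborate than can be shown here, but the underlying idea of tracking frequency from the rotation of a DFT bin can be illustrated briefly. The sketch below, under assumed sampling conditions, estimates an off-nominal fundamental from the phase advance of the dominant bin between two windows shifted by one sample; it is a simplified illustration, not the proposed algorithm.

```python
# Simplified phase-advance frequency estimate (not the paper's orthogonal-filter method).
import numpy as np

fs = 1600.0                                   # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 0.2, 1.0 / fs)
x = np.sin(2 * np.pi * 50.2 * t) + 0.05 * rng.standard_normal(t.size)   # noisy 50.2 Hz signal

N = 256
w = np.hanning(N)
X0 = np.fft.rfft(w * x[0:N])
X1 = np.fft.rfft(w * x[1:N + 1])              # same window, advanced by one sample
k = np.argmax(np.abs(X0[1:])) + 1             # dominant bin (skip DC)
dphi = np.angle(X1[k] * np.conj(X0[k]))       # phase advance over one sample, in (-pi, pi]
print("estimated frequency: %.3f Hz" % (dphi * fs / (2 * np.pi)))
```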

  8. Breakfast frequency among adolescents

    DEFF Research Database (Denmark)

    Pedersen, Trine Pagh; Holstein, Bjørn E; Damsgaard, Mogens Trab

    2016-01-01

    OBJECTIVE: To investigate (i) associations between adolescents' frequency of breakfast and family functioning (close relations to parents, quality of family communication and family support) and (ii) if any observed associations between breakfast frequency and family functioning vary ... (n 3054) from a random sample of forty-one schools. RESULTS: Nearly one-quarter of the adolescents had low breakfast frequency. Low breakfast frequency was associated with low family functioning measured by three dimensions. The OR (95 % CI) of low breakfast frequency was 1·81 (1·40, 2·33) for adolescents who reported no close relations to parents, 2·28 (1·61, 3·22) for adolescents who reported low level of quality of family communication and 2·09 (1·39, 3·15) for adolescents who reported low level of family support. Joint effect analyses suggested that the odds of low breakfast frequency among...

  9. Maximum known stages and discharges of New York streams and their annual exceedance probabilities through September 2011

    Science.gov (United States)

    Wall, Gary R.; Murray, Patricia M.; Lumia, Richard; Suro, Thomas P.

    2014-01-01

    Maximum known stages and discharges at 1,400 sites on 796 streams within New York are tabulated. Stage data are reported in feet. Discharges are reported as cubic feet per second and in cubic feet per second per square mile. Drainage areas range from 0.03 to 298,800 square miles; excluding the three sites with larger drainage areas on the St. Lawrence and Niagara Rivers, which drain the Great Lakes, the maximum drainage area is 8,288 square miles (Hudson River at Albany). Most data were obtained from U.S. Geological Survey (USGS) compilations and records, but some were provided by State, local, and other Federal agencies and by private organizations. The stage and discharge information is grouped by major drainage basins and U.S. Geological Survey site number, in downstream order. Site locations and their associated drainage area, period(s) of record, stage and discharge data, and flood-frequency statistics are compiled in a Microsoft Excel spreadsheet. Flood frequencies were derived for 1,238 sites by using methods described in Bulletin 17B (Interagency Advisory Committee on Water Data, 1982), Ries and Crouse (2002), and Lumia and others (2006). Curves that “envelope” maximum discharges within their range of drainage areas were developed for each of six flood-frequency hydrologic regions and for sites on Long Island, as well as for the State of New York; the New York curve was compared with a curve derived from a plot of maximum known discharges throughout the United States. Discharges represented by the national curve range from at least 2.7 to 4.9 times greater than those represented by the New York curve for drainage areas of 1.0 and 1,000 square miles. The relative magnitudes of discharge and runoff in the six hydrologic regions of New York and Long Island suggest the largest known discharges per square mile are in the southern part of western New York and the Catskill Mountain area, and the smallest are on Long Island.

  10. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.

  11. Acoustic emission source location in plates using wavelet analysis and cross time frequency spectrum.

    Science.gov (United States)

    Mostafapour, A; Davoodi, S; Ghareaghaji, M

    2014-12-01

    In this study, the theories of wavelet transform and cross time-frequency spectrum (CTFS) are used to locate an AE source with frequency-varying wave velocity in plate-type structures. A rectangular array of four sensors is installed on the plate. When an impact is generated by an artificial AE source, such as the Hsu-Nielsen method of pencil lead breaking (PLB), at any position on the plate, the AE signals are detected by the four sensors at different times. By wavelet packet decomposition, a packet of signals with a frequency range of 0.125-0.25 MHz is selected. The CTFS is calculated by the short-time Fourier transform of the cross-correlation between the considered packets captured by the AE sensors. The time delay is taken where the CTFS reaches its maximum value, and the corresponding frequency is extracted at this maximum. The resulting frequency is used, in combination with the dispersion curve, to calculate the group velocity of the wave. The resulting location error shows the high precision of the proposed algorithm. Copyright © 2014 Elsevier B.V. All rights reserved.
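The core of the localisation is a time-delay estimate between sensor pairs. The sketch below shows only that step, using a plain cross-correlation peak on a simulated band-limited burst; the sampling rate, burst shape and noise level are assumptions, and the paper's refinement via the cross time-frequency spectrum is not reproduced here.

```python
# Time-delay estimation between two simulated AE sensor signals by cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 2.0e6                                    # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 2e-3, 1 / fs)
burst = np.exp(-((t - 2e-4) / 5e-5) ** 2) * np.sin(2 * np.pi * 1.8e5 * t)   # simulated AE burst
delay_samples = 37                            # simulated propagation delay between sensors
s1 = burst + 0.02 * rng.standard_normal(t.size)
s2 = np.roll(burst, delay_samples) + 0.02 * rng.standard_normal(t.size)

corr = correlate(s2, s1, mode="full")         # cross-correlate the two sensor signals
lags = correlation_lags(s2.size, s1.size, mode="full")
lag = lags[np.argmax(corr)]                   # lag of the correlation maximum
print("estimated delay: %d samples = %.2f us" % (lag, 1e6 * lag / fs))
```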

  12. Characterization and comprehension of corona partial discharge in air under power frequency to very low frequency voltage

    Science.gov (United States)

    Yuanxiang, ZHOU; Zhongliu, ZHOU; Ling, ZHANG; Yunxiao, ZHANG; Yajun, MO; Jiantao, SUN

    2018-05-01

    For the partial discharge test of electrical equipment with large capacitance, the use of low-frequency voltage instead of power frequency voltage can effectively reduce the capacity requirements of test power supply. However, the validity of PD test under low frequency voltage needs to be evaluated. In order to investigate the influence of voltage frequency on corona discharge in the air, the discharge test of the tip-plate electrode under the frequency from 50 to 0.1 Hz is carried out based on the impulse current method. The results show that some of the main features of corona under low frequency do not change. The magnitude of discharge in a positive half cycle is obviously larger than that in a negative cycle. The magnitude of discharge and interval in positive cycle are random, while that in negative cycle are regular. With the decrease of frequency, the inception voltage increases. The variation trend of maximum and average magnitude and repetition rate of the discharge in positive and negative half cycle with the variation of voltage frequency and magnitude is demonstrated, with discussion and interpretation from the aspects of space charge transportation, effective discharge time and transition of discharge modes. There is an obvious difference in the phase resolved pattern of partial discharge and characteristic parameters of discharge patterns between power and low frequency. The experimental results can be the reference for mode identification of partial discharge under low frequency tests. The trend of the measured parameters with the variation of frequency provides more information about the insulation defect than traditional measurements under a single frequency (usually 50 Hz). Also it helps to understand the mechanism of corona discharge with an explanation of the characteristics under different frequencies.

  13. Effect of Solar Radiation on Viscoelastic Properties of Bovine Leather: Temperature and Frequency Scans

    Science.gov (United States)

    Nalyanya, Kallen Mulilo; Rop, Ronald K.; Onyuka, Arthur S.

    2017-04-01

    This work presents both analytical and experimental results of the effect of unfiltered natural solar radiation on the thermal and dynamic mechanical properties of Boran bovine leather at both the pickling and tanning stages of preparation. Samples of appropriate dimensions cut from both pickled and tanned pieces of leather were exposed to unfiltered natural solar radiation for time intervals ranging from 0 h (non-irradiated) to 24 h. The temperature of the dynamic mechanical analyzer was equilibrated at 30°C and increased to 240°C at a heating rate of 5°C/min, while its oscillation frequency varied from 0.1 Hz to 100 Hz. With the help of the thermal analysis (TA) control software, which analyzes and generates parameter means/averages over the temperature/frequency range, the graphs were created in Microsoft Excel 2013 from these means. The viscoelastic properties showed linear frequency dependence from 0.1 Hz to 30 Hz followed by negligible frequency dependence above 30 Hz. Storage modulus (E') and shear stress (σ) increased with frequency, while loss modulus (E''), complex viscosity (η*) and dynamic shear viscosity (η) decreased linearly with frequency. The effect of solar radiation was evident as the properties increased initially from 0 h to 6 h of irradiation, followed by a steady decline to a minimum at 18 h and then a drastic increase to a maximum at 24 h. Hence, the tanning industry can consider a duration of 24 h for sun-drying of leather to enhance the mechanical properties and hence the quality of the leather. At frequencies higher than 30 Hz, the dynamic mechanical properties are independent of frequency. The frequency of 30 Hz was observed to be a critical value in the behavior of the mechanical properties of bovine hide.

  14. Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions

    NARCIS (Netherlands)

    Herman, P.M.J.; Heip, C.H.R.

    1999-01-01

    In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available

  15. Evaluation of a lower-powered analyzer and sampling system for eddy-covariance measurements of nitrous oxide fluxes

    Directory of Open Access Journals (Sweden)

    S. E. Brown

    2018-03-01

    Full Text Available Nitrous oxide (N2O fluxes measured using the eddy-covariance method capture the spatial and temporal heterogeneity of N2O emissions. Most closed-path trace-gas analyzers for eddy-covariance measurements have large-volume, multi-pass absorption cells that necessitate high flow rates for ample frequency response, thus requiring high-power sample pumps. Other sampling system components, including rain caps, filters, dryers, and tubing, can also degrade system frequency response. This field trial tested the performance of a closed-path eddy-covariance system for N2O flux measurements with improvements to use less power while maintaining the frequency response. The new system consists of a thermoelectrically cooled tunable diode laser absorption spectrometer configured to measure both N2O and carbon dioxide (CO2. The system features a relatively small, single-pass sample cell (200 mL that provides good frequency response with a lower-powered pump ( ∼  250 W. A new filterless intake removes particulates from the sample air stream with no additional mixing volume that could degrade frequency response. A single-tube dryer removes water vapour from the sample to avoid the need for density or spectroscopic corrections, while maintaining frequency response. This eddy-covariance system was collocated with a previous tunable diode laser absorption spectrometer model to compare N2O and CO2 flux measurements for two full growing seasons (May 2015 to October 2016 in a fertilized cornfield in Southern Ontario, Canada. Both spectrometers were placed outdoors at the base of the sampling tower, demonstrating ruggedness for a range of environmental conditions (minimum to maximum daily temperature range: −26.1 to 31.6 °C. The new system rarely required maintenance. An in situ frequency-response test demonstrated that the cutoff frequency of the new system was better than the old system (3.5 Hz compared to 2.30 Hz and similar to that of a closed

  16. Evaluation of a lower-powered analyzer and sampling system for eddy-covariance measurements of nitrous oxide fluxes

    Science.gov (United States)

    Brown, Shannon E.; Sargent, Steve; Wagner-Riddle, Claudia

    2018-03-01

    Nitrous oxide (N2O) fluxes measured using the eddy-covariance method capture the spatial and temporal heterogeneity of N2O emissions. Most closed-path trace-gas analyzers for eddy-covariance measurements have large-volume, multi-pass absorption cells that necessitate high flow rates for ample frequency response, thus requiring high-power sample pumps. Other sampling system components, including rain caps, filters, dryers, and tubing, can also degrade system frequency response. This field trial tested the performance of a closed-path eddy-covariance system for N2O flux measurements with improvements to use less power while maintaining the frequency response. The new system consists of a thermoelectrically cooled tunable diode laser absorption spectrometer configured to measure both N2O and carbon dioxide (CO2). The system features a relatively small, single-pass sample cell (200 mL) that provides good frequency response with a lower-powered pump ( ˜ 250 W). A new filterless intake removes particulates from the sample air stream with no additional mixing volume that could degrade frequency response. A single-tube dryer removes water vapour from the sample to avoid the need for density or spectroscopic corrections, while maintaining frequency response. This eddy-covariance system was collocated with a previous tunable diode laser absorption spectrometer model to compare N2O and CO2 flux measurements for two full growing seasons (May 2015 to October 2016) in a fertilized cornfield in Southern Ontario, Canada. Both spectrometers were placed outdoors at the base of the sampling tower, demonstrating ruggedness for a range of environmental conditions (minimum to maximum daily temperature range: -26.1 to 31.6 °C). The new system rarely required maintenance. An in situ frequency-response test demonstrated that the cutoff frequency of the new system was better than the old system (3.5 Hz compared to 2.30 Hz) and similar to that of a closed-path CO2 eddy-covariance system (4

  17. Anomalous frequency-dependent ionic conductivity of lesion-laden human-brain tissue

    Science.gov (United States)

    Emin, David; Akhtari, Massoud; Fallah, Aria; Vinters, Harry V.; Mathern, Gary W.

    2017-10-01

    We study the effect of lesions on our four-electrode measurements of the ionic conductivity of (˜1 cm3) samples of human brain excised from patients undergoing pediatric epilepsy surgery. For most (˜94%) samples, the low-frequency ionic conductivity rises upon increasing the applied frequency. We attributed this behavior to the long-range (˜0.4 mm) diffusion of solvated sodium cations before encountering intrinsic impenetrable blockages such as cell membranes, blood vessels, and cell walls. By contrast, the low-frequency ionic conductivity of some (˜6%) brain-tissue samples falls with increasing applied frequency. We attribute this unusual frequency-dependence to the electric-field induced liberation of sodium cations from traps introduced by the unusually severe pathology observed in samples from these patients. Thus, the anomalous frequency-dependence of the ionic conductivity indicates trap-producing brain lesions.

  18. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  19. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less than systems that record and analyze data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  20. Auto-identification of engine fault acoustic signal through inverse trigonometric instantaneous frequency analysis

    Directory of Open Access Journals (Sweden)

    Dayong Ning

    2016-03-01

    Full Text Available The acoustic signals of internal combustion engines contain valuable information about the condition of the engines. These signals can be used to detect incipient faults in engines. However, the signals are complex and composed of a fault-related component and background noise. As such, the characteristics of engine conditions are difficult to extract through wavelet transformation and acoustic emission techniques. In this study, an instantaneous frequency analysis method is proposed. A new time–frequency model was constructed using a fixed-amplitude, variable-cycle sine function to fit adjacent points gradually from a time-domain signal. The instantaneous frequency corresponds to a single value at any time. The study also introduces an instantaneous frequency calculation based on an inverse trigonometric fitting method at any time. The mean value of all local maximum values is then used to identify the engine condition automatically. Results revealed that the mean of local maximum values under faulty conditions differs from the normal mean. An experimental case is also presented to illustrate the applicability of the proposed method. Using the proposed time–frequency model, engine condition can be identified and abnormal sound produced by faulty engines can be determined.
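To make the indicator concrete, the sketch below computes an instantaneous frequency and then takes the mean of its local maxima as the condition index. It deliberately swaps in a Hilbert-transform instantaneous frequency for the paper's inverse-trigonometric sine fitting, uses a synthetic frequency-modulated signal rather than engine audio, and leaves any fault threshold unspecified.

```python
# Mean-of-local-maxima indicator on a Hilbert-based instantaneous frequency (toy signal).
import numpy as np
from scipy.signal import hilbert, argrelextrema

fs = 20000.0
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 300 * t + 5 * np.sin(2 * np.pi * 10 * t))   # toy FM signal

analytic = hilbert(x)                           # analytic signal via Hilbert transform
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

peaks = argrelextrema(inst_freq, np.greater)[0]            # local maxima of the IF curve
indicator = inst_freq[peaks].mean()                        # mean of the local maximum values
print("mean of local instantaneous-frequency maxima: %.1f Hz" % indicator)
# A fault would be flagged when this indicator departs from a normal-condition baseline.
```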

  1. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  2. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
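The differentiation step described above can equally be carried out numerically. The sketch below maximizes P(V) = V·I(V) for a single-diode cell model; the short-circuit current, saturation current and thermal-voltage values are illustrative assumptions, not data from the article.

```python
# Numeric maximum-power-point search for a single-diode solar-cell model (illustrative values).
import numpy as np

I_sc, I_0, V_t = 3.0, 1e-9, 0.025 * 36        # short-circuit current (A), saturation current (A),
                                              # thermal voltage times number of cells (V)

def current(v):
    return I_sc - I_0 * (np.exp(v / V_t) - 1.0)

v = np.linspace(0.0, 22.0, 20001)
p = v * current(v)
k = np.argmax(p)                              # discrete stand-in for solving dP/dV = 0
print("V_mp = %.2f V, I_mp = %.2f A, P_max = %.1f W" % (v[k], current(v[k]), p[k]))
```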

  3. Vibrational compacting of UO2 samples in the cladding; Vibraciono kompaktiranje uzoraka UO2 u zastitnoj kosuljici

    Energy Technology Data Exchange (ETDEWEB)

    Ristic, M M [Institute of Nuclear Sciences Vinca, Laboratorija za reaktorske materijale, Beograd (Serbia and Montenegro)

    1962-12-15

    Vibrational compacting was considered as a feasible method for fuel element fabrication. This report describes the calibration of the vibrational compacting device. Vibrational compacting of UO2 was investigated. The obtained densities were not higher than 42% of the theoretical value, i.e. 70% of the possible compacting density. The influence of frequency, acceleration, power and time on the compacted samples was tested. Optimal conditions for UO2 compacting were as follows: frequency range from 2500 - 4000 Hz; acceleration range from 40 - 100 Hz; maximum power; time of compacting ~ 5 min. A comparative evaluation of UO2 and SiO2 powders was done in order to improve the future development of this method for fabrication of fuel elements.

  4. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)

  5. High frequency conductivity of hot electrons in carbon nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Amekpewu, M., E-mail: mamek219@gmail.com [Department of Applied Physics, University for Development Studies, Navrongo (Ghana); Mensah, S.Y. [Department of Physics, College of Agriculture and Natural Sciences, U.C.C. (Ghana); Musah, R. [Department of Applied Physics, University for Development Studies, Navrongo (Ghana); Mensah, N.G. [Department of Mathematics, College of Agriculture and Natural Sciences, U.C.C. (Ghana); Abukari, S.S.; Dompreh, K.A. [Department of Physics, College of Agriculture and Natural Sciences, U.C.C. (Ghana)

    2016-05-01

    High frequency conductivity of hot electrons in undoped single-walled achiral carbon nanotubes (CNTs) under the influence of ac–dc driven fields was considered. We investigated the Boltzmann transport equation semi-classically, with and without the presence of a hot-electron source, by deriving the current densities in CNTs. Plots of the normalized current density versus the frequency of the ac field revealed an increase in both the minimum and maximum peaks of normalized current density at lower frequencies as a result of a strong injection of hot electrons. The applied ac field plays a twofold role: it suppresses the space-charge instability in CNTs and simultaneously pumps energy for lower-frequency generation and amplification of THz radiation. These have enormously promising applications in very different areas of science and technology.

  6. Frequency-Modulation Correlation Spectrometer

    Science.gov (United States)

    Margolis, J. S.; Martonchik, J. V.

    1985-01-01

    New type of correlation spectrometer eliminates need to shift between two cells, one empty and one containing reference gas. Electrooptical phase modulator sinusoidally shifts frequencies of sample transmission spectrum.

  7. On the undesired frequency chirping in photonic time-stretch systems

    Science.gov (United States)

    Xu, Yuxiao; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-12-01

    The technique of photonic time stretch (PTS) has been intensively investigated in the past decade due to its potential in the acquisition of ultra-high speed signals. The frequency-related RF power fading in PTS systems with double sideband (DSB) modulation is well known and limits the maximum modulation frequency. Some solutions have been proposed to solve this problem. In this paper, we report another effect, i.e., undesired frequency chirping, which also relates to the performance degradation of PTS systems with DSB modulation, for the first time to our knowledge. Distinct from the nonlinearities caused by nonlinear modulation and square-law photodetection, which are common in radio frequency analog optical links, this frequency chirping originates from the addition of two beating signals with a relative delay after photodetection. A theoretical model for exactly describing the frequency chirping is presented, and is then verified by simulations. Discussion on the method to avoid the frequency chirping is also presented.

  8. Frequency mixing in boron carbide laser ablation plasmas

    Science.gov (United States)

    Oujja, M.; Benítez-Cañete, A.; Sanz, M.; Lopez-Quintas, I.; Martín, M.; de Nalda, R.; Castillejo, M.

    2015-05-01

    Nonlinear frequency mixing induced by a bichromatic field (1064 nm + 532 nm obtained from a Q-switched Nd:YAG laser) in a boron carbide (B4C) plasma generated through laser ablation under vacuum is explored. A UV beam at the frequency of the fourth harmonic of the fundamental frequency (266 nm) was generated. The dependence of the efficiency of the process on the intensities of the driving lasers differs from the expected behavior for four-wave mixing, and points toward a six-wave mixing process. The frequency mixing process was strongly favored for parallel polarizations of the two driving beams. Through spatiotemporal mapping, the conditions for maximum efficiency were found for a significant delay from the ablation event (200 ns), when the medium is expected to be a weakly ionized plasma. No late components of the harmonic signal were detected, indicating a largely atomized medium.

  9. Admittance of multiterminal quantum Hall conductors at kilohertz frequencies

    International Nuclear Information System (INIS)

    Hernández, C.; Consejo, C.; Chaubet, C.; Degiovanni, P.

    2014-01-01

    We present an experimental study of the low frequency admittance of quantum Hall conductors in the [100 Hz, 1 MHz] frequency range. We show that the frequency dependence of the admittance of the sample strongly depends on the topology of the contacts connections. Our experimental results are well explained within the Christen and Büttiker approach for finite frequency transport in quantum Hall edge channels taking into account the influence of the coaxial cables capacitance. In the Hall bar geometry, we demonstrate that there exists a configuration in which the cable capacitance does not influence the admittance measurement of the sample. In this case, we measure the electrochemical capacitance of the sample and observe its dependence on the filling factor

  10. Admittance of multiterminal quantum Hall conductors at kilohertz frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, C. [Departamento de Física, Universidad Militar Nueva Granada, Carrera 11 101-80 Bogotá D.C. (Colombia); Consejo, C.; Chaubet, C., E-mail: christophe.chaubet@univ-montp2.fr [Université Montpellier 2, Laboratoire Charles Coulomb UMR5221, F-34095 Montpellier, France and CNRS, Laboratoire Charles Coulomb UMR5221, F-34095 Montpellier (France); Degiovanni, P. [Université de Lyon, Fédération de Physique Andrée Marie Ampère, CNRS, Laboratoire de Physique de l' Ecole Normale Supérieure de Lyon, 46 allée d' Italie, 69364 Lyon Cedex 07 (France)

    2014-03-28

    We present an experimental study of the low frequency admittance of quantum Hall conductors in the [100 Hz, 1 MHz] frequency range. We show that the frequency dependence of the admittance of the sample strongly depends on the topology of the contacts connections. Our experimental results are well explained within the Christen and Büttiker approach for finite frequency transport in quantum Hall edge channels taking into account the influence of the coaxial cables capacitance. In the Hall bar geometry, we demonstrate that there exists a configuration in which the cable capacitance does not influence the admittance measurement of the sample. In this case, we measure the electrochemical capacitance of the sample and observe its dependence on the filling factor.

  11. Polymorphisms in the Innate Immune IFIH1 Gene, Frequency of Enterovirus in Monthly Fecal Samples during Infancy, and Islet Autoimmunity

    Science.gov (United States)

    Witsø, Elisabet; Tapia, German; Cinek, Ondrej; Pociot, Flemming Michael; Stene, Lars C.; Rønningen, Kjersti S.

    2011-01-01

    Interferon induced with helicase C domain 1 (IFIH1) senses and initiates antiviral activity against enteroviruses. Genetic variants of IFIH1, one common and four rare SNPs have been associated with lower risk for type 1 diabetes. Our aim was to test whether these type 1 diabetes-associated IFIH1 polymorphisms are associated with the occurrence of enterovirus infection in the gut of healthy children, or influence the lack of association between gut enterovirus infection and islet autoimmunity. After testing of 46,939 Norwegian newborns, 421 children carrying the high risk genotype for type 1 diabetes (HLA-DR4-DQ8/DR3-DQ2) as well as 375 children without this genotype were included for monthly fecal collections from 3 to 35 months of age, and genotyped for the IFIH1 polymorphisms. A total of 7,793 fecal samples were tested for presence of enterovirus RNA using real time reverse transcriptase PCR. We found no association with frequency of enterovirus in the gut for the common IFIH1 polymorphism rs1990760, or either of the rare variants of rs35744605, rs35667974, rs35337543, while the enterovirus prevalence marginally differed in samples from the 8 carriers of a rare allele of rs35732034 (26.1%, 18/69 samples) as compared to wild-type homozygotes (12.4%, 955/7724 samples); odds ratio 2.5, p = 0.06. The association was stronger when infections were restricted to those with high viral loads (odds ratio 3.3, 95% CI 1.3–8.4, p = 0.01). The lack of association between enterovirus frequency and islet autoimmunity reported in our previous study was not materially influenced by the IFIH1 SNPs. We conclude that the type 1 diabetes-associated IFIH1 polymorphisms have no, or only minor influence on the occurrence, quantity or duration of enterovirus infection in the gut. Its effect on the risk of diabetes is likely to lie elsewhere in the pathogenic process than in the modification of gut infection. PMID:22110759

  12. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  13. Utilization of negative beat-frequencies for maximizing the update-rate of OFDR

    Science.gov (United States)

    Gabai, Haniel; Botsev, Yakov; Hahami, Meir; Eyal, Avishay

    2015-07-01

    In traditional OFDR systems, the backscattered profile of a sensing fiber is inefficiently duplicated into the negative band of the spectrum. In this work, we present a new OFDR design and algorithm that remove this redundancy and make use of negative beat frequencies. In contrast to conventional OFDR designs, it facilitates efficient use of the available system bandwidth and enables distributed sensing with the maximum allowable interrogation update-rate for a given fiber length. To enable the reconstruction of negative beat frequencies, an I/Q type receiver is used. In this receiver, both the in-phase (I) and quadrature (Q) components of the backscatter field are detected. Following detection, both components are digitally combined to produce a complex backscatter signal. Accordingly, due to its asymmetric nature, the resulting spectrum is not corrupted by the appearance of negative beat frequencies. Here, via a comprehensive computer simulation, we show that, in contrast to conventional OFDR systems, I/Q OFDR can be operated at the maximum interrogation update-rate for a given fiber length. In addition, we experimentally demonstrate, for the first time, the ability of I/Q OFDR to utilize negative beat frequencies for long-range distributed sensing.
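The benefit of the I/Q combination can be seen with a toy spectrum. In the sketch below, two beat notes at +120 Hz and -300 Hz (arbitrary, made-up values) are indistinguishable in a real-only spectrum but separate cleanly once the in-phase and quadrature channels are combined into a complex signal.

```python
# Toy illustration: real-only detection folds +/- beat frequencies together; I/Q keeps them apart.
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 120.0, -300.0                          # two reflectors mapping to +/- beat frequencies
i = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)   # in-phase channel
q = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)   # quadrature channel

freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
spec_real = np.abs(np.fft.fftshift(np.fft.fft(i)))            # real-only detection
spec_iq = np.abs(np.fft.fftshift(np.fft.fft(i + 1j * q)))     # complex I/Q signal
print("real-only peaks:", freqs[spec_real > 0.5 * spec_real.max()])  # +/-120 and +/-300 Hz, ambiguous
print("I/Q peaks:      ", freqs[spec_iq > 0.5 * spec_iq.max()])      # +120 and -300 Hz, resolved
```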

  14. Design and Manufacture an Ultrasonic Dispersion System with Automatic Frequency Adjusting Property

    Directory of Open Access Journals (Sweden)

    Herlina ABDUL RAHIM

    2011-03-01

    Full Text Available In this paper a novel ultrasonic dispersion system for cleaning applications or for dispersing particles mixed in a liquid is proposed. The frequency band of the designed system is 30 kHz wide: the frequency of the ultrasonic wave sweeps from 30 kHz to 60 kHz in 100 Hz steps. One advantage of the manufactured system compared with similar systems available on the market is that it can transfer the maximum, optimum ultrasonic energy into the liquid tank with high efficiency over the whole usage time of the system. The ultrasonic transducers used in this system as the generators of the ultrasonic wave are air-coupled ceramic piezoelectric elements with a nominal maximum power of 50 W. Each piezoelectric element produces its maximum ultrasonic amplitude at its resonance frequency, so the system is designed to operate at the resonance frequency of the piezoelectric continuously. This is done by means of a control system consisting of two major parts, a sensing part and a controlling part. The manufactured ultrasonic dispersion system consists of 9 piezoelectric elements, so it can produce 450 W of ultrasonic power in total. The main purpose of this project is to produce a safety system, especially for fatigued car drivers, so as to prevent accidents. Statistics on road fatalities show that human error accounts for 64.84 % of road accident fatalities and 17.4 % are due to technical factors. These systems encompass the approach of hand pressure applied on the steering wheel. The steering wheel will be fitted with pressure sensors. At the same time, these sensors can be used to measure gripping force while driving.

  15. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
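For reference, the plain (unmodified) maximum-likelihood estimator of the two-parameter power function distribution f(x) = g·x^(g-1)/theta^g on (0, theta) is theta_hat = max x_i and g_hat = n / sum(ln(theta_hat/x_i)), and its sampling behaviour can be checked with a small Monte Carlo run like the sketch below. The true parameter values, sample size and replicate count are arbitrary, and the paper's modified estimators are not reproduced here.

```python
# Monte Carlo bias/MSE check of the plain MLE for the power function distribution.
import numpy as np

rng = np.random.default_rng(1)
theta_true, g_true, n, reps = 5.0, 2.0, 30, 5000

theta_hat = np.empty(reps)
g_hat = np.empty(reps)
for r in range(reps):
    x = theta_true * rng.uniform(size=n) ** (1.0 / g_true)   # inverse-CDF sampling
    theta_hat[r] = x.max()                                   # MLE of theta
    g_hat[r] = n / np.sum(np.log(theta_hat[r] / x))          # MLE of the shape g

for name, est, true in [("theta", theta_hat, theta_true), ("g", g_hat, g_true)]:
    bias = est.mean() - true
    mse = np.mean((est - true) ** 2)
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```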

  16. Frequency and temperature dependent mobility of a charged carrier and randomly interrupted strand

    International Nuclear Information System (INIS)

    Kumar, N.; Jayannavar, A.M.

    1981-05-01

    A randomly interrupted strand model of a one-dimensional conductor is considered. An exact analytical expression is obtained for the temperature-dependent ac mobility of a finite segment drawn at random, taking into account the reflecting barriers at the two open ends. The real part of the mobility shows a broad resonance as a function of both frequency and temperature, and vanishes quadratically in the dc limit. The frequency (temperature) maximum shifts to higher values for higher temperatures (frequencies). (author)

  17. The MCE (Maximum Credible Earthquake) - an approach to reduction of seismic risk

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchison, R.J.

    1979-01-01

    It is the responsibility of the Regulatory Body (in Canada, the AECB) to ensure that radiological risks resulting from the effects of earthquakes on nuclear facilities do not exceed acceptable levels. In simplified numerical terms this means that the frequency of an unacceptable radiation dose must be kept below 10⁻⁶ per annum. Unfortunately, seismic events fall into the class of external events which are not well defined at these low frequency levels. Thus, design earthquakes have been chosen at the 10⁻³–10⁻⁴ frequency level, a level commensurate with the limits of statistical data. There exists, therefore, a need to define an additional level of earthquake. A seismic design explicitly and implicitly recognizes three levels of earthquake loading; one comfortably below yield, one at or about yield, and one at ultimate. The ultimate level earthquake, contrary to the first two, has been implicitly addressed by conscientious designers by choosing systems, materials and details compatible with postulated dynamic forces. It is the purpose of this paper to discuss the regulatory specifications required to quantify this third level, or Maximum Credible Earthquake (MCE). (orig.)

  18. Enhanced magnetic domain relaxation frequency and low power losses in Zn{sup 2+} substituted manganese ferrites potential for high frequency applications

    Energy Technology Data Exchange (ETDEWEB)

    Praveena, K., E-mail: praveenaou@gmail.com [Department of Physics, National Taiwan Normal University, Taipei, 11677, Taiwan (China); Chen, Hsiao-Wen [Department of Physics, National Taiwan Normal University, Taipei, 11677, Taiwan (China); Liu, Hsiang-Lin, E-mail: hliu@ntnu.edu.tw [Department of Physics, National Taiwan Normal University, Taipei, 11677, Taiwan (China); Sadhana, K., E-mail: sadhana@osmania.ac.in [Department of Physics, Osmania University, Saifabad, Hyderabad, 500004 (India); Murthy, S.R. [Department of Physics, Osmania University, Hyderabad, 500007 (India)

    2016-12-15

    Nowadays the electronics industry requires magnetic materials, i.e., iron-rich materials and their magnetic alloys. However, with the advent of high frequency applications, the standard techniques for reducing eddy current losses using iron cores were no longer efficient or cost effective. Current market trends in the switched-mode power supply industry require ever lower energy losses in power conversion while maintaining adequate initial permeability. From this point of view, the present study aimed at the production of manganese–zinc ferrites prepared via a solution combustion method using a mixture of fuels, and achieved low loss, high saturation magnetization, high permeability, and high magnetic domain relaxation frequency. The as-synthesized Zn2+ substituted MnFe2O4 samples were characterized by X-ray diffractometer (XRD) and transmission electron microscopy (TEM). The fractions of Mn2+, Zn2+ and Fe2+ cations occupying tetrahedral sites, along with Fe occupying octahedral sites, within the unit cell of all ferrite samples were estimated by Raman scattering spectroscopy. The magnetic domain relaxation was investigated by inductance spectroscopy (IS), and the observed magnetic domain relaxation frequency (f_r) increased with increasing grain size. The real and imaginary parts of the permeability (μ′ and μ″) increased with frequency and showed a maximum above 100 MHz. This can be explained on the basis of spin rotation and domain wall motion. The saturation magnetization (M_s), remnant magnetization (M_r) and magneton number (µ_B) decreased gradually with increasing Zn2+ concentration. The decrease in the saturation magnetization is discussed with the Yafet–Kittel (Y–K) model. The Zn2+ substitution increases the relative number of ferric ions on the A sites and reduces the A–B interactions. The frequency-dependent total power losses decreased as the zinc concentration increased

  19. Allowable shipment frequencies for the transport of toxic gases near nuclear power plants

    International Nuclear Information System (INIS)

    Bennett, D.E.; Heath, D.C.

    1982-10-01

    One part of the safety analysis of offsite hazards for a nuclear power plant is consideration of accidents which could release toxic gases or vapors and thus jeopardize plant safety through incapacitation of the control room operators. The purpose of this work is to provide generic, bounding estimates of the maximum allowable shipping frequencies for the transport of a chemical near the plant, such that the regulatory criteria for the protection of the operators are met. A probabilistic methodology was developed and then applied to the truck and rail transport of an example chemical, chlorine. The current regulatory criteria are discussed in detail. For this study, a maximum allowable probability of occurrence of operator incapacitation of 10⁻⁵ per year was used in the example calculation for each mode of transport. Comprehensive tables of conditional probabilities are presented. Maximum allowable shipping frequencies are then derived. These frequencies could be used as part of a generic, bounding criterion for the screening of toxic hazards safety analyses. Unless a transport survey assures shipping frequencies within 8 km of the plant on the order of or lower than 4/week for rail or 35/week for truck, the control room should be isolatable, and the shipping frequency then determines the degree of isolation needed. The need for isolation implies the need for toxic chemical detection at the air intake. For a self-detection case in which the smell threshold is significantly lower than the incapacitation threshold and the control room is isolatable, the corresponding frequencies are 11/week for rail or 115/week for truck. Self-contained breathing equipment is assumed to be used after 5 minutes
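The screening arithmetic implied by such a criterion is straightforward, as sketched below with made-up numbers: the allowable number of shipments per year is the annual risk criterion divided by the conditional probability that a single shipment incapacitates the operators. The per-shipment probability here is purely hypothetical and does not reproduce the report's tabulated values.

```python
# Back-of-envelope screening arithmetic with hypothetical numbers.
criterion = 1e-5                 # maximum allowed probability of incapacitation per year
p_per_shipment = 5e-9            # hypothetical conditional probability per shipment
max_shipments_per_year = criterion / p_per_shipment
print("allowable shipments per year: %.0f (= %.1f per week)"
      % (max_shipments_per_year, max_shipments_per_year / 52.0))
```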

  20. Digital frequency offset-locked He–Ne laser system with high beat frequency stability, narrow optical linewidth and optical fibre output

    Science.gov (United States)

    Sternkopf, Christian; Manske, Eberhard

    2018-06-01

    We report on the enhancement of a previously-presented heterodyne laser source on the basis of two phase-locked loop (PLL) frequency coupled internal-mirror He–Ne lasers. Our new system consists of two digitally controlled He–Ne lasers with slightly different wavelengths, and offers high-frequency stability and very narrow optical linewidth. The digitally controlled system has been realized by using a FPGA controller and transconductance amplifiers. The light of both lasers was coupled into separate fibres for heterodyne interferometer applications. To enhance the laser performance we observed the sensitivity of both laser tubes to electromagnetic noise from various laser power supplies and frequency control systems. Furthermore, we describe how the linewidth of a frequency-controlled He–Ne laser can be reduced during precise frequency stabilisation. The digitally controlled laser source reaches a standard beat frequency deviation of less than 20 Hz (with 1 s gate time) and a spectral full width at half maximum (FWHM) of the beat signal less than 3 kHz. The laser source has enough optical output power to serve a fibre-coupled multi axis heterodyne interferometer. The system can be adjusted to output beat frequencies in the range of 0.1 MHz–20 MHz.

  1. Tank Farm WM-182 and WM-183 Heel Slurry Samples PSD Results

    International Nuclear Information System (INIS)

    Batcheller, T.A.; Huestis, G.M.

    2000-01-01

    Particle size distribution (PSD) analysis of INTEC Tank Farm WM-182 and WM-183 heel slurry samples was performed using a modified Horiba LA-300 PSD analyzer at the RAL facility. Two types of testing were performed: typical PSD analysis and settling rate testing. Although the heel slurry samples were obtained from two separate vessels, the particle size distribution results were quite similar. The slurry solids ranged from a minimum particle size of approximately 0.5 µm to a maximum of 230 µm, with about 90% of the material between 2 and 133 µm and the cumulative 50% value at approximately 20 µm. This testing also revealed that high frequency sonication with an ultrasonic element may break up larger particles in the WM-182 and WM-183 tank farm heel slurries. This finding represents useful information regarding ultimate tank heel waste processing. Settling rate testing results were also fairly consistent for material from both vessels, in that most of the mass of solids appears to settle to an agglomerated, yet easily redispersed, layer at the bottom. Dispersed, suspended material remained in the ''clear'' layer above the settled layer after about half an hour of settling time. This material had a statistical mode of approximately 5 µm and a maximum particle size of 30 µm

  2. Frequency of Cannabis Use and Medical Cannabis Use Among Persons Living With HIV in the United States: Findings From a Nationally Representative Sample.

    Science.gov (United States)

    Pacek, Lauren R; Towe, Sheri L; Hobkirk, Andrea L; Nash, Denis; Goodwin, Renee D

    2018-04-01

    Little is known about cannabis use frequency, medical cannabis use, or correlates of use among persons living with HIV (PLWH) in United States nationally representative samples. Data came from 626 PLWH from the 2005-2015 National Survey on Drug Use and Health. Logistic regression identified characteristics associated with frequency of cannabis use. Chi-square tests identified characteristics associated with medical cannabis use. Non-daily and daily cannabis use was reported by 26.9% and 8.0%, respectively. Greater perceived risk of cannabis use was negatively associated with daily and non-daily use. Younger age, substance use, and binge drinking were positively associated with non-daily cannabis use. Smoking and depression were associated with non-daily and daily use. One-quarter reported medical cannabis use. Medical users were more likely to be White, married, and nondrinkers. Cannabis use was common among PLWH. Findings help to differentiate between cannabis users based on frequency of use and medical versus recreational use.

  3. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    Science.gov (United States)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where signals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion was carried out to derive magnitudes for additional explosions with three or more arrivals; 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
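    The censoring correction described above can be illustrated with a small numerical sketch. The code below is not the authors' implementation; it is a minimal maximum-likelihood estimate of a single event's network magnitude in which stations that did not report an arrival contribute only the probability that the signal fell below an assumed station threshold (a Tobit-style likelihood). The station values, thresholds and the Gaussian inter-station scatter are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical single-event data: station magnitudes (np.nan = no arrival reported)
station_mb = np.array([5.61, 5.48, np.nan, 5.72, np.nan, 5.55])
# Assumed detection thresholds, one per station (same order as above)
thresholds = np.array([4.9, 5.0, 5.4, 4.8, 5.6, 5.1])
sigma = 0.3  # assumed inter-station magnitude scatter

def neg_log_likelihood(mb):
    detected = ~np.isnan(station_mb)
    # Reporting stations: Gaussian density centred on the event magnitude
    ll = norm.logpdf(station_mb[detected], loc=mb, scale=sigma).sum()
    # Non-reporting stations: probability the reading fell below the threshold
    ll += norm.logcdf(thresholds[~detected], loc=mb, scale=sigma).sum()
    return -ll

res = minimize_scalar(neg_log_likelihood, bounds=(3.0, 8.0), method="bounded")
print(f"censored ML mb = {res.x:.2f}, naive network mean = {np.nanmean(station_mb):.2f}")
```

    The censored estimate is typically lower than the naive mean of the reporting stations, which is the upward bias the joint inversion is designed to remove.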

  4. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  5. Frequency and Antibiogram of Vancomycin Resistant Enterococcus in a Tertiary Care Hospital

    International Nuclear Information System (INIS)

    Babar, N.; Usman, J.; Munir, T.; Gill, M. M.; Anjum, R.; Gilani, M.; Latif, M.

    2014-01-01

    Objective: To determine the frequency of Vancomycin Resistant Enterococcus (VRE) in a tertiary care hospital of Rawalpindi, Pakistan. Study Design: Observational, cross-sectional study. Place and Duration of Study: Department of Microbiology, Army Medical College, Rawalpindi, from May 2011 to May 2012. Methodology: Vancomycin resistant Enterococcus isolated from the clinical specimens including blood, pus, double lumen tip, ascitic fluid, tracheal aspirate, non-directed bronchial lavage (NBL), cerebrospinal fluid (CSF), high vaginal swab (HVS) and catheter tips were cultured on blood agar and MacConkey agar, while the urine samples were grown on cystine lactose electrolyte deficient agar. Later the antimicrobial susceptibility testing of the isolates was carried out using the modified Kirby-Bauer disc diffusion method on Mueller Hinton agar. Results: A total of 190 enterococci were isolated. Of these, 22 (11.57%) were found to be resistant to vancomycin. The antimicrobial sensitivity pattern revealed maximum resistance against ampicillin (86.36%) followed by erythromycin (81.81%) and gentamicin (68.18%) while all the isolates were 100% susceptible to chloramphenicol and linezolid. Conclusion: The frequency of VRE was 11.57% with the highest susceptibility to linezolid and chloramphenicol. (author)

  6. Energy drink use frequency among an international sample of people who use drugs: Associations with other substance use and well-being

    OpenAIRE

    Peacock, Amy; Bruno, Raimondo; Ferris, Jason; Winstock, Adam

    2017-01-01

    Objective The study aims were to identify: i.) energy drink (ED), caffeine tablet, and caffeine intranasal spray use amongst a sample who report drug use, and ii.) the association between ED use frequency and demographic profile, drug use, hazardous drinking, and wellbeing. Method Participants (n = 74,864) who reported drug use completed the online 2014 Global Drug Survey. They provided data on demographics, ED use, and alcohol and drug use, completed the Alcohol Use Disorders Identification ...

  7. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genetic mutations. (VT) [de]

  8. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR), and thereby enhance the utilization of the reactor, has been tested using the MCNP4C code. This modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three polyethylene-connected containers filled with water. The total height of the chain was 11.5 cm. The replacement of the actual cadmium absorber with a B(10) absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, respectively. The maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was obtained for the modified core. This increase enhances the utilization of the MNSR reactor for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    This section describes how to determine maximum engine power, displacement, power density, and maximum in-use engine speed (40 CFR 1042.140, Protection of Environment, 2010-07-01). For example, for cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would...

  10. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  11. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  12. Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals

    Science.gov (United States)

    Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei

    2018-01-01

    Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to the computational complexity and limited time constraints. In this paper, a fast and precise methodology based on a sigma-delta modulator is designed and implemented, in which the twiddle factors are generated by a recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the fast Fourier transform, the scheme requires no multiplications and only half the addition operations, by combining the DFT with the Rife algorithm and Fourier-coefficient interpolation. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements on intermediate-frequency, narrowband signals have a measurement error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and a low computational load.
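    As a rough illustration of the DFT-peak interpolation the abstract refers to, the sketch below estimates the frequency of a narrowband tone from the two largest DFT bins using the classical Rife two-point ratio. It is not the authors' recursive sigma-delta implementation, and the signal parameters are made up.

```python
import numpy as np

fs = 10e6          # sampling frequency (Hz), matching the 10 MHz quoted above
n = 4096           # number of samples in the record
f_true = 1.2345e6  # hypothetical intermediate-frequency narrowband tone

t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t) + 0.01 * np.random.default_rng(0).standard_normal(n)

mag = np.abs(np.fft.rfft(x))
k = int(np.argmax(mag))                      # coarse DFT peak bin

# Rife two-point interpolation: shift the estimate towards the larger neighbour
if mag[k + 1] >= mag[k - 1]:
    delta = mag[k + 1] / (mag[k] + mag[k + 1])
else:
    delta = -mag[k - 1] / (mag[k] + mag[k - 1])

f_est = (k + delta) * fs / n
print(f"true {f_true:.1f} Hz, coarse {k * fs / n:.1f} Hz, refined {f_est:.1f} Hz")
```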

  13. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  14. Sequencing analysis of mutations induced by N-ethyl-N-nitrosourea at different sampling times in mouse bone marrow.

    Science.gov (United States)

    Wang, Jianyong; Chen, Tao

    2010-03-01

    In our previous study (Wang et al., 2004, Toxicol. Sci. 82: 124-128), we observed that the cII gene mutant frequency (MF) in the bone marrow of Big Blue mice showed a significant increase as early as day 1, reached its maximum at day 3, and then decreased to a plateau by day 15 after a single dose of the carcinogen N-ethyl-N-nitrosourea (ENU), which differs from the longer mutation manifestation time and the constancy of MFs after reaching their maximum in some other tissues. To determine the mechanism underlying the quick increase in MF and the peak in mutant manifestation, we examined the mutation frequencies and spectra of ENU-induced mutants collected at different sampling times in this study. The cII mutants from days 1, 3 and 120 after ENU treatment were randomly selected from different animals. The mutation frequencies were 33, 217, 305 and 144 × 10⁻⁶ for the control and days 1, 3, and 120, respectively. The mutation spectra at days 1 and 3 were significantly different from that at day 120. Considering that stem cells are responsible for the ultimate MF plateau (day 120) and transit cells are accountable for the earlier MF induction (days 1 or 3) in mouse bone marrow, we conclude that transit cells are much more sensitive to mutation induction than stem cells in mouse bone marrow, which accounts for the specific mutation manifestation induced by ENU.

  15. Dopant density from maximum-minimum capacitance ratio of implanted MOS structures

    International Nuclear Information System (INIS)

    Brews, J.R.

    1982-01-01

    For uniformly doped structures, the ratio of the maximum to the minimum high-frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
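    For a uniformly doped MOS capacitor, the 'max-min' estimate discussed above follows from the textbook relations between the oxide capacitance, the minimum high-frequency capacitance and the maximum depletion width. The sketch below solves those relations for the dopant density with a simple fixed-point iteration; the oxide thickness, the capacitance ratio and the iteration itself are illustrative choices rather than the paper's procedure, and for an implanted profile the returned number is only an effective density.

```python
import numpy as np

q = 1.602e-19                                # elementary charge, C
kT_over_q = 0.02585                          # thermal voltage at 300 K, V
eps_s = 11.7 * 8.854e-12                     # Si permittivity, F/m
eps_ox = 3.9 * 8.854e-12                     # SiO2 permittivity, F/m
ni = 1.0e16                                  # Si intrinsic density, m^-3 (approx.)

def dopant_density_from_ratio(cmax_over_cmin, t_ox):
    """Effective uniform dopant density from the high-frequency Cmax/Cmin ratio."""
    c_ox = eps_ox / t_ox                     # oxide capacitance per unit area
    c_s_min = c_ox / (cmax_over_cmin - 1.0)  # minimum semiconductor capacitance
    w_max = eps_s / c_s_min                  # maximum depletion width
    n = 1.0e21                               # initial guess, m^-3
    for _ in range(50):                      # fixed point: N = 4*eps*phi_B/(q*Wmax^2)
        phi_b = kT_over_q * np.log(n / ni)
        n = 4.0 * eps_s * phi_b / (q * w_max ** 2)
    return n

# Example: 50 nm oxide and a measured Cmax/Cmin of 3.0 (both values hypothetical)
print(f"effective N ~ {dopant_density_from_ratio(3.0, 50e-9):.2e} m^-3")
```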

  16. Prediction of scour depth in gravel bed rivers using radio frequency IDs : application to the Skagit River.

    Science.gov (United States)

    2013-10-01

    The overarching goal of the proposed research was to develop, test and verify a robust system based on the Low Frequency (134.2 : kHz), passive Radio Frequency Identification (RFID) technology to be ultimately used for determining the maximum scour d...

  17. Frequency stabilized HeNe gas laser with 3.5 mW from a single mode

    NARCIS (Netherlands)

    Ellis, J.D.; Voigt, D.; Spronck, J.W.; Verlaan, A.L.; Munnig Schmidt, R.H.

    2012-01-01

    This paper describes an optical frequency stabilization technique using a three-mode Helium Neon laser at 632.8 nm. Using this configuration, a maximum frequency stability relative to an iodine stabilized laser of 6×10⁻¹² (71 s integration time) was achieved. Two long term measurements of 62 h and

  18. Improved capacitance sensor with variable operating frequency for scanning capacitance microscopy

    International Nuclear Information System (INIS)

    Kwon, Joonhyung; Kim, Joonhui; Jeong, Jong-Hwa; Lee, Euy-Kyu; Seok Kim, Yong; Kang, Chi Jung; Park, Sang-il

    2005-01-01

    Scanning capacitance microscopy (SCM) has been gaining attention for its capability to measure local electrical properties in doping profile, oxide thickness, trapped charges and charge dynamics. In many cases, stray capacitance produced by different samples and measurement conditions affects the resonance frequency of a capacitance sensor. The applications of conventional SCM are critically limited by the fixed operating frequency and lack of tunability in its SCM sensor. In order to widen SCM application to various samples, we have developed a novel SCM sensor with variable operating frequency. By performing variable frequency sweep over the band of 160 MHz, the SCM sensor is tuned to select the best and optimized resonance frequency and quality factor for each sample measurement. The fundamental advantage of the new variable frequency SCM sensor was demonstrated in the SCM imaging of silicon oxide nano-crystals. Typical sensitivity of the variable frequency SCM sensor was found to be 10⁻¹⁹ F/V.

  19. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  20. Low-frequency and wideband vibration energy harvester with flexible frame and interdigital structure

    Energy Technology Data Exchange (ETDEWEB)

    Li, Pengwei, E-mail: lipengwei@tyut.edu.cn; Wang, Yanfen; Luo, Cuixian; Li, Gang; Hu, Jie; Zhang, Wendong [MicroNano System Research Center of College of Information Engineering and Key Lab of Advanced Transducers and Intelligent Control System of the Ministry of Education, Taiyuan University of Technology, Taiyuan 030024, Shanxi (China); Liu, Ying [MicroNano System Research Center of College of Information Engineering and Key Lab of Advanced Transducers and Intelligent Control System of the Ministry of Education, Taiyuan University of Technology, Taiyuan 030024, Shanxi (China); Baicheng Ordnance Test Center of China, Baicheng 137000, Jilin (China); Liu, Wei [Baicheng Ordnance Test Center of China, Baicheng 137000, Jilin (China)

    2015-04-15

    As an alternative to traditional cantilever beam structures and their evolutions, a flexible-beam-based, interdigital-structure vibration energy harvester has been presented and investigated. The proposed interdigital-shaped oscillator consists of a rectangular flexible frame and a series of cantilever beams interdigitally bonded to it. In order to achieve low-frequency and wide-bandwidth harvesting, the Young's modulus of the materials, the frame size and the number of cantilevers have been studied systematically. The measured frequency responses of the designed device (PDMS frame, quintuple piezoelectric cantilever beams) show a 460% increase in bandwidth below 80 Hz. When excited at an acceleration of 1.0 g, the energy harvester reaches a maximum open-circuit voltage of 65 V and a maximum output power of 4.5 mW.

  1. Wireless power transmission to an electromechanical receiver using low-frequency magnetic fields

    International Nuclear Information System (INIS)

    Challa, Vinod R; Arnold, David P; Mur-Miranda, Jose Oscar

    2012-01-01

    A near-field, electrodynamically coupled wireless power transmission system is presented that delivers electrical power from a transmitter coil to a compact electromechanical receiver. The system integrates electromechanical energy conversion and mechanical resonance to deliver power over a range of distances using low-amplitude, low-frequency magnetic fields. Two different receiver orientations are investigated that rely on either the force or the torque induced on the receiver magnet at separation distances ranging from 2.2 to 10.2 cm. Theoretical models for each mode compare the predicted performance with the experimental results. For a 7.1 mA pk sinusoidal current supplied to a transmitter coil with a 100 cm diameter, the torque mode receiver orientation has a maximum power transfer of 150 μW (efficiency of 12%) at 2.2 cm at its resonance frequency of 38.4 Hz. For the same input current to the transmitter, the force mode receiver orientation has a maximum power transfer of 37 μW (efficiency of 4.1%) at 3.1 cm at its resonance frequency of 38.9 Hz. (paper)

  2. Mental health problems are associated with low-frequency fluctuations in reaction time in a large general population sample. The TRAILS study.

    Science.gov (United States)

    Bastiaansen, J A; van Roon, A M; Buitelaar, J K; Oldehinkel, A J

    2015-02-01

    Increased intra-subject reaction time variability (RT-ISV) as coarsely measured by the standard deviation (RT-SD) has been associated with many forms of psychopathology. Low-frequency RT fluctuations, which have been associated with intrinsic brain rhythms occurring approximately every 15–40 s, have been shown to add unique information for ADHD. In this study, we investigated whether these fluctuations also relate to attentional problems in the general population, and contribute to the two major domains of psychopathology: externalizing and internalizing problems. RT was monitored throughout a self-paced sustained attention task (duration: 9.1 ± 1.2 min) in a Dutch population cohort of young adults (n=1455, mean age: 19.0 ± 0.6 years, 55.1% girls). To characterize temporal fluctuations in RT, we performed a direct Fourier transform on externally validated frequency bands based on the frequency ranges of neuronal oscillations: Slow-5 (0.010-0.027 Hz), Slow-4 (0.027-0.073 Hz), and three additional higher frequency bands. The relative magnitude of Slow-4 fluctuations was the primary predictor in regression models for attentional, internalizing and externalizing problems (measured by the Adult Self-Report questionnaire). Additionally, stepwise regression models were created to investigate (a) whether Slow-4 significantly improved the prediction of problem behaviors beyond the RT-SD and (b) whether the other frequency bands provided important additional information. The magnitude of Slow-4 fluctuations significantly predicted attentional and externalizing problems and even improved model fit after modeling RT-SD first (R² change = 0.6%); none of the other frequency bands provided important additional information. Low-frequency RT fluctuations have added predictive value for attentional and externalizing, but not internalizing, problems beyond global differences in variability. This study extends previous findings in clinical samples of children with ADHD to adolescents from the general population and
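    A minimal sketch of the band-limited Fourier summary used above: it computes the relative spectral magnitude of a reaction-time series in the Slow-4 band (0.027-0.073 Hz) from an evenly resampled RT trace. The resampling interval and the simulated data are assumptions; the published analysis worked from the actual trial timing.

```python
import numpy as np

def relative_band_magnitude(rt, dt, band=(0.027, 0.073), f_max=0.25):
    """Share of spectral magnitude in `band`, relative to everything up to f_max Hz."""
    rt = np.asarray(rt, float) - np.mean(rt)          # remove the mean reaction time
    freqs = np.fft.rfftfreq(len(rt), d=dt)
    mag = np.abs(np.fft.rfft(rt))
    total = mag[(freqs > 0.0) & (freqs <= f_max)].sum()
    in_band = mag[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band / total

# Hypothetical example: 9 minutes of RTs resampled to one value every 2 s
rng = np.random.default_rng(1)
t = np.arange(0.0, 9 * 60, 2.0)
rt = 0.45 + 0.05 * np.sin(2 * np.pi * 0.05 * t) + 0.02 * rng.standard_normal(len(t))
print(f"relative Slow-4 magnitude: {relative_band_magnitude(rt, dt=2.0):.2f}")
```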

  3. Maternal obesity alters immune cell frequencies and responses in umbilical cord blood samples.

    Science.gov (United States)

    Wilson, Randall M; Marshall, Nicole E; Jeske, Daniel R; Purnell, Jonathan Q; Thornburg, Kent; Messaoudi, Ilhem

    2015-06-01

    Maternal obesity is one of the several key factors thought to modulate neonatal immune system development. Data from murine studies demonstrate worse outcomes in models of infection, autoimmunity, and allergic sensitization in offspring of obese dams. In humans, children born to obese mothers are at increased risk for asthma. These findings suggest a dysregulation of immune function in the children of obese mothers; however, the underlying mechanisms remain poorly understood. The aim of this study was to examine the relationship between maternal body weight and the human neonatal immune system. Umbilical cord blood samples were collected from infants born to lean, overweight, and obese mothers. Frequency and function of major innate and adaptive immune cell populations were quantified using flow cytometry and multiplex analysis of circulating factors. Compared to babies born to lean mothers, babies of obese mothers had fewer eosinophils and CD4 T helper cells, reduced monocyte and dendritic cell responses to Toll-like receptor ligands, and increased plasma levels of IFN-α2 and IL-6 in cord blood. These results support the hypothesis that maternal obesity influences programming of the neonatal immune system, providing a potential link to increased incidence of chronic inflammatory diseases such as asthma and cardiovascular disease in the offspring. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]

  5. Climate, orography and scale controls on flood frequency in Triveneto (Italy

    Directory of Open Access Journals (Sweden)

    S. Persiano

    2016-05-01

    Full Text Available The growing concern about the possible effects of climate change on the flood frequency regime is leading authorities to review previously proposed reference procedures for design-flood estimation, such as national flood frequency models. Our study focuses on Triveneto, a broad geographical region in North-eastern Italy. A reference procedure for design flood estimation in Triveneto is available from the Italian CNR research project "VA.PI.", which considered Triveneto as a single homogeneous region and developed a regional model using annual maximum series (AMS) of peak discharges that were collected up to the 1980s by the former Italian Hydrometeorological Service. We consider a very detailed AMS database that we recently compiled for 76 catchments located in Triveneto. All 76 study catchments are characterized in terms of several geomorphologic and climatic descriptors. The objective of our study is threefold: (1) to inspect climatic and scale controls on the flood frequency regime; (2) to verify the possible presence of changes in the flood frequency regime by looking at changes in time of the regional L-moments of annual maximum floods; (3) to develop an updated reference procedure for design flood estimation in Triveneto by using a focused-pooling approach (i.e. Region of Influence, RoI). Our study leads to the following conclusions: (1) climatic and scale controls on the flood frequency regime in Triveneto are similar to the controls that were recently found in Europe; (2) a single year characterized by extreme floods can have a remarkable influence on regional flood frequency models and analyses for detecting possible changes in the flood frequency regime; (3) no significant change was detected in the flood frequency regime, yet an update of the existing reference procedure for design flood estimation is highly recommended and we propose the RoI approach for properly representing climate and scale controls on flood frequency in Triveneto, which cannot be regarded

  6. The aggregate site frequency spectrum for comparative population genomic inference.

    Science.gov (United States)

    Xue, Alexander T; Hickerson, Michael J

    2015-12-01

    Understanding how assemblages of species responded to past climate change is a central goal of comparative phylogeography and comparative population genomics, an endeavour that has increasing potential to integrate with community ecology. New sequencing technology now provides the potential to perform complex demographic inference at unprecedented resolution across assemblages of nonmodel species. To this end, we introduce the aggregate site frequency spectrum (aSFS), an expansion of the site frequency spectrum to use single nucleotide polymorphism (SNP) data sets collected from multiple, co-distributed species for assemblage-level demographic inference. We describe how the aSFS is constructed over an arbitrary number of independent population samples and then demonstrate how the aSFS can differentiate various multispecies demographic histories under a wide range of sampling configurations while allowing effective population sizes and expansion magnitudes to vary independently. We subsequently couple the aSFS with a hierarchical approximate Bayesian computation (hABC) framework to estimate degree of temporal synchronicity in expansion times across taxa, including an empirical demonstration with a data set consisting of five populations of the threespine stickleback (Gasterosteus aculeatus). Corroborating what is generally understood about the recent postglacial origins of these populations, the joint aSFS/hABC analysis strongly suggests that the stickleback data are most consistent with synchronous expansion after the Last Glacial Maximum (posterior probability = 0.99). The aSFS will have general application for multilevel statistical frameworks to test models involving assemblages and/or communities, and as large-scale SNP data from nonmodel species become routine, the aSFS expands the potential for powerful next-generation comparative population genomic inference. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  7. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.

  8. Regional flood frequency analysis in the KwaZulu-Natal province, South Africa, using the index-flood method

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas Rødding; Smithers, J.C.; Schulze, R.E.

    2002-01-01

    A regional frequency analysis of annual maximum series (AMS) of flood flows from relatively unregulated rivers in the KwaZulu-Natal province of South Africa has been conducted, including identification of homogeneous regions and suitable regional frequency distributions for the regions. The study...

  9. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

    Full Text Available In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  10. Utilization of Satellite Data to Identify and Monitor Changes in Frequency of Meteorological Events

    Science.gov (United States)

    Mast, J. C.; Dessler, A. E.

    2017-12-01

    Increases in temperature and climate variability due to human-induced climate change are increasing the frequency and magnitude of extreme heat events (i.e., heatwaves). This will have a detrimental impact on the health of human populations and the habitability of certain land locations. Here we seek to utilize satellite data records to identify and monitor extreme heat events. We analyze satellite data sets (MODIS and AIRS land surface temperatures (LST) and water vapor profiles (WV)) due to their global coverage and stable calibration. Heat waves are identified based on the frequency of maximum daily temperatures above a threshold, determined as follows. Land surface temperatures are gridded into uniform latitude/longitude bins. Maximum daily temperatures per bin are determined and probability density functions (PDF) of these maxima are constructed monthly and seasonally. For each bin, a threshold is calculated at the 95th percentile of the PDF of maximum temperatures. Per bin, an extreme heat event is defined based on the frequency of monthly and seasonal days exceeding the threshold. To account for the decreased ability of the human body to thermoregulate with increasing moisture, and to assess the lethality of the heat events, we determine the wet-bulb temperature at the locations of extreme heat events. Preliminary results will be presented.
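    The thresholding step described above is straightforward to express in code. The sketch below assumes a gridded array of daily maximum land-surface temperature (day x latitude bin x longitude bin) and counts, per bin, the days exceeding that bin's 95th-percentile threshold; the array is simulated rather than real MODIS/AIRS data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical daily maximum land-surface temperatures, K: (days, lat bins, lon bins)
lst_max = 300.0 + 10.0 * rng.standard_normal((365, 18, 36))

# 95th-percentile threshold of the daily maxima, computed separately for each bin
threshold = np.percentile(lst_max, 95, axis=0)

# Frequency of "extreme heat" days per bin over the record
exceedance_days = (lst_max > threshold).sum(axis=0)
print("hottest bin exceeds its own threshold on", exceedance_days.max(), "days")
```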

  11. A Frequency Splitting Method For CFM Imaging

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    The performance of conventional CFM imaging will often be degraded due to the relatively low number of pulses (4-10) used for each velocity estimate. To circumvent this problem we propose a new method using frequency splitting (FS). The FS method uses broad band chirps as excitation pulses instead of narrow band pulses as in conventional CFM imaging. By appropriate filtration, the returned signals are divided into a number of narrow band signals which are approximately disjoint. After clutter filtering, the velocities are found from each frequency band using a conventional autocorrelation estimator... A 5 MHz linear array transducer was used to scan a vessel situated at 30 mm depth with a maximum flow velocity of 0.1 m/s. The pulse repetition frequency was 1.8 kHz and the angle between the flow and the beam was 60 deg. A 15 μs chirp was used as excitation pulse and 40 independent velocity...

  12. Regional Analysis of Precipitation by Means of Bivariate Distribution Adjusted by Maximum Entropy; Analisis regional de precipitacion con base en una distribucion bivariada ajustada por maxima entropia

    Energy Technology Data Exchange (ETDEWEB)

    Escalante Sandoval, Carlos A.; Dominguez Esquivel, Jose Y. [Universidad Nacional Autonoma de Mexico (Mexico)

    2001-09-01

    The principle of maximum entropy (POME) is used to derive an alternative method of parameter estimation for the bivariate Gumbel distribution. A simple algorithm for this parameter estimation technique is presented. This method is applied to analyze the precipitation in a region of Mexico. Design events are compared with those obtained by the maximum likelihood procedure. According to the results, the proposed technique is a suitable option to be considered when performing frequency analysis of precipitation with small samples. [Spanish] The principle of maximum entropy, known as POME, is used to derive an alternative parameter-estimation procedure for the bivariate extreme-value distribution with Gumbel marginals. The model is applied to the analysis of maximum 24-hour precipitation in a region of Mexico, and the resulting design events are compared with those provided by the maximum likelihood technique. According to the results obtained, it is concluded that the proposed technique is a good option, above all for the case of small samples.

  13. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    Full Text Available In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.
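    To make the moment-constrained construction concrete, the sketch below fits a maximum-entropy density of the exponential-family form p(x) ∝ exp(-λ1·x - λ2·x²) on a bounded support by matching the first two sample moments. It is a deliberately simplified stand-in for the paper's method, which handles more general constraints and adds the AIS sampler, and the loss sample is simulated.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=5000)    # hypothetical loss data
target = np.array([losses.mean(), np.mean(losses ** 2)])  # first two sample moments
upper = losses.max() * 3.0                                # bounded support [0, upper]

def density(x, lam):
    return np.exp(-lam[0] * x - lam[1] * x ** 2)          # unnormalized ME density

def moment_gap(lam):
    z = quad(lambda x: density(x, lam), 0.0, upper)[0]
    m1 = quad(lambda x: x * density(x, lam), 0.0, upper)[0] / z
    m2 = quad(lambda x: x ** 2 * density(x, lam), 0.0, upper)[0] / z
    return [m1 - target[0], m2 - target[1]]

# Start from the Gaussian (unbounded-support) solution implied by the sample moments
var = losses.var()
lam0 = [-losses.mean() / var, 1.0 / (2.0 * var)]
lam = fsolve(moment_gap, x0=lam0)
print("Lagrange multipliers:", lam, "moment residuals:", moment_gap(lam))
```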

  14. Adaptive double-integral-sliding-mode-maximum-power-point tracker for a photovoltaic system

    Directory of Open Access Journals (Sweden)

    Bidyadhar Subudhi

    2015-10-01

    Full Text Available This study proposed an adaptive double-integral-sliding-mode-controller-maximum-power-point tracker (DISMC-MPPT) for maximum-power-point (MPP) tracking of a photovoltaic (PV) system. The objective of this study is to design a DISMC-MPPT with a new adaptive double-integral sliding surface so that MPP tracking is achieved with reduced chattering and reduced steady-state error in the output voltage or current. The proposed adaptive DISMC-MPPT possesses a very simple and efficient PWM-based control structure that keeps the switching frequency constant. The controller is designed considering the reaching and stability conditions to provide robustness and stability. The performance of the proposed adaptive DISMC-MPPT is verified through both MATLAB/Simulink simulation and experiment using a 0.2 kW prototype PV system. From the obtained results, it is found that this DISMC-MPPT is more efficient than Tan's and Jiao's DISMC-MPPTs.

  15. Adjustment and Assessment of the Measurements of Low and High Sampling Frequencies of GPS Real-Time Monitoring of Structural Movement

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-11-01

    Full Text Available Global Positioning System (GPS) structural health monitoring data collection is one of the important systems in structure movement monitoring. However, GPS measurement error and noise limit the application of such systems. Many attempts have been made to adjust GPS measurements and eliminate their errors. Comparing common nonlinear methods used in the adjustment of GPS positioning for the monitoring of structures is the main objective of this study. Nonlinear adaptive recursive least squares (RLS), the extended Kalman filter (EKF), and wavelet principal component analysis (WPCA) are presented and applied to improve the quality of GPS time series observations. Two real monitoring observation systems, for the Mansoura railway and long-span Yonghe bridges, are utilized to examine which methods are suitable for assessing bridge behavior under different load conditions. From the analysis of the results, it is concluded that wavelet principal component analysis is the best method for smoothing both low and high sampling frequency GPS observations. The evaluation of the bridges reveals the ability of GPS systems to detect the behavior and damage of structures in both the time and frequency domains.
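    As a flavour of the kind of filtering being compared, here is a deliberately simple one-dimensional Kalman filter (constant-position, random-walk model) applied to a noisy displacement trace. The EKF and wavelet PCA used in the study are considerably richer, and all noise variances and the simulated bridge motion below are assumptions.

```python
import numpy as np

def kalman_filter_1d(z, process_var=1e-4, meas_var=1e-2):
    """Random-walk Kalman filter for a noisy 1-D GPS displacement series."""
    x, p = z[0], 1.0                     # state estimate and its variance
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += process_var                 # predict: the state model is a random walk
        k = p / (p + meas_var)           # Kalman gain
        x += k * (zi - x)                # update with the new GPS sample
        p *= 1.0 - k
        out[i] = x
    return out

# Hypothetical 10 Hz GPS record of a slowly oscillating bridge deck (metres)
rng = np.random.default_rng(4)
t = np.arange(0.0, 60.0, 0.1)
truth = 0.002 * np.sin(2 * np.pi * 0.2 * t)
gps = truth + 0.005 * rng.standard_normal(len(t))
filtered = kalman_filter_1d(gps)
print("raw RMSE:", np.sqrt(np.mean((gps - truth) ** 2)),
      "filtered RMSE:", np.sqrt(np.mean((filtered - truth) ** 2)))
```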

  16. Effects of very low frequency electromagnetic method (VLFEM) and ...

    African Journals Online (AJOL)

    The study examined the impact of livestock dung on ground water status in the study area. To achieve this, a very low frequency EM survey was conducted; the aim and objective was to detect fractures in the subsurface. VLF data were acquired at 5m intervals along two profiles, with maximum length of 60m in the ...

  17. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
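    A minimal illustration of the idea (not the estimator studied in the paper): regress a quarterly variable on a weighted distributed lag of monthly observations, with the lag weights tied together by a two-parameter exponential Almon polynomial so that only a handful of parameters are estimated no matter how many high-frequency lags enter. The data and parameter values are simulated.

```python
import numpy as np
from scipy.optimize import least_squares

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def midas_residuals(params, y, x_lags):
    beta0, beta1, theta1, theta2 = params
    w = exp_almon_weights(theta1, theta2, x_lags.shape[1])
    return y - (beta0 + beta1 * (x_lags @ w))

# Simulated example: 200 quarters, each using the 12 most recent monthly lags
rng = np.random.default_rng(5)
x_lags = rng.standard_normal((200, 12))                  # monthly regressor lags
y = 0.5 + 2.0 * (x_lags @ exp_almon_weights(0.1, -0.05, 12))
y += 0.1 * rng.standard_normal(200)

fit = least_squares(midas_residuals, x0=[0.0, 1.0, 0.0, -0.01], args=(y, x_lags))
print("estimated beta0, beta1, theta1, theta2:", np.round(fit.x, 3))
```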

  18. The estimation of probable maximum precipitation: the case of Catalonia.

    Science.gov (United States)

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
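    The enveloping frequency-factor approach is essentially Hershfield's statistical formula, PMP = mean + k_m x standard deviation of the annual maximum daily series, with k_m taken from an envelope of the regional data. The sketch below applies that formula to a made-up annual-maximum record with a few assumed enveloping factors; in the paper, k_m is derived from the 145 Catalan stations rather than fixed a priori.

```python
import numpy as np

def statistical_pmp(annual_max_daily, k_m):
    """Hershfield-type PMP: mean plus enveloping frequency factor times std. dev."""
    series = np.asarray(annual_max_daily, float)
    return series.mean() + k_m * series.std(ddof=1)

# Hypothetical 40-year annual maximum daily rainfall record (mm) at one station
rng = np.random.default_rng(6)
annual_max = rng.gumbel(loc=60.0, scale=18.0, size=40)

for k_m in (10.0, 12.0, 15.0):          # illustrative enveloping frequency factors
    print(f"k_m = {k_m:>4}: 1-day PMP ~ {statistical_pmp(annual_max, k_m):.0f} mm")
```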

  19. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be resolved from short time-series data. We therefore applied MEM to analyse the spectrum obtained from the time-dependent dipole moment computed in real time with time-dependent density functional theory (TDDFT), which is intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data points. As an improved MEM analysis, we proposed using a concatenated data set made by repeating the raw data several times. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, closer to that of a Fourier transform applied to actually time-evolved data with the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)
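    For readers unfamiliar with MEM spectral analysis, the sketch below implements the standard Burg recursion for an autoregressive (maximum-entropy) spectrum and applies it to a short, noisy dipole-like test signal; the concatenation trick discussed in the paper would amount to calling np.tile on the series before the fit. The model order, the time step and the test signal are all assumptions.

```python
import numpy as np

def burg(x, order):
    """AR coefficients and residual power via Burg's (maximum entropy) method."""
    x = np.asarray(x, float)
    ef, eb = x.copy(), x.copy()            # forward / backward prediction errors
    a = np.array([1.0])
    power = np.dot(x, x) / len(x)
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(ebp, efp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        power *= 1.0 - k * k
        ef, eb = efp + k * ebp, ebp + k * efp
    return a, power

def mem_spectrum(x, order, dt, freqs):
    """Maximum-entropy (AR) power spectral density evaluated at `freqs` (Hz)."""
    a, power = burg(x, order)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))) * dt)
    return power * dt / np.abs(z @ a) ** 2

# Short test signal standing in for a dipole moment: two oscillations plus noise
dt, n = 0.05, 400
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t)
x += 0.05 * np.random.default_rng(7).standard_normal(n)

freqs = np.linspace(0.1, 5.0, 500)
spec = mem_spectrum(x, order=20, dt=dt, freqs=freqs)
print(f"strongest spectral peak near {freqs[np.argmax(spec)]:.2f} Hz")
```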

  20. Solar maximum mission: Ground support programs at the Harvard Radio Astronomy Station

    Science.gov (United States)

    Maxwell, A.

    1983-01-01

    Observations of the spectral characteristics of solar radio bursts were made with new dynamic spectrum analyzers of high sensitivity and high reliability, over the frequency range 25-580 MHz. The observations also covered the maximum period of the current solar cycle and the period of international cooperative programs designated as the Solar Maximum Year. Radio data on shock waves generated by solar flares were combined with optical data on coronal transients, taken with equipment on the SMM and other satellites, and then incorporated into computer models for the outward passage of fast-mode MHD shocks through the solar corona. The MHD models are non-linear, time-dependent and for the most recent models, quasi-three-dimensional. They examine the global response of the corona for different types of input pulses (thermal, magnetic, etc.) and for different magnetic topologies (for example, open and closed fields). Data on coronal shocks and high-velocity material ejected from solar flares have been interpreted in terms of a model consisting of three main velocity regimes.

  1. Extragalactic Peaked-spectrum Radio Sources at Low Frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Callingham, J. R.; Gaensler, B. M.; Sadler, E. M.; Lenc, E. [Sydney Institute for Astronomy (SIfA), School of Physics, The University of Sydney, NSW 2006 (Australia); Ekers, R. D.; Bell, M. E. [CSIRO Astronomy and Space Science (CASS), Marsfield, NSW 2122 (Australia); Line, J. L. B.; Hancock, P. J.; Kapińska, A. D.; McKinley, B.; Procopio, P. [ARC Centre of Excellence for All-Sky Astrophysics (CAASTRO) (Australia); Hurley-Walker, N.; Tingay, S. J.; Franzen, T. M. O.; Morgan, J. [International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley, WA 6102 (Australia); Dwarakanath, K. S. [Raman Research Institute (RRI), Bangalore 560080 (India); For, B.-Q. [International Centre for Radio Astronomy Research (ICRAR), The University of Western Australia, Crawley, WA 6009 (Australia); Hindson, L.; Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6140 (New Zealand); Offringa, A. R., E-mail: joseph.callingham@sydney.edu.au [Netherlands Institute for Radio Astronomy (ASTRON), Dwingeloo (Netherlands); and others

    2017-02-20

    We present a sample of 1483 sources that display spectral peaks between 72 MHz and 1.4 GHz, selected from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey. The GLEAM survey is the widest fractional bandwidth all-sky survey to date, ideal for identifying peaked-spectrum sources at low radio frequencies. Our peaked-spectrum sources are the low-frequency analogs of gigahertz-peaked spectrum (GPS) and compact-steep spectrum (CSS) sources, which have been hypothesized to be the precursors to massive radio galaxies. Our sample more than doubles the number of known peaked-spectrum candidates, and 95% of our sample have a newly characterized spectral peak. We highlight that some GPS sources peaking above 5 GHz have had multiple epochs of nuclear activity, and we demonstrate the possibility of identifying high-redshift ( z > 2) galaxies via steep optically thin spectral indices and low observed peak frequencies. The distribution of the optically thick spectral indices of our sample is consistent with past GPS/CSS samples but with a large dispersion, suggesting that the spectral peak is a product of an inhomogeneous environment that is individualistic. We find no dependence of observed peak frequency with redshift, consistent with the peaked-spectrum sample comprising both local CSS sources and high-redshift GPS sources. The 5 GHz luminosity distribution lacks the brightest GPS and CSS sources of previous samples, implying that a convolution of source evolution and redshift influences the type of peaked-spectrum sources identified below 1 GHz. Finally, we discuss sources with optically thick spectral indices that exceed the synchrotron self-absorption limit.

  2. High frequency energy measurements

    International Nuclear Information System (INIS)

    Stotlar, S.C.

    1981-01-01

    High-frequency (> 100 MHz) energy measurements present special problems to the experimenter. The environment or the available electronics often limit the applicability of a given detector type. The physical properties of many detectors are frequency dependent and, in some cases, the physical effect employed can be frequency dependent. State-of-the-art measurements generally involve a detection scheme in association with high-speed electronics and a method of data recording. Events can be single-shot or repetitive, requiring real-time, sampling, or digitizing data recording. Potential modification of the pulse by the detector and the associated electronics should not be overlooked. This presentation will review typical applications, methods of choosing a detector, and high-speed detectors. Special considerations and limitations of some applications and devices will be described.

  3. Testing for Granger causality in large mixed-frequency VARs

    NARCIS (Netherlands)

    Götz, T.B.; Hecq, A.W.

    2014-01-01

    In this paper we analyze Granger causality testing in a mixed-frequency VAR, originally proposed by Ghysels (2012), where the difference in sampling frequencies of the variables is large. In particular, we investigate whether past information on a low-frequency variable helps in forecasting a

  4. Gravitational dynamos and the low-frequency geomagnetic secular variation.

    Science.gov (United States)

    Olson, P

    2007-12-18

    Self-sustaining numerical dynamos are used to infer the sources of low-frequency secular variation of the geomagnetic field. Gravitational dynamo models powered by compositional convection in an electrically conducting, rotating fluid shell exhibit several regimes of magnetic field behavior with an increasing Rayleigh number of the convection, including nearly steady dipoles, chaotic nonreversing dipoles, and chaotic reversing dipoles. The time average dipole strength and dipolarity of the magnetic field decrease, whereas the dipole variability, average dipole tilt angle, and frequency of polarity reversals increase with Rayleigh number. Chaotic gravitational dynamos have large-amplitude dipole secular variation with maximum power at frequencies corresponding to a few cycles per million years on Earth. Their external magnetic field structure, dipole statistics, low-frequency power spectra, and polarity reversal frequency are comparable to the geomagnetic field. The magnetic variability is driven by the Lorentz force and is characterized by an inverse correlation between dynamo magnetic and kinetic energy fluctuations. A constant energy dissipation theory accounts for this inverse energy correlation, which is shown to produce conditions favorable for dipole drift, polarity reversals, and excursions.

  5. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  6. Nonmonotonic low frequency losses in HTSCs

    International Nuclear Information System (INIS)

    Castro, H; Gerber, A; Milner, A

    2007-01-01

    A calorimetric technique has been used in order to study ac-field dissipation in ceramic BSCCO samples at low frequencies between 0.05 and 250 Hz, at temperatures from 65 to 90 K. In contrast to previous studies, where ac losses have been reported with a linear dependence on magnetic field frequency, we find a nonmonotonic function presenting various maxima. Frequencies corresponding to local maxima of dissipation depend on the temperature and the amplitude of the ac magnetic field. Flux creep is argued to be responsible for this behaviour. A simple model connecting the characteristic vortex relaxation times (flux creep) and the location of dissipation maxima versus frequency is proposed

  7. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and the superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirection, we mean that the proton particles are in the same direction before entering the water medium and the various scattering prior to entrance to water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity for proton therapy is either an infinitesimal or finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result of the maximum acceptable grid size obtained with infinitesimal pencil beam also applies to finite sized beamlet. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
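    A back-of-the-envelope version of the band-limiting argument can be written down for the lateral direction alone, treating the lateral profile as a pure Gaussian of standard deviation sigma and applying a 2% spectral cutoff with the Nyquist criterion; the paper's full analysis also treats the depth direction and the Bragg curve itself. The sigma values below are illustrative.

```python
import numpy as np

def max_grid_size(sigma_mm, spectral_cutoff=0.02):
    """Largest grid spacing (mm) keeping a Gaussian-blurred profile band-limited.

    The Fourier transform of a Gaussian of standard deviation sigma is
    exp(-2 * pi**2 * sigma**2 * f**2); f_c is the frequency at which it falls to
    `spectral_cutoff`, and the Nyquist criterion gives delta <= 1 / (2 * f_c).
    """
    f_c = np.sqrt(-np.log(spectral_cutoff) / (2.0 * np.pi ** 2 * sigma_mm ** 2))
    return 1.0 / (2.0 * f_c)

# Illustrative lateral spreads (mm) of a proton pencil beam at different depths
for sigma in (3.0, 5.0, 8.0):
    print(f"sigma = {sigma:.1f} mm  ->  maximum grid spacing ~ {max_grid_size(sigma):.1f} mm")
```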

  8. Study and realisation of a high frequency analyzer; Etude et realisation d'un analyseur de signaux a haute frequence

    Energy Technology Data Exchange (ETDEWEB)

    Bourbigot, J [Commissariat a l' Energie Atomique, Fontenay aux Roses (France). Centre d' Etudes Nucleaires

    1966-07-01

    This device is designed for the amplitude and frequency analysis of electric or electromagnetic signals in the frequency range of 0 to 55 MHz. The frequency spectrum of a preset bandwidth is displayed on the screen of an oscilloscope. Conceived to analyse the electromagnetic oscillations that can be generated in a plasma, its main characteristics are the following: extended bandwidth of analysed frequencies, on both sides of the ion cyclotron frequency in a magnetic field up to 20 kGs; linear amplitude and frequency response; possibility of analysing a narrow band; high sensitivity; analysis repetition rate of 25 per second. The different parts of the analyzer are described after a discussion of the choice of the techniques used in their design. In addition to its present use, the device can be applied to perform all the functions of a commercial spectral analyzer. (author) [French] This device is intended for the frequency and amplitude analysis of electric or electromagnetic signals in a frequency range of 0 to 55 MHz. Coupled to an oscilloscope, it displays on the screen the frequency spectrum within a chosen band. Designed with the aim of analysing the electromagnetic oscillations that can appear in a plasma, its main characteristics are the following: an extended band of analysed frequencies, on either side of the ion cyclotron frequency in a magnetic field of up to 20 kGs (maximum value 55 MHz); a linear response in amplitude and frequency; the possibility of analysing a restricted band of frequencies; and high sensitivity. The analysis rate is 25 sweeps per second. The various parts of the analyser are described, after a statement of the reasons that guided the choice of the solutions adopted for its construction. The electrical schematics are also presented. Beyond the specific purpose that motivated the construction of this device, its use can extend to all the applications

  9. A low-frequency MEMS piezoelectric energy harvester with a rectangular hole based on bulk PZT film

    Science.gov (United States)

    Tian, Yingwei; Li, Guimiao; Yi, Zhiran; Liu, Jingquan; Yang, Bin

    2018-06-01

    This paper presents a high performance piezoelectric energy harvester (PEH) with a rectangular hole designed to work at low frequency. The PEH used thinned bulk PZT film on flexible phosphor bronze, and its structure comprised a piezoelectric layer, a supporting layer and a proof mass to reduce the resonant frequency of the device. Thinned bulk PZT thick film was used as the piezoelectric layer due to its high piezoelectric coefficient. Phosphor bronze was chosen as the supporting layer because it is more flexible than silicon and can operate in high-acceleration environments with good durability. The maximum open-circuit voltage of the PEH was 15.7 V at a low resonant frequency of 34.3 Hz when the input vibration acceleration was 1.5 g (g = 9.81 m/s2). Moreover, the maximum output power, the output power density and the actual current at the same acceleration were 216.66 μW, 1713.58 μW/cm3 and 170 μA, respectively, when the optimal matched resistance of 60 kΩ was connected. The fabricated PEH scavenged the vibration energy of a vacuum compression pump and generated a maximum output voltage of 1.19 V.

  10. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

    Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites of approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.

  11. Low-frequency and wideband vibration energy harvester with flexible frame and interdigital structure

    Directory of Open Access Journals (Sweden)

    Pengwei Li

    2015-04-01

    Full Text Available As an alternative to traditional cantilever beam structures and their evolutions, a flexible-beam-based, interdigital-structure vibration energy harvester has been presented and investigated. The proposed interdigital-shaped oscillator consists of a rectangular flexible frame and a series of cantilever beams interdigitally bonded to it. In order to achieve low-frequency and wide-bandwidth harvesting, the Young's modulus of the materials, the frame size and the number of cantilevers have been studied systematically. The measured frequency responses of the designed device (PDMS frame, quintuple piezoelectric cantilever beams) show a 460% increase in bandwidth below 80 Hz. When excited at an acceleration of 1.0 g, the energy harvester achieves a maximum open-circuit voltage of 65 V and a maximum output power of 4.5 mW.

  12. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    49 CFR § 230.24 (revised as of 2010-10-01), Allowable Stress: Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  13. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    International Nuclear Information System (INIS)

    Bittel, R.; Mancel, J.

    1968-10-01

    The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. For this reason, and in keeping with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels for the two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulae are treated in the same way, taking care to attach the greatest possible importance to local situations. (authors)

  14. A linear temperature-to-frequency converter

    DEFF Research Database (Denmark)

    Løvborg, Leif

    1965-01-01

    , and that the maximum value of the temperature-frequency coefficient β at this point is -1/3 α, where α is the temperature coefficient of the thermistor at the corresponding temperature. Curves showing the range in which the converter is expected to be linear to within ±0.1 °C are given. A laboratory-built converter having β = 1.0% °C⁻¹ at 25 °C is found to be linear to within ±0.1 °C from 10 to 40 °C.

  15. Vibration stress relief treatment in welded samples of ST-3 steel

    International Nuclear Information System (INIS)

    Suarez, J.C.; Fernandez, L.M.; Echevarria, J.F.; Estevez, A.; Perez, A.; Aragon, B.

    1996-01-01

    The presented work aims to find the optimal vibration frequency and treatment duration for welded test pieces of ST-3 steel. In the experiment, transversal stresses were virtually not relieved by the application of vibrations at the three natural frequencies. With regard to the optimal frequency for our system, the first natural frequency appears to be the most effective one, with which a maximum longitudinal stress relief of 35-70% was obtained. The influence of the propagation direction (transversal or longitudinal) of the vibrations on stress relief in a welded joint was confirmed.

  16. High-power, continuous-wave, solid-state, single-frequency, tunable source for the ultraviolet.

    Science.gov (United States)

    Aadhi, A; Apurv Chaitanya, N; Singh, R P; Samanta, G K

    2014-06-15

    We report the development of a compact, high-power, continuous-wave, single-frequency, ultraviolet (UV) source with extended wavelength tunability. The device is based on single-pass, intracavity, second-harmonic generation (SHG) of the signal radiation of a singly resonant optical parametric oscillator (SRO) working in the visible and near-IR wavelength range. The SRO is pumped in the green with a 25-mm-long, multigrating, MgO-doped periodically poled stoichiometric lithium tantalate (MgO:sPPLT) as the nonlinear crystal. Using three grating periods, 8.5, 9.0, and 9.5 μm, of the MgO:sPPLT crystal and a single set of cavity mirrors, the SRO can be tuned continuously across 710.7-836.3 nm in the signal and correspondingly across 2115.8-1462.1 nm in the idler, with a maximum idler power of 1.9 W and a maximum out-coupled signal power of 254 mW. By frequency-doubling the intracavity signal with a 5-mm-long bismuth borate (BIBO) crystal, we can further tune the SRO continuously over 62.8 nm across 355.4-418.2 nm in the UV, with single-frequency UV power of as much as 770 mW at 398.28 nm in a Gaussian beam profile. The UV radiation has an instantaneous linewidth of ∼14.5 MHz and a peak-to-peak frequency stability of 151 MHz over 100 s. More than 95% of the tuning range provides UV power >260 mW. Access to lower UV wavelengths can in principle be realized by operating the SRO in the visible using shorter grating periods.

  17. Critical frequency and maximum electron density of F2 region over four stations in the North American sector

    Czech Academy of Sciences Publication Activity Database

    Ezquer, R. G.; Cabrera, M. A.; López, J. L.; Albornoz, M. R.; Mosert, M.; Marcó, P.; Burešová, Dalia

    2011-01-01

    Roč. 73, č. 4 (2011), s. 420-429 ISSN 1364-6826 Institutional research plan: CEZ:AV0Z30420517 Keywords : Ionosphere * F2 region * Critical frequency * Electron density * Model Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.596, year: 2011 http://www.sciencedirect.com/science/article/pii/S1364682610002786

  18. Controlled X-ray pumping in a wide range of piezo-electric oscillation frequencies

    CERN Document Server

    Navasardyan, M A; Galoyan, K G

    1986-01-01

    In the case of Laue diffraction, the transmitted X-ray reflection is shown to be effectively controllable in a perfect quartz single crystal when it generates ultrasonic oscillations at the resonance frequency or in its vicinity. The maximum effective amplitude of the applied sinusoidal oscillations is equal to 70 V. The pumping degree depends on the voltage amplitude. In this work, monochromatic Kα1 and Kα2 molybdenum lines satisfying the thin-crystal condition μt ≤ 1 are used (μ is the linear absorption coefficient of the sample for the given wavelength and t is its thickness). The radiation was reflected from different planes such as (1011), (1011), (2022), etc. The complete pumping strongly restricts the structural factor possibilities in estimating the intensity of diffracted X-rays in the case of considerable deformations in the bulk of a perfect single crystal.

  19. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. The method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
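    The control idea, searching for the converter operating point that maximizes the SPE-side current and letting a PI loop drive the PWM duty factor toward that point, can be sketched as a simple perturb-and-observe loop. This is a hedged illustration rather than the authors' implementation; the controller gains, the read_spe_current() sensor hook and the step sizes are hypothetical placeholders.

    ```python
    # Sketch of maximum-current-searching MPPT: perturb the current reference,
    # keep the perturbation direction that increases the measured SPE current,
    # and let a PI controller translate the current error into a PWM duty factor.
    class PIController:
        def __init__(self, kp, ki, dt, d_min=0.05, d_max=0.95):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.d_min, self.d_max = d_min, d_max
            self.integral = 0.0

        def update(self, error):
            self.integral += error * self.dt
            duty = self.kp * error + self.ki * self.integral
            return min(max(duty, self.d_min), self.d_max)   # clamp the duty factor

    def mppt_step(state, read_spe_current, pi):
        """One perturb-and-observe iteration on the current reference."""
        i_meas = read_spe_current()
        if i_meas < state["i_prev"]:          # last perturbation reduced the current
            state["step"] = -state["step"]    # reverse the search direction
        state["i_ref"] += state["step"]       # perturb the reference
        state["i_prev"] = i_meas
        return pi.update(state["i_ref"] - i_meas)   # PI drives the duty toward the reference

    # Example initialization (all values illustrative):
    # state = {"i_ref": 0.0, "i_prev": 0.0, "step": 0.1}
    # pi = PIController(kp=0.02, ki=0.5, dt=1e-3)
    ```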

  20. AIR ATMOSPHERIC-PRESSURE DISCHARGERS FOR OPERATION IN HIGH-FREQUENCY SWITCHING MODE.

    Directory of Open Access Journals (Sweden)

    L.S. Yevdoshenko

    2013-10-01

    Full Text Available The operation of two designs of compact multigap dischargers has been investigated in a high-frequency switching mode. It is experimentally revealed that the rational length of the single discharge gaps in these designs is 0.3 mm, and the maximum switching frequency is 27,000 discharges per second under long-term stable operation of the dischargers. It is shown that in pulsed corona discharge reactors, sharpening of the pulse front increases the operating electric field strength by a factor of 1.3-1.8.

  1. A low-frequency vibration energy harvester based on diamagnetic levitation

    Science.gov (United States)

    Kono, Yuta; Masuda, Arata; Yuan, Fuh-Gwo

    2017-04-01

    This article presents 3-degree-of-freedom theoretical modeling and analysis of a low-frequency vibration energy harvester based on diamagnetic levitation. In recent years, although much attention has been placed on vibration energy harvesting technologies, few harvesters can operate efficiently at extremely low frequencies, in spite of large potential demand in the fields of structural health monitoring and wearable applications. As one of the earliest works, Liu, Yuan and Palagummi proposed vertical and horizontal diamagnetic levitation systems as vibration energy harvesters with low resonant frequencies. This study aims to pursue further improvement along this direction by introducing a new topology of the levitation system, in terms of expanding the maximum amplitude and enhancing the flexibility of the operation direction for broader application fields.

  2. Using snowflake surface-area-to-volume ratio to model and interpret snowfall triple-frequency radar signatures

    Science.gov (United States)

    Gergely, Mathias; Cooper, Steven J.; Garrett, Timothy J.

    2017-10-01

    The snowflake microstructure determines the microwave scattering properties of individual snowflakes and has a strong impact on snowfall radar signatures. In this study, individual snowflakes are represented by collections of randomly distributed ice spheres where the size and number of the constituent ice spheres are specified by the snowflake mass and surface-area-to-volume ratio (SAV) and the bounding volume of each ice sphere collection is given by the snowflake maximum dimension. Radar backscatter cross sections for the ice sphere collections are calculated at X-, Ku-, Ka-, and W-band frequencies and then used to model triple-frequency radar signatures for exponential snowflake size distributions (SSDs). Additionally, snowflake complexity values obtained from high-resolution multi-view snowflake images are used as an indicator of snowflake SAV to derive snowfall triple-frequency radar signatures. The modeled snowfall triple-frequency radar signatures cover a wide range of triple-frequency signatures that were previously determined from radar reflectivity measurements and illustrate characteristic differences related to snow type, quantified through snowflake SAV, and snowflake size. The results show high sensitivity to snowflake SAV and SSD maximum size but are generally less affected by uncertainties in the parameterization of snowflake mass, indicating the importance of snowflake SAV for the interpretation of snowfall triple-frequency radar signatures.

  3. Sampling system for in vivo ultrasound images

    DEFF Research Database (Denmark)

    Jensen, Jorgen Arendt; Mathorne, Jan

    1991-01-01

    Newly developed algorithms for processing medical ultrasound images use the high frequency sampled transducer signal. This paper describes demands imposed on a sampling system suitable for acquiring such data and gives details about a prototype constructed. It acquires full clinical images at a sampling frequency of 20 MHz with a resolution of 12 bits. The prototype can be used for real time image processing. An example of a clinical in vivo image is shown and various aspects of the data acquisition process are discussed.

  4. The isolation of low frequency impact sounds in hotel construction

    Science.gov (United States)

    LoVerde, John J.; Dong, David W.

    2002-11-01

    One of the design challenges in the acoustical design of hotels is reducing low frequency sounds from footfalls occurring on both carpeted and hard-surfaced floors. Research on low frequency impact noise [W. Blazier and R. DuPree, J. Acoust. Soc. Am. 96, 1521-1532 (1994)] resulted in a conclusion that in wood construction low frequency impact sounds were clearly audible and that feasible control methods were not available. The results of numerous FIIC (Field Impact Insulation Class) measurements performed in accordance with ASTM E1007 indicate the lack of correlation between FIIC ratings and the reaction of occupants in the room below. The measurements presented include FIIC ratings and sound pressure level measurements below the ASTM E1007 low frequency limit of 100 Hertz, and reveal that excessive sound levels in the frequency range of 63 to 100 Hertz correlate with occupant complaints. Based upon this history, a tentative criterion for maximum impact sound level in the low frequency range is presented. The results presented of modifying existing constructions to reduce the transmission of impact sounds at low frequencies indicate that there may be practical solutions to this longstanding problem.

  5. Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians

    DEFF Research Database (Denmark)

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.

    2014-01-01

    and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate...... for which larger vibrato excursions were favored. The observed interaction between maximum excursion thresholds and vibrato rate may be due to the listeners’ judgments relying on cues provided by the rate of frequency changes (RFC) rather than excursion per se. Further studies are needed to evaluate......Human vibrato is mainly characterized by two parameters: vibrato extent and vibrato rate. These parameters have been found to exhibit an interaction both in physical recordings of singers’ voices and in listener’s preference ratings. This study was concerned with the way in which the maximum...

  6. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure time data. The paper addresses the problem of estimation of the parameters of the Burr type III distribution by maximum likelihood estimation (MLE) when the samples are left censored. As closed form expressions for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parametric values. A real life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
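    As a concrete illustration of such an iterative procedure, the left-censored log-likelihood can be maximized numerically: uncensored observations contribute the log-density, while observations censored on the left contribute the log-CDF at the censoring point. The sketch below uses SciPy's built-in Burr Type III density (scipy.stats.burr) and a generic optimizer; it is an illustrative reconstruction under those assumptions, not the estimator studied in the paper.

    ```python
    import numpy as np
    from scipy import stats, optimize

    def neg_log_likelihood(params, x_obs, x_cens):
        """Left-censored Burr III negative log-likelihood: observed values contribute
        log f(x; c, k), values censored on the left at x_cens contribute log F(x_cens; c, k)."""
        c, k = params
        if c <= 0 or k <= 0:
            return np.inf
        ll = stats.burr.logpdf(x_obs, c, k).sum()      # scipy's `burr` is Burr Type III
        ll += stats.burr.logcdf(x_cens, c, k).sum()
        return -ll

    rng = np.random.default_rng(0)
    full = stats.burr.rvs(c=2.0, d=1.5, size=200, random_state=rng)   # synthetic failure times
    limit = np.quantile(full, 0.2)                                    # ~20% left-censoring
    x_obs = full[full > limit]
    x_cens = np.full((full <= limit).sum(), limit)

    res = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0],
                            args=(x_obs, x_cens), method="Nelder-Mead")
    print("MLE (c, k):", res.x)
    ```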

  7. 330 mJ single-frequency Ho:YLF slab amplifier

    CSIR Research Space (South Africa)

    Strauss, HJ

    2013-04-01

    Full Text Available We report on a double-pass Ho:YLF slab amplifier which delivered 350 ns long single-frequency pulses of up to 330 mJ at 2064 nm, with a maximum M² of 1.5 at 50 Hz. It was end pumped with a diode-pumped Tm:YLF slab laser and seeded with up to 50...

  8. Gear-box fault detection using time-frequency based methods

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2015-01-01

    Gear-box fault monitoring and detection is important for optimization of power generation and availability of wind turbines. The current industrial approach is to use condition monitoring systems, which run in parallel with the wind turbine control system, using expensive additional sensors... in the gear-box resonance frequency can be detected. Two different time-frequency based approaches are presented in this paper. One is a filter based approach and the other is based on a Karhunen-Loeve basis. Both of them detect the gear-box fault with an acceptable detection delay of at most 100 s, which... is negligible compared with the fault development time.

  9. A frequency-type optically controllable YAG:Nd(3+) laser

    Energy Technology Data Exchange (ETDEWEB)

    Baliasnyi, L.M.; Groznov, M.A.; Gubanov, B.S.; Zoria, A.V.; Myl' nikov, V.S.

    1990-06-01

    The paper demonstrates the feasibility of using MOS-LC modulators based on the s-effect with an internal dividing mirror as the optically controllable mirrors of frequency-type YAG:Nd(3+) lasers. It is shown that the maximum energy of the laser in free-running operation, 10 mJ/sq cm, is limited by the radiation resistance (not greater than 70 mJ/sq cm) of the orienting fluid, i.e., polyvinyl alcohol. The optical inhomogeneity of the modulator amounts to 20-40 percent, which is connected with the presence of a bonded single-crystal GaAs layer. The working frequency of the laser was about 20 Hz.

  10. Self-reported sleep disturbances due to railway noise: exposure-response relationships for nighttime equivalent and maximum noise levels.

    Science.gov (United States)

    Aasvang, Gunn Marit; Moum, Torbjorn; Engdahl, Bo

    2008-07-01

    The objective of the present survey was to study self-reported sleep disturbances due to railway noise with respect to nighttime equivalent noise level (L(p,A,eq,night)) and maximum noise level (L(p,A,max)). A sample of 1349 people in and around Oslo in Norway exposed to railway noise was studied in a cross-sectional survey to obtain data on sleep disturbances, sleep problems due to noise, and personal characteristics including noise sensitivity. Individual noise exposure levels were determined outside of the bedroom facade, the most-exposed facade, and inside the respondents' bedrooms. The exposure-response relationships were analyzed by using logistic regression models, controlling for possible modifying factors including the number of noise events (train pass-by frequency). L(p,A,eq,night) and L(p,A,max) were significantly correlated, and the proportion of reported noise-induced sleep problems increased as both L(p,A,eq,night) and L(p,A,max) increased. Noise sensitivity, type of bedroom window, and pass-by frequency were significant factors affecting noise-induced sleep disturbances, in addition to the noise exposure level. Because about half of the study population did not use a bedroom at the most-exposed side of the house, the exposure-response curve obtained by using noise levels for the most-exposed facade underestimated noise-induced sleep disturbance for those who actually have their bedroom at the most-exposed facade.

  11. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    Science.gov (United States)

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in exercise testing of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sampling of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied with the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the 220-age formula versus the Sheffield table. The maximum heart rate is similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should therefore be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.

  12. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
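    The iteration described reduces to the familiar EM update when the step size is 1; for other step sizes, the parameters are moved a fraction omega of the way along the EM update direction. A compact sketch for a two-component univariate normal mixture, written under that reading of the paper (the data and the omega value are illustrative), is given below.

    ```python
    import numpy as np

    def em_step(x, w, mu, sigma):
        """One standard EM update for a two-component 1-D normal mixture."""
        pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        resp = np.vstack([w[k] * pdf(mu[k], sigma[k]) for k in range(2)])
        resp /= resp.sum(axis=0)                          # posterior responsibilities
        n_k = resp.sum(axis=1)
        w_new = n_k / x.size
        mu_new = (resp @ x) / n_k
        sigma_new = np.sqrt((resp * (x - mu_new[:, None]) ** 2).sum(axis=1) / n_k)
        return w_new, mu_new, sigma_new

    def damped_em(x, w, mu, sigma, omega=1.5, n_iter=200):
        """Generalized steepest-ascent iteration: theta <- theta + omega * (EM(theta) - theta).
        omega = 1 recovers ordinary EM; the paper shows local convergence for 0 < omega < 2."""
        for _ in range(n_iter):
            w_em, mu_em, sigma_em = em_step(x, w, mu, sigma)
            w = np.clip(w + omega * (w_em - w), 1e-6, None)
            w = w / w.sum()                               # keep mixing weights normalized
            mu = mu + omega * (mu_em - mu)
            sigma = np.maximum(sigma + omega * (sigma_em - sigma), 1e-6)
        return w, mu, sigma

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
    print(damped_em(data, np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])))
    ```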

  14. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  15. Estimation of initiating event frequency for external flood events by extreme value theorem

    International Nuclear Information System (INIS)

    Chowdhury, Sourajyoti; Ganguly, Rimpi; Hari, Vibha

    2017-01-01

    External flood is an important common cause initiating event in nuclear power plants (NPPs). It may potentially lead to severe core damage (SCD) by first causing the failure of the systems required for maintaining the heat sinks and then by contributing to failures of the engineered systems designed to mitigate such failures. The sample NPP taken here is a twin-unit 220 MWe Indian standard pressurized heavy water reactor (PHWR) situated inland. A comprehensive in-house Level-1 internal event PSA for full power had already been performed. An external flood assessment was further conducted in the area of external hazard risk assessment in response to post-Fukushima measures taken in the nuclear industry. The present paper describes the methodology to calculate the initiating event (IE) frequency for external flood events for the sample inland Indian NPP. Generalized extreme value (GEV) theory based on the maximum likelihood method (MLM) and an order statistics approach (OSA) is used to analyse the rainfall data for the site. The thousand-year return level and the necessary return periods for extreme rainfall are evaluated. These results, along with plant-specific topographical calculations, quantitatively establish that external flooding resulting from upstream dam break, river flooding and heavy rainfall (flash flood) would be unlikely for the sample NPP in consideration.
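    Once the GEV parameters have been fitted by maximum likelihood to the series of annual maximum rainfall, the T-year return level is simply the (1 - 1/T) quantile of the fitted distribution. A minimal sketch using SciPy is shown below; the rainfall series is synthetic and stands in for the plant-site data.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic annual-maximum daily rainfall series (mm); stands in for site data.
    rng = np.random.default_rng(42)
    annual_max = stats.genextreme.rvs(c=-0.1, loc=120, scale=30, size=60, random_state=rng)

    # Fit the GEV by maximum likelihood and compute return levels.
    c, loc, scale = stats.genextreme.fit(annual_max)
    for T in (100, 1000):
        level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
        print(f"{T}-year return level: {level:.1f} mm")
    ```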

  16. Thermal modeling of core sampling in flammable gas waste tanks. Part 1: Push-mode sampling

    International Nuclear Information System (INIS)

    Unal, C.; Stroh, K.; Pasamehmetoglu, K.O.

    1997-01-01

    The radioactive waste stored in underground storage tanks at the Hanford site is routinely sampled for waste characterization purposes. Push- and rotary-mode core sampling are among the sampling methods employed. The waste includes mixtures of sodium nitrate and sodium nitrite with organic compounds that can produce violent exothermic reactions if heated above 160 C during core sampling. A self-propagating waste reaction would produce very high temperatures that eventually result in failure of the tank and radioactive material releases to the environment. A two-dimensional thermal model based on a lumped finite volume analysis method is developed. The enthalpy of each node is calculated from the first law of thermodynamics. A flash temperature and effective contact area concept were introduced to account for the interface temperature rise. No maximum temperature rise exceeding the critical value of 60 C was found in the cases studied for normal operating conditions. Several accident conditions are also examined. In these cases it was found that the maximum drill bit temperature remained below the critical reaction temperature as long as a 30 scfm purge flow is provided to the drill bit during sampling in rotary mode. Failure to provide the purge flow resulted in exceeding the limiting temperatures in a relatively short time.

  17. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
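    The numerical core alluded to here is a Toeplitz system relating the auto-correlation of the source (vertical) component to its cross-correlation with the radial component. The sketch below solves that system directly with a small damping term, i.e. a least-squares time-domain deconvolution rather than the full maximum-entropy/Levinson error-prediction scheme described by the authors; the synthetic traces and the damping value are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def toeplitz_deconvolution(radial, vertical, n_lags=100, damping=0.01):
        """Estimate a receiver-function-like filter h such that vertical * h ~ radial,
        by solving the damped normal equations (Toeplitz autocorrelation system)."""
        n = len(vertical)
        auto = np.correlate(vertical, vertical, mode="full")[n - 1:n - 1 + n_lags].copy()
        cross = np.correlate(radial, vertical, mode="full")[n - 1:n - 1 + n_lags]
        auto[0] *= (1.0 + damping)            # water-level style damping on the diagonal
        return solve_toeplitz(auto, cross)

    # Synthetic test: radial = vertical convolved with a known two-spike filter.
    rng = np.random.default_rng(0)
    vertical = rng.normal(size=1000)
    true_h = np.zeros(100)
    true_h[10], true_h[40] = 1.0, 0.5
    radial = np.convolve(vertical, true_h)[:1000]
    h_est = toeplitz_deconvolution(radial, vertical)
    print("recovered spike lags:", np.sort(np.argsort(h_est)[-2:]))
    ```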

  18. Dual-etalon, cavity-ring-down, frequency comb spectroscopy.

    Energy Technology Data Exchange (ETDEWEB)

    Strecker, Kevin E.; Chandler, David W.

    2010-10-01

    The 'dual etalon frequency comb spectrometer' is a novel low-cost spectrometer with few moving parts. A broad band light source (pulsed laser, LED, lamp ...) is split into two beam paths. One travels through an etalon and a sample gas, while the second arm is just an etalon cavity, and the two beams are recombined onto a single detector. If the free spectral ranges (FSR) of the two cavities are not identical, the intensity pattern at the detector will consist of a series of heterodyne frequencies. Each mode out of the sample-arm etalon will have a unique frequency in the RF (radio-frequency) range, where modern electronics can easily record the signals. By monitoring these RF beat frequencies we can then determine when an optical frequency is absorbed. The resolution is set by the FSR of the cavity, typically 10 MHz, with a bandwidth up to hundreds of cm⁻¹. In this report, the new spectrometer is described in detail and demonstration experiments on iodine absorption are carried out. Further, we discuss powerful potential next-generation steps toward developing this into a point sensor for monitoring combustion by-products, environmental pollutants, and warfare agents.

  19. High-efficiency frequency doubling of continuous-wave laser light.

    Science.gov (United States)

    Ast, Stefan; Nia, Ramon Moghadas; Schönbeck, Axel; Lastzka, Nico; Steinlechner, Jessica; Eberle, Tobias; Mehmet, Moritz; Steinlechner, Sebastian; Schnabel, Roman

    2011-09-01

    We report on the observation of high-efficiency frequency doubling of 1550 nm continuous-wave laser light in a nonlinear cavity containing a periodically poled potassium titanyl phosphate crystal (PPKTP). The fundamental field had a power of 1.10 W and was converted into 1.05 W at 775 nm, yielding a total external conversion efficiency of 95±1%. The latter value is based on the measured depletion of the fundamental field being consistent with the absolute values derived from numerical simulations. According to our model, the conversion efficiency achieved was limited by the nonperfect mode matching into the nonlinear cavity and by the nonperfect impedance matching for the maximum input power available. Our result shows that cavity-assisted frequency conversion based on PPKTP is well suited for low-decoherence frequency conversion of quantum states of light.

  20. Flood Frequency Analysis For Partial Duration Series In Ganjiang River Basin

    Science.gov (United States)

    zhangli, Sun; xiufang, Zhu; yaozhong, Pan

    2016-04-01

    Accurate estimation of flood frequency is key to effective, nationwide flood damage abatement programs. The partial duration series (PDS) method is widely used in hydrologic studies because it considers all events above a certain threshold level as compared to the annual maximum series (AMS) method, which considers only the annual maximum value. However, the PDS has a drawback in that it is difficult to define the thresholds and maintain an independent and identical distribution of the partial duration time series; this drawback is discussed in this paper. The Ganjiang River is the seventh largest tributary of the Yangtze River, the longest river in China. The Ganjiang River covers a drainage area of 81,258 km2 at the Wanzhou hydrologic station as the basin outlet. In this work, 56 years of daily flow data (1954-2009) from the Wanzhou station were used to analyze flood frequency, and the Pearson-III model was employed as the hydrologic probability distribution. Generally, three tasks were accomplished: (1) the threshold of PDS by percentile rank of daily runoff was obtained; (2) trend analysis of the flow series was conducted using PDS; and (3) flood frequency analysis was conducted for partial duration flow series. The results showed a slight upward trend of the annual runoff in the Ganjiang River basin. The maximum flow with a 0.01 exceedance probability (corresponding to a 100-year flood peak under stationary conditions) was 20,000 m3/s, while that with a 0.1 exceedance probability was 15,000 m3/s. These results will serve as a guide to hydrological engineering planning, design, and management for policymakers and decision makers associated with hydrology.
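    The two ingredients of the analysis, choosing the PDS threshold as a percentile of the daily-flow record and fitting a Pearson Type III distribution to the resulting peak series, can be sketched in a few lines. SciPy's pearson3 distribution and a naive declustering rule are used here; the synthetic flow series, the 95th-percentile threshold and the 7-day independence window are illustrative assumptions, not the values used in the study.

    ```python
    import numpy as np
    from scipy import stats

    def pds_peaks(daily_flow, pct=95.0, min_separation=7):
        """Partial-duration series: exceedances of a percentile threshold, kept only if
        separated by at least `min_separation` days (crude independence/declustering rule)."""
        threshold = np.percentile(daily_flow, pct)
        idx = np.where(daily_flow > threshold)[0]
        peaks, last = [], -min_separation
        for i in idx:
            if i - last >= min_separation:
                peaks.append(daily_flow[i])
                last = i
            elif daily_flow[i] > peaks[-1]:          # keep the larger peak within a cluster
                peaks[-1] = daily_flow[i]
        return np.asarray(peaks), threshold

    # Synthetic 56-year daily flow record (m^3/s), standing in for the Wanzhou data.
    rng = np.random.default_rng(7)
    flow = stats.gamma.rvs(a=2.0, scale=1500, size=56 * 365, random_state=rng)

    peaks, thr = pds_peaks(flow)
    skew, loc, scale = stats.pearson3.fit(peaks)          # Pearson Type III fit to the peaks
    q100 = stats.pearson3.ppf(1 - 0.01, skew, loc=loc, scale=scale)
    print(f"threshold = {thr:.0f} m^3/s, 0.01-exceedance flow = {q100:.0f} m^3/s")
    ```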

  1. The non-equilibrium response of a superconductor to pair-breaking radiation measured over a broad frequency band

    NARCIS (Netherlands)

    De Visser, P.J.; Yates, S.J.C.; Guruswamy, T.; Goldie, D.J.; Withington, S.; Neto, A.; Llombart, N.; Baryshev, A.M.; Klapwijk, T.M.; Baselmans, J.J.A.

    2015-01-01

    We have measured the absorption of terahertz radiation in a BCS superconductor over a broad range of frequencies from 200 GHz to 1.1 THz, using a broadband antenna-lens system and a tantalum microwave resonator. From low frequencies, the response of the resonator rises rapidly to a maximum at the

  2. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modelled in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov Chain Monte Carlo (MCMC) algorithm was used to estimate the quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used in order to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
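    In practice, a non-stationary quantile of annual maximum discharge can be expressed as a B-spline function of a covariate (time or a climate index) and fitted by quantile regression. The authors estimate the model with MCMC; the sketch below instead uses the frequentist quantile-regression routine in statsmodels with a patsy B-spline basis to convey the same model structure, on synthetic data and with an arbitrary knot choice.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic annual maxima whose upper quantiles drift with a covariate (e.g. a climate index).
    rng = np.random.default_rng(3)
    n = 80
    covariate = np.linspace(-2, 2, n)
    ann_max = 100 + 15 * np.sin(covariate) + rng.gumbel(scale=8 + 3 * (covariate > 0), size=n)
    df = pd.DataFrame({"q": ann_max, "x": covariate})

    # B-spline quantile regression: the 0.95 conditional quantile as a smooth function of x.
    model = smf.quantreg("q ~ bs(x, df=5, degree=3)", data=df)
    fit = model.fit(q=0.95)
    df["q95_hat"] = fit.predict(df)
    print(fit.params)
    ```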

  3. Frequencies of chromosome aberration on radiation workers

    International Nuclear Information System (INIS)

    Yanti Lusiyanti; Zubaidah Alatas

    2016-01-01

    Radiation exposure of the body can cause damage to the genetic material in cells (cytogenetic damage) in the form of structural changes, or chromosomal aberrations, in peripheral blood lymphocytes. Chromosomal aberrations can be unstable, such as dicentric and ring chromosomes, or stable, such as translocations. The dicentric chromosome is the gold-standard biomarker of radiation exposure, and the chromosome translocation is a biomarker for retrospective biodosimetry. The aim of this study is to examine chromosomal aberrations in radiation workers to determine the potential cell damage that may arise due to occupational radiation exposure. The examination was carried out on blood samples from 55 radiation workers with 5-30 years of service. Chromosome aberration frequency measurement starts with blood sampling, culturing, harvesting and slide preparation, followed by lymphocyte chromosome staining with Giemsa and painting with the Fluorescence In Situ Hybridization (FISH) technique. The results showed that chromosomal translocations were not found in the workers' blood samples and dicentric chromosomes were found in only 2 blood samples, with a frequency of 0.001/cell. The frequency of chromosomal aberrations in the blood cells of these workers is within normal limits, which means that the radiation safety aspects have been implemented very well. (author)

  4. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    Energy Technology Data Exchange (ETDEWEB)

    He, Shiyuan; Huang, Jianhua Z.; Long, James [Department of Statistics, Texas A and M University, College Station, TX (United States); Yuan, Wenlong; Macri, Lucas M., E-mail: lmacri@tamu.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX (United States)

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
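    The hybrid strategy described, a dense grid search over the multimodal frequency axis combined with smooth optimization of the remaining parameters, can be illustrated in stripped-down form: for each trial period, fit the sinusoidal basis by linear least squares and keep the period with the smallest residual. The Gaussian-process component and the prior integration are omitted, and the sparsely sampled light curve below is simulated rather than drawn from the M33 survey.

    ```python
    import numpy as np

    def grid_search_period(t, mag, periods):
        """For each trial period P, fit mag ~ a + b*sin(2*pi*t/P) + c*cos(2*pi*t/P)
        by least squares and return the period minimizing the residual sum of squares."""
        best_p, best_rss = None, np.inf
        for p in periods:
            phase = 2 * np.pi * t / p
            X = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
            coef, rss, *_ = np.linalg.lstsq(X, mag, rcond=None)
            rss = rss[0] if rss.size else np.sum((mag - X @ coef) ** 2)
            if rss < best_rss:
                best_p, best_rss = p, rss
        return best_p

    # Sparsely sampled, noisy sinusoidal light curve with a 285-day period (illustrative).
    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 2000, 60))            # 60 epochs over ~5.5 years
    mag = 12.0 + 0.8 * np.sin(2 * np.pi * t / 285.0) + rng.normal(0, 0.1, t.size)

    grid = np.linspace(100, 1000, 5000)              # dense period grid (days)
    print("estimated period:", grid_search_period(t, mag, grid))
    ```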

  5. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1991-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. The routine sampling plan for the SESP has been revised this year to reflect changing site operations and priorities. Some sampling previously performed at least annually has been reduced in frequency, and some new sampling to be performed at a less than annual frequency has been added. Therefore, the SESP schedule reflects sampling to be conducted in calendar year 1991 as well as future years. The ground-water sampling schedule is for 1991. This schedule is subject to modification during the year in response to changes in Site operation, program requirements, and the nature of the observed results. Operational limitations such as weather, mechanical failures, sample availability, etc., may also require schedule modifications. Changes will be documented in the respective project files, but this plan will not be reissued. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs

  6. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L.E.

    1991-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. The routine sampling plan for the SESP has been revised this year to reflect changing site operations and priorities. Some sampling previously performed at least annually has been reduced in frequency, and some new sampling to be performed at a less than annual frequency has been added. Therefore, the SESP schedule reflects sampling to be conducted in calendar year 1991 as well as future years. The ground-water sampling schedule is for 1991. This schedule is subject to modification during the year in response to changes in Site operation, program requirements, and the nature of the observed results. Operational limitations such as weather, mechanical failures, sample availability, etc., may also require schedule modifications. Changes will be documented in the respective project files, but this plan will not be reissued. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs.

  7. Joint fundamental frequency and order estimation using optimal filtering

    Directory of Open Access Journals (Sweden)

    Jakobsson Andreas

    2011-01-01

    Full Text Available Abstract In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are or can be interpreted as filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some filtering methods based on optimal filtering that have recently been proposed, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.
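    A bare-bones version of the joint estimation idea is to sweep candidate fundamental frequencies and model orders, score each pair by how much signal energy the harmonic model captures, and penalize higher orders with a model-selection term. The sketch below does this with a simple FFT-based harmonic summation and a BIC-style penalty; it illustrates the concept only, not the optimal-filtering estimators or the MAP criterion derived in the paper.

    ```python
    import numpy as np

    def estimate_pitch_and_order(x, fs, f0_grid, max_order=10):
        """Joint fundamental-frequency / number-of-harmonics estimate via harmonic
        summation on the periodogram plus a BIC-like penalty on the model order."""
        n = len(x)
        spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        total = spec.sum()
        best = (None, None, -np.inf)
        for f0 in f0_grid:
            for order in range(1, max_order + 1):
                bins = np.searchsorted(freqs, f0 * np.arange(1, order + 1))
                bins = bins[bins < len(spec)]
                captured = spec[bins].sum()
                resid = max(total - captured, 1e-12)
                score = -n * np.log(resid) - order * np.log(n)   # fit term minus order penalty
                if score > best[2]:
                    best = (f0, order, score)
        return best[:2]

    fs = 8000
    t = np.arange(0, 0.5, 1 / fs)
    x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))   # 220 Hz, 4 harmonics
    print(estimate_pitch_and_order(x, fs, f0_grid=np.arange(100, 400, 1.0)))
    ```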

  8. Reliability of mechanisms with periodic random modal frequencies using an extreme value-based approach

    International Nuclear Information System (INIS)

    Savage, Gordon J.; Zhang, Xufang; Son, Young Kap; Pandey, Mahesh D.

    2016-01-01

    Resonance in a dynamic system is to be avoided since it often leads to impaired performance, overstressing, fatigue fracture and adverse human reactions. Thus, it is necessary to know the modal frequencies and ensure they do not coincide with any applied periodic loadings. For a rotating planar mechanism, the coefficients in the mass and stiffness matrices are periodically varying, and if the underlying geometry and material properties are treated as random variables then the modal frequencies are both position-dependent and probabilistic. The avoidance of resonance is now a complex problem. Herein, free vibration analysis helps determine ranges of modal frequencies that in turn, identify the running speeds of the mechanism to be avoided. This paper presents an efficient and accurate sample-based approach to determine probabilistic minimum and maximum extremes of the fundamental frequencies and the angular positions of their occurrence. Then, given critical lower and upper frequency constraints it is straightforward to determine reliability in terms of probability of exceedance. The novelty of the proposed approach is that the original expensive and implicit mechanistic model is replaced by an explicit meta-model that captures the tolerances of the design variables over the entire range of angular positions: position-dependent eigenvalues can be found easily and quickly. Extreme-value statistics of the modal frequencies and extreme-value statistics of the angular positions are readily computed through MCS. Limit-state surfaces that connect the frequencies to the design variables may be easily constructed. Error analysis identifies three errors and the paper presents ways to control them so the methodology can be sufficiently accurate. A numerical example of a flexible four-bar linkage shows the proposed methodology has engineering applications. The impact of the proposed methodology is two-fold: it presents a safe-side analysis based on free vibration methods to

  9. Detrimental Effect Elimination of Laser Frequency Instability in Brillouin Optical Time Domain Reflectometer by Using Self-Heterodyne Detection

    Directory of Open Access Journals (Sweden)

    Yongqian Li

    2017-03-01

    Full Text Available A useful method for eliminating the detrimental effect of laser frequency instability on Brillouin signals by employing self-heterodyne detection of Rayleigh and Brillouin scattering is presented. From an analysis of Brillouin scattering spectra from fibers of different lengths measured by heterodyne detection, the maximum usable pulse width immune to laser frequency instability is found to be about 4 µs in a self-heterodyne detection Brillouin optical time domain reflectometer (BOTDR) system using a broad-band laser with low frequency stability. Applying the self-heterodyne detection of Rayleigh and Brillouin scattering in the BOTDR system, we successfully demonstrate that the detrimental effect of laser frequency instability on Brillouin signals can be eliminated effectively. Employing the broad-band laser modulated by an electro-optic modulator driven with 130-ns-wide pulses, the observed maximum errors in the temperatures measured by the local heterodyne and self-heterodyne detection BOTDR systems are 7.9 °C and 1.2 °C, respectively.

  10. Modulation of electromagnetic and absorption properties in 18-26.5 GHz frequency range of strontium hexaferrites with doping of cobalt-zirconium

    Science.gov (United States)

    Pubby, Kunal; Narang, Sukhleen Bindra; Kaur, Prabhjyot; Chawla, S. K.

    2017-05-01

    Hexaferrite nano-particles of stoichiometric composition Sr(CoZr)xFe12-2xO19, with x = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, were prepared using a sol-gel auto-combustion route owing to its advantages such as the low sintering temperature requirement, homogeneity and uniformity of grains. Tartaric acid was utilized as a fuel to complete the chemical reaction. The goal of this study is to analyse the effect of co-substitution of cobalt and zirconium on the electromagnetic and absorption properties of pure SrFe12O19 hexaferrite. The properties were measured on rectangular pellets of thickness 2.5 mm for the K frequency band using a Vector Network Analyzer. The doping of Co-Zr has resulted in an increase in the real as well as the imaginary parts of the permittivity. The values of the real permittivity lie in the range 3.6-7.0 for all the compositions. The real part of the permeability remains in the range 0.7-1.6 in the studied frequency band for all the samples and shows a slightly increasing trend with frequency. The maximum values of the dielectric loss tangent peak (3.04) and the magnetic loss tangent peak (2.34), among all the prepared compositions, have been observed for the composition x = 0.2. Compositions with x = 0.6 and x = 0.0 also have high dielectric and magnetic loss peaks. Dielectric loss peaks are attributed to dielectric resonance and magnetic loss peaks are attributed to natural resonance. Experimentally determined reflection loss results show that all six compositions of the prepared series have high values of absorption, proposing them as single-layer absorbers in the 18-26.5 GHz frequency range. The composition with x = 0.2 has the maximum absorption capacity, with a reflection loss peak of -37.2 dB at 24.3 GHz. The undoped composition also has a high absorption peak (-25.46 dB), but its -10 dB absorption bandwidth is the minimum (2.2 GHz) of the present series. The maximum absorption bandwidth is obtained for x = 1.0 (4.1 GHz). Other doped compositions also

  11. Modulation of electromagnetic and absorption properties in 18-26.5 GHz frequency range of strontium hexaferrites with doping of cobalt-zirconium

    Energy Technology Data Exchange (ETDEWEB)

    Pubby, Kunal; Narang, Sukhleen Bindra [Guru Nanak Dev University, Department of Electronics Technology, Amritsar (India); Kaur, Prabhjyot; Chawla, S.K. [Guru Nanak Dev University, Department of Chemistry, Centre for Advanced Studies-I, Amritsar (India)

    2017-05-15

    Hexaferrite nano-particles of stoichiometric composition Sr(CoZr)xFe12-2xO19, with x = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, were prepared using a sol-gel auto-combustion route owing to its advantages such as the low sintering temperature requirement, homogeneity and uniformity of grains. Tartaric acid was utilized as a fuel to complete the chemical reaction. The goal of this study is to analyse the effect of co-substitution of cobalt and zirconium on the electromagnetic and absorption properties of pure SrFe12O19 hexaferrite. The properties were measured on rectangular pellets of thickness 2.5 mm for the K frequency band using a Vector Network Analyzer. The doping of Co-Zr has resulted in an increase in the real as well as the imaginary parts of the permittivity. The values of the real permittivity lie in the range 3.6-7.0 for all the compositions. The real part of the permeability remains in the range 0.7-1.6 in the studied frequency band for all the samples and shows a slightly increasing trend with frequency. The maximum values of the dielectric loss tangent peak (3.04) and the magnetic loss tangent peak (2.34), among all the prepared compositions, have been observed for the composition x = 0.2. Compositions with x = 0.6 and x = 0.0 also have high dielectric and magnetic loss peaks. Dielectric loss peaks are attributed to dielectric resonance and magnetic loss peaks are attributed to natural resonance. Experimentally determined reflection loss results show that all six compositions of the prepared series have high values of absorption, proposing them as single-layer absorbers in the 18-26.5 GHz frequency range. The composition with x = 0.2 has the maximum absorption capacity, with a reflection loss peak of -37.2 dB at 24.3 GHz. The undoped composition also has a high absorption peak (-25.46 dB), but its -10 dB absorption bandwidth is the minimum (2.2 GHz) of the present series. The maximum absorption bandwidth is obtained for x = 1.0 (4.1 GHz). Other doped compositions also have high absorption bandwidth
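    The reflection-loss figures quoted for these single-layer absorbers follow from standard transmission-line theory for a metal-backed slab: the normalized input impedance is computed from the complex permittivity and permeability and the layer thickness, and RL = 20 log10 |(Zin - Z0)/(Zin + Z0)|. A generic sketch of that calculation is given below; the frequency-independent mu_r and eps_r values are placeholders, not the measured data of either record.

    ```python
    import numpy as np

    def reflection_loss_db(freq_hz, eps_r, mu_r, thickness_m):
        """Reflection loss of a metal-backed single-layer absorber at normal incidence.
        z_rel = Z_in/Z_0 = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r))."""
        c = 2.998e8
        z_rel = np.sqrt(mu_r / eps_r) * np.tanh(
            1j * 2 * np.pi * freq_hz * thickness_m / c * np.sqrt(mu_r * eps_r))
        gamma = (z_rel - 1) / (z_rel + 1)          # reflection coefficient w.r.t. free space
        return 20 * np.log10(np.abs(gamma))

    # Placeholder complex material parameters; only the 2.5 mm pellet thickness is from the abstract.
    f = np.linspace(18e9, 26.5e9, 200)
    eps_r = 5.0 - 2.0j          # assumed complex permittivity (illustrative)
    mu_r = 1.1 - 0.6j           # assumed complex permeability (illustrative)
    rl = reflection_loss_db(f, eps_r, mu_r, 2.5e-3)
    print(f"minimum RL: {rl.min():.1f} dB at {f[rl.argmin()] / 1e9:.1f} GHz")
    ```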

  12. INVESTIGATION OF THE FREQUENCY-TEMPERATURE RELATIONSHIP OF THE DIELECTRIC PERMITTIVITY OF THE PZT PIEZOCERAMICS IN THE LOW FREQUENCY RANGE

    Directory of Open Access Journals (Sweden)

    A. I. ZOLOTAREVSKIY

    2018-05-01

    Full Text Available Purpose. To investigate the frequency-temperature relationship of the dielectric permittivity of PZT piezoceramics in the low frequency range. Methodology. To obtain the frequency-temperature relationship of the dielectric permittivity of the PZT piezoceramics, a technique was used in which the capacitance of a capacitor between whose plates the sample was placed is determined. The value of the dielectric permittivity of the sample was calculated from the capacitance obtained. Findings. The frequency-temperature relationship of the dielectric permittivity of the PZT piezoceramics in the low frequency range has been obtained by the authors. At low temperature the dielectric permittivity is practically independent of the frequency of the alternating voltage; with increasing temperature its value increases and a frequency dependence is observed. The temperature dependence of the dielectric permittivity of the PZT piezoceramics is satisfactorily described by an exponential functional dependence in the low-temperature range. The activation energy of the PZT piezoceramics polarization is determined from the graph of the dependence of the logarithm of the dielectric permittivity upon the inverse temperature. The different values of the activation energy for the two temperature regions point to the existence of different mechanisms of PZT piezoceramics polarization in the temperature range investigated. Originality. The authors investigated the frequency-temperature relationship of the dielectric permittivity of the PZT piezoceramics in the low-frequency range. It is established that the temperature dependence of the dielectric permittivity of the PZT piezoceramics is satisfactorily described by an exponential functional relationship in the low-temperature range. The activation energy of polarization is determined for two temperature sections. Practical value. The research results can be used to study the mechanism of polarization of

  13. The effect of beat frequency on eye movements during free viewing.

    Science.gov (United States)

    Maróti, Emese; Knakker, Balázs; Vidnyánszky, Zoltán; Weiss, Béla

    2017-02-01

    External periodic stimuli entrain brain oscillations and affect perception and attention. It has been shown that background music can change oculomotor behavior and facilitate detection of visual objects occurring on the musical beat. However, whether musical beats in different tempi modulate information sampling differently during natural viewing remains to be explored. Here we addressed this question by investigating how listening to naturalistic drum grooves in two different tempi affects eye movements of participants viewing natural scenes on a computer screen. We found that the beat frequency of the drum grooves modulated the rate of eye movements: fixation durations were increased at the lower beat frequency (1.7Hz) as compared to the higher beat frequency (2.4Hz) and no music conditions. Correspondingly, estimated visual sampling frequency decreased as fixation durations increased with lower beat frequency. These results imply that slow musical beats can retard sampling of visual information during natural viewing by increasing fixation durations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  15. Hybrid Intelligent Control Method to Improve the Frequency Support Capability of Wind Energy Conversion Systems

    Directory of Open Access Journals (Sweden)

    Shin Young Heo

    2015-10-01

    Full Text Available This paper presents a hybrid intelligent control method that enables frequency support control for permanent magnet synchronous generator (PMSG) wind turbines. The proposed method for a wind energy conversion system (WECS) is designed to have PMSG modeling and full-scale back-to-back insulated-gate bipolar transistor (IGBT) converters comprising the machine and grid side. The controllers of the machine side converter (MSC) and the grid side converter (GSC) are designed to achieve maximum power point tracking (MPPT) based on an improved hill climb searching (IHCS) control algorithm and de-loaded (DL) operation to obtain a power margin. Along with this comprehensive control of maximum power tracking mode based on the IHCS, a method for kinetic energy (KE) discharge control of the supporting primary frequency control scheme with DL operation is developed to regulate the short-term frequency response and maintain reliable operation of the power system. The effectiveness of the hybrid intelligent control method is verified by a numerical simulation in PSCAD/EMTDC. Simulation results show that the proposed approach can improve the frequency regulation capability in the power system.

  16. Dynamic nuclear polarization using frequency modulation at 3.34 T.

    Science.gov (United States)

    Hovav, Y; Feintuch, A; Vega, S; Goldfarb, D

    2014-01-01

    During dynamic nuclear polarization (DNP) experiments polarization is transferred from unpaired electrons to their neighboring nuclear spins, resulting in dramatic enhancement of the NMR signals. While in most cases this is achieved by continuous wave (cw) irradiation applied to samples in fixed external magnetic fields, here we show that DNP enhancement of static samples can improve by modulating the microwave (MW) frequency at a constant field of 3.34 T. The efficiency of triangular shaped modulation is explored by monitoring the (1)H signal enhancement in frozen solutions containing different TEMPOL radical concentrations at different temperatures. The optimal modulation parameters are examined experimentally and under the most favorable conditions a threefold enhancement is obtained with respect to constant frequency DNP in samples with low radical concentrations. The results are interpreted using numerical simulations on small spin systems. In particular, it is shown experimentally and explained theoretically that: (i) The optimal modulation frequency is higher than the electron spin-lattice relaxation rate. (ii) The optimal modulation amplitude must be smaller than the nuclear Larmor frequency and the EPR line-width, as expected. (iii) The MW frequencies corresponding to the enhancement maxima and minima are shifted away from one another when using frequency modulation, relative to the constant frequency experiments. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Determination of maximum power transfer conditions of bimorph piezoelectric energy harvesters

    KAUST Repository

    Ahmad, Mahmoud Al

    2012-07-23

    In this paper, a method to find the maximum power transfer conditions in bimorph piezoelectric-based harvesters is proposed. Explicitly, we derive a closed-form expression that relates the load resistance to the mechanical parameters describing the bimorph, based on the electromechanical, single degree of freedom analogy. Further, by taking into account the intrinsic capacitance of the piezoelectric harvester, a more descriptive expression for the resonant frequency of piezoelectric bimorphs was derived. In the interest of impartiality, we apply the proposed approach to previously published experimental results and compare it with other reported hypotheses. It was found that the proposed method was able to predict the actual optimum load resistance more accurately than other methods reported in the literature. © 2012 American Institute of Physics.

  18. Characterization of Passive Spectral Regrowth in Radio Frequency Systems

    Science.gov (United States)

    2013-01-01

    ... such as using RF absorber and Faraday cages around sensitive spots. To ensure maximum radiated isolation, each cable or component should be shielded ... The work experimentally and analytically characterizes the nonlinear effects of spectral-regrowth-generating phenomena on an RF signal. Detection of low-level passive spectral regrowth close in frequency to a ...

  19. Effects of state and trait factors on nightmare frequency.

    Science.gov (United States)

    Schredl, Michael

    2003-10-01

    In a new approach, this study compared the effects of trait and state factors on nightmare frequency in a non-clinical sample. Although neuroticism and boundary thinness were related to nightmare frequency, regression analyses indicated that the trait measures did not add to the variance explained by the state measures. This finding supports the so-called continuity hypothesis of dreaming, i.e., nightmares reflect negative waking-life experiences. Second, the moderate relationship between nightmare frequency and poor sleep quality was partly explained by the daytime measures of neuroticism and stress, but it can be assumed that nightmares are an independent factor contributing to complaints of insomnia. Longitudinal studies measuring nightmare frequency and stress on a daily basis will shed light on the temporal relationship between daytime measures and the occurrence of nightmares. It will also be very interesting to study the relationship between stress and nightmare frequency in a sample that has undergone cognitive-behavioral treatment for nightmares.

  20. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets. The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB.

  1. Temporal and spatial variation of maximum wind speed days during the past 20 years in major cities of Xinjiang

    Science.gov (United States)

    Baidourela, Aliya; Jing, Zhen; Zhayimu, Kahaer; Abulaiti, Adili; Ubuli, Hakezi

    2018-04-01

    Wind erosion and sandstorms occur in the neighborhood of exposed dust sources. Wind erosion and desertification increase the frequency of dust storms, deteriorate air quality, and damage the ecological environment and agricultural production. The Xinjiang region has a relatively fragile ecological environment. Therefore, the study of the characteristics of maximum wind speed and wind direction in this region is of great significance to disaster prevention and mitigation, the management of activated dunes, and the sustainable development of the region. Based on the latest data of 71 sites in Xinjiang, this study explores the temporal evolution and spatial distribution of maximum wind speed in Xinjiang from 1993 to 2013, and highlights the distribution of annual and monthly maximum wind speed and the characteristics of wind direction in Xinjiang. Between 1993 and 2013, Ulugchat County exhibited the highest number of days with the maximum wind speed (> 17 m/s), while Wutian exhibited the lowest number. In Xinjiang, 1999 showed the highest number of maximum wind speed days (257 days), while 2013 showed the lowest number (69 days). Spring and summer wind speeds were greater than those in autumn and winter. There were obvious differences in the direction of maximum wind speed in major cities and counties of Xinjiang. East of the Tianshan Mountains, maximum wind speeds are mainly directed southeast and northeast. North and south of the Tianshan Mountains, they are mainly directed northwest and northeast, while west of the Tianshan Mountains, they are mainly directed southeast and northwest.

  2. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for the application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  3. High efficiency single frequency 355 nm all-solid-state UV laser

    International Nuclear Information System (INIS)

    Xie, Xiaobing; Wei, Daikang; Ma, Xiuhua; Li, Shiguang; Liu, Jiqiao; Zhu, Xiaolei; Chen, Weibiao

    2016-01-01

    A novel conductively cooled high energy single-frequency 355 nm all-solid-state UV laser is presented based on the sum-frequency mixing technique. In this system, a pulsed seeder laser at 1064 nm wavelength, modulated by an AOM, is directly amplified by cascaded multi-stage hybrid laser amplifiers, and two LBO crystals are used for the SHG and SFG. A maximum UV pulse energy of 226 mJ at 355 nm wavelength is achieved with a frequency-tripled conversion efficiency as high as 55%; the pulse width is around 12.2 ns at a repetition frequency of 30 Hz. The beam quality factor M² of the output UV laser is measured to be 2.54 and 2.98, respectively, in two orthogonal directions. (paper)

  4. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    Science.gov (United States)

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of bit cycle to remove the influence of NH code. Secondly, the maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under the CN0s of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
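
    A minimal sketch of the differential-coherent idea described above is given below; it illustrates the principle only and is not the paper's detector, so all names, parameters, and thresholds are hypothetical. Multiplying each 1 ms correlation output by the conjugate of the sample one bit period earlier largely cancels a constant frequency offset, and the bit edge is taken where the accumulated magnitude over candidate bit windows is largest.

```python
import numpy as np

def diff_coherent_bit_sync(z, samples_per_bit, delay_bits=1):
    """Toy differential-coherent bit-edge search.

    z: complex baseband correlation outputs (one per millisecond).
    The product z[n] * conj(z[n-D]), with D a multiple of the bit period,
    removes most of a constant frequency offset and the NH-code sign; the
    offset whose bit-wise coherent sums have the largest total magnitude
    is taken as the bit edge (a likelihood-like metric).
    """
    D = delay_bits * samples_per_bit
    d = z[D:] * np.conj(z[:-D])                # differential samples
    n_bits = len(d) // samples_per_bit
    scores = np.zeros(samples_per_bit)
    for offset in range(samples_per_bit):
        seg = d[offset: offset + n_bits * samples_per_bit]
        seg = seg[: (len(seg) // samples_per_bit) * samples_per_bit]
        bit_sums = seg.reshape(-1, samples_per_bit).sum(axis=1)
        scores[offset] = np.sum(np.abs(bit_sums))
    return int(np.argmax(scores)), scores
```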

  5. Using snowflake surface-area-to-volume ratio to model and interpret snowfall triple-frequency radar signatures

    Directory of Open Access Journals (Sweden)

    M. Gergely

    2017-10-01

    Full Text Available The snowflake microstructure determines the microwave scattering properties of individual snowflakes and has a strong impact on snowfall radar signatures. In this study, individual snowflakes are represented by collections of randomly distributed ice spheres where the size and number of the constituent ice spheres are specified by the snowflake mass and surface-area-to-volume ratio (SAV), and the bounding volume of each ice sphere collection is given by the snowflake maximum dimension. Radar backscatter cross sections for the ice sphere collections are calculated at X-, Ku-, Ka-, and W-band frequencies and then used to model triple-frequency radar signatures for exponential snowflake size distributions (SSDs). Additionally, snowflake complexity values obtained from high-resolution multi-view snowflake images are used as an indicator of snowflake SAV to derive snowfall triple-frequency radar signatures. The modeled snowfall triple-frequency radar signatures cover a wide range of triple-frequency signatures that were previously determined from radar reflectivity measurements and illustrate characteristic differences related to snow type, quantified through snowflake SAV, and snowflake size. The results show high sensitivity to snowflake SAV and SSD maximum size but are generally less affected by uncertainties in the parameterization of snowflake mass, indicating the importance of snowflake SAV for the interpretation of snowfall triple-frequency radar signatures.

  6. Post-growth annealing induced change of conductivity in As-doped ZnO grown by radio frequency magnetron sputtering

    Energy Technology Data Exchange (ETDEWEB)

    To, C. K.; Yang, B.; Su, S. C.; Ling, C. C.; Beling, C. D.; Fung, S. [Department of Physics, University of Hong Kong, Pokfulam Road (Hong Kong)

    2011-12-01

    Arsenic-doped ZnO films were fabricated by the radio frequency magnetron sputtering method at a relatively low substrate temperature of 200 °C. Post-growth annealing in air was carried out up to a temperature of 1000 °C. The samples were characterized by Hall measurement, positron annihilation spectroscopy (PAS), secondary ion mass spectroscopy (SIMS), and cathodoluminescence (CL). The as-grown sample was n-type and converted to p-type material after the 400 °C annealing. The resulting hole concentration increased with annealing temperature and reached a maximum of 6 × 10^17 cm^-3 at an annealing temperature of 600 °C. The origin of the p-type conductivity was consistent with the As_Zn(V_Zn)_2 shallow acceptor model. Further increasing the annealing temperature decreased the hole concentration of the samples and finally converted the sample back to n-type. The evidence suggested that the removal of the p-type conductivity was due to the dissociation of the As_Zn(V_Zn)_2 acceptor and the creation of the deep-level defect giving rise to the green luminescence.

  7. Post-growth annealing induced change of conductivity in As-doped ZnO grown by radio frequency magnetron sputtering

    Science.gov (United States)

    To, C. K.; Yang, B.; Su, S. C.; Ling, C. C.; Beling, C. D.; Fung, S.

    2011-12-01

    Arsenic-doped ZnO films were fabricated by the radio frequency magnetron sputtering method at a relatively low substrate temperature of 200 °C. Post-growth annealing in air was carried out up to a temperature of 1000 °C. The samples were characterized by Hall measurement, positron annihilation spectroscopy (PAS), secondary ion mass spectroscopy (SIMS), and cathodoluminescence (CL). The as-grown sample was n-type and converted to p-type material after the 400 °C annealing. The resulting hole concentration increased with annealing temperature and reached a maximum of 6 × 10^17 cm^-3 at an annealing temperature of 600 °C. The origin of the p-type conductivity was consistent with the As_Zn(V_Zn)_2 shallow acceptor model. Further increasing the annealing temperature decreased the hole concentration of the samples and finally converted the sample back to n-type. The evidence suggested that the removal of the p-type conductivity was due to the dissociation of the As_Zn(V_Zn)_2 acceptor and the creation of the deep-level defect giving rise to the green luminescence.

  8. Post-growth annealing induced change of conductivity in As-doped ZnO grown by radio frequency magnetron sputtering

    International Nuclear Information System (INIS)

    To, C. K.; Yang, B.; Su, S. C.; Ling, C. C.; Beling, C. D.; Fung, S.

    2011-01-01

    Arsenic-doped ZnO films were fabricated by the radio frequency magnetron sputtering method at a relatively low substrate temperature of 200 °C. Post-growth annealing in air was carried out up to a temperature of 1000 °C. The samples were characterized by Hall measurement, positron annihilation spectroscopy (PAS), secondary ion mass spectroscopy (SIMS), and cathodoluminescence (CL). The as-grown sample was n-type and converted to p-type material after the 400 °C annealing. The resulting hole concentration increased with annealing temperature and reached a maximum of 6 × 10^17 cm^-3 at an annealing temperature of 600 °C. The origin of the p-type conductivity was consistent with the As_Zn(V_Zn)_2 shallow acceptor model. Further increasing the annealing temperature decreased the hole concentration of the samples and finally converted the sample back to n-type. The evidence suggested that the removal of the p-type conductivity was due to the dissociation of the As_Zn(V_Zn)_2 acceptor and the creation of the deep-level defect giving rise to the green luminescence.

  9. Optimization of carrier frequency and duty cycle for pulse modulation of biological signals.

    Science.gov (United States)

    Tandon, S N; Singh, S; Sharma, P K; Khosla, S

    1980-10-01

    Digital modulation techniques are commonly used for the recording and transmission of biological signals. Hitherto, the choice of subcarrier frequency for recording or transmission of biological signals has been arbitrary, and this usually results in a poor signal-to-noise ratio (SNR) due to the limited frequency characteristics of the system. In the present study the frequency characteristics of the system (first order approximation) have been taken to be those of a Butterworth filter. Computations based on this assumption show that for a given input signal there exists an optimum subcarrier frequency and a corresponding optimum duty cycle which give the maximum SNR of the system. For convenience, a nomogram has been prepared, and it has been shown that for a given frequency response of the system, the nomogram can be used for selecting an optimum subcarrier frequency and a corresponding duty cycle. The theoretical formulations have been verified with experimental work.

  10. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  11. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  12. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...

  13. Parametric Amplification Protocol for Frequency-Modulated Magnetic Resonance Force Microscopy Signals

    Science.gov (United States)

    Harrell, Lee; Moore, Eric; Lee, Sanggap; Hickman, Steven; Marohn, John

    2011-03-01

    We present data and theoretical signal and noise calculations for a protocol using parametric amplification to evade the inherent tradeoff between signal and detector frequency noise in force-gradient magnetic resonance force microscopy signals, which are manifested as a modulated frequency shift of a high- Q microcantilever. Substrate-induced frequency noise has a 1 / f frequency dependence, while detector noise exhibits an f2 dependence on modulation frequency f . Modulation of sample spins at a frequency that minimizes these two contributions typically results in a surface frequency noise power an order of magnitude or more above the thermal limit and may prove incompatible with sample spin relaxation times as well. We show that the frequency modulated force-gradient signal can be used to excite the fundamental resonant mode of the cantilever, resulting in an audio frequency amplitude signal that is readily detected with a low-noise fiber optic interferometer. This technique allows us to modulate the force-gradient signal at a sufficiently high frequency so that substrate-induced frequency noise is evaded without subjecting the signal to the normal f2 detector noise of conventional demodulation.
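
    As a hedged illustration of the trade-off described above (not a result quoted in the abstract): if the surface-induced frequency noise power scales as A/f and the detector contribution as B f², the total and its minimizing modulation frequency are

```latex
P(f) = \frac{A}{f} + B f^{2}, \qquad
\frac{dP}{df} = -\frac{A}{f^{2}} + 2 B f = 0
\;\Rightarrow\;
f^{*} = \left(\frac{A}{2B}\right)^{1/3}, \qquad
P(f^{*}) = 3\left(\frac{A^{2}B}{4}\right)^{1/3}.
```

    Parametric up-conversion of the frequency-modulated signal to the cantilever's fundamental mode, as the protocol describes, then allows the spin modulation frequency to be pushed well above this compromise point without incurring the f² detector-noise penalty.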

  14. The Sensitivity of Flood Frequency Analysis on Record Length in the Contiguous United States

    Science.gov (United States)

    Hu, L.; Nikolopoulos, E. I.; Anagnostou, E. N.

    2017-12-01

    In flood frequency analysis (FFA), sufficiently long data series are important for obtaining reliable results. Compared to the return periods of interest, at-site FFA usually needs large data sets. Generally, the precision of at-site estimators and the time-sampling errors are associated with the length of the gauged record. In this work, we quantify the differences associated with various record lengths. We use the generalized extreme value (GEV) and Log-Pearson type III (LP3) distributions, two traditional methods applied to annual maximum streamflows, to undertake FFA, and propose quantitative measures, the relative difference in the median and the interquartile range (IQR), to compare flood frequency performance for different record lengths at 350 selected USGS gauges that have more than 70 years of record in the contiguous United States. We also group those gauges into regions based on the hydrologic unit map and discuss the geographic impacts. The results indicate that a long record length can avoid imposing an upper limit on the degree of sophistication, and working with a relatively longer record may lead to more accurate results than working with a shorter one. Furthermore, the influence of the hydrologic units of the Watershed Boundary Dataset on those gauges is also presented. The California region is the most sensitive to record length, while gauges in the east perform more steadily.
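
    A minimal sketch of this kind of record-length comparison, using a GEV fit to annual maxima with scipy's genextreme, is shown below; the series is synthetic and all numbers are purely illustrative, not the study's data.

```python
from scipy.stats import genextreme

def gev_return_level(annual_max, return_period_years):
    """Fit a GEV to annual maximum flows (maximum likelihood) and return
    the flow with annual exceedance probability 1/T."""
    c, loc, scale = genextreme.fit(annual_max)
    return genextreme.ppf(1.0 - 1.0 / return_period_years, c, loc=loc, scale=scale)

# Hypothetical 70-year annual-maximum series; compare the 100-year estimate
# from the full record with one based on only the last 30 years.
ams = genextreme.rvs(-0.1, loc=500.0, scale=150.0, size=70, random_state=0)
q_full = gev_return_level(ams, 100)
q_short = gev_return_level(ams[-30:], 100)
print(f"100-yr flood: full record {q_full:.0f}, last 30 yr {q_short:.0f}")
```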

  15. Task 08/41, Low temperature loop at the RA reactor, Review IV - Maximum temperature values in the samples without forced cooling; Zadatak 08/41, Niskotemperaturna petlja u reaktoru 'RA', Pregled IV - Maksimalne temperature u uzorcima bez prinudnog hladjenja

    Energy Technology Data Exchange (ETDEWEB)

    Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1961-12-15

    The quantity of heat generated in the sample was calculated in Review III. In the stationary regime this heat is transferred through the air layer between the sample and the wall of the channel to the heavy water or graphite, and a certain maximum temperature t_0 is reached in the sample. The objective of this review is the determination of this temperature. [Abstract in Serbo-Croat] The quantity of heat generated in the sample, calculated in Review III, is in the equilibrium state conducted through the air layer between the sample and the channel wall to the heavy water or graphite, whereby a certain maximum temperature t_0 is reached in the sample. Determining this temperature is the subject of this review.

  16. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
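
    The successive-approximations procedure with unit step size is what is now usually called the EM algorithm for normal mixtures. A minimal two-component, univariate sketch (illustrative only, with no safeguards against degenerate components) is:

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """Successive approximations (step size 1, i.e. EM) for a two-component
    univariate normal mixture; minimal sketch without safeguards."""
    x = np.asarray(x, dtype=float)
    w = np.array([0.5, 0.5])             # mixing proportions
    mu = np.percentile(x, [25, 75])      # crude initial means
    sd = np.array([x.std(), x.std()])    # initial standard deviations
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = np.vstack([wk * norm.pdf(x, m, s) for wk, m, s in zip(w, mu, sd)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted re-estimates (one "successive approximation")
        nk = r.sum(axis=1)
        w = nk / len(x)
        mu = (r @ x) / nk
        sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sd
```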

  18. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages for larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
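
    The hill-climbing idea can be sketched in a few lines; the power curve, step size, and function names below are made up for illustration and are not taken from the paper.

```python
def hill_climb_mppt(measure_power, v_start, v_step=0.1, n_steps=500):
    """Minimal perturb-and-observe (hill-climbing) MPPT sketch.
    measure_power(v) is assumed to return the source power at operating
    voltage v; the operating point is nudged in the direction that last
    increased the measured power."""
    v, p = v_start, measure_power(v_start)
    direction = 1
    for _ in range(n_steps):
        v_next = v + direction * v_step
        p_next = measure_power(v_next)
        if p_next < p:        # power fell: reverse the perturbation direction
            direction = -direction
        v, p = v_next, p_next
    return v, p

# Toy photovoltaic-like curve peaking near 16 V, just to exercise the loop
toy_curve = lambda v: v * (3.0 - 0.08 * max(v - 15.0, 0.0) ** 2)
v_mp, p_mp = hill_climb_mppt(toy_curve, v_start=10.0)
```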

  19. The problem of sampling families rather than populations: Relatedness among individuals in samples of juvenile brown trout Salmo trutta L

    DEFF Research Database (Denmark)

    Hansen, Michael Møller; Eg Nielsen, Einar; Mensberg, Karen-Lise Dons

    1997-01-01

    In species exhibiting a nonrandom distribution of closely related individuals, sampling of a few families may lead to biased estimates of allele frequencies in populations. This problem was studied in two brown trout populations, based on analysis of mtDNA and microsatellites. In both samples mtDNA haplotype frequencies differed significantly between age classes, and in one sample 17 out of 18 individuals less than 1 year of age shared one particular mtDNA haplotype. Estimates of relatedness showed that these individuals most likely represented only three full-sib families. Older trout exhibiting...

  20. Effect of asymmetrical double-pockets and gate-drain underlap on Schottky barrier tunneling FET: Ambipolar conduction vs. high frequency performance

    Science.gov (United States)

    Shaker, Ahmed; Ossaimee, Mahmoud; Zekry, A.

    2016-08-01

    In this paper, a proposed structure based on asymmetrical double pockets SB-TFET with gate-drain underlap is presented. 2D extensive modeling and simulation, using Silvaco TCAD, were carried out to study the effect of both underlap length and pockets' doping on the transistor performance. It was found that the underlap from the drain side suppresses the ambipolar conduction and doesn't enhance the high-frequency characteristics. The enhancement of the high-frequency characteristics could be realized by increasing the doping of the drain pocket over the doping of the source pocket. An optimum choice was found which gives the conditions of minimum ambipolar conduction, maximum ON current and maximum cut-off frequency. These enhancements render the device more competitive as a nanometer transistor.

  1. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  2. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  3. Partner wealth predicts self-reported orgasm frequency in a sample of Chinese women

    NARCIS (Netherlands)

    Pollet, T.V.; Nettle, D.

    There has been considerable speculation about the adaptive significance of the human female orgasm, with one hypothesis being that it promotes differential affiliation or conception with high-quality males. We investigated the relationship between women's self-reported orgasm frequency and the

  4. Autopilot for frequency-modulation atomic force microscopy.

    Science.gov (United States)

    Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri

    2015-10-01

    One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.

  5. Autopilot for frequency-modulation atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri, E-mail: phsivan@tx.technion.ac.il [Department of Physics and the Russell Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa 32000 (Israel)

    2015-10-15

    One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.

  6. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are also de-clustered to ensure independence. The two data series are then fitted to Generalized Extreme Value and Generalized Pareto distributions for annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness of fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation for extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
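
    For the partial-duration side of the analysis, a peaks-over-threshold fit can be sketched as below with scipy's genpareto; the threshold, exceedance rate, and data handling are placeholders and do not reproduce the study's actual procedure (which also involves the Adapted Hill threshold selection and de-clustering).

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(daily_rain, threshold, events_per_year, return_period_years):
    """Peaks-over-threshold sketch: fit a Generalized Pareto distribution (ML)
    to the excesses over `threshold` and convert to a T-year return level.
    `events_per_year` is the mean annual number of exceedances (kept between
    one and five in the study summarized above)."""
    daily_rain = np.asarray(daily_rain, dtype=float)
    excesses = daily_rain[daily_rain > threshold] - threshold
    c, _, scale = genpareto.fit(excesses, floc=0.0)
    # Quantile of a single exceedance that is exceeded on average once in T years
    p = 1.0 - 1.0 / (return_period_years * events_per_year)
    return threshold + genpareto.ppf(p, c, loc=0.0, scale=scale)
```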

  7. Least squares autoregressive (maximum entropy) spectral estimation for Fourier spectroscopy and its application to the electron cyclotron emission from plasma

    International Nuclear Information System (INIS)

    Iwama, N.; Inoue, A.; Tsukishima, T.; Sato, M.; Kawahata, K.

    1981-07-01

    A new procedure for the maximum entropy spectral estimation is studied for the purpose of data processing in Fourier transform spectroscopy. The autoregressive model fitting is examined under a least squares criterion based on the Yule-Walker equations. An AIC-like criterion is suggested for selecting the model order. The principal advantage of the new procedure lies in the enhanced frequency resolution particularly for small values of the maximum optical path-difference of the interferogram. The usefulness of the procedure is ascertained by some numerical simulations and further by experiments with respect to a highly coherent submillimeter wave and the electron cyclotron emission from a stellarator plasma. (author)
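
    The core of the autoregressive (maximum entropy) estimate is solving the Yule-Walker equations for the AR coefficients and evaluating the resulting all-pole spectrum. A minimal sketch, assuming biased autocorrelation estimates and omitting the AIC-like model-order selection discussed above, might look like this:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_spectrum(x, order, n_freq=512):
    """Yule-Walker AR ("maximum entropy") spectral estimate (sketch only)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz(r[:-1], r[1:])       # AR coefficients a_1..a_p
    sigma2 = r[0] - np.dot(a, r[1:])        # driving-noise variance
    freqs = np.linspace(0.0, 0.5, n_freq)   # cycles per sample
    phase = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    denom = np.abs(1.0 - phase @ a) ** 2
    return freqs, sigma2 / denom            # S(f) = sigma2 / |1 - sum a_k e^{-i2pi f k}|^2
```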

  8. Attenuation measurements of ultrasound in a kaolin-water slurry. A linear dependence upon frequency

    International Nuclear Information System (INIS)

    Greenwood, M.S.; Mai, J.L.; Good, M.S.

    1993-01-01

    The attenuation of ultrasound through a kaolin-water slurry was measured for frequencies ranging from 0.5 to 3.0 MHz. The maximum concentration of the slurry was for a weight percentage of 44% (or a volume fraction of 0.24). The goal of these measurements was to assess the feasibility of using ultrasonic attenuation to determine the concentration of a slurry of known composition. The measurements were obtained by consecutively adding kaolin to the slurry and measuring the attenuation at each concentration. After reaching a maximum concentration a dilution technique was used, in which an amount of slurry was removed and water was added, to obtain the attenuation as a function of the concentration. The dilution technique was the more effective method to obtain calibration data. These measurements were carried out using two transducers, having a center frequency of 2.25 MHz, separated by 0.1016 m (4.0 in.). The maximum attenuation measured in these experiments was about 100 Np/m, but the experimental apparatus has the capability of measuring a larger attenuation if the distance between the two transducers is decreased. For a given frequency, the data show that ln(V/V_0) depends linearly upon the volume fraction (V is the received voltage for the slurry and V_0 is that obtained for water). This indicated that each particle acts independently in attenuating ultrasound. 12 refs., 7 figs., 3 tabs
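
    In terms of the quantities quoted above, the attenuation coefficient follows from the voltage ratio and the fixed transducer separation d = 0.1016 m, and the reported linearity in the solids volume fraction φ amounts to

```latex
\alpha \;=\; -\frac{1}{d}\,\ln\frac{V}{V_{0}}
\;=\; \frac{1}{d}\,\ln\frac{V_{0}}{V}
\quad[\mathrm{Np/m}],
\qquad
\alpha(f,\phi) \;\approx\; k(f)\,\phi ,
```

    so, for example, a measured ratio V/V_0 = 0.9 over the 0.1016 m path corresponds to roughly 1.0 Np/m.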

  9. A Study on Regional Frequency Analysis using Artificial Neural Network - the Sumjin River Basin

    Science.gov (United States)

    Jeong, C.; Ahn, J.; Ahn, H.; Heo, J. H.

    2017-12-01

    Regional frequency analysis compensates for the main shortcoming of at-site frequency analysis, a lack of sample size, through the regional concept. Regional rainfall quantiles depend on the identification of hydrologically homogeneous regions, so regional classification based on the assumption of hydrological homogeneity is very important. For regional clustering of rainfall, multidimensional variables and factors related to geographical features and meteorological characteristics are considered, such as mean annual precipitation, number of days with precipitation in a year, and average maximum daily precipitation in a month. The Self-Organizing Feature Map method, an unsupervised artificial neural network algorithm, handles N-dimensional, nonlinear problems and presents results simply as a data visualization technique. In this study, cluster analysis was performed for the Sumjin River basin in South Korea based on the SOM method, using high-dimensional geographical features and meteorological factors as input data. The L-moment based discordancy and heterogeneity measures were then used to evaluate the homogeneity of the resulting regions. Rainfall quantiles were estimated with the index flood method, one of the regional rainfall frequency analysis approaches. The clustering obtained with the SOM method and the consequent variation in rainfall quantiles were analyzed. This research was supported by a grant (2017-MPSS31-001) from the Supporting Technology Development Program for Disaster Management funded by the Ministry of Public Safety and Security (MPSS) of the Korean government.

  10. An experimental study on hepatic ablation using an expandable radio-frequency needle electrode

    International Nuclear Information System (INIS)

    Choi, Dong Il; Lim, Hyo Keun; Park, Jong Min; Kang, Bo Kyung; Woo, Ji Young; Jang, Hyun Jung; Kim, Seung Hoon; Lee, Won Jae; Park, Cheol Keun; Heo, Jin Seok

    1999-01-01

    The purpose of this study was to determine the factors influencing the size of thermal lesions after ablation using an expandable radio-frequency needle electrode in porcine liver. Ablation procedures involved the use of a monopolar radio-frequency generator and 15-G needle electrodes with four and seven retractable hooks (RITA Medical System, Mountain View, Cal., U.S.A.). The ablation protocol in fresh porcine liver comprised combinations of varying hook deployment, highest set temperature, and ablation time. Following ablation, the maximum diameter of all thermal lesions was measured on a longitudinal section of the specimen. Ten representative lesions were examined by an experienced pathologist. At 3-cm hook deployment of the needle electrode with four lateral hooks, the size of spherical thermal lesions increased substantially with increases in the highest set temperature and ablation time until 11 minutes. After 11 minutes lesion size remained similar, with a maximum diameter of 3.3 cm. At 2-cm hook deployment, sizes decreased to about 2/3 of those at 3 cm, and at 1-cm hook deployment lesions were oblong. At 3-cm hook deployment of a needle electrode with seven hooks, the size of thermal lesions increased with increasing ablation time until 14 minutes, and the maximum diameter was 4.1 cm. Microscopic examination showed a wide zone of degeneration and focal coagulation necrosis. The size of thermal lesions produced by the use of an expandable radio-frequency needle electrode was predictable, varying according to degree of hook deployment, highest set temperature, and ablation time.

  11. MaxEnt queries and sequential sampling

    International Nuclear Information System (INIS)

    Riegler, Peter; Caticha, Nestor

    2001-01-01

    In this paper we pose the question: After gathering N data points, at what value of the control parameter should the next measurement be done? We propose an on-line algorithm which samples optimally by maximizing the gain in information on the parameters to be measured. We show analytically that the information gain is maximum for those potential measurements whose outcome is most unpredictable, i.e. for which the predictive distribution has maximum entropy. The resulting algorithm is applied to exponential analysis

  12. Sensitivity studies of a seismically isolated system to low frequency amplification

    International Nuclear Information System (INIS)

    Wu, T.S.; Seidensticker, R.W.

    1987-06-01

    Responses of a seismically isolated structure to earthquake motions will depend primarily on the input ground motion and the isolation system frequency. The isolation frequency generally is relatively low when isolating against horizontal ground motions. After installation, the isolation frequency could deviate from its designed value due to aging, manufacturing tolerances, etc. In addition, under certain soil conditions, the input motion could have high energy content at relatively low frequencies. This report covers the first of these two concerns by performing a sensitivity study of the effect of variations in isolation frequency on the responses of a nuclear reactor module incorporating an isolation system. Results from a number of ground motions have shown that, for most earthquake motions, a higher isolation frequency tends to yield higher maximum acceleration, higher transmitted shear force, and lower relative displacement between the isolated and unisolated parts of the structure. In one of the ground motions considered, a 7% increase in the isolation frequency from its original design value is observed to give over a 22% increase in the transmitted shear force. Other ground motions, especially those exhibiting a sharp rise in spectral accelerations in the vicinity of the designed isolation frequency, yield responses following the same general trend.

  13. D1S80 (pMCT118) allele frequencies in a Malay population sample from Malaysia.

    Science.gov (United States)

    Koh, C L; Lim, M E; Ng, H S; Sam, C K

    1997-01-01

    The D1S80 allele frequencies in 124 unrelated Malays from the Malaysian population were determined and 51 genotypes and 19 alleles were encountered. The D1S80 frequency distribution met Hardy-Weinberg expectations. The observed heterozygosity was 0.80 and the power of discrimination was 0.96.
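
    The two summary statistics quoted above follow directly from the allele and genotype frequencies; a hedged sketch of the standard textbook formulas (not necessarily the exact computation used in the paper) is:

```python
import numpy as np

def expected_heterozygosity(allele_freqs):
    """H_exp = 1 - sum(p_i^2), assuming Hardy-Weinberg proportions."""
    p = np.asarray(allele_freqs, dtype=float)
    return 1.0 - np.sum(p ** 2)

def power_of_discrimination(genotype_freqs):
    """PD = 1 - sum of squared genotype frequencies (1 - random match probability)."""
    g = np.asarray(genotype_freqs, dtype=float)
    return 1.0 - np.sum(g ** 2)
```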

  14. Preliminary flood-duration frequency estimates using naturalized streamflow records for the Willamette River Basin, Oregon

    Science.gov (United States)

    Lind, Greg D.; Stonewall, Adam J.

    2018-02-13

    In this study, “naturalized” daily streamflow records, created by the U.S. Army Corps of Engineers and the Bureau of Reclamation, were used to compute 1-, 3-, 7-, 10-, 15-, 30-, and 60-day annual maximum streamflow durations, which are running averages of daily streamflow for the number of days in each duration. Once the annual maximum durations were computed, the flood-duration frequencies could be estimated. The estimated flood-duration frequencies correspond to the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent probabilities of their occurring or being exceeded each year. For this report, the focus was on the Willamette River Basin in Oregon, which is a subbasin of the Columbia River Basin. This study is part of a larger one encompassing the entire Columbia Basin.
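
    Computing the n-day annual maximum durations from a daily series is a small exercise in running means; a sketch with pandas is given below (calendar-year grouping is used here for simplicity, whereas the report may group by water year).

```python
import pandas as pd

def annual_maximum_durations(daily_flow, durations=(1, 3, 7, 10, 15, 30, 60)):
    """Annual maxima of n-day running-average streamflow.
    `daily_flow` is a pandas Series indexed by calendar date."""
    out = {}
    for n in durations:
        running = daily_flow.rolling(window=n, min_periods=n).mean()
        out[f"{n}-day"] = running.groupby(running.index.year).max()
    return pd.DataFrame(out)
```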

  15. Electrodiagnostic applications of somatosensory evoked high-frequency EEG oscillations: Technical considerations.

    Science.gov (United States)

    Simpson, A J; Cunningham, M O; Baker, M R

    2018-03-01

    High frequency oscillations (HFOs) embedded within the somatosensory evoked potential (SEP) are not routinely recorded/measured as part of standard clinical SEPs. However, HFOs could provide important additional diagnostic/prognostic information in various patient groups in whom SEPs are tested routinely. One area is the management of patients with hypoxic ischaemic encephalopathy (HIE) in the intensive care unit (ICU). However, the sensitivity of standard clinical SEP recording techniques for detecting HFOs is unknown. SEPs were recorded using routine clinical methods in 17 healthy subjects (median nerve stimulation; 0.5 ms pulse width; 5 Hz; maximum 4000 stimuli) in an unshielded laboratory. Bipolar EEG recordings were acquired (gain 50 k; bandpass 3Hz-2 kHz; sampling rate 5 kHz; non-inverting electrode 2 cm anterior to C3/C4; inverting electrode 2 cm posterior to C3/C4). Data analysis was performed in MATLAB. SEP-HFOs were detected in 65% of controls using standard clinical recording techniques. In 3 controls without significant HFOs, experiments were repeated using a linear electrode array with higher spatial sampling frequency. SEP-HFOs were observed in all 3 subjects. Currently standard clinical methods of recording SEPs are not sufficiently sensitive to permit the inclusion of SEP-HFOs in routine clinical diagnostic/prognostic assessments. Whilst an increase in the number/density of EEG electrodes should improve the sensitivity for detecting SEP-HFOs, this requires confirmation. By improving and standardising clinical SEP recording protocols to permit the acquisition/analysis of SEP-HFOs, it should be possible to gain important insights into the pathophysiology of neurological disorders and refine the management of conditions such as HIE. Copyright © 2018. Published by Elsevier Inc.

  16. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  17. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the shift order, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis since the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated by a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
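
    The key departure from MCKD, estimating the iterative period from the autocorrelation of the envelope signal rather than from a user-supplied prior, can be sketched as follows; this is an illustration of the idea only, and the search bounds are placeholders.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_fault_period(x, min_period, max_period):
    """Estimate the iterative (fault) period, in samples, from the
    autocorrelation of the signal envelope (sketch of the IMCKD idea)."""
    env = np.abs(hilbert(x))                 # Hilbert envelope of the vibration signal
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # non-negative lags
    lag = np.argmax(ac[min_period:max_period + 1]) + min_period
    return int(lag)
```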

  18. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables

  19. Correcting length-frequency distributions for imperfect detection

    Science.gov (United States)

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data
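
    The correction amounts to dividing each observed length-class count by its estimated capture probability (a Horvitz-Thompson-style adjustment); the sketch below is a simplification of partitioning the Huggins abundance estimate, and all numbers shown are hypothetical.

```python
import numpy as np

def corrected_length_frequency(counts, capture_prob):
    """Divide each length-class count by its capture probability to adjust
    a length-frequency histogram for imperfect detection."""
    counts = np.asarray(counts, dtype=float)
    p = np.asarray(capture_prob, dtype=float)
    return counts / p

# Hypothetical example: small fish are caught far less efficiently
raw = np.array([12, 40, 55, 30])            # counts per length class
p_hat = np.array([0.12, 0.35, 0.55, 0.60])  # estimated capture probabilities
print(corrected_length_frequency(raw, p_hat))
```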

  20. Searching for chaos on low frequency

    OpenAIRE

    Nicolas Wesner

    2004-01-01

    A new method for detecting low-dimensional chaos in small sample sets is presented. The method is applied to low-frequency (annual and monthly) financial data, for which few observations are available.

  1. A population frequency analysis of the FABP2 gene polymorphism

    African Journals Online (AJOL)

    salah

    DNA was extracted from blood samples for genotype analysis. A PCR-RFLP ... Thr54 genotype. The frequencies of the allele Ala54 and the allele Thr54 of the ... (Table 2: genotype percentages and allele frequencies of the FABP2 polymorphism in various ethnic groups.)

  2. Fixed bin frequency distribution for the VNTR Loci D2S44, D4S139, D5S110, and D8S358 in a population sample from Minas Gerais, Brazil

    Directory of Open Access Journals (Sweden)

    Parreira Kleber Simônio

    2002-01-01

    Fixed bin frequencies for the VNTR loci D2S44, D4S139, D5S110, and D8S358 were determined in a Minas Gerais population sample. The data were generated by RFLP analysis of HaeIII-digested genomic DNA and chemiluminescent detection. The four VNTR loci were in Hardy-Weinberg equilibrium, and there was no association of alleles among loci. The frequency data can be used in forensic analyses and paternity tests to estimate the frequency of a DNA profile in the general Brazilian population.
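
    For a single biallelic locus, the Hardy-Weinberg check mentioned above can be sketched as follows (illustrative Python with made-up genotype counts; the VNTR loci in the study are multi-allelic, so the actual analysis is more involved).

    ```python
    # illustrative genotype counts for one biallelic locus
    n_AA, n_Aa, n_aa = 35, 50, 15
    n = n_AA + n_Aa + n_aa

    p = (2 * n_AA + n_Aa) / (2 * n)      # estimated frequency of allele A
    q = 1.0 - p

    # expected genotype counts under Hardy-Weinberg equilibrium
    expected = {"AA": n * p * p, "Aa": 2 * n * p * q, "aa": n * q * q}
    observed = {"AA": n_AA, "Aa": n_Aa, "aa": n_aa}

    chi2 = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in observed)
    print(f"allele frequency p = {p:.2f}, chi-square = {chi2:.3f} (1 df)")
    ```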

  3. Energy drink use frequency among an international sample of people who use drugs: Associations with other substance use and well-being.

    Science.gov (United States)

    Peacock, Amy; Bruno, Raimondo; Ferris, Jason; Winstock, Adam

    2017-05-01

    The study aims were to identify: i) energy drink (ED), caffeine tablet, and caffeine intranasal spray use amongst a sample of people who report drug use, and ii) the association between ED use frequency and demographic profile, drug use, hazardous drinking, and wellbeing. Participants (n=74,864) who reported drug use completed the online 2014 Global Drug Survey. They provided data on demographics, ED use, and alcohol and drug use, completed the Alcohol Use Disorders Identification Test (AUDIT) and Personal Wellbeing Index (PWI), and reported whether they wished to reduce alcohol use. Lifetime ED, caffeine tablet, and intranasal caffeine spray use were reported by 69.2%, 24.5%, and 4.9%, respectively. The median age of ED initiation was 16 years. Among those aged 16-37, the median duration of ED use increased from 4 to 17 years, declining thereafter. Greater ED use frequency was associated with: being male; being under 21 years of age; studying; and past-year caffeine tablet/intranasal spray, tobacco, cannabis, amphetamine, MDMA, and cocaine use. Past-year, infrequent (1-4 days) and frequent (≥5 days) past-month ED consumers reported higher AUDIT scores and lower PWI scores than lifetime abstainers; past-month consumers were less likely to report a desire to reduce alcohol use. ED use is part of a complex interplay of drug use, alcohol problems, and poorer personal wellbeing, and ED use frequency may be a flag for current/future problems. Prospective research is required exploring where ED use fits within the trajectory of other alcohol and drug use. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    Science.gov (United States)

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
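
    The downconversion that underlies the technique can be illustrated with the standard aliasing relation (a minimal sketch; the frame rate and optical beat frequencies below are assumed values, not the paper's parameters).

    ```python
    def aliased_frequency(f_hz: float, fs_hz: float) -> float:
        """Frequency at which a component at f_hz appears after sampling at fs_hz."""
        return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

    fs = 30.0           # sensor frame (sampling) rate in Hz
    carrier = 1_000.0   # heterodyne carrier beat frequency in Hz
    sideband = 1_007.0  # sideband carrying the photodisplacement signal in Hz

    print(aliased_frequency(carrier, fs))    # carrier downconverted to 10.0 Hz
    print(aliased_frequency(sideband, fs))   # sideband downconverted to 13.0 Hz
    ```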

  5. Method and apparatus for radio frequency ceramic sintering

    Science.gov (United States)

    Hoffman, Daniel J.; Kimrey, Jr., Harold D.

    1993-01-01

    Radio frequency energy is used to sinter ceramic materials. A coaxial waveguide resonator produces a TEM mode wave which generates a high-field capacitive region in which a sample of the ceramic material is located. The frequency of the power source is kept in the radio frequency range, preferably between 60 and 80 MHz. An alternative embodiment provides a tunable radio frequency circuit which includes a series input capacitor and a parallel capacitor, with the sintered ceramic connected by an inductive lead. This arrangement permits matching of impedance over a wide range of dielectric constants, ceramic volumes, and loss tangents.

  6. Nonlinear Dynamics of Cantilever-Sample Interactions in Atomic Force Microscopy

    Science.gov (United States)

    Cantrell, John H.; Cantrell, Sean A.

    2010-01-01

    The interaction of the cantilever tip of an atomic force microscope (AFM) with the sample surface is obtained by treating the cantilever and sample as independent systems coupled by a nonlinear force acting between the cantilever tip and a volume element of the sample surface. The volume element is subjected to a restoring force from the remainder of the sample that provides dynamical equilibrium for the combined systems. The model accounts for the positions on the cantilever of the cantilever tip, laser probe, and excitation force (if any) via a basis set of orthogonal functions that may be generalized to account for arbitrary cantilever shapes. The basis set is extended to include nonlinear cantilever modes. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a matrix iteration procedure. The effects of oscillatory excitation forces applied either to the cantilever or to the sample surface (or to both) are obtained from the solution set and applied to the assessment of phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) modalities. The influence of bistable cantilever modes on AFM signal generation is discussed. The effects on the cantilever-sample surface dynamics of subsurface features embedded in the sample that are perturbed by surface-generated oscillatory excitation forces and carried to the cantilever via wave propagation are accounted for by the Bolef-Miller propagating wave model. Expressions pertaining to signal generation and image contrast in A-AFM are obtained and applied to amplitude modulation (intermittent contact) atomic force microscopy and resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM). The influence of phase accumulation in A-AFM on image contrast is discussed, as is the effect of hard contact and maximum nonlinearity regimes of A-AFM operation.
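
    A highly simplified version of this picture is sketched below (illustrative Python; the oscillator parameters, cubic tip-sample force, and numerical ODE integration are assumptions standing in for the authors' basis-set and matrix-iteration treatment): the cantilever and a sample surface element are two oscillators coupled by a nonlinear force.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    m_c, k_c, c_c = 1.0, 1.0, 0.02    # cantilever mass, stiffness, damping (arbitrary units)
    m_s, k_s, c_s = 1.0, 5.0, 0.05    # sample surface volume element
    F0, w_drive = 0.1, 1.0            # oscillatory excitation applied to the cantilever

    def tip_sample_force(gap):
        # simple nonlinear (cubic) interaction standing in for the real tip-sample force law
        return -0.5 * gap - 0.3 * gap**3

    def rhs(t, y):
        xc, vc, xs, vs = y
        f_int = tip_sample_force(xc - xs)
        a_c = (-k_c * xc - c_c * vc + f_int + F0 * np.cos(w_drive * t)) / m_c
        a_s = (-k_s * xs - c_s * vs - f_int) / m_s
        return [vc, a_c, vs, a_s]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=0.05)
    steady = sol.y[0][len(sol.t) // 2:]                  # discard the initial transient
    print("steady-state cantilever amplitude ~", np.abs(steady).max())
    ```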

  7. Radar Doppler Processing with Nonuniform Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimally processing nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
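
    One way to exploit nonuniform spacing directly is to evaluate a matched-filter (periodogram-style) statistic over candidate Doppler frequencies, as in the minimal sketch below (illustrative Python; pulse times, noise level, and the frequency grid are assumed values, not taken from the report).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 1e-2, 128))    # 128 nonuniformly spaced pulse times over 10 ms
    f_dopp = 2_300.0                            # true Doppler frequency in Hz
    noise = 0.5 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))
    x = np.exp(2j * np.pi * f_dopp * t) + noise # slow-time samples at the nonuniform times

    f_grid = np.linspace(0.0, 5_000.0, 2001)    # candidate Doppler frequencies
    # correlate the data with complex exponentials at each candidate frequency
    spectrum = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x)
    print("estimated Doppler frequency:", f_grid[np.argmax(spectrum)], "Hz")
    ```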

  8. Duobinary pulse shaping for frequency chirp enabled complex modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Khodakarami, Hamid; Shieh, William

    2016-09-01

    The frequency chirp of optical direct modulation (DM) used to be a performance barrier of optical transmission systems, because it broadens the signal optical spectrum, which becomes more susceptible to chromatic-dispersion-induced inter-symbol interference (ISI). However, by considering the chirp as frequency modulation, a single DM simultaneously generates a 2-D signal containing the intensity and phase (namely, the time integral of frequency). This complex modulation concept significantly improves the optical signal-to-noise ratio (OSNR) sensitivity of DM systems. This Letter studies duobinary pulse shaping (DB-PS) for chirp-enabled DM and its impact on the optical bandwidth and system OSNR sensitivity. DB-PS relieves the bandwidth requirement at the expense of system OSNR sensitivity. As DB-PS induces a controlled ISI, the receiver requires one more tap for maximum likelihood sequence estimation (MLSE). We verify this modified MLSE with a 10-Gbaud duobinary PAM-4 transmission experiment.
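
    The controlled ISI introduced by duobinary shaping can be seen in a simple delay-and-add (1 + D) operation on a binary sequence, sketched here (illustrative Python; precoding and the MLSE receiver are omitted).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    bits = rng.integers(0, 2, 16)                          # binary data (0/1)
    duobinary = bits + np.concatenate(([0], bits[:-1]))    # 1 + D shaping: levels in {0, 1, 2}

    print("bits:     ", bits)
    print("duobinary:", duobinary)
    ```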

  9. Simple programmable voltage reference for low frequency noise measurements

    Science.gov (United States)

    Ivanov, V. E.; Chye, En Un

    2018-05-01

    The paper presents a circuit design for a low-noise voltage reference based on an electric double-layer capacitor, a microcontroller, and a general-purpose DAC. A large capacitance value (1 F or more) makes it possible to create a low-pass filter with a large time constant, effectively reducing low-frequency noise beyond its bandwidth. By choosing the optimum value of the resistor in the RC filter, one can achieve the best trade-off between the transient time, the deviation of the output voltage from the set point, and the minimum noise cut-off frequency. As experiments have shown, the spectral density of the voltage noise at a frequency of 1 kHz does not exceed 1.2 nV/√Hz, and the maximum deviation of the output voltage from the predetermined value does not exceed 1.4%, depending on the holding time of the previous value. Subsequently, this error is reduced to a constant value and can be compensated.
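
    The trade-off between settling time and noise cut-off follows directly from the RC time constant, as the short sketch below shows (illustrative resistor values with a 1 F double-layer capacitor).

    ```python
    import math

    C = 1.0                       # farads (electric double-layer capacitor)
    for R in (1.0, 10.0, 100.0):  # candidate series resistances in ohms
        tau = R * C                          # filter time constant in seconds
        f_c = 1.0 / (2.0 * math.pi * tau)    # -3 dB noise cut-off frequency in hertz
        print(f"R = {R:6.1f} ohm -> tau = {tau:6.1f} s, f_c = {f_c:.4f} Hz")
    ```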

  10. Laser generated ultrasound sources using polymer nanocomposites for high frequency metrology

    KAUST Repository

    Rajagopal, Srinath

    2017-11-22

    Accurate characterisation of ultrasound fields generated by diagnostic and therapeutic transducers is critical for patient safety. This requires hydrophones calibrated to a traceable standard. The existing implementation of the primary standard at the National Measurement Institutes, e.g., NPL and PTB, can provide accurate calibration to a maximum frequency of 40 MHz. However, the increasing use of high frequencies for both imaging and therapy necessitates calibrations to frequencies well beyond this range. For this to be possible, a source of high-amplitude, broadband, quasi-planar and stable ultrasound fields is required. This is difficult to achieve using conventional piezoelectric sources, but laser generated ultrasound is a promising technique in this regard. In this study various polymer-carbon nanotube nanocomposites (PNC) were fabricated and tested for their suitability for such an application.

  11. Assessment of homogeneity of regions for regional flood frequency analysis

    Science.gov (United States)

    Lee, Jeong Eun; Kim, Nam Won

    2016-04-01

    This paper analyzed the effect of rainfall on hydrological similarity, which is an important step for regional flood frequency analysis (RFFA). For the RFFA, the storage function method (SFM) with a spatial extension technique was applied to the 22 sub-catchments partitioned from the Chungju dam watershed in the Republic of Korea. We used the SFM to generate annual maximum floods for the 22 sub-catchments using annual maximum storm events (1986~2010) as input data. The quantiles of rainfall and flood were then estimated from the annual maximum series for the 22 sub-catchments. Finally, the spatial variations of the two quantiles were analyzed. As a result, there was a significant correlation between the spatial variations of the two quantiles. This result demonstrates that the spatial variation of rainfall is an important factor in explaining the homogeneity of regions when applying RFFA. Acknowledgements: This research was supported by a grant (11-TI-C06) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
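
    The flood-quantile step can be illustrated with a simple Gumbel (EV1) fit to an annual maximum series (a minimal sketch with synthetic data; the distribution choice, method-of-moments fit, and numbers are assumptions, not the study's procedure).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    annual_max = 200.0 + 80.0 * rng.gumbel(size=25)   # synthetic annual maximum floods (m^3/s)

    # method-of-moments Gumbel parameters
    beta = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi   # scale
    mu = annual_max.mean() - 0.5772 * beta                 # location

    T = 100.0                                              # return period in years
    q_T = mu - beta * np.log(-np.log(1.0 - 1.0 / T))       # T-year flood quantile
    print(f"estimated {T:.0f}-year flood: {q_T:.1f} m^3/s")
    ```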

  12. Mutation frequencies in male mice and the estimation of genetic hazards of radiation in men

    International Nuclear Information System (INIS)

    Russell, W.L.; Kelly, E.M.

    1982-01-01

    Estimation of the genetic hazards of ionizing radiation in men is based largely on the frequency of transmitted specific-locus mutations induced in mouse spermatogonial stem cells at low radiation dose rates. The publication of new data on this subject has permitted a fresh review of all the information available. The data continue to show no discrepancy from the interpretation that, although mutation frequency decreases markedly as dose rate is decreased from 90 to 0.8 R/min (1 R = 2.6 x 10^-4 coulombs/kg), there seems to be no further change below 0.8 R/min over the range from that dose rate to 0.0007 R/min. Simple mathematical models are used to compute: (a) a maximum likelihood estimate of the induced mutation frequency at the low dose rates, and (b) a maximum likelihood estimate of the ratio of this to the mutation frequency at high dose rates in the range of 72 to 90 R/min. In the application of these results to the estimation of genetic hazards of radiation in man, the former value can be used to calculate a doubling dose - i.e., the dose of radiation that induces a mutation frequency equal to the spontaneous frequency. The doubling dose based on the low-dose-rate data compiled here is 110 R. The ratio of the mutation frequency at low dose rate to that at high dose rate is useful when it becomes necessary to extrapolate from experimental determinations, or from human data, at high dose rates to the expected risk at low dose rates. The ratio derived from the present analysis is 0.33.
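
    The doubling-dose arithmetic itself is a one-line ratio, sketched below with illustrative rates chosen only so that the result lands near the 110 R quoted above (they are not the paper's measured values).

    ```python
    spontaneous_rate = 8.0e-6      # spontaneous mutations per locus (illustrative)
    induced_rate_per_R = 7.3e-8    # induced mutations per locus per roentgen at low dose rate (illustrative)

    # dose at which the induced mutation frequency equals the spontaneous frequency
    doubling_dose = spontaneous_rate / induced_rate_per_R
    print(f"doubling dose ~ {doubling_dose:.0f} R")
    ```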

  13. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed, but often their implementation is not straightforward. One of these methods is the recently proposed Frequency Matching method, which computes the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed-form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub, and this paper also provides an example of how to apply the Frequency Matching method.
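
    The pattern-frequency comparison at the heart of such a method can be sketched as follows (illustrative Python using 2x2 binary patterns and an absolute-difference score; this is a simplification, not the paper's closed-form prior).

    ```python
    import numpy as np
    from collections import Counter

    def pattern_histogram(img):
        """Histogram of all 2x2 binary patterns in a 2-D 0/1 array."""
        counts = Counter()
        for i in range(img.shape[0] - 1):
            for j in range(img.shape[1] - 1):
                counts[tuple(img[i:i + 2, j:j + 2].ravel())] += 1
        return counts

    def frequency_mismatch(model, training):
        """Sum of absolute pattern-frequency differences between model and training image."""
        h_m, h_t = pattern_histogram(model), pattern_histogram(training)
        n_m, n_t = sum(h_m.values()), sum(h_t.values())
        patterns = set(h_m) | set(h_t)
        return sum(abs(h_m[p] / n_m - h_t[p] / n_t) for p in patterns)

    rng = np.random.default_rng(4)
    training = (rng.random((60, 60)) < 0.3).astype(int)
    model = (rng.random((60, 60)) < 0.3).astype(int)
    print("pattern-frequency mismatch:", frequency_mismatch(model, training))
    ```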

  14. Rapid Active Power Control of Photovoltaic Systems for Grid Frequency Support

    Energy Technology Data Exchange (ETDEWEB)

    Hoke, Anderson; Shirazi, Mariko; Chakraborty, Sudipta; Muljadi, Eduard; Maksimovic, Dragan

    2017-01-01

    As deployment of power electronic coupled generation such as photovoltaic (PV) systems increases, grid operators have shown increasing interest in calling on inverter-coupled generation to help mitigate frequency contingency events by rapidly surging active power into the grid. When responding to contingency events, the faster the active power is provided, the more effective it may be for arresting the frequency event. This paper proposes a predictive PV inverter control method for very fast and accurate control of active power. This rapid active power control method will increase the effectiveness of various higher-level controls designed to mitigate grid frequency contingency events, including fast power-frequency droop, inertia emulation, and fast frequency response, without the need for energy storage. The rapid active power control method, coupled with a maximum power point estimation method, is implemented in a prototype PV inverter connected to a PV array. The prototype inverter's response to various frequency events is experimentally confirmed to be fast (beginning within 2 line cycles and completing within 4.5 line cycles of a severe test event) and accurate (below 2% steady-state error).
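
    As an example of the kind of higher-level control that such a fast power response enables, the sketch below implements a simple proportional power-frequency droop (illustrative Python; the droop percentage, limits, and operating point are assumed values, not the prototype's settings).

    ```python
    def droop_power_command(f_hz, p_base_kw, p_available_kw,
                            f_nom_hz=60.0, droop_pct=5.0):
        """Active-power command (kW) from a proportional power-frequency droop."""
        # a 5% droop swings the full available power over a 5% frequency deviation
        k = p_available_kw / (droop_pct / 100.0 * f_nom_hz)   # kW per Hz
        p_cmd = p_base_kw + k * (f_nom_hz - f_hz)
        return min(max(p_cmd, 0.0), p_available_kw)           # respect 0..available headroom

    # curtailed operation at 80 kW with 100 kW available; frequency dips to 59.7 Hz
    print(droop_power_command(59.7, p_base_kw=80.0, p_available_kw=100.0))   # -> 90.0
    ```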

  15. How to take environmental samples for stable isotope analyses

    International Nuclear Information System (INIS)

    Rogers, K.M.

    2009-01-01

    It is possible to analyse a diverse range of samples for environmental investigations. The main types are soil/sediments, vegetation, fauna, shellfish, waste and water. Each type of samples requires different storage and collection methods. Outlined here are the preferred methods of collection to ensure maximum sample integrity and reliability. (author).

  16. How to take environmental samples for stable isotope analyses

    International Nuclear Information System (INIS)

    Rogers, K.M.

    2013-01-01

    It is possible to analyse a diverse range of samples for environmental investigations. The main types are soil/sediments, vegetation, fauna, shellfish, waste and water. Each type of samples requires different storage and collection methods. Outlined here are the preferred methods of collection to ensure maximum sample integrity and reliability. (author).

  17. How to take environmental samples for stable isotope analyses

    International Nuclear Information System (INIS)

    Rogers, K.M.

    2012-01-01

    It is possible to analyse a diverse range of samples for environmental investigations. The main types are soil/sediments, vegetation, fauna, shellfish, waste and water. Each type of samples requires different storage and collection methods. Outlined here are the preferred methods of collection to ensure maximum sample integrity and reliability. (author).

  19. High-frequency resistance training is not more effective than low-frequency resistance training in increasing muscle mass and strength in well-trained men.

    Science.gov (United States)

    Gomes, Gederson K; Franco, Cristiane M; Nunes, Paulo Ricardo P; Orsatti, Fábio L

    2018-02-27

    We studied the effects of two resistance training (RT) protocols differing in weekly frequency over eight weeks on muscle strength and muscle hypertrophy in well-trained men. Twenty-three subjects (age: 26.2±4.2 years; RT experience: 6.9±3.1 years) were randomly allocated into two groups: low frequency (LFRT, n = 12) or high frequency (HFRT, n = 11). The LFRT group performed a split-body routine, training each specific muscle group once a week. The HFRT group performed a total-body routine, training all muscle groups every session. Both groups performed the same number of sets (10-15 sets) and exercises (1-2 exercises) per week, 8-12 repetitions maximum (70-80% of 1RM), five times per week. Muscle strength (bench press and squat 1RM) and lean tissue mass (dual-energy x-ray absorptiometry) were assessed prior to and at the end of the study. Results showed that both groups improved muscle strength and lean tissue mass, with no significant difference between protocols, indicating that high- and low-frequency RT produce similar adaptations in well-trained subjects when the sets and intensity are equated per week.

  20. Frequency of Haemophilus spp. in urinary and and genital tract samples

    Directory of Open Access Journals (Sweden)

    Tatjana Marijan

    2010-02-01

    Aim: To determine the prevalence and antibiotic susceptibility of Haemophilus influenzae and H. parainfluenzae isolated from the urinary and genital tracts. Methods: Identification of Haemophilus spp. strains was carried out using the API NH identification system, and antibiotic susceptibility testing was performed by the Kirby-Bauer disk diffusion method. Results: A total of 50 (0.03%) H. influenzae and 14 (0.01%) H. parainfluenzae isolates were recovered from 180,415 genitourinary tract samples. From urine samples of girls under 15 years of age these bacteria were isolated in 13 (0.88%) and two (0.13%) cases, respectively, and in only one case (0.11%) of urinary tract infection in boys (H. influenzae). In persons of fertile age, only H. influenzae was found in urine samples, from five women (0.04%) and three men (0.22%). As a cause of vulvovaginitis, H. influenzae was isolated in four (5.63%) and H. parainfluenzae in two (2.82%) girls. In persons of fertile age, H. influenzae was isolated from 10 (0.49%) cervical smears and nine (1.74%) male samples, and H. parainfluenzae from seven (1.36%) male samples (p<0.01). Susceptibility testing of H. influenzae and H. parainfluenzae revealed that both pathogens were significantly resistant only to cotrimoxazole (26.0% and 42.9%, respectively). Conclusion: In the etiology of genitourinary infections of girls during childhood, genital infections of women of fertile age (especially pregnant women), and epididymitis and/or orchitis in men, it is important to consider this rare and fastidious bacterium, which is demanding in terms of cultivation.