Joint maximum likelihood estimation of carrier and sampling frequency offsets for OFDM systems
Kim, Y H
2010-01-01
In orthogonal-frequency division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, the results of Nguyen-Le's ML estimation show poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme.
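The two-training-symbol CFO estimator that the proposed scheme extends can be sketched in a toy simulation: with two identical training symbols, a carrier offset appears as a fixed phase rotation between the two halves of the received block, and the angle of their correlation yields the CFO. All parameter values below (FFT size, pilot design, noise level) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # FFT size / training-symbol length (assumed)
eps_true = 0.13             # CFO in units of the subcarrier spacing

# Two identical training symbols back-to-back (hypothetical QPSK pilots)
pilot = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
tx = np.tile(np.fft.ifft(pilot), 2)

# Channel model: CFO rotation plus additive white Gaussian noise
n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)
rx = rx + 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# Moose-style estimator: correlate the two halves; the correlation phase
# grows by 2*pi*eps over the N samples separating identical symbols
corr = np.vdot(rx[:N], rx[N:])          # sum of conj(r1) * r2
eps_hat = np.angle(corr) / (2 * np.pi)
```

The estimate is unambiguous only for |eps| < 0.5 subcarrier spacings, which is why joint CFO/SFO schemes still need an acquisition stage for larger offsets.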
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) ⁴He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (a 3D experiment in one ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
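The resolution argument above (resolution is set by the largest evolution time sampled, not by the number of samples) can be illustrated with a toy zero-filled DFT comparison. This sketch does not implement MaxEnt reconstruction itself, and all signal parameters are made up for illustration.

```python
import numpy as np

# Decaying complex sinusoid on a 256-point evolution grid
f0 = 0.19                              # true frequency (fraction of sweep width)
t = np.arange(256)
signal = np.exp(2j * np.pi * f0 * t) * np.exp(-t / 200)

rng = np.random.default_rng(1)
nus_idx = np.sort(rng.choice(256, 64, replace=False))  # 64 nonuniform samples

def peak_freq(idx):
    # Zero-fill the unsampled points and locate the spectral peak
    x = np.zeros(256, complex)
    x[idx] = signal[idx]
    return np.argmax(np.abs(np.fft.fft(x))) / 256

f_uni = peak_freq(np.arange(64))   # 64 uniform points: short max evolution time
f_nus = peak_freq(nus_idx)         # 64 NUS points: full max evolution time
```

Both schemes locate the peak, but the nonuniform set spans the full evolution range, so its peak is as narrow as a 256-point uniform acquisition at a quarter of the measurement cost; the price is sampling artifacts, which MaxEnt-style methods suppress.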
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
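The size-biased case (inclusion probability proportional to the response) admits a simple ML-style correction: under size-biased sampling the harmonic mean of the observations is a consistent estimator of the population mean. A minimal simulation, with an assumed Uniform(1, 3) trait, is a sketch of the idea rather than the paper's array-sampling framework.

```python
import numpy as np

rng = np.random.default_rng(42)
# Population trait: Uniform(1, 3), so the true mean is mu = 2.
# A size-biased draw (inclusion probability proportional to y) has density
# y * f(y) / mu = y / 4 on (1, 3); sample it via the inverse CDF sqrt(1 + 8u).
u = rng.random(100_000)
y = np.sqrt(1.0 + 8.0 * u)

naive = y.mean()                      # biased upward: converges to 13/6 ~ 2.167
corrected = len(y) / np.sum(1.0 / y)  # harmonic-mean estimator, consistent for mu
```

The correction works because under size bias E[1/Y] = 1/mu, so inverting the mean of the reciprocals undoes the length-biased inclusion.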
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
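The EM iteration for haplotype frequencies can be sketched on the classic two-SNP toy case, where only double heterozygotes are phase-ambiguous. The counts below are invented for illustration; FAMHAP's actual implementation handles general nuclear families and many more loci.

```python
import numpy as np

# Two biallelic SNPs; haplotypes indexed 0..3 as 00, 01, 10, 11.
# A double heterozygote is either the pair {00, 11} (cis) or {01, 10} (trans).
known = np.array([300.0, 100.0, 100.0, 100.0])  # phase-resolved haplotype counts
n_dh = 200                                      # ambiguous individuals (2 haps each)

f = np.full(4, 0.25)                            # initial haplotype frequencies
for _ in range(100):
    # E-step: split double heterozygotes between the two phase resolutions
    p_cis, p_trans = f[0] * f[3], f[1] * f[2]
    w = p_cis / (p_cis + p_trans)
    e = known.copy()
    e[[0, 3]] += w * n_dh                       # expected cis haplotypes
    e[[1, 2]] += (1 - w) * n_dh                 # expected trans haplotypes
    # M-step: renormalise expected haplotype counts into frequencies
    f = e / e.sum()
```

Each iteration increases the likelihood; with these counts the cis resolution wins and f converges to an MLE with f[1] = f[2] by symmetry of the trans split.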
Maximum entropy, word-frequency, Chinese characters, and multiple meanings.
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon-model for texts and the present results are discussed.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (15% sampling), would lack (512-75)×2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ∼450 times faster in this case, since it requires only two FTs. This allows reduction of the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets: reconstructions that required days of CPU time on high-performance computing clusters with the original algorithm require only a few minutes of calculation on regular laptop computers with the new algorithm.
Marasek, K; Nowicki, A
1994-01-01
The performance of three spectral techniques (FFT, AR Burg and ARMA) for maximum frequency estimation of Doppler spectra is described. Different definitions of fmax were used: the frequency at which spectral power decreases to 0.1 of its maximum value, a modified threshold crossing method (MTCM) and a novel geometrical method. The "goodness" and efficiency of the estimators were determined by calculating the bias and the standard deviation of the estimated maximum frequency of simulated Doppler spectra with known statistics. The power of the analysed signals was assumed to have an exponential distribution function. The SNRs were varied over the range from 0 to 20 dB. Different spectrum envelopes were generated: a Gaussian envelope approximated narrow-band spectral processes (P.W. Doppler), and rectangular spectra were used to simulate a parabolic flow insonified with C.W. Doppler. The simulated signals were generated from 3072-point records with a sampling frequency of 20 kHz. The AR and ARMA model order selections were done independently according to the Akaike Information Criterion (AIC) and Singular Value Decomposition (SVD). It was found that the ARMA model, computed according to the SVD criterion, had the best overall performance and produced results with the smallest bias and standard deviation. In general AR(SVD) was better than AR(AIC). The geometrical method of fmax estimation was found to be more accurate than the other tested methods, especially for narrow-band signals.
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. Bootstrap resampling can provide bias correction for the NPM and can also quantify the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties encountered in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as this upper-bound value in the NPM. An approach for incorporating these values into frequency analysis is proposed for better management of disasters that exceed the design level. This idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster-management planning and practices.
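The core NPM idea above (an empirical T-year quantile with a bootstrap standard error in place of a fitted distribution) can be sketched as follows; the synthetic Gumbel record and all constants are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical annual-maximum rainfall record with n > 100 years (mm)
n = 120
x = rng.gumbel(loc=80.0, scale=25.0, size=n)

def quantile_T(sample, T=100):
    # Non-parametric T-year quantile: empirical quantile at 1 - 1/T,
    # skipping distribution choice, parameter fitting and goodness-of-fit tests
    return np.quantile(sample, 1.0 - 1.0 / T)

q_hat = quantile_T(x)
# Bootstrap: resample with replacement; the spread of the resampled
# quantiles gives the estimation accuracy as a standard error
boot = np.array([quantile_T(rng.choice(x, size=n, replace=True))
                 for _ in range(2000)])
se = boot.std(ddof=1)
```

A PMP-style upper bound could be imposed by truncating the resampled quantiles at the probable maximum value, which is the spirit of the paper's proposal.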
Concerning the maximum frequency limits of Gunn oscillators
Macpherson, R. F.; Dunn, G. M.; Khalid, Ata; Cumming, D. R. S.
2015-01-01
The length of the transit region of a Gunn diode determines the natural frequency at which it operates in fundamental mode: the shorter the device, the higher the frequency of operation. The long-held view on Gunn diode design is that for a functioning device the minimum length of the transit region is about 1.5 μm, limiting the devices to fundamental-mode operation at frequencies of roughly 60 GHz. The authors posit that this theoretical restriction is a consequence of the limits of the hydrodynamic models by which it was determined. Study of these devices by more advanced Monte Carlo techniques, which simulate the ballistic transport and electron-phonon interactions that govern device behaviour, offers a new lower bound of 0.5 μm, which is already being approached by the experimental evidence shown in planar and vertical devices exhibiting Gunn operation at 0.6 μm and 0.7 μm. It is shown that the limits for Gunn domain operation are determined by the device length required for the transferred-electron effect to occur (approximately 0.15 μm, which as demonstrated is largely field independent) and the fundamental size of the domain (approximately 0.3 μm). At this new length, operation in fundamental mode at much higher frequencies becomes possible: the Monte Carlo model used predicts power output at frequencies over 300 GHz.
A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory
Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul
1999-01-01
Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple-access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler, and automatic reconfiguration to the speed of terminal movement can optimize cell...
Securing maximum diversity of Non Pollen Palynomorphs in palynological samples
Enevold, Renée; Odgaard, Bent Vad
2015-01-01
Palynology is no longer synonymous with analysis of pollen with the addition of a few fern spores. A wide range of Non Pollen Palynomorphs (NPPs) are now described and are potential palaeoenvironmental proxies in palynological surveys. The contribution of NPPs has proven important to the interpretation (e.g. Schulz & Shumilovskikh 2013). Increasingly it has become customary for palynologists to quantify at least some of the NPPs appearing on the pollen slides (e.g. Strother et al. 2015, Odgaard 1994). Are these samples representative of the initial NPP assemblages? The usual sample preparation method for pollen analysis is based on acetylization (Erdtman 1969) and HF-treatment, which are of variable destructiveness to the NPPs. Some NPPs might completely vanish, and the prepared sample might hold less NPP diversity than the initial NPP assemblage. Consequently, it may be advisable to consider...
Importance of sampling frequency when collecting diatoms
Wu, Naicheng
2016-11-14
There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.
Importance of sampling frequency when collecting diatoms
Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola
2016-11-01
There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.
A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2015-01-01
Periodic signals are encountered in many applications. Such signals can be modelled by a weighted sum of sinusoidal components whose frequencies are integer multiples of a fundamental frequency. Given a data set, the fundamental frequency can be estimated in many ways, including a maximum likelihood (ML) approach. Unfortunately, the ML estimator has a very high computational complexity, and the more inaccurate, but faster, correlation-based estimators are therefore often used instead. In this paper, we propose a fast algorithm for the evaluation of the ML cost function for complex-valued data over all frequencies on a Fourier grid and up to a maximum model order. The proposed algorithm significantly reduces the computational complexity to a level not far from the complexity of the popular harmonic summation method, which is an approximate ML estimator.
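The harmonic summation method mentioned as the fast approximate-ML baseline can be sketched directly: sum spectral power at integer multiples of each candidate fundamental on a grid and pick the maximizer. The signal parameters below are illustrative, and this sketch is the baseline method, not the paper's fast exact-ML algorithm.

```python
import numpy as np

fs = 8000.0
f0_true = 220.0                         # fundamental frequency (Hz)
t = np.arange(4096) / fs
# Periodic test signal: three harmonics of f0 with decaying weights
x = sum(np.sin(2 * np.pi * f0_true * k * t) / k for k in (1, 2, 3))

X = np.abs(np.fft.rfft(x, 1 << 16))     # zero-padded magnitude spectrum
freqs = np.fft.rfftfreq(1 << 16, 1 / fs)

def harmonic_sum(f0, n_harm=3):
    # Approximate ML cost: total spectral power at k*f0, k = 1..n_harm
    idx = [int(np.argmin(np.abs(freqs - k * f0))) for k in range(1, n_harm + 1)]
    return np.sum(X[idx] ** 2)

grid = np.arange(80.0, 400.0, 0.5)      # candidate fundamentals on a grid
f0_hat = grid[np.argmax([harmonic_sum(f) for f in grid])]
```

Evaluating the cost over the grid is the expensive part; the paper's contribution is computing the exact ML cost over such a grid at a comparable price.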
Measuring of the maximum measurable velocity for dual-frequency laser interferometer
Zhiping Zhang; Zhaogu Cheng; Zhaoyu Qin; Jianqiang Zhu
2007-01-01
There is an increasing demand on the measurable velocity of laser interferometers in manufacturing technologies. The maximum measurable velocity is limited by the frequency difference of the laser source, the optical configuration, and the electronics bandwidth. An experimental setup based on free-fall movement has been demonstrated to measure the maximum measurable velocity of interferometers. Measurement results show that the maximum measurable velocity is less than its theoretical value. Moreover, the effect of various factors on the measurement results is analyzed, and the results can offer a reference for industrial applications.
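The theoretical ceiling referred to above follows from the Doppler shift having to stay below the beat (split) frequency of the dual-frequency source; for single-pass optics the shift of a moving retroreflector is 2v/λ. A back-of-envelope sketch with assumed values (the wavelength and split frequency below are illustrative, not from the paper):

```python
# Velocity ceiling of a heterodyne (dual-frequency) laser interferometer:
# the Doppler shift 2*v/lambda of the moving target must stay below the
# beat frequency of the two source components, otherwise the measurement
# channel's beat note collapses through zero and counting fails.
wavelength = 633e-9                  # He-Ne wavelength, metres (assumed)
f_split = 2.4e6                      # assumed source frequency difference, Hz
v_max = f_split * wavelength / 2.0   # theoretical maximum velocity, m/s
```

Double-pass (plane-mirror) optics halve this ceiling again, and the electronics bandwidth can lower it further, which is consistent with the measured values falling below theory.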
Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Micaletti, R. C.
This paper considers estimation of the Maximum Softening Damage Indicator (MSDI) by using time-frequency system identification techniques for an RC structure subjected to earthquake excitation. The MSDI relates the global damage state of the RC structure to the relative decrease of the fundamental eigenfrequency.
Scaling of wingbeat frequency with body mass in bats and limits to maximum bat size.
Norberg, Ulla M Lindhe; Norberg, R Åke
2012-03-01
The ability to fly opens up ecological opportunities but flight mechanics and muscle energetics impose constraints, one of which is that the maximum body size must be kept below a rather low limit. The muscle power available for flight increases in proportion to flight muscle mass and wingbeat frequency. The maximum wingbeat frequency attainable among increasingly large animals decreases faster than the minimum frequency required, so eventually they coincide, thereby defining the maximum body mass at which the available power just matches up to the power required for sustained aerobic flight. Here, we report new wingbeat frequency data for 27 morphologically diverse bat species representing nine families, and additional data from the literature for another 38 species, together spanning a range from 2.0 to 870 g. For these species, wingbeat frequency decreases with increasing body mass as Mb^(-0.26). We filmed 25 of our 27 species in free flight outdoors, and for these the wingbeat frequency varies as Mb^(-0.30). These exponents are strikingly similar to the body mass dependency Mb^(-0.27) among birds, but the wingbeat frequency is higher in birds than in bats for any given body mass. The downstroke muscle mass is also a larger proportion of the body mass in birds. We applied these empirically based scaling functions for wingbeat frequency in bats to biomechanical theories about how the power required for flight and the power available converge as animal size increases. To this end we estimated the muscle mass-specific power required for the largest flying extant bird (12-16 kg) and assumed that the largest potential bat would exert similar muscle mass-specific power. Given the observed scaling of wingbeat frequency and the proportion of the body mass that is made up by flight muscles in birds and bats, we estimated the maximum potential body mass for bats to be 1.1-2.3 kg. The largest bats, extinct or extant, weigh 1.6 kg. This is within the range expected if it
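The scaling exponent reported here is a slope in log-log space, which an ordinary least-squares fit recovers. The synthetic data below merely mimic the stated mass range and exponent; they are not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical bat sample: wingbeat frequency f = c * M^(-0.26) with scatter,
# body masses log-uniform over roughly 2 g to 870 g as in the study
mass = np.exp(rng.uniform(np.log(0.002), np.log(0.87), 65))   # kg
freq = 4.0 * mass ** -0.26 * np.exp(rng.normal(0.0, 0.05, 65))  # Hz (assumed c)

# Allometric exponent = slope of log(f) against log(M)
slope, intercept = np.polyfit(np.log(mass), np.log(freq), 1)
```

Fitting in log space weights proportional errors equally across three decades of mass, which is why allometric exponents are conventionally estimated this way.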
Bajkova, Anisa T
2011-01-01
We propose the multi-frequency synthesis (MFS) algorithm with spectral correction of frequency-dependent source brightness distribution based on maximum entropy method. In order to take into account the spectral terms of n-th order in the Taylor expansion for the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method suitable for reconstruction of not only positive-definite functions, but also sign-variable ones. The proposed algorithm is aimed at producing both improved total intensity image and two-dimensional spectral index distribution over the source. We consider also the problem of frequency-dependent variation of the radio core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct multi-frequency synthesis. First, the proposed MFS algorithm has been tested on simulated data and then applied to four-frequency synthesis imaging of the radio source 0954+658 from VLBA observational data obtained quasi-simultaneously ...
(author not listed)
2009-01-01
In order to restrain the mid-spatial frequency error in the magnetorheological finishing (MRF) process, a novel part-random path is designed based on the theory of the maximum entropy method (MEM). Using a KDMRF-1000F polishing machine, one flat workpiece (98 mm in diameter) was polished. The mid-spatial frequency error in the region polished using the part-random path is much lower than that using the common raster path. After one MRF iteration (7.46 min), the peak-to-valley (PV) is 0.062 wave (1 wave = 632.8 nm), the root-mean-square (RMS) is 0.010 wave, and no obvious mid-spatial frequency error is found. The results show that the part-random path achieves high form accuracy and low mid-spatial frequency error in the MRF process.
Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments
Amina Shahzadi; Ahmad Saeed Akhter; Betul Saf
2013-01-01
The estimation of the magnitude and frequency of extreme rainfall is of immense importance for decisions about hydraulic structures such as spillways, dikes and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis in order to estimate extreme rainfall events (quantiles) for various return periods. The study is carried out using the index-flood method with L-moments, following Hosking and Wallis (1997). The study is based on 23 ...
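Sample L-moments, the building blocks of the Hosking-Wallis index-flood procedure, are computed from probability-weighted moments of the ordered sample. A minimal sketch on synthetic annual maxima (the Gumbel distribution and its parameters are assumed purely for illustration):

```python
import numpy as np

def sample_l_moments(x):
    # First two sample L-moments via probability-weighted moments b0, b1
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # weight (j-1)/(n-1), 1-based
    l1, l2 = b0, 2.0 * b1 - b0
    return l1, l2, l2 / l1          # mean, L-scale, L-CV (t)

rng = np.random.default_rng(11)
# Synthetic annual maxima: Gumbel(loc=100, scale=30), for which
# l1 = 100 + gamma*30 ~ 117.3 and l2 = 30*ln(2) ~ 20.8
x = rng.gumbel(loc=100.0, scale=30.0, size=50_000)
l1, l2, t = sample_l_moments(x)
```

In the regional procedure these statistics are computed per site, scaled by the index flood, and pooled to choose and fit the regional growth curve.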
Reymbaut, A.; Gagnon, A.-M.; Bergeron, D.; Tremblay, A.-M. S.
2017-03-01
The computation of transport coefficients, even in linear response, is a major challenge for theoretical methods that rely on analytic continuation of correlation functions obtained numerically in Matsubara space. While maximum entropy methods can be used for certain correlation functions, this is not possible in general, important examples being the Seebeck, Hall, Nernst, and Righi-Leduc coefficients. Indeed, positivity of the spectral weight on the positive real-frequency axis is not guaranteed in these cases. The spectral weight can even be complex in the presence of broken time-reversal symmetry. Various workarounds, such as the neglect of vertex corrections or the study of the infinite frequency or Kelvin limits, have been proposed. Here, we show that one can define auxiliary response functions that allow one to extract the desired real-frequency susceptibilities from maximum entropy methods in the most general multiorbital cases with no particular symmetry. As a benchmark case, we study the longitudinal thermoelectric response and corresponding Onsager coefficient in the single-band two-dimensional Hubbard model treated with dynamical mean-field theory and continuous-time quantum Monte Carlo. We thereby extend the maximum entropy analytic continuation with auxiliary functions (MaxEntAux method), developed for the study of the superconducting pairing dynamics of correlated materials, to transport coefficients.
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric fields energies represented by stray capacitance in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...
Transmission through Ferrite Samples at Submillimeter Frequencies
1986-05-01
In all the equations given above, the material parameters are, in general, complex. Measurements are generally made on the transmitted power, which is |Ta|². [Figure 8: power transmission coefficient for a 100-µm-thick ferrite slab as a function of frequency (cm⁻¹), computed from equation (2), Tp = |Ta|².]
High-frequency maximum observable shaking map of Italy from fault sources
Zonno, Gaetano
2012-03-17
We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.
Thomas, Catherine [Paris-11 Univ., 91 Orsay (France)]
2000-01-19
Theoretical models have shown that the maximum magnetic field in radio frequency superconducting cavities is the superheating field H_sh. For niobium, H_sh is 25-30% higher than the thermodynamical critical field H_c: H_sh lies within 240-274 mT. However, the maximum magnetic field observed so far is in the range H_c,max = 152 mT for the best 1.3 GHz Nb cavities. This field is lower than the critical field H_c1 above which the superconductor breaks up into divided normal and superconducting zones (H_c1 <= H_c). Thermal instabilities are responsible for this low value. In order to reach H_sh before thermal breakdown, high power short pulses are used. The cavity then needs to be strongly over-coupled. The dedicated test bed has been built from the collaboration between Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genoa, and the Service d'Etudes et Realisation d'Accelerateurs (SERA) of Laboratoire de l'Accelerateur Lineaire (LAL). Measurements of the maximum magnetic field, H_rf,max, on INFN cavities give lower results than the theoretical predictions and are in agreement with previous results. The superheating magnetic field is linked to the magnetic penetration depth. This superconducting characteristic length can be used to determine the quality of niobium through the ratio between the resistivity measured at 300 K and at 4.2 K in the normal conducting state (RRR). Results have been compared to previous ones and agree well. They show that the RRR measured on cavities is superficial and lower than the RRR measured on samples, which concerns the volume. (author)
7 CFR 58.643 - Frequency of sampling.
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Frequency of sampling. 58.643 Section 58.643 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.643 Frequency of sampling. (a) Microbiological. Representative samples shall be taken...
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, a least-squares maximum entropy quantile function method (LSMEQFM) and a variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
A software sampling frequency adaptive algorithm for reducing spectral leakage
PAN Li-dong; WANG Fei
2006-01-01
Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important factor that reduces the accuracy of spectral analysis and harmonic measurement. This paper presents a software sampling frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated by the software algorithm and modifies the sampling frequency adaptively. It reduces the synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. Simulations show that the algorithm has high precision, and it can be a practical method for power system harmonic analysis since it is easily implemented.
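The synchronous-sampling idea described in this abstract can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' algorithm: the signal frequency is estimated in software from rising zero crossings, and the sampling interval is then chosen so that the record spans a whole number of periods, which suppresses leakage. All function names and parameters are illustrative.

```python
import math

def estimate_frequency(samples, fs):
    """Estimate signal frequency from the average spacing of rising
    zero crossings, with linear interpolation for sub-sample timing."""
    crossings = []
    for n in range(1, len(samples)):
        if samples[n - 1] < 0.0 <= samples[n]:
            frac = -samples[n - 1] / (samples[n] - samples[n - 1])
            crossings.append((n - 1 + frac) / fs)
    periods = [b - a for a, b in zip(crossings, crossings[1:])]
    return 1.0 / (sum(periods) / len(periods))

def synchronous_interval(f_est, n_samples, cycles):
    """Sampling interval that makes n_samples span an integer number
    of cycles of the estimated frequency (synchronous sampling)."""
    return cycles / (f_est * n_samples)

# A 50.3 Hz test tone sampled nonsynchronously at 3200 Hz for 0.5 s
fs = 3200.0
sig = [math.sin(2 * math.pi * 50.3 * n / fs) for n in range(1600)]
f_est = estimate_frequency(sig, fs)        # close to 50.3 Hz
dt = synchronous_interval(f_est, 512, 8)   # interval so 512 samples cover 8 cycles
```

With the adjusted interval `dt`, a 512-point window covers exactly eight estimated periods, so a subsequent DFT sees an (approximately) periodic record and leakage is greatly reduced.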
Sampling frequency affects ActiGraph activity counts
Brønd, Jan Christian; Arvidsson, Daniel
in Matlab and sampled at frequencies of 30-100 Hz. Also, acceleration signals during indoor walking and running were sampled at 30 Hz using the ActiGraph GT3X and resampled in Matlab to frequencies of 40-100 Hz. All data was processed with the ActiLife software.Results: Acceleration frequencies between 5....... The difference increased with increasing activity intensity, with up to 1000 counts per minute at fast running.Discussion & conclusions: Activity counts from vigorous physical activity is highly attenuated with the ActiLife software. High frequency movement and noise information escape the bandpass filter...
Jones, R.B.; Cogan, J.D. Jr.
1991-09-19
An investigation was done to determine the maximum credible event value for samples of explosives and disassembled components up to 1.2 g when stored in conductive plastic vials as packaged and handled, stored, or transported at Mound. The test was performed at Test Firing, with photographs taken before and after the test. The standard propagation test setup was used; a vial containing 1.2 g of PETN (pentaerythritol tetranitrate) was surrounded by other like vials containing 1.2-g samples of PETN. The 1.2-g PETN pellet was then ignited by an EX-12 detonator. The test showed that there was no propagation and that the maximum credible event value for the handling tray is 1.2 g. The test also showed that when the tray is placed in a metal container the MCE value will still be 1.2 g. 9 figs.
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations, which provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent.
Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?
McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin...
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has advantageous numerical conditioning and makes it convenient to calculate the modal parameters. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates the proposed estimator.
Verdon-Kidd, D. C.; Kiem, A. S.
2015-12-01
Rainfall intensity-frequency-duration (IFD) relationships are commonly required for the design and planning of water supply and management systems around the world. Currently, IFD information is based on the "stationary climate assumption" that weather at any point in time will vary randomly and that the underlying climate statistics (including both averages and extremes) will remain constant irrespective of the period of record. However, the validity of this assumption has been questioned over the last 15 years, particularly in Australia, following an improved understanding of the significant impact of climate variability and change occurring on interannual to multidecadal timescales. This paper provides evidence of regime shifts in annual maximum rainfall time series (between 1913-2010) using 96 daily rainfall stations and 66 sub-daily rainfall stations across Australia. Furthermore, the effect of these regime shifts on the resulting IFD estimates are explored for three long-term (1913-2010) sub-daily rainfall records (Brisbane, Sydney, and Melbourne) utilizing insights into multidecadal climate variability. It is demonstrated that IFD relationships may under- or over-estimate the design rainfall depending on the length and time period spanned by the rainfall data used to develop the IFD information. It is recommended that regime shifts in annual maximum rainfall be explicitly considered and appropriately treated in the ongoing revisions of the Engineers Australia guide to estimating and utilizing IFD information, Australian Rainfall and Runoff (ARR), and that clear guidance needs to be provided on how to deal with the issue of regime shifts in extreme events (irrespective of whether this is due to natural or anthropogenic climate change). The findings of our study also have important implications for other regions of the world that exhibit considerable hydroclimatic variability and where IFD information is based on relatively short data sets.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
Effect of off-frequency sampling in magnetic resonance elastography.
Johnson, Curtis L; Chen, Danchin D; Olivero, William C; Sutton, Bradley P; Georgiadis, John G
2012-02-01
In magnetic resonance elastography (MRE), shear waves at a certain frequency are encoded through bipolar gradients that switch polarity at a controlled encoding frequency and are offset in time to capture wave propagation using a controlled sampling frequency. In brain MRE, there is a possibility that the mechanical actuation frequency is different from the vibration frequency, leading to a mismatch with encoding and sampling frequencies. This mismatch can occur in brain MRE from causes both extrinsic and intrinsic to the brain, such as scanner bed vibrations or active damping in the head. The purpose of this work was to investigate how frequency mismatch can affect MRE shear stiffness measurements. Experiments were performed on a dual-medium agarose gel phantom, and the results were compared with numerical simulations to quantify these effects. It is known that off-frequency encoding alone results in a scaling of wave amplitude, and it is shown here that off-frequency sampling can result in two main effects: (1) errors in the overall shear stiffness estimate of the material on the global scale and (2) local variations appearing as stiffer and softer structures in the material. For small differences in frequency, it was found that measured global stiffness of the brain could theoretically vary by up to 12.5% relative to actual stiffness with local variations of up to 3.7% of the mean stiffness. It was demonstrated that performing MRE experiments at a frequency other than that of tissue vibration can lead to artifacts in the MRE stiffness images, and this mismatch could explain some of the large-scale scatter of stiffness data or lack of repeatability reported in the brain MRE literature.
Yamanaka, Kota; Hirata, Shinnosuke; Hachiya, Hiroyuki
2016-07-01
Ultrasonic distance measurement for obstacles has been recently applied in automobiles. The pulse-echo method based on the transmission of an ultrasonic pulse and time-of-flight (TOF) determination of the reflected echo is one of the typical methods of ultrasonic distance measurement. Improvement of the signal-to-noise ratio (SNR) of the echo and the avoidance of crosstalk between ultrasonic sensors in the pulse-echo method are required in automotive measurement. The SNR of the reflected echo and the resolution of the TOF are improved by the employment of pulse compression using a maximum-length sequence (M-sequence), which is one of the binary pseudorandom sequences generated from a linear feedback shift register (LFSR). Crosstalk is avoided by using transmitted signals coded by different M-sequences generated from different LFSRs. In the case of lower-order M-sequences, however, the number of measurement channels corresponding to the pattern of the LFSR is not enough. In this paper, pulse compression using linear-frequency-modulated (LFM) signals coded by M-sequences has been proposed. The coding of LFM signals by the same M-sequence can produce different transmitted signals and increase the number of measurement channels. In the proposed method, however, the truncation noise in autocorrelation functions and the interference noise in cross-correlation functions degrade the SNRs of received echoes. Therefore, autocorrelation properties and cross-correlation properties in all patterns of combinations of coded LFM signals are evaluated.
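The maximum-length-sequence idea at the core of this abstract can be illustrated with a small sketch. This is a generic LFSR generator and the defining correlation property of an m-sequence, not the authors' measurement system; the register length and tap positions are illustrative assumptions (the taps below yield a primitive feedback polynomial, verified by the sequence's two-valued autocorrelation).

```python
def m_sequence(n=5, taps=(5, 3), seed=1):
    """One period (2**n - 1 chips, +/-1 valued) of a maximum-length
    sequence from a Fibonacci LFSR; `taps` are the 1-indexed bit
    positions XORed together to form the feedback bit."""
    state, period, seq = seed, (1 << n) - 1, []
    for _ in range(period):
        seq.append(1 if (state >> (n - 1)) & 1 else -1)   # output the MSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & period              # shift left, feed back
    return seq

def circular_correlation(a, b, lag):
    """Periodic cross-correlation of two equal-length sequences."""
    L = len(a)
    return sum(a[i] * b[(i + lag) % L] for i in range(L))

seq = m_sequence()
peak = circular_correlation(seq, seq, 0)                  # equals 31 at zero lag
sidelobes = [circular_correlation(seq, seq, k) for k in range(1, 31)]
# Every off-peak value of a true m-sequence is exactly -1: this sharp,
# flat autocorrelation is what makes m-sequences useful for pulse
# compression, and low cross-correlation between different sequences
# is what allows crosstalk between channels to be suppressed.
```

Matched filtering a received echo against the transmitted m-sequence then produces a single sharp peak at the echo's time of flight.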
Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments
Amina Shahzadi
2013-02-01
The estimation of the magnitude and frequency of extreme rainfall is of immense importance for decisions about hydraulic structures such as spillways, dikes, and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis in order to estimate extreme rainfall events (quantiles) for various return periods. The study uses the index flood method with L-moments, following Hosking and Wallis (1997), and is based on 23 rainfall sites divided into three homogeneous regions. The collective results of the L-moment ratio diagram, the Z-statistic, and AWD values show GLO, GEV, and GNO to be the best fit for all three regions, with PE3 in addition for region 3. On the basis of relative RMSE, for regions 1 and 2, GLO, GEV, and GNO produce approximately the same relative RMSE for return periods up to 100, while GNO produces a smaller relative RMSE for the large return periods of 500 and 1000; so for large return periods GNO may be the best distribution. For region 3, GLO, GEV, GNO, and PE3 have approximately the same relative RMSE for return periods up to 100, while for the large return periods of 500 and 1000 PE3 may be the best on the basis of its smaller relative RMSE.
Efficient estimation for ergodic diffusions sampled at high frequency
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...
Measurement of carbonaceous aerosol with different sampling configurations and frequencies
Y. Cheng
2015-03-01
Carbonaceous aerosol in Beijing, China, was measured with different sampling configurations (denuded vs. un-denuded) and frequencies (24 vs. 48 h averages). Our results suggest that the negative sampling artifact of a bare quartz filter could be remarkably enhanced due to the uptake of water vapor by the filter medium, indicating that the positive sampling artifact tends to be underestimated under high humidity conditions. It was also observed that the analytical artifact (i.e., the underestimation of elemental carbon by the operationally defined value of the thermal-optical method) was more apparent for the low-frequency samples, such that their elemental carbon (EC) concentrations were about 15% lower than the reference values measured by the high-frequency, denuded filters. Moreover, EC results of the low-frequency samples were found to exhibit a stronger dependence on the charring correction method. In addition, optical attenuation (ATN) of EC was retrieved from the carbon analyzer, and the low-frequency samples were shown to be more significantly biased by the shadowing effect.
Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks
Paul H. Evangelista
2011-05-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
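The iterative "most dissimilar site" selection described in this abstract can be approximated with a greedy farthest-point sketch. This is not the authors' maximum entropy implementation: it merely standardizes the environmental factors and repeatedly picks the candidate cell with the largest minimum Euclidean distance, in standardized factor space, to the sites already chosen. All names and the toy grid are illustrative assumptions.

```python
import math

def standardize(rows):
    """Scale each environmental factor to zero mean and unit variance."""
    cols = []
    for c in zip(*rows):
        m = sum(c) / len(c)
        s = (sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 or 1.0
        cols.append([(v - m) / s for v in c])
    return [list(r) for r in zip(*cols)]

def select_dissimilar_sites(candidates, k):
    """Greedy farthest-point selection in standardized factor space."""
    z = standardize(candidates)
    # Seed with the site farthest from the environmental centroid
    chosen = [max(range(len(z)), key=lambda i: sum(v * v for v in z[i]))]
    while len(chosen) < k:
        rest = [i for i in range(len(z)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: min(math.dist(z[i], z[j])
                                                  for j in chosen)))
    return chosen

# Toy domain: a 5 x 5 grid of (temperature, precipitation) candidate cells
grid = [(t, p) for t in range(5) for p in range(5)]
sites = select_dissimilar_sites(grid, 8)   # first a corner, then the opposite corner
```

As in the study, each added site is the one most environmentally dissimilar to the current set, so a handful of sites quickly spans the environmental envelope of the domain.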
Dental anthropology of a Brazilian sample: Frequency of nonmetric traits.
Tinoco, Rachel Lima Ribeiro; Lima, Laíse Nascimento Correia; Delwing, Fábio; Francesquini, Luiz; Daruge, Eduardo
2016-01-01
Dental elements are valuable tools in the study of ancient populations and species, and key features for human identification; within the field of dental anthropology, nonmetric traits, standardized by ASUDAS, are closely related to ancestry. This study analyzed the frequency of six nonmetric traits in a sample from Southeast Brazil, composed of 130 dental casts from individuals aged between 18 and 30 without foreign parents or grandparents. A single examiner observed the presence or absence of shoveling, Carabelli's cusp, fifth cusp, 3-cusped UM2, sixth cusp, and 4-cusped LM2. The frequencies obtained differ from those reported by other studies for Amerindian and South American samples and are closer to European and sub-Saharan frequencies, showing the influence of these groups on the current Brazilian population. Sexual dimorphism was found in the frequencies of Carabelli's cusp, 3-cusped UM2, and sixth cusp.
Low sampling frequency processing for ultra-wideband signals
Wan, Yonglun; Si, Qiang; Lu, Youxin; Wang, Hong; Wang, Xuegang
2005-11-01
Ultra-wideband (UWB) signals are widely used in radar, navigation, and satellite communications, but they are rather difficult to process. In this paper we adopt the dechirp pulse compression method to process received UWB linear frequency modulated (LFM) signals. The UWB signals are converted into signals with frequency components that are proportional to the relative range between the target and the reference target, which makes it possible to use low-speed analog-to-digital converters (ADCs) for sampling. Simulation results show that an LFM signal with a 600 MHz center frequency, 200 MHz bandwidth, and 30 µs pulse width can be processed with a 70 MHz sampling frequency by means of this method.
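The dechirp idea can be sketched at toy scale: mixing the echo with the conjugate of the reference chirp converts delay into a low beat frequency (beat frequency = chirp rate x delay), which a slow ADC can then digitize. The parameters below are scaled-down illustrations of the principle, not the 600 MHz system in the abstract, and the pulse edges are idealized away.

```python
import cmath
import math

def dechirp_beat_frequency(echo, ref, fs):
    """Mix the echo with the conjugate reference chirp and locate the
    beat tone in a DFT; |beat frequency| = chirp rate * delay."""
    mixed = [e * r.conjugate() for e, r in zip(echo, ref)]
    N = len(mixed)
    spec = [abs(sum(mixed[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]
    k = max(range(N), key=lambda i: spec[i])
    f = k * fs / N
    return abs(f - fs) if k > N // 2 else f   # bins above N/2 are negative frequencies

K, fs, N, tau = 1.0e5, 3200.0, 64, 0.003      # chirp rate (Hz/s), ADC rate, delay (s)
ref = [cmath.exp(1j * math.pi * K * (n / fs) ** 2) for n in range(N)]
echo = [cmath.exp(1j * math.pi * K * (n / fs - tau) ** 2) for n in range(N)]
f_beat = dechirp_beat_frequency(echo, ref, fs)   # 300 Hz beat tone
tau_est = f_beat / K                             # recovered delay, 0.003 s
```

Note that the 300 Hz beat is digitized comfortably at 3200 Hz even though resolving the chirp itself would require a much faster converter; this is exactly the rate reduction the abstract exploits.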
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results give estimates of the probability of obtaining correct estimates of b, at a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
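For reference, the Aki-Utsu estimator itself is a one-liner: b = log10(e) / (mean(M) - (m_c - dm/2)), where m_c is the completeness (minimum) magnitude and dm the rounding bin width (Utsu's correction). A minimal sketch with a synthetic Gutenberg-Richter catalog; the catalog parameters are our own illustrative choices:

```python
import math
import random

def aki_utsu_b(mags, m_c, dm=0.1):
    """Aki-Utsu maximum likelihood b-value with Utsu's correction for
    magnitudes rounded to bins of width dm above completeness m_c."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with true b = 1.0: continuous
# magnitudes are exponential above m_c - dm/2, then rounded to dm bins.
random.seed(1)
b_true, m_c, dm = 1.0, 2.0, 0.1
beta = b_true * math.log(10.0)
mags = [round((m_c - dm / 2.0 - math.log(random.random()) / beta) / dm) * dm
        for _ in range(50_000)]
b_hat = aki_utsu_b(mags, m_c, dm)   # close to 1.0 for this large sample
```

Rerunning this with small catalogs (tens of events) shows exactly the scatter the paper warns about: individual estimates of b can easily miss the true value by several tenths.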
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes that optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Optimal sampling frequency in recording of resistance training exercises.
Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis
2017-03-01
The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
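The bandwidth criterion used in this study can be reproduced in a few lines: compute the power spectrum of the recorded speed signal and find the lowest frequency below which a target fraction (e.g. 99.9%) of the total power lies. A toy illustration with a plain DFT; the function name and the test tone are our assumptions, not the authors' code:

```python
import cmath
import math

def min_reconstruction_frequency(signal, fs, fraction=0.999):
    """Lowest DFT bin frequency whose cumulative one-sided power
    (DC excluded) reaches `fraction` of the total signal power."""
    N = len(signal)
    X = [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
             for n in range(N)) for k in range(N // 2 + 1)]
    power = [(1.0 if k in (0, N // 2) else 2.0) * abs(x) ** 2
             for k, x in enumerate(X)]
    total = sum(power[1:])                  # exclude the DC bin
    cum = 0.0
    for k in range(1, N // 2 + 1):
        cum += power[k]
        if cum >= fraction * total:
            return k * fs / N
    return fs / 2.0

# A pure 5 Hz "lifting speed" tone sampled at 200 Hz for 2 s
fs, N = 200.0, 400
speed = [math.sin(2 * math.pi * 5.0 * n / fs) for n in range(N)]
f999 = min_reconstruction_frequency(speed, fs)   # 5.0 Hz
```

Applied to a real lifting-speed record, the returned frequency plays the role of the study's f99.9; by the sampling theorem, recording at a little over twice that value loses essentially nothing.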
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return.
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/.
Wang, Dong; Lu, Kaiyuan; Rasmussen, Peter Omand
2015-01-01
The conventional high frequency signal injection method superimposes a high frequency voltage signal onto the commanded stator voltage before space vector modulation. Therefore, the magnitude of the voltage available for machine torque production is limited. In this paper, a new high frequency ... injection method, in which the high frequency signal is generated by shifting the duty cycle between two neighboring switching periods, is proposed. This method allows injecting a high frequency signal at half of the switching frequency without the need to sacrifice the machine fundamental voltage ... amplitude. This may be utilized to develop new position estimation algorithms without involving the inductance in the medium to high speed range. As an application example, a developed inductance-independent position estimation algorithm using the proposed high frequency injection method is applied to drive ...
Frequency of lucid dreaming in a representative German sample.
Schredl, Michael; Erlacher, Daniel
2011-02-01
Lucid dreams occur when a person is aware that he is dreaming while he is dreaming. In a representative sample of German adults (N = 919), 51% of the participants reported that they had experienced a lucid dream at least once. Lucid dream recall was significantly higher in women and negatively correlated with age. However, these effects might be explained by the frequency of dream recall, as there was a correlation of .57 between frequency of dream recall and frequency of lucid dreams. Other sociodemographic variables like education, marital status, or monthly income were not related to lucid dream frequency. Given the relatively high prevalence of lucid dreaming reported in the present study, research on lucid dreams might be pursued in the sleep laboratory to expand the knowledge about sleep, dreaming, and consciousness processes in general.
A Frequency Domain Design Method For Sampled-Data Compensators
Niemann, Hans Henrik; Jannerup, Ole Erik
1990-01-01
A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...
New sample cell configuration for wide-frequency dielectric spectroscopy: DC to radio frequencies.
Nakanishi, Masahiro; Sasaki, Yasutaka; Nozaki, Ryusuke
2010-12-01
A new configuration for the sample cell to be used in broadband dielectric spectroscopy is presented. A coaxial structure with a parallel plate capacitor (outward parallel plate cell: OPPC) has made it possible to extend the frequency range significantly in comparison with the frequency range of the conventional configuration. In the proposed configuration, stray inductance is significantly decreased; consequently, the upper bound of the frequency range is improved by two orders of magnitude from the upper limit of conventional parallel plate capacitor (1 MHz). Furthermore, the value of capacitance is kept high by using a parallel plate configuration. Therefore, the precision of the capacitance measurement in the lower frequency range remains sufficiently high. Finally, OPPC can cover a wide frequency range (100 Hz-1 GHz) with an appropriate admittance measuring apparatus such as an impedance or network analyzer. The OPPC and the conventional dielectric cell are compared by examining the frequency dependence of the complex permittivity for several polar liquids and polymeric films.
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprehended by mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given both for the distribution of cross expectation's estimation errors and for the MinMI estimation bias, its variance and distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1 ...
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
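The phase-slope model behind such pilot-based joint estimators can be sketched in a few lines. This is a generic illustration, not the letter's estimator: it assumes the inter-symbol phase rotation at pilot subcarrier index k is 2π(N+Ng)/N · (ε + kη), where ε is the RFO in units of subcarrier spacing and η the relative SFO, so a least-squares line fit over the pilot indices separates the two. The pilot index set and numbers below are invented for the example.

```python
import numpy as np

def estimate_rfo_sfo(pilot_idx, phase_diff, n_fft, n_cp):
    """Fit a line to inter-symbol phase rotations observed at pilot subcarriers.

    phase_diff[i] is the phase rotation at (signed) pilot subcarrier index
    pilot_idx[i] between two consecutive OFDM symbols. The intercept of the
    fitted line gives the RFO, the slope gives the SFO (both after rescaling).
    """
    scale = 2 * np.pi * (n_fft + n_cp) / n_fft
    slope, intercept = np.polyfit(pilot_idx, phase_diff, 1)
    return intercept / scale, slope / scale

# Synthetic noiseless check with a non-uniform pilot set
n_fft, n_cp = 2048, 512
rfo_true, sfo_true = 0.01, 20e-6           # 1% of spacing, 20 ppm clock error
k = np.array([-853, -641, -410, -166, 48, 201, 445, 779])
phi = 2 * np.pi * (n_fft + n_cp) / n_fft * (rfo_true + sfo_true * k)
rfo_hat, sfo_hat = estimate_rfo_sfo(k, phi, n_fft, n_cp)
```

With noiseless data the fit recovers both offsets exactly; the letter's contribution concerns which CP subset keeps this fit unbiased when the pilot set is asymmetric.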
Chemmangat Manakkal Cheriya, Krishnan; Ferranti, Francesco; Dhaene, Tom; Knockaert, Luc
2014-01-01
An enhanced parametric macromodelling scheme is presented for linear high-frequency systems based on the use of multiple frequency scaling coefficients and a sequential sampling algorithm to fully automate the entire modelling process. The proposed method is applied on a ring resonator bandpass filter example and compared with another state-of-the-art macromodelling method to show its improved modelling capability and reduced setup time.
Switched reluctance machines control with a minimized sampling frequency
Rain, Xavier; Hilairet, Mickaël; Arias Pujol, Antoni
2014-01-01
This paper focuses on reducing the Switched Reluctance Machine (SRM) control sampling frequency in order to save processor real-time resources, while keeping the stability and also the performance in terms of average torque and torque ripple. Reducing the CPU cost, either by implementing the control algorithm on a less capable CPU or, more importantly, by reducing the percentage of the CPU demand, is an attractive goal, especially for the electric vehicle industry, where the SRM is used ...
Espino, Susana; Schenk, H Jochen
2011-01-01
The maximum specific hydraulic conductivity (k(max)) of a plant sample is a measure of the ability of a plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of k(max) are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of k(max) are a common problem when measuring woody plant samples, and it is commonly observed that k(max) declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in k(max) under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem-debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of k(max) were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause of declining k(max) values. Our findings suggest that degassing of water is essential for achieving stable ...
Romero, Claudia; Mesa, Duvan
2015-04-01
L-Moments Regional Frequency Analysis Methodology Application in maximum rainfall values over the Bogota River's basin. Claudia Patricia Romero Hernández; Duvan Javier Mesa Fernández. Universidad Santo Tomás, Colombia. The application area of this methodology is the Bogota River's basin, located in Cundinamarca, a Colombian department with a total surface area of 589,143 hectares. This basin includes 19 sub-basins, and it is the most densely urbanized of the country. Including its metropolitan area, this region has a population of 9,000,000 inhabitants, approximately 23% of Colombia's population, and possesses around 19% of the country's industries. This basin has shown a notable increase in the frequency of severe floods in recent years due to climatic variations. These climatic periods correspond to a weather pattern called the Niña Phenomenon (2010-2011), which affected 57,000 citizens in this department and 4,900 people directly in Bogota city, with an estimated economic damage of 277,121,052 USD. The Regional Frequency Analysis methodology is a statistical procedure that consists of pooling information from multiple samples into a single large sample, assuming beforehand that all of these come from the same probability model, except for a difference between them due to a scale factor. These samples are defined by a "regionalization" procedure known as the "Avenue Index" or "Flood Index". This procedure groups several kinds of information that come from a common probability model, such as temperature, rainfall, and water flow. This model must be similar for all of the weather stations located in a homogeneous region. Maps for each of 4 return periods (5, 10, 50 and 100 years) were developed based on 120 weather stations located in this basin. The information used in this process comes from median monthly rainfall data, based on historical series of between 30 and 40 years on average. An increase in the annual median rainfall was ...
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
Orus Perez, Raul
2017-04-01
For single-frequency users of the global navigation satellite system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of real-time precise point positioning (PPP) globally. Testing of the ionospheric models is therefore a key issue for code-based single-frequency users, which constitute the main user segment. The testing proposed in this paper is straightforward and uses PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify, for dual-frequency users, the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
Mixed Frequency Data Sampling Regression Models: The R Package midasr
Eric Ghysels
2016-08-01
When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of application of MIDAS regression.
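The core of a MIDAS regression is the parsimonious lag-weighting of the high-frequency regressor. The sketch below illustrates the exponential Almon weighting idea in numpy; the function names and parameter values are illustrative, not the midasr API:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights, w_j proportional to
    exp(theta1*j + theta2*j**2); two parameters control a whole lag profile."""
    j = np.arange(n_lags)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def midas_regressor(x, m, w):
    """Collapse a high-frequency series x (m observations per low-frequency
    period) into one weighted sum of the last len(w) values per period;
    w[0] weights the most recent high-frequency observation."""
    n_lags = len(w)
    ends = range(n_lags, len(x) + 1, m)
    return np.array([w @ x[e - n_lags:e][::-1] for e in ends])

w = exp_almon_weights(0.1, -0.05, 12)     # e.g. 12 monthly lags
z = midas_regressor(np.ones(120), 3, w)   # quarterly regressor from monthly data
```

The weighted regressor z can then enter an ordinary least-squares step; the nonlinearity in (theta1, theta2) is what requires the package's iterative optimizers.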
Comparison of metatranscriptomic samples based on k-tuple frequencies.
Ying Wang
BACKGROUND: The comparison of samples, or beta diversity, is one of the essential problems in ecological studies. Next generation sequencing (NGS) technologies make it possible to obtain large amounts of metagenomic and metatranscriptomic short read sequences across many microbial communities. De novo assembly of the short reads can be especially challenging because the number of genomes and their sequences are generally unknown and the coverage of each genome can be very low, so the traditional alignment-based sequence comparison methods cannot be used. Alignment-free approaches based on k-tuple frequencies, on the other hand, have yielded promising results for the comparison of metagenomic samples. However, it is not known whether these approaches can be used for the comparison of metatranscriptome datasets and which dissimilarity measures perform best. RESULTS: We applied several beta diversity measures based on k-tuple frequencies to real metatranscriptomic datasets from pyrosequencing 454 and Illumina sequencing platforms to evaluate their effectiveness for the clustering of metatranscriptomic samples, including three d2-type dissimilarity measures, one dissimilarity measure in CVTree, one relative entropy based measure S2, and three classical Lp-norm distances. Results showed that the measure d2S can achieve superior performance on clustering metatranscriptomic samples into different groups under different sequencing depths for both 454 and Illumina datasets, recovering environmental gradients affecting microbial samples, classifying coexisting metagenomic and metatranscriptomic datasets, and being robust to sequencing errors. We also investigated the effects of tuple size and the order of the background Markov model. A software pipeline implementing all the steps of analysis has been built and is available at http://code.google.com/p/d2-tools/. CONCLUSIONS: The k-tuple based sequence signature measures can effectively reveal major groups and ...
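As a concrete illustration of alignment-free k-tuple comparison, the sketch below builds k-tuple frequency vectors and computes a cosine-type d2 dissimilarity. Note this is the plain d2 measure, not the background-corrected d2S that the study found superior; a d2S implementation would additionally subtract expected counts under a Markov background model.

```python
from collections import Counter
import math

def kmer_freqs(seq, k):
    """k-tuple frequency vector of a sequence, as a dict {tuple: frequency}."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def d2_dissimilarity(f, g):
    """0.5 * (1 - cosine similarity) between two k-tuple frequency vectors:
    0 for identical profiles, 0.5 for profiles with no shared k-tuples."""
    keys = set(f) | set(g)
    dot = sum(f.get(x, 0.0) * g.get(x, 0.0) for x in keys)
    nf = math.sqrt(sum(v * v for v in f.values()))
    ng = math.sqrt(sum(v * v for v in g.values()))
    return 0.5 * (1.0 - dot / (nf * ng))

f = kmer_freqs("ACGTACGTACGT", 3)
g = kmer_freqs("AAAAAA", 2)
h = kmer_freqs("CCCCCC", 2)
```

A beta-diversity analysis then applies such a pairwise measure across all samples and clusters the resulting dissimilarity matrix.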
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
Fast maximum likelihood estimate of the Kriging correlation range in the frequency domain
De Baar, J.H.S.; Dwight, R.P.; Bijl, H.
2011-01-01
We apply Ordinary Kriging to predict 75,000 terrain survey data from a randomly sampled subset of < 2500 observations. Since such a Kriging prediction requires a considerable amount of CPU time, we aim to reduce its computational cost. In a conventional approach, the cost of the Kriging analysis ...
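Ordinary Kriging itself is compact; the expensive part the authors target is the maximum likelihood fit of the correlation range, which normally requires repeated factorizations of the correlation matrix. A minimal 1-D sketch with a fixed Gaussian correlation range (illustrative only, not the paper's frequency-domain ML estimate):

```python
import numpy as np

def gauss_cov(d, theta):
    """Gaussian correlation model with range parameter theta."""
    return np.exp(-(d / theta) ** 2)

def ordinary_kriging(x, y, x0, theta=1.0):
    """Ordinary Kriging prediction at points x0 from observations (x, y).

    Solves the standard augmented system [[K, 1], [1^T, 0]] for the weights
    plus Lagrange multiplier enforcing that the weights sum to one.
    """
    n = len(x)
    K = gauss_cov(np.abs(x[:, None] - x[None, :]), theta)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = K
    A[n, n] = 0.0
    preds = []
    for xs in np.atleast_1d(x0):
        b = np.ones(n + 1)
        b[:n] = gauss_cov(np.abs(x - xs), theta)
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ y)
    return np.array(preds)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
p = ordinary_kriging(x, y, x)   # predictions at the data points themselves
```

With no nugget term the predictor interpolates exactly at the observation points, which is a quick sanity check on any Kriging implementation.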
Evangelia Karagianni
2016-04-01
By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelago over six recent years (2007-2012), the effect of the weather on electromagnetic wave propagation is studied. The EM wave propagation characteristics depend on atmospheric refractivity and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5 min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelago, with a view to improved accuracy as compared with the more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out and the data compared with predictions based on the standard ITU-R method.
Measurement of carbonaceous aerosol with different sampling configurations and frequencies
Cheng, Y.; He, K.-B.
2015-07-01
A common approach for measuring the mass of organic carbon (OC) and elemental carbon (EC) in airborne particulate matter involves collection on a quartz fiber filter and subsequent thermal-optical analysis. Although having been widely used in aerosol studies and in PM2.5 (fine particulate matter) chemical speciation monitoring networks in particular, this measurement approach is prone to several types of artifacts, such as the positive sampling artifact caused by the adsorption of gaseous organic compounds onto the quartz filter, the negative sampling artifact due to the evaporation of OC from the collected particles and the analytical artifact in the thermal-optical determination of OC and EC (which is strongly associated with the transformation of OC into char OC and typically results in an underestimation of EC). The presence of these artifacts introduces substantial uncertainties to observational data on OC and EC and consequently limits our ability to evaluate OC and EC estimations in air quality models. In this study, the influence of sampling frequency on the measurement of OC and EC was investigated based on PM2.5 samples collected in Beijing, China. Our results suggest that the negative sampling artifact of a bare quartz filter could be remarkably enhanced due to the uptake of water vapor by the filter medium. We also demonstrate that increasing sampling duration does not necessarily reduce the impact of positive sampling artifact, although it will enhance the analytical artifact. Due to the effect of the analytical artifact, EC concentrations of 48 h averaged samples were about 15 % lower than results from 24 h averaged ones. In addition, it was found that with the increase of sampling duration, EC results exhibited a stronger dependence on the charring correction method and, meanwhile, optical attenuation (ATN) of EC (retrieved from the carbon analyzer) was more significantly biased by the shadowing effect. Results from this study will be useful for the
Kaiyu Wang
2014-01-01
This paper presents an efficient all-digital carrier recovery loop (ADCRL) for quadrature phase shift keying (QPSK). The ADCRL combines a classic closed-loop carrier recovery circuit, the all-digital Costas loop (ADCOL), with a frequency feedforward loop, the maximum likelihood frequency estimator (MLFE), so as to make the best use of the advantages of the two types of carrier recovery loops and obtain a more robust performance in the procedure of carrier recovery. Besides, considering that, for the MLFE, accurate estimation of the frequency offset depends on the linear characteristic of its frequency discriminator (FD), the Coordinate Rotation Digital Computer (CORDIC) algorithm is introduced into the FD based on the MLFE to unwrap the phase difference linearly. The frequency offset contained within the unwrapped phase difference is estimated by the MLFE, implemented using only shift and multiply-accumulate units, to assist the ADCOL to lock quickly and precisely. The joint simulation results of ModelSim and MATLAB show that the performance of the proposed ADCRL in lock-in time and range is superior to that of the ADCOL. On the other hand, a systematic design procedure based on FPGA for the proposed ADCRL is also presented.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
...-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Disentangling seasonal bacterioplankton population dynamics by high-frequency sampling.
Lindh, Markus V; Sjöstedt, Johanna; Andersson, Anders F; Baltar, Federico; Hugerth, Luisa W; Lundin, Daniel; Muthusamy, Saraladevi; Legrand, Catherine; Pinhassi, Jarone
2015-07-01
Multiyear comparisons of bacterioplankton succession reveal that environmental conditions drive community shifts with repeatable patterns between years. However, corresponding insight into bacterioplankton dynamics at a temporal resolution relevant for detailed examination of variation and characteristics of specific populations within years is essentially lacking. During 1 year, we collected 46 samples in the Baltic Sea for assessing bacterial community composition by 16S rRNA gene pyrosequencing (nearly twice weekly during productive season). Beta-diversity analysis showed distinct clustering of samples, attributable to seemingly synchronous temporal transitions among populations (populations defined by 97% 16S rRNA gene sequence identity). A wide spectrum of bacterioplankton dynamics was evident, where divergent temporal patterns resulted both from pronounced differences in relative abundance and presence/absence of populations. Rates of change in relative abundance calculated for individual populations ranged from 0.23 to 1.79 day(-1) . Populations that were persistently dominant, transiently abundant or generally rare were found in several major bacterial groups, implying evolution has favoured a similar variety of life strategies within these groups. These findings suggest that high temporal resolution sampling allows constraining the timescales and frequencies at which distinct populations transition between being abundant or rare, thus potentially providing clues about physical, chemical or biological forcing on bacterioplankton community structure.
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far beyond the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
A frequency based constraint for a multi-frequency linear sampling method
Alqadah, H. F.; Valdivia, N.
2013-09-01
The linear sampling method (LSM) has become a well-established non-iterative technique for a variety of inverse scattering problems. The method offers a number of advantages over competing inverse scattering methods, mainly that it is based on solving a linear problem while being able to account for multi-path effects. Unfortunately, under the current framework the method is only effective when using a large amount of multi-static data, and it may therefore be impractical for many imaging applications. While primarily developed under a single-frequency framework, the extension of the method to multi-banded data sets has recently been considered. It is known in general that the availability of multi-frequency data should compensate for reduced spatial diversity, but it is not clear how this can be accomplished for the LSM. In this work we take a step in this direction by considering a frequency-based partial variation approach. We first establish that on bands absent of any corresponding Dirichlet eigenvalues the Herglotz density exhibits bounded variation. We then consider a regularization method incorporating this prior knowledge. The proposed approach yielded a good estimate of the unknown Dirichlet eigenvalues of the obstacle in question when using reduced data. This observation also correlated with higher-quality 3D reconstructions.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time ... period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimation by means of a simple theoretical approach as well as measurements ...
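The direction of both effects is easy to demonstrate numerically: a disjunct subset of a record, or any block average of it, can only lower the observed maximum. A toy simulation using an AR(1) process as a stand-in for 10-min mean wind speeds (the process and its parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# One "year" of 10-min mean wind speeds as a persistent AR(1) process
n = 365 * 24 * 6
mean, phi, sigma = 8.0, 0.98, 0.5
eps = rng.normal(0.0, sigma, n)
u = np.empty(n)
u[0] = mean
for t in range(1, n):
    u[t] = mean * (1 - phi) + phi * u[t - 1] + eps[t]

max_full = u.max()                 # annual maximum from all 10-min values
max_disjunct = u[::18].max()       # one 10-min value saved every 3 h
# maxima of non-overlapping 1-h (6-sample) block averages
max_hourly = u[:n - n % 6].reshape(-1, 6).mean(axis=1).max()
```

Both surrogate maxima are bounded above by the full-record maximum by construction; quantifying the average attenuation as a function of spacing and averaging time, given the spectral properties of the wind, is what the paper's theoretical approach provides.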
MalHaploFreq: A computer programme for estimating malaria haplotype frequencies from blood samples
Smith Thomas A
2008-07-01
Background: Molecular markers, particularly those associated with drug resistance, are important surveillance tools that can inform policy choice. People infected with falciparum malaria often contain several genetically distinct clones of the parasite; genotyping the patients' blood reveals whether or not the marker is present (i.e. its prevalence), but does not reveal its frequency. For example, a person with four malaria clones may contain both mutant and wildtype forms of a marker, but it is not possible to distinguish their relative frequencies, i.e. 1:3, 2:2 or 3:1. Methods: An appropriate method for obtaining frequencies from prevalence data is maximum likelihood analysis. A computer programme has been developed that allows the frequency of markers, and of haplotypes defined by up to three codons, to be estimated from blood phenotype data. Results: The programme has been fully documented [see Additional File 1: user manual for MalHaploFreq] and provided with a user-friendly interface suitable for large-scale analyses. It returns accurate frequencies and 95% confidence intervals from simulated data sets and has been extensively tested on field data sets. Conclusion: The programme is included [see Additional File 2: executable programme compiled for use on DOS or Windows] and may be freely downloaded from [1]. It can then be used to extract molecular marker and haplotype frequencies from their prevalence in human blood samples. This should enhance the use of frequency data to inform antimalarial drug policy choice.
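The underlying likelihood is simple to state: if a marker has population frequency p and a patient carries n independent clones, the marker appears in that blood sample with probability 1 - (1 - p)^n. The sketch below maximizes that likelihood by grid search. It is a hedged illustration of the model, not MalHaploFreq itself; clone counts are assumed known here, whereas the real programme must handle unknown multiplicity of infection.

```python
import math

def loglik(p, clones, present):
    """Log-likelihood of marker frequency p given per-sample clone counts
    and presence/absence calls: P(absent) = (1-p)^n for n clones."""
    ll = 0.0
    for n, obs in zip(clones, present):
        q = (1.0 - p) ** n          # probability all n clones are wildtype
        ll += math.log(1.0 - q) if obs else math.log(q)
    return ll

def ml_frequency(clones, present, grid=10001):
    """Maximum likelihood frequency by grid search over (0, 1)."""
    ps = [i / grid for i in range(1, grid)]
    return max(ps, key=lambda p: loglik(p, clones, present))

# Sanity check: with single-clone infections the MLE is the sample proportion
clones = [1] * 10
present = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
p_hat = ml_frequency(clones, present)
```

With multi-clone samples the same likelihood automatically discounts the fact that presence calls overstate frequency, which is exactly the prevalence-to-frequency correction the programme performs.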
JIN Zhi; SU Yong-bo; CHENG Wei; LIU Xin-Yu; XU An-Huai; QI Ming
2008-01-01
A four-finger InGaAs/InP double heterojunction bipolar transistor is designed and fabricated successfully by using planarization technology. The emitter area of each finger is 1 × 15 μm². The breakdown voltage is more than 7 V, and the maximum collector current is more than 100 mA. The current gain cutoff frequency is as high as 155 GHz and the maximum oscillation frequency reaches 253 GHz. The heterojunction bipolar transistor can offer more than 70 mW of class-A maximum output power at W band, and the maximum power density can be as high as 1.2 W/mm.
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data.
Designing waveforms for temporal encoding using a frequency sampling method
Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
, the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform...... for the linear frequency modulated signal) were tested for both waveforms in simulation with respect to the Doppler frequency shift occurring when probing moving objects. It was concluded that the Doppler effect of moving targets does not significantly degrade the filtered output. Finally, in vivo measurements...
Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser
2014-06-07
Human haplotypes include essential information about SNPs, which in turn provide valuable information for such studies as finding relationships between some diseases and their potential genetic causes, e.g., for Genome Wide Association Studies. Due to expensiveness of directly determining haplotypes and recent progress in high throughput sequencing, there has been an increasing motivation for haplotype assembly, which is the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed for the problem, more accurate methods are still beneficial because of high importance of the haplotypes information. In this paper, first, we develop a probabilistic model, that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missed in the existing maximum likelihood models. Then, we show that the probabilistic model will reduce to the Minimum Error Correction (MEC) model when the information of MAF is omitted and some approximations are made. This result provides a novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the information of MAF is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
Efficient estimation for ergodic diffusions sampled at high frequency
Sørensen, Michael
of estimators including most of the previously proposed estimators for diffusion processes, for instance GMM-estimators and the maximum likelihood estimator. Simple conditions are given that ensure rate optimality, where estimators of parameters in the diffusion coefficient converge faster than estimators...... of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in the sense...... of Godambe and Heyde are shown to give rate-optimal and efficient estimators under weak conditions....
High frequency sampling of a continuous-time ARMA process
Brockwell, Peter J; Klüppelberg, Claudia
2011-01-01
Continuous-time autoregressive moving average (CARMA) processes have recently been used widely in the modeling of non-uniformly spaced data and as a tool for dealing with high-frequency data of the form $Y_{n\Delta}, n=0,1,2,\ldots$, where $\Delta$ is small and positive. Such data occur in many fields of application, particularly in finance and the study of turbulence. This paper is concerned with the characteristics of the process $(Y_{n\Delta})_{n\in\mathbb{Z}}$ when $\Delta$ is small and the underlying continuous-time process $(Y_t)_{t\in\mathbb{R}}$ is a specified CARMA process.
Kojima, M; Ichiyama, M; Ban, T
1986-07-01
Phenytoin, at 50 to 200 micrograms, reduced the maximum upstroke velocity of action potentials (Vmax), with increases in frequency from 0.25 to 5 Hz and in the external potassium concentration ([K+]0) from 2.7 to 8.1 mM. The drug-induced shortening of action potential duration was evident at 0.25 to 2 Hz but slight at 3 to 5 Hz. Time courses of recovery of Vmax were studied by applying premature responses between the conditioning responses at 1 Hz, both in control and in drug-treated preparations. Concerning the time courses of the difference between the Vmax values before and after drug treatment at the same diastolic interval, with increases in drug concentration the intercepts at APD90 were increased but the time constants were unchanged or slightly decreased in 8.1 to 5.4 mM [K+]0, whereas they were increased in 2.7 mM [K+]0. To understand the kinetic behavior of this drug on sodium channels, rate constants for the interaction of phenytoin with three states of channels in terms of the Hondeghem-Katzung model were estimated from the above Vmax experiments. The model most consistent with the present experiments was one with an affinity for inactivated channels 20 times greater than that for resting channels and with a minor affinity for open channels. Phenytoin produced a delay in the time course of recovery of overshoot and action potential duration at 0 mV (APD0), suggesting an additional inhibition of the slow channel by this drug.
Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project
Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-05-01
This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of: 1) the types and magnitudes of gases that could be present in the project cask and, 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-05-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives.
Rezaeian Mahdi
2015-01-01
Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, the releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples from the Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatiles, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask, and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
Stray-insensitive sample-delay-hold buffers for high-frequency switched-capacitor filters
Rijns, J.J.F.; Rijns, J.J.F.; Wallinga, Hans
1991-01-01
Two high-frequency switched-capacitor sample-delay-hold (SDH) buffers are presented. The circuits provide a correct transition from the continuous-time to the discrete-time domain or vice versa. Experimental results show an excellent frequency behavior for clock frequencies up to 25 MHz.
Stray-insensitive switched-capacitor sample-delay-hold buffers for video frequency applications
Rijns, J.J.F.; Rijns, J.J.F.; Wallinga, Hans
1991-01-01
Two video frequency switched-capacitor sample-delay-hold (SDH) buffers are presented. The circuits provide a correct transition from the continuous-time to the discrete-time domain or vice versa. Experimental results show an excellent frequency behaviour for clock frequencies up to 25 MHz.
Chen, Po-Chun; Wang, Yuan-Heng; You, Gene Jiing-Yun; Wei, Chih-Chiang
2017-02-01
Future climatic conditions likely will not satisfy the stationarity assumption. To address this concern, this study applied three methods to analyze non-stationarity in hydrologic conditions. Based on the principle of identifying distribution and trends (IDT) with time-varying moments, we employed parametric weighted least squares (WLS) estimation in conjunction with the non-parametric discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD). Our aim was to evaluate the applicability of non-parametric approaches compared with traditional parameter-based methods. In contrast to most previous studies, which analyzed the non-stationarity of first moments, we incorporated second-moment analysis. Through the estimation of long-term risk, we were able to examine the behavior of return periods under two different definitions: the reciprocal of the exceedance probability of occurrence and the expected recurrence time. The proposed framework represents an improvement over stationary frequency analysis for the design of hydraulic systems. A case study was performed using precipitation data from major climate stations in Taiwan to evaluate the non-stationarity of annual maximum daily precipitation. The results demonstrate the applicability of these three methods in the identification of non-stationarity. For most cases, no significant differences were observed with regard to the trends identified using WLS, DWT, and EEMD. According to the results, a linear model should be able to capture time-variance in either the first or second moment, while parabolic trends should be used with caution due to their characteristic rapid increases. It is also observed that local variations in precipitation tend to be overemphasized by DWT and EEMD. The two definitions provided for the concept of return period allow for ambiguous interpretation. With the consideration of non-stationarity, the return period is relatively small under the definition of expected
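The two return-period definitions contrasted in the abstract above can be made concrete with a small sketch. This is an illustration under assumed numbers (a Gumbel annual-maximum distribution with location mu = 30, scale beta = 8, and a hypothetical upward location trend), not the paper's Taiwan analysis: definition 1 is the reciprocal of the annual exceedance probability, and definition 2 is the expected waiting time until the first exceedance, which differs once the distribution drifts.

```python
import math

def gumbel_cdf(z, mu, beta):
    return math.exp(-math.exp(-(z - mu) / beta))

def rp_exceedance(z, mu, beta):
    """Definition 1: reciprocal of the annual exceedance probability."""
    return 1.0 / (1.0 - gumbel_cdf(z, mu, beta))

def rp_expected_wait(z, mu0, trend, beta, horizon=10000):
    """Definition 2: expected number of years until the first exceedance
    when the Gumbel location drifts by `trend` per year (non-stationary)."""
    expected, survival = 0.0, 1.0
    for t in range(1, horizon + 1):
        p = 1.0 - gumbel_cdf(z, mu0 + trend * t, beta)
        expected += t * p * survival   # first exceedance in year t
        survival *= 1.0 - p
        if survival < 1e-12:
            break
    return expected

# Design level: the stationary 100-year daily maximum for mu=30, beta=8.
z100 = 30.0 - 8.0 * math.log(-math.log(0.99))
rp1 = rp_exceedance(z100, 30.0, 8.0)                  # 100 by construction
rp2_stationary = rp_expected_wait(z100, 30.0, 0.0, 8.0)
rp2_wetter = rp_expected_wait(z100, 30.0, 0.1, 8.0)   # upward trend in mu
```

Under stationarity the two definitions agree; with an upward trend in the location parameter, the expected recurrence time of the same design level shrinks, matching the abstract's observation that the return period becomes relatively small under the expected-waiting-time definition.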
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method.
Minimum sample frequency for multichannel intraluminal impedance measurement of the oesophagus
Bredenoord, AJ; Weusten, BLAM; Timmer, R; Smout, AJPM
2004-01-01
In all systems for impedance monitoring signals are stored in digital format after analog-to-digital conversion at a predefined rate, the sample frequency. We aimed to find the minimum sample frequency required to evaluate oesophageal transit and gastro-oesophageal reflux studies using impedance monitoring
Sampling frequency affects the processing of Actigraph raw acceleration data to activity counts
Brond, J. C.; Arvidsson, D.
2016-01-01
the amount of activity counts generated was less, indicating that raw data stored in the GT3X+ monitor is processed. Between 600 and 1,600 more counts per minute were generated with the sampling frequencies 40 and 100 Hz compared with 30 Hz during running. Sampling frequency affects the processing of Actigraph raw acceleration data to activity counts.
Sarapultseva, E I; Igolkina, J V; Litovchenko, A V
2009-04-01
Electromagnetic radiation at the mobile connection frequency (1 GHz), at the maximum energy flow density permitted in Russia (10 μW/cm²), causes serious functional disorders in the studied unicellular hydrobionts, the infusoria Spirostomum ambiguum: a reduction of their spontaneous motor activity. The form of the biological reaction is uncommon: the effect is threshold, overall, and does not depend on the duration of microwave exposure.
Dynamic groundwater monitoring networks: a manageable method for reviewing sampling frequency.
Moreau-Fournier, Magali F; Daughney, Christopher J
2012-12-01
Optimization of a water quality network through a change in sampling frequency is the only way to increase cost-efficiency without any reduction in the robustness of the data. Existing techniques define optimal sampling frequency based on analysis of historical data from the monitoring network under investigation. Their application to a large network comprised of many sites and many monitored parameters is both technical and challenging. This paper presents a simple non-parametric method for reviewing sampling frequency that is consistent with highly censored environmental data and oriented towards reduction of sampling frequency as a cost-saving measure. Based on simple descriptive statistics, the method is applicable to large networks with long time series and many monitored parameters. The method also provides metrics for interpretation of newly collected data, which enables identification of sites for which a future change in sampling frequency may be necessary, ensuring that the monitoring network is both current and adaptive. Application of this method to the New Zealand National Groundwater Monitoring Programme indicates that reduction of sampling frequency at any site would result in a significant loss of information. This paper also discusses the potential for reducing analysis frequency as an alternative to reduction of sampling frequency.
Passive ultrasonics using sub-Nyquist sampling of high-frequency thermal-mechanical noise.
Sabra, Karim G; Romberg, Justin; Lani, Shane; Degertekin, F Levent
2014-06-01
Monolithic integration of capacitive micromachined ultrasonic transducer arrays with low noise complementary metal oxide semiconductor electronics minimizes interconnect parasitics thus allowing the measurement of thermal-mechanical (TM) noise. This enables passive ultrasonics based on cross-correlations of diffuse TM noise to extract coherent ultrasonic waves propagating between receivers. However, synchronous recording of high-frequency TM noise puts stringent requirements on the analog to digital converter's sampling rate. To alleviate this restriction, high-frequency TM noise cross-correlations (12-25 MHz) were estimated instead using compressed measurements of TM noise which could be digitized at a sampling frequency lower than the Nyquist frequency.
Mathe, Laszlo; Iov, Florin; Sera, Dezso
2014-01-01
The accurate tracking of phase, frequency, and amplitude of different frequency components from a measured signal is an essential requirement for many digitally controlled systems. The accurate and robust tracking of a frequency component from a complex signal was successfully applied, for example...... signal is rich in harmonics and the sampling frequency is close to the tracked frequency component. In this paper different discretization methods and implementation issues, such as Tustin and Backward-Forward Euler, are discussed and compared. A special case is analyzed, when the input signal is rich......
FREQUENCY ESTIMATION OF SINUSOID FROM WIDEBAND USING SUB-NYQUIST SAMPLING
[Anonymous]
2006-01-01
A novel frequency estimation algorithm for wideband signals with sub-Nyquist sampling is proposed in this paper. With the aid of information provided by the auxiliary delayed sampling channel and the aliased frequency estimate for the wideband signal with sub-Nyquist sampling, the frequency aliasing due to sub-Nyquist sampling can be resolved. This method can reduce the complexity of the overall hardware at the cost of an auxiliary sampling channel. Furthermore, in order to alleviate the computation burden for its practicability, a more simplified algorithm is put forward and its validity is proved by our numerical simulation results. The Cramer-Rao Lower Bound (CRLB) of the frequency estimation is also derived at the end of this paper.
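The core disambiguation idea of the abstract above (a delayed auxiliary channel resolves which input frequency folded onto the observed alias) can be sketched as follows. This is a toy version under assumed numbers, not the paper's estimator: it takes the aliased frequency and the measured inter-channel phase difference as given, enumerates all candidate input frequencies, and picks the candidate whose predicted phase 2*pi*f*tau best matches.

```python
import cmath
import math

def resolve_alias(f_alias, dphi, fs, tau, f_max):
    """Enumerate frequencies that fold onto f_alias when sampled at fs and
    pick the one whose predicted inter-channel phase 2*pi*f*tau best
    matches the measured phase difference dphi. Requires f_max*tau < 1 so
    the delay phase itself is unambiguous."""
    best_f, best_err = None, float("inf")
    k = 0
    while k * fs <= f_max:
        for cand in (k * fs + f_alias, k * fs - f_alias):
            if 0 < cand <= f_max:
                # wrapped phase mismatch between prediction and measurement
                err = abs(cmath.phase(cmath.exp(1j * (2 * math.pi * cand * tau - dphi))))
                if err < best_err:
                    best_f, best_err = cand, err
        k += 1
    return best_f

# A 3.7 MHz tone sampled at 1 MS/s appears at 0.3 MHz; a 40 ns delayed
# channel supplies the phase that singles out the true frequency.
fs, tau, f_max = 1.0e6, 40e-9, 5.0e6
f_true = 3.7e6
f_alias = abs(f_true - round(f_true / fs) * fs)
dphi = (2 * math.pi * f_true * tau) % (2 * math.pi)
f_hat = resolve_alias(f_alias, dphi, fs, tau, f_max)
```

In practice both f_alias and dphi would be estimated from noisy DFTs of the two channels, which is where the paper's CRLB analysis applies.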
Draft evaluation of the frequency for gas sampling for the high burnup confirmatory data project
Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-03-26
This report fulfills the M3 milestone M3FT-15SN0802041, “Draft Evaluation of the Frequency for Gas Sampling for the High Burn-up Storage Demonstration Project” under Work Package FT-15SN080204, “ST Field Demonstration Support – SNL”. This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of: 1) the types and magnitudes of gases that could be present in the project cask and, 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations. Gas sampling will provide information on the presence of residual water (and byproducts associated with its reactions and decomposition) and breach of cladding, which could inform the decision of when to open the project cask.
Burcharth, Hans F.; Andersen, Thomas Lykke; Meinert, Palle
2008-01-01
This paper discusses the influence of wave load sampling frequency on calculated sliding distance in an overall stability analysis of a monolithic caisson. It is demonstrated by a specific example of caisson design that for this kind of analyses the sampling frequency in a small scale model could...... be as low as 100 Hz in model scale. However, for design of structure elements like the wave wall on the top of a caisson the wave load sampling frequency must be much higher, in the order of 1000 Hz in the model. Elastic-plastic deformations of foundation and structure were not included in the analysis....
The application of sample-and-hold circuits in the laser frequency-shifting
Shuyu Zhou; Shanyu Zhou; Yuzhu Wang
2005-01-01
A new method of frequency-shifting for a diode laser is realized. Using a sample-and-hold circuit, the error signal can be held by the circuit during frequency shifting. This avoids the restraint on locking, or even loss of lock, caused by the servo circuit when a step-up voltage is applied to the piezoelectric transducer (PZT) to achieve laser frequency-shifting.
Unfolded Frequency Response and Model of a Multi-Tap Direct Sampling Mixer
PAN Yun; GE Ning; DONG Zaiwang
2008-01-01
A transform method was used to model a discrete-time multi-tap direct sampling mixer. The method transforms the mixed filtering and down-sampling stages into separate cascaded filtering and sampling stages to determine the unfolded frequency response, which shows the anti-aliasing ability of the mixer. The transformation can also be applied to other mixed-signal and multi-rate receiver systems to analyze their unfolded frequency responses. The transformed system architecture was used to calculate the unfolded frequency response of the multi-tap direct sampling mixer, which was compared with the noise-free mixer model in the Advanced Design System 2005A environment to further evaluate the frequency response. The simulations show that the -3 dB bandwidth is 3.0 MHz and the voltage gain is attenuated by 1.5 dB within a 1-MHz baseband bandwidth.
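The "unfolded response" view in the abstract above separates the tap-weight FIR filtering from the decimation that folds frequencies. A simplified sketch under assumed parameters (uniform tap weights and illustrative rates, not the paper's mixer design) shows both pieces: the FIR magnitude response before decimation, and the set of input frequencies that alias onto one output bin.

```python
import cmath
import math

def fir_response(weights, f, fs):
    """Magnitude of the tap-weight FIR response at frequency f (sample rate fs)."""
    return abs(sum(w * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, w in enumerate(weights)))

def folded_inputs(f_base, fs_out, f_max):
    """Input frequencies up to f_max that alias onto f_base at the
    decimated output rate fs_out."""
    cands, k = set(), 0
    while k * fs_out - f_base <= f_max:
        for f in (k * fs_out + f_base, k * fs_out - f_base):
            if 0 <= f <= f_max:
                cands.add(f)
        k += 1
    return sorted(cands)

# An 8-tap uniform accumulator has gain 8 at DC; the tones listed in
# `aliases` all land on the same 1 MHz output bin after decimation to 10 MS/s,
# so the FIR response at those frequencies sets the anti-aliasing rejection.
resp_dc = fir_response([1.0] * 8, 0.0, 80.0e6)
aliases = folded_inputs(1.0e6, 10.0e6, 30.0e6)
```

Evaluating `fir_response` at each frequency in `aliases` is precisely the information an unfolded frequency response presents in one plot.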
Xu, Henglong; Yong, Jiang; Xu, Guangjian
2015-12-30
Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may recover >75% of the seasonal variance, while the traditional 4-sampling may capture considerably less in marine ecosystems.
Frequency Mixing Magnetic Detection Scanner for Imaging Magnetic Particles in Planar Samples.
Hong, Hyobong; Lim, Eul-Gyoon; Jeong, Jae-Chan; Chang, Jiho; Shin, Sung-Woong; Krause, Hans-Joachim
2016-06-09
The setup of a planar Frequency Mixing Magnetic Detection (p-FMMD) scanner for performing Magnetic Particles Imaging (MPI) of flat samples is presented. It consists of two magnetic measurement heads on both sides of the sample mounted on the legs of a u-shaped support. The sample is locally exposed to a magnetic excitation field consisting of two distinct frequencies, a stronger component at about 77 kHz and a weaker field at 61 Hz. The nonlinear magnetization characteristics of superparamagnetic particles give rise to the generation of intermodulation products. A selected sum-frequency component of the high and low frequency magnetic field incident on the magnetically nonlinear particles is recorded by a demodulation electronics. In contrast to a conventional MPI scanner, p-FMMD does not require the application of a strong magnetic field to the whole sample because mixing of the two frequencies occurs locally. Thus, the lateral dimensions of the sample are just limited by the scanning range and the supports. However, the sample height determines the spatial resolution. In the current setup it is limited to 2 mm. As examples, we present two 20 mm × 25 mm p-FMMD images acquired from samples with 1 µm diameter maghemite particles in silanol matrix and with 50 nm magnetite particles in aminosilane matrix. The results show that the novel MPI scanner can be applied for analysis of thin biological samples and for medical diagnostic purposes.
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2016-11-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
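The statistical PMP estimation mentioned in the abstract above is, at its core, Hershfield's frequency-factor formula: PMP = mean + k_m * standard deviation of the annual maximum series. The sketch below uses a hypothetical 10-year series and an illustrative k_m = 15, not the envelope curve derived in the paper.

```python
def pmp_estimate(annual_maxima, k_m):
    """Hershfield-style statistical PMP: mean plus k_m times the sample
    standard deviation of the annual maximum series."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    return mean + k_m * var ** 0.5

# Hypothetical 10-year series of annual maximum daily rainfall (mm) and an
# illustrative envelope frequency factor k_m = 15 (not the paper's curve).
annual_max = [40, 55, 38, 62, 47, 51, 44, 70, 58, 49]
pmp = pmp_estimate(annual_max, 15.0)
```

The paper's contribution is the envelope curve supplying a defensible k_m from 10,194 station-years; once k_m is known, only the mean and standard deviation of the local series are needed, as the abstract notes.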
Howarth, Samuel J; Callaghan, Jack P
2009-10-01
The influence of signal sampling frequency and the low-pass digital filter cutoff frequency on the minimum number of padding points when applied to kinematic data are factors often absent in data processing descriptions. This investigation determined a relationship between the number of padding points and the ratio of filter cutoff to signal sampling frequency (f_c/f_s). Two kinematic recordings were used, representing signals with high and low magnitudes of deterministic variation at the signals' beginning. Signal sampling rates (40-128 Hz) were generated at intervals of 1 Hz. Filter cutoff frequency was iterated from 2 to 10 Hz at 0.5 Hz intervals. Data extrapolation was performed using three different techniques (first-order polynomial, third-order polynomial, and data reflection). A maximum of 2 s of padding points were added to the beginning of each test signal, which was then dual-pass filtered using a second-order Butterworth filter. For each successive increase in the number of padding points, the filtered test signal was compared to a criterion signal and the root mean square difference (RMSD) over the first second was calculated. The number of padding points required to attain a constant RMSD was recorded as the minimum number of padding points needed for that ratio of filter cutoff to sampling frequency. As f_c/f_s increased, the number of padding points decreased non-linearly. More padding points were required for the signal with higher deterministic variation at the beginning than for the signal with lower deterministic variation. Additional padding points (beyond the determined minimum) did not further reduce the RMSD. The largest temporal extrapolation determined by the algorithm to produce a stable RMSD was 1 s. It is suggested that a minimum of 1 s of extraneous data be used when using a low-pass recursive digital filter to remove noise from kinematic data.
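The processing pipeline in the abstract above (pad the signal, dual-pass a second-order Butterworth filter for zero phase, trim the padding) can be sketched end to end. This is a minimal illustration with assumed parameters (fs = 100 Hz, fc = 6 Hz, data-reflection padding, one of the paper's three extrapolation techniques), not the authors' exact algorithm.

```python
import math

def butter2_coeffs(fc, fs):
    """Second-order Butterworth low-pass via the bilinear transform."""
    k = math.tan(math.pi * fc / fs)
    norm = 1.0 / (1.0 + math.sqrt(2.0) * k + k * k)
    b0 = k * k * norm
    a1 = 2.0 * (k * k - 1.0) * norm
    a2 = (1.0 - math.sqrt(2.0) * k + k * k) * norm
    return (b0, 2.0 * b0, b0), (a1, a2)

def lfilter(b, a, x):
    """Direct-form IIR filtering with zero initial state."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

def dual_pass_filter(x, fc, fs, n_pad):
    """Reflect n_pad points about each end, filter forward then backward
    (zero phase), and trim the padding off."""
    b, a = butter2_coeffs(fc, fs)
    head = [2.0 * x[0] - v for v in x[n_pad:0:-1]]
    tail = [2.0 * x[-1] - v for v in x[-2:-n_pad - 2:-1]]
    y = lfilter(b, a, head + list(x) + tail)
    y = lfilter(b, a, y[::-1])[::-1]
    return y[n_pad:n_pad + len(x)]

# Demo: a 2 Hz "kinematic" signal contaminated with 30 Hz noise.
fs, fc = 100.0, 6.0
clean = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(300)]
noisy = [c + 0.3 * math.sin(2 * math.pi * 30.0 * t / fs) for t, c in enumerate(clean)]
filtered = dual_pass_filter(noisy, fc, fs, n_pad=50)
rms_before = math.sqrt(sum((n - c) ** 2 for n, c in zip(noisy, clean)) / len(clean))
rms_after = math.sqrt(sum((f - c) ** 2 for f, c in zip(filtered, clean)) / len(clean))
```

Without the padding, the zero initial filter state would distort the first fraction of a second, which is exactly the start-up RMSD the study quantifies.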
Calculating of river water quality sampling frequency by the analytic hierarchy process (AHP).
Do, Huu Tuan; Lo, Shang-Lien; Phan Thi, Lan Anh
2013-01-01
River water quality sampling frequency is an important aspect of the river water quality monitoring network. A suitable sampling frequency for each station as well as for the whole network will provide a measure of the real water quality status for the water quality managers as well as the decision makers. The analytic hierarchy process (AHP) is an effective method for decision analysis and calculation of weighting factors based on multiple criteria to solve complicated problems. This study introduces a new procedure to design river water quality sampling frequency by applying the AHP. We introduce and combine weighting factors of variables with the relative weights of stations to select the sampling frequency for each station, monthly and yearly. The new procedure was applied for Jingmei and Xindian rivers, Taipei, Taiwan. The results showed that sampling frequency should be increased at high weighted stations while decreased at low weighted stations. In addition, a detailed monitoring plan for each station and each month could be scheduled from the output results. Finally, the study showed that the AHP is a suitable method to design a system for sampling frequency as it could combine multiple weights and multiple levels for stations and variables to calculate a final weight for stations, variables, and months.
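The weighting machinery behind the abstract above is standard AHP: each pairwise-comparison matrix yields priority weights as its normalized principal eigenvector. The sketch below uses a hypothetical 3-criterion matrix on Saaty's 1-9 scale, not the Jingmei/Xindian station and variable hierarchy from the study.

```python
def ahp_weights(matrix, iters=100):
    """Priority weights of a pairwise-comparison matrix: the normalized
    principal eigenvector, found by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return w

# Hypothetical comparison: criterion 1 moderately outweighs criterion 2
# (3) and strongly outweighs criterion 3 (5); reciprocals below the diagonal.
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
weights = ahp_weights(M)
```

In the paper's procedure, weights like these are computed at each level (variables, stations, months) and multiplied down the hierarchy to give each station its final sampling-frequency weight.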
Effect of sampling frequency on the measurement of phase-locked action potentials.
Go Ashida
2010-09-01
Phase-locked spikes in various types of neurons encode temporal information. To quantify the degree of phase-locking, the metric called vector strength (VS) has been most widely used. Since VS is derived from spike timing information, error in the measurement of spike occurrence times results in errors in the VS calculation. In electrophysiological experiments, the timing of an action potential is detected with finite temporal precision, which is determined by the sampling frequency. In order to evaluate the effects of the sampling frequency on the measurement of VS, we derive theoretical upper and lower bounds of VS from spikes collected with finite sampling rates. We next estimate errors in VS assuming random sampling effects, and show that our theoretical calculation agrees with data from electrophysiological recordings in vivo. Our results provide a practical guide for choosing the appropriate sampling frequency in measuring VS.
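Vector strength, the metric discussed in the abstract above, is the magnitude of the mean unit phase vector of the spike times. A minimal sketch (with assumed stimulus and ADC rates, not the paper's recordings) also shows the effect it analyzes: quantizing spike times to a sampling grid jitters the phases and pulls VS below its true value.

```python
import cmath
import math

def vector_strength(spike_times, freq):
    """VS = |mean unit phase vector|: 1 = perfect locking, 0 = uniform phases."""
    vec = sum(cmath.exp(2j * math.pi * freq * t) for t in spike_times)
    return abs(vec) / len(spike_times)

# Perfectly locked spikes at a 440 Hz stimulus, then the same spike train
# quantized to a 10 kHz sampling grid (timing error up to half a sample).
f_stim, fs_adc = 440.0, 10000.0
exact = [k / f_stim for k in range(200)]
quantized = [round(t * fs_adc) / fs_adc for t in exact]
vs_exact = vector_strength(exact, f_stim)
vs_quantized = vector_strength(quantized, f_stim)
```

Here the quantized train's VS drops slightly below 1 even though the underlying spikes are perfectly locked, which is the sampling-induced bias the paper bounds theoretically.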
Sampling Frequency Offset Estimation Methods for DVB-T/H Systems
Kyung Hoon Won
2010-03-01
A precise estimation and compensation of SFO (sampling frequency offset) is an important issue in OFDM (orthogonal frequency division multiplexing) systems, because sampling frequency mismatch between the transmitter and the receiver dramatically degrades system performance due to the loss of orthogonality between the subcarriers. However, the conventional method suffers serious degradation of SFO estimation performance in low-SNR (signal-to-noise ratio) or large Doppler frequency environments. Therefore, in this paper, we propose two SFO estimation methods that achieve stable operation in low-SNR and large Doppler frequency environments. The proposed SFO estimation/compensation methods are mainly specialized for the DVB (Digital Video Broadcasting) system, and we verify through extensive simulation that they offer good performance and stable operation.
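The physical effect behind SFO estimation in the abstract above is that a clock offset delta rotates subcarrier k by approximately 2*pi*k*delta*(N_sym/N_FFT) between consecutive OFDM symbols, so delta can be read off as the slope of pilot phase differences versus subcarrier index. The sketch below is a generic least-squares version of that idea under hypothetical DVB-like numerology, not either of the paper's proposed methods.

```python
import math
import random

def estimate_sfo(pilot_bins, dphi, n_fft, n_sym):
    """Least-squares slope of inter-symbol pilot phase rotation vs subcarrier
    index k, using dphi_k = 2*pi*k*delta*(n_sym/n_fft)."""
    slope = (sum(k * p for k, p in zip(pilot_bins, dphi))
             / sum(k * k for k in pilot_bins))
    return slope * n_fft / (2.0 * math.pi * n_sym)

# Simulate noisy pilot phase differences for a 20 ppm sampling clock offset
# (hypothetical numerology: 2k FFT, 1/4 guard interval).
rng = random.Random(0)
n_fft, n_sym = 2048, 2048 + 512
delta_true = 20e-6
pilot_bins = [k for k in range(-800, 801, 50) if k != 0]
dphi = [2 * math.pi * k * delta_true * n_sym / n_fft + rng.gauss(0.0, 0.01)
        for k in pilot_bins]
delta_hat = estimate_sfo(pilot_bins, dphi, n_fft, n_sym)
```

The phase noise term (0.01 rad per pilot here) is where low SNR and Doppler bite in practice, which is why the paper focuses on robustness in those regimes.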
Takemura, Shunsuke; Kobayashi, Manabu; Yoshimoto, Kazuo
2016-10-01
A frequency-dependent model of the apparent radiation pattern has been extensively incorporated into engineering and scientific applications for high-frequency seismic waves, but distance-dependent properties have not yet been fully taken into account. We investigated the unified characteristics of the frequency and distance dependences of both apparent P- and S-wave radiation patterns during local crustal earthquakes. Observed distortions of the apparent P- and S-wave radiation patterns could be simply modeled as a function of the normalized hypocentral distance, the product of the wave number and the hypocentral distance. This behavior suggests that the major cause of distortion of the apparent radiation pattern is seismic wave scattering and diffraction within the heterogeneous crust. On the basis of the observed dependency on normalized hypocentral distance, we proposed a method for predicting the spatial distributions of maximum P- and S-wave amplitudes. Our method, which incorporates the normalized-hypocentral-distance dependence of the apparent radiation pattern, successfully reproduced the observed spatial distributions of maximum P- and S-wave amplitudes over wide frequency and distance ranges.
Using high-frequency sampling to detect effects of atmospheric pollutants on stream chemistry
Stephen D. Sebestyen; James B. Shanley; Elizabeth W. Boyer
2009-01-01
We combined information from long-term (weekly over many years) and short-term (high-frequency during rainfall and snowmelt events) stream water sampling efforts to understand how atmospheric deposition affects stream chemistry. Water samples were collected at the Sleepers River Research Watershed, VT, a temperate upland forest site that receives elevated atmospheric...
Igne, Benoît; Arai, Hiroaki; Drennen, James K; Anderson, Carl A
2016-09-01
While the sampling of pharmaceutical products typically follows well-defined protocols, the parameterization of spectroscopic methods and their associated sampling frequency is not standard. Whereas for blending the sampling frequency is limited by the nature of the process, in other processes, such as tablet film coating, practitioners must determine the best approach to collecting spectral data. The present article studied how sampling practices affect the interpretation of the results provided by a near-infrared spectroscopy method for monitoring tablet moisture and coating weight gain during a pan-coating experiment. Several coating runs were monitored with different sampling frequencies (with or without co-adds, also known as sub-samples) and with spectral averaging corresponding to processing cycles (1 to 15 pan rotations). Beyond integrating the sensor into the equipment, the present work demonstrated that it is necessary to have a good sense of the underlying phenomena that have the potential to affect the quality of the signal. The effects of co-adds and averaging were significant with respect to the quality of the spectral data. However, the type of output obtained from a sampling method dictates the type of information that one can gain on the dynamics of a process. Thus, different sampling frequencies may be needed at different stages of process development.
Impact of sampling frequency in the analysis of tropospheric ozone observations
M. Saunois
2012-08-01
Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a typical frequency of 4 to 12 profiles a month. Here we quantify the uncertainty introduced by low-frequency sampling in the determination of means and trends. To do this, the high-frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profiles over airports, such as Frankfurt, have been subsampled at two typical ozone sonde frequencies of 4 and 12 profiles per month. We found the lowest sampling uncertainty on seasonal means at 700 hPa over Frankfurt: around 5% for a frequency of 12 profiles per month and 10% for a frequency of 4 profiles per month. However, the uncertainty can reach up to 15 and 29% at the lowest altitude levels. As a consequence, the sampling uncertainty at the lowest frequency could be higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the 95% confidence limit on the seasonal mean derived from the subsamples is similar to the sampling uncertainty and suggest using it as an estimate of the sampling uncertainty. Similar results are found at six other Northern Hemisphere sites. We show that the sampling substantially impacts the inter-annual variability and the trend derived over the period 1998–2008, both in magnitude and in sign, throughout the troposphere. Also, a tropical case is discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008. For this site, we found that the sampling uncertainty in the free troposphere is around 8 and 12% at 12 and 4 profiles a month, respectively.
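The subsampling experiment behind these uncertainty numbers can be sketched in a few lines. This is a toy reconstruction with synthetic data (the series, its mean and spread are illustrative, not MOZAIC measurements), showing how the relative error of a seasonal mean grows as fewer profiles per month are retained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "daily ozone" series for one 90-day season: a mean level plus
# day-to-day variability (illustrative values, not MOZAIC data).
daily = 50.0 + 10.0 * rng.standard_normal(90)
true_mean = daily.mean()

def subsample_error(per_month, trials=2000):
    """RMS relative error of the seasonal mean when only `per_month`
    randomly chosen profiles per month are available."""
    n = 3 * per_month                       # profiles in a 3-month season
    errs = []
    for _ in range(trials):
        pick = rng.choice(daily, size=n, replace=False)
        errs.append((pick.mean() - true_mean) / true_mean)
    return float(np.sqrt(np.mean(np.square(errs))))

err_4 = subsample_error(4)      # ozone-sonde-like: 4 profiles a month
err_12 = subsample_error(12)    # denser: 12 profiles a month
```

As in the paper, the coarser 4-profiles-a-month sampling yields a larger uncertainty on the seasonal mean than the 12-profiles-a-month sampling.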
Practical iterative learning control with frequency domain design and sampled data implementation
Wang, Danwei; Zhang, Bin
2014-01-01
This book is on iterative learning control (ILC) with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled-data methods. This is the first book on ILC to adopt frequency domain and sampled-data methodologies. The frequency domain design methods offer ILC users insights into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth of the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled-data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC-controlled robotic system. The experimental results show that the machines can work in much h...
ARUN, K.
2016-05-01
A modified digital signal processing procedure is described for the on-line estimation of the DC, fundamental and harmonic components of a periodic signal. A frequency-locked loop (FLL) incorporated within the parallel structure of observers is proposed to accommodate a wide range of frequency drift. The error in frequency generated under drifting frequencies is used for changing the sampling frequency of the composite observer, so that the number of samples per cycle of the periodic waveform remains constant. A standard coupled oscillator with automatic gain control is used as a numerically controlled oscillator (NCO) to generate the enabling pulses for the digital observer. The NCO gives an integer multiple of the fundamental frequency, making it suitable for power quality applications. Another observer, with DC and second-harmonic blocks in the feedback path, acts as a filter and reduces the double-frequency content. A systematic study of the FLL is done and a method is proposed to design the controller. The performance of the FLL is validated through simulation and experimental studies. To illustrate applications of the new FLL, the estimation of individual harmonics from a nonlinear load and the design of a variable-sampling resonant controller for a single-phase grid-connected inverter are presented.
Wahab, M Farooq; Dasgupta, Purnendu K; Kadjo, Akinde F; Armstrong, Daniel W
2016-02-11
With increasingly efficient columns, eluite peaks are increasingly narrower. To take full advantage of this, the choice of the detector response time and the data acquisition rate, a.k.a. detector sampling frequency, has become increasingly important. In this work, we revisit the concept of data sampling from the theorem variously attributed to Whittaker, Nyquist, Kotelnikov, and Shannon. Focusing on time scales relevant to the current practice of high performance liquid chromatography (HPLC) and optical absorbance detection (the most commonly used method), Fourier transformation shows that even for very narrow simulated peaks the theoretical minimum sampling frequency is still relatively low. For fast chromatography on a state-of-the-art column (38,000 plates), we evaluate the responses produced by different present-generation instruments, each with its unique black-box digital filters. We show that the common wisdom of sampling 20 points per peak can be inadequate for high-efficiency columns and that the sampling frequency and response choices do affect the peak shape. If the sampling frequency is too low or the response time is too large, the observed peaks will not remain as narrow as they really are; this is especially true for high-efficiency and high-speed separations. It is shown that both the sampling frequency and digital filtering affect the retention time, noise amplitude, peak shape and width in a complex fashion. We show how a square-wave-driven light-emitting-diode source can reveal the nature of the embedded filter. We discuss time uncertainties related to the choice of sampling frequency. Finally, we suggest steps to obtain optimum results from a given system.
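The peak-distortion effect described above is easy to reproduce numerically. The sketch below (not from the paper; the peak width, response times and the moving-average filter standing in for a detector's embedded digital filter are all illustrative assumptions) shows how a slow detector response flattens a narrow chromatographic peak:

```python
import numpy as np

# A narrow Gaussian peak (sigma = 0.5 s), as from a high-efficiency column,
# generated on a fine time grid to stand in for the "true" detector signal.
sigma = 0.5
t = np.arange(0.0, 20.0, 0.001)
peak = np.exp(-0.5 * ((t - 10.0) / sigma) ** 2)   # unit-height peak

def filtered_height(response_time, dt=0.001):
    """Apparent peak height after a moving-average filter of the given
    response time (a crude stand-in for an instrument's embedded filter)."""
    n = max(1, int(response_time / dt))
    kernel = np.ones(n) / n
    return np.convolve(peak, kernel, mode="same").max()

h_fast = filtered_height(0.01)   # 10 ms response: nearly undistorted
h_slow = filtered_height(2.0)    # 2 s response: peak visibly flattened
```

With a 10 ms response the apparent height stays near 1, while a 2 s response time, large relative to the 0.5 s peak width, noticeably clips the peak, which is the qualitative behavior the paper quantifies for real instruments.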
Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples
Liu Xin
2015-09-01
This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and DC component are obtained by minimizing the least-squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or a number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against different methods with respect to the unbiased Cramer-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the tendency of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
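The three-parameter sine fit used inside the search above is a standard linear least-squares problem once the frequency is fixed. Here is a minimal sketch of that inner step (the golden-section search over frequency is omitted; signal parameters are illustrative):

```python
import numpy as np

def sine_fit_3param(x, t, freq):
    """Three-parameter LS sine fit at a known frequency: solves
    x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C and returns
    (amplitude, phase, dc)."""
    D = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(D, x, rcond=None)
    # x = amp*cos(2*pi*f*t + phase) + dc  with amp, phase recovered from A, B
    return np.hypot(A, B), np.arctan2(-B, A), C

fs, n = 1000.0, 512
t = np.arange(n) / fs
x = 2.0 * np.cos(2 * np.pi * 50.3 * t + 0.7) + 0.5   # amp 2, phase 0.7, dc 0.5
amp, phase, dc = sine_fit_3param(x, t, 50.3)
```

Because the model is linear in A, B and C, a noise-free signal at the assumed frequency is recovered exactly; the paper's contribution lies in locating that frequency accurately from only three FFT samples before this fit is applied.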
Impact of sampling frequency in the analysis of tropospheric ozone observations
M. Saunois
2011-10-01
Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a low time resolution, with a typical frequency of 12 or 4 profiles a month. Here we discuss and quantify the uncertainty in the analysis of such data sets using high-frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profile data sets, such as the one over Frankfurt. We subsampled the MOZAIC data set at the two typical ozone sonde frequencies. We find that the uncertainty introduced by the coarser sampling is around 8% for a frequency of 12 profiles a month (14% for a frequency of 4 profiles a month) in the free troposphere over Frankfurt. As a consequence, this uncertainty at the lowest frequency is higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the average intra-seasonal variability represented in the samples is similar to the sampling uncertainty and could also be used as an estimate of the sampling error in some Northern Hemisphere cases. The sampling substantially impacts the inter-annual variability and the trend derived over the period 1995–2008, both in magnitude and in sign, throughout the troposphere. Therefore, the sampling effect could be part of the observed discrepancies between European sites. Similar results regarding the sampling uncertainty are found at five other Northern Hemisphere sites. Also, a tropical case is discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008.
Chebakova, V. Ju; Gaisin, A. F.; Zheltukhin, V. S.
2016-11-01
A numerical study of the interaction between a capacitively coupled radio frequency (CCRF) discharge and materials is performed. A nonlinear problem is solved that includes initial-boundary value problems for the electrons, ions, neutral atoms, metastable atoms and gas temperature, together with Poisson's equation. A harmonic voltage on the loaded electrodes and Ohm's law for the sample are assumed. Results of calculations of the model problem at a pressure of p = 760 Torr and a generator frequency of f = 13.76 MHz in the local approximation are presented.
Note: A sub-sampling technique for frequency locking in Doppler wind lidar.
Yao, Yuan; Li, Feng; Chen, Lian; Jin, Ge
2016-05-01
The double-edge technique is employed in Doppler wind lidar for detecting the Doppler frequency shift. A dedicated locking channel, employing one channel of a triple Fabry-Perot etalon, is designed to compensate for the effects caused by the frequency drift of the outgoing laser. Agilent oscilloscopes, with a sampling rate of 2.5 GSPS, have been employed to obtain accurate amplitudes of the narrow pulses in existing experiments. In order to meet the requirements of real-time operation and integration, a sub-sampling technique based on statistical theory is presented. With this technique, the drift can be acquired at a sub-sampled rate of 250 MSPS. A prototype was designed, and the test results show that the prototype, while providing real-time operation and better integration, has performance comparable to the oscilloscope for frequency locking.
A Frequency Matching Method for Generation of a Priori Sample Models from Training Images
Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan
2011-01-01
This paper presents a Frequency Matching Method (FMM) for the generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
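The pattern-frequency statistic that the FMM matches can be sketched concretely. The following toy example (binary image and 2-by-2 patterns are illustrative choices, not the paper's configuration) counts the relative frequency of every k-by-k pattern in a small training image:

```python
import numpy as np
from collections import Counter

def pattern_frequencies(image, k=2):
    """Relative frequency of all k-by-k patterns in a 2-D image: the
    multi-point statistic the FMM drives an image to match."""
    counts = Counter()
    rows, cols = image.shape
    for i in range(rows - k + 1):
        for j in range(cols - k + 1):
            counts[tuple(image[i:i + k, j:j + k].ravel())] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# A tiny "training image" of vertical stripes.
training = np.array([[0, 1, 0, 1],
                     [0, 1, 0, 1],
                     [0, 1, 0, 1]])
freqs = pattern_frequencies(training)
```

An FMM-style update loop would perturb voxels of a candidate image and accept changes that bring its `pattern_frequencies` closer to those of the training image.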
An extension of command shaping methods for controlling residual vibration using frequency sampling
Singer, Neil C.; Seering, Warren P.
1992-01-01
The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems such as robots whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency shaping provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.
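The simplest instance of the impulse shaping technique this work extends is the two-impulse zero-vibration (ZV) shaper. The sketch below is a textbook construction, not the authors' frequency-sampling extension, and the mode frequency and damping values are illustrative; it shows why the unshaped sensitivity to a detuned resonance motivates their work:

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one vibratory mode:
    impulse amplitudes and times that cancel residual vibration."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)   # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])               # half damped period
    return amps, times

def residual(amps, times, freq_hz, zeta):
    """Residual vibration magnitude left by an impulse sequence at the
    given mode (zero when the shaper matches the mode exactly)."""
    wn = 2 * np.pi * freq_hz
    wd = wn * np.sqrt(1 - zeta**2)
    w = amps * np.exp(zeta * wn * times)
    return np.hypot((w * np.cos(wd * times)).sum(),
                    (w * np.sin(wd * times)).sum())

amps, times = zv_shaper(2.0, 0.05)
r_matched = residual(amps, times, 2.0, 0.05)   # mode as designed
r_detuned = residual(amps, times, 2.4, 0.05)   # mode shifted by 20%
```

The matched residual is essentially zero, but a 20% shift in the resonant frequency, as happens in robots whose frequencies change during motion, leaves substantial residual vibration; the frequency-sampling constraints in the paper are designed to keep the residual small over such a band.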
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and on critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.
2016-08-01
A two-port network model of a wireless power transfer system that takes into account the distributed capacitances, using a PP network topology with top coupling, is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified in an experiment with a high-power home lighting load of 230 V, 100 W, tested at two forced resonant frequencies, namely 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.
Hartwig, V; Giovannetti, G; Vanello, N; Costantino, M; Landini, L; Benassi, A
2006-01-01
An electrodeless measurement system based on a resonant circuit is proposed for measuring the dielectric properties of liquid samples at radio frequency (RF). Generally, properties such as the dielectric constant, loss factor and conductivity are measured with parallel-plate capacitor cells; this method has several limitations for particular liquid samples and in the radio frequency range. Our method is based on measuring the resonance frequency and quality factor of an LC resonant circuit in different measuring conditions, without and with the liquid sample placed inside a test tube around which a home-made coil is wrapped. The measurement is performed using a network analyzer and a dual-loop probe inductively coupled with the resonant circuit. One of the advantages of this method is the absence of contact between the liquid sample and the measurement electrodes. In this paper, the measurement system is described and test measurements of the dielectric properties of conventional liquids are reported.
Moon, Il-Ju; Kim, Sung-Hun; Klotzbach, Phil; Chan, Johnny C. L.
2016-06-01
Recently, a pronounced global poleward shift in the latitude at which the maximum intensities of tropical cyclones (TCs) occur has been identified. Moon et al (2015 Environ. Res. Lett. 10 104004) reported that the poleward migration is significantly influenced by changes in interbasin frequency. These frequency changes are a larger contributor to the poleward shift than the intrabasin migration component. The strong role of interbasin frequency changes in the poleward migration also suggests that the poleward trend could change to an opposite, equatorward trend in the future due to multi-decadal variability that significantly impacts Northern Hemisphere TC frequency. In the accompanying comment, Kossin et al (2016 Environ. Res. Lett. 11 068001) questioned the novelty and robustness of our results by raising issues associated with subsampling, contributions from some basins to the poleward migration, and data dependency. Here, we explain the originality and importance of our main findings, which differ from those of Kossin et al (2014 Nature 509 349-52), and reaffirm that our conclusions hold regardless of the issues that were raised.
Sharifi-Rad, Javad; Hoseini Alfatemi, Seyedeh Mahsan; Sharifi-Rad, Mehdi; Miri, Abdolhossein
2015-01-01
Background: Viruses are one of the major causes of gastrointestinal disease worldwide, and commonly infect children less than five years of age in developing countries. Objectives: The current study aimed to determine the frequency of adenoviruses, rotaviruses and noroviruses among diarrhea samples collected from infants in Zabol, south-east Iran. This study is the first investigation of adenoviruses, rotaviruses and noroviruses among diarrhea samples in Zabol. Patients and Methods: In th...
Study on Calculation Methods for Sampling Frequency of Acceleration Signals in Gear System
Feibin Zhang
2013-01-01
The mechanisms of vibration acceleration signals in normal and defective gears are studied. An improved bending-torsion vibration model is established, in which the effects of time-varying meshing stiffness and damping, the torsional stiffness of the transmission shaft, the elastic bearing support, the driving motor, and the external load are taken into consideration. Vibration signals are then simulated with the model under diverse sampling frequencies. The influences of the input shaft's rotating frequency and of the teeth number and module of the gears are investigated through analysis of the simulated signals. Finally, formulas are proposed to calculate the acceleration signal bandwidth and the critical and recommended sampling frequencies of the gear system. The applicability of the formulas when there is a crack in the tooth root is discussed. The calculation results agree well with the experiments.
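The kind of sampling-frequency arithmetic the abstract alludes to can be sketched as follows. The numbers and the 2.56x margin are illustrative conventions (the 2.56 factor is a common FFT-analyzer rule of thumb), not the paper's specific formulas:

```python
# Gear-mesh frequency and a sampling-rate rule of thumb for a gear
# vibration signal (all numbers are illustrative).
shaft_freq_hz = 25.0      # input shaft rotating frequency
teeth = 32                # number of teeth on the gear

# The dominant vibration component occurs at the meshing frequency:
mesh_freq_hz = shaft_freq_hz * teeth            # 25 * 32 = 800 Hz

harmonics = 5                                   # mesh harmonics kept in band
signal_bandwidth_hz = harmonics * mesh_freq_hz  # 4 kHz of useful signal
nyquist_min_hz = 2 * signal_bandwidth_hz        # critical sampling frequency
recommended_hz = 2.56 * signal_bandwidth_hz     # common engineering margin
```

A formula of this shape, bandwidth from the mesh frequency and its retained harmonics, then a sampling rate at or above twice that bandwidth, is what the paper's calculation procedure formalizes and validates against experiments.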
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Carcasses suspected of containing sulfa and antibiotic residues; sampling frequency; disposition of affected carcasses and parts. 310.21 Section 310.21 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF...
Vergara, Ismael A; Villouta, Pamela; Herrera, Sandra; Melo, Francisco
2012-05-01
The thirteen autosomal STR loci of the CODIS system were typed from the DNA of 732 unrelated male individuals sampled from different locations in Chile. This is the first report of allele frequencies for the thirteen STR loci defined in the CODIS system for the Chilean population.
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple, multiscale case, even when the different components are close to each other.
Radio Frequency Surface Impedance Characterization System for Superconducting Samples at 7.5 GHz
Binping Xiao, Charles Reece, Michael Kelley, Larry Phillips, Rongli Geng, Haipeng Wang, Frank Marhauser
2011-05-01
A radio frequency (RF) surface impedance characterization (SIC) system that uses a sapphire-loaded Nb cavity operating at 7.5 GHz has been fabricated to measure the RF surface impedance of flat superconducting samples. Currently, the SIC system can make direct calorimetric surface impedance measurements in the central 0.8 cm2 area of 5 cm diameter disk samples in a temperature range from 2 to 20 K, exposed to a magnetic flux density of up to 14 mT. As an application, we present the measurement results for a bulk Nb sample.
[Frequency of dermatophytes in a sample of cats in the urban area of Gran Mendoza, Argentina].
López, María Florencia; Grilli, Diego; Degarbo, Stella; Arenas, Graciela; Telechea, Adriana
2012-01-01
The cat, considered the main reservoir of Microsporum canis, lives in urban areas and also plays an important role in the emergence of dermatomycoses. The aim was to determine and analyse the frequency of zoonotic dermatophytes in a sample of cats in an urban area of the Gran Mendoza region. The animals selected were household cats and cats less than one year old that came from shelters and kennels in urban areas of the Gran Mendoza region. A total of 45 samples from cats with and without dermatological lesions were analysed. The samples were collected through skin scraping, hair removal and the Mackenzie brush technique, respectively. Direct observation was made with KOH and glycerol after heat exposure. Samples were cultured on Sabouraud and Lactrimel agar slants with chloramphenicol and cycloheximide for 30 days. The frequency of dermatophytes isolated in this preliminary study was 13.3%. There were no statistically significant differences by source, age, sex, race or dermatological condition. Zoonotic dermatophytes were found in 2 of the 21 household cats that had direct contact with children or the elderly. M. canis was isolated in 83.3% of cases. The frequency of isolation of zoonotic dermatophytes in the sample of cats in an urban area of the Gran Mendoza region was 13.3%, a value higher than expected. M. canis was the most frequently isolated species. Copyright © 2011 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.
Panzeri, R.; Saggin, S.; Scaccabarozzi, D.; Tarabini, M.
2016-10-01
This paper compares different data processing techniques for Fourier transform spectrometry (FTS), with the aim of assessing the feasibility of a spectrometer based on standard DAC boards, without dedicated hardware for sampling and speed control of the moving mirrors. Fourier transform spectrometers rely on sampling the interferogram at constant steps of the optical path difference (OPD) to evaluate the spectra through a standard discrete Fourier transform. Constant-OPD sampling is traditionally achieved with dedicated hardware, but recently, sampling methods based on common analog-to-digital converters with large dynamic range and high sampling frequency have become viable when associated with specific data processing techniques. These methods offer advantages in terms of insensitivity to disturbances, in particular mechanical vibrations, and should be less sensitive to OPD speed errors. In this work, the performances of three algorithms, two taken from the literature and based on phase demodulation of a reference interferogram, are compared with a method based on direct phase computation of the reference interferogram, in terms of robustness against mechanical vibrations and OPD speed errors. All methods provided almost correct spectra with vibration amplitudes up to 10% of the average OPD speed and speed drifts within the scan up to 20% of the average, as long as the disturbance frequency was lower than the nominal frequency of the reference signal. The developed method, based on the arccosine function, also keeps working when the disturbance frequencies are larger than that of the reference channel, which is the common limit of the other two.
Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series
S. Roques
2005-09-01
We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to the light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
Chen, Ming; He, Jing; Tang, Jin; Chen, Lin
2014-06-01
To improve the outage performance of an optical orthogonal frequency-division multiplexing (OFDM) system in the presence of a frequency offset between the transmitter and receiver sampling clocks, a pilot-aided sampling frequency offset (SFO) estimation and compensation scheme for an optical OFDM system with intensity modulation and direct detection (DD-OOFDM) is experimentally demonstrated. The experimental and simulated results show that the scheme can work effectively even with large sampling frequency offsets. In addition, it achieves good bit error rate (BER) performance without sampling clock frequency synchronization in the receiver.
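The phase rotation that an SFO induces on OFDM subcarriers, the quantity pilot-aided estimators fit, can be sketched with the common textbook model (this is not the paper's specific scheme; the FFT size, cyclic prefix length and 50 ppm offset are illustrative assumptions):

```python
import numpy as np

# Common SFO model: the phase rotation on subcarrier k of OFDM symbol l
# grows linearly in both k and l, with slope proportional to the relative
# clock offset delta.
N, N_total = 64, 80      # FFT size; symbol length including cyclic prefix
delta = 50e-6            # 50 ppm sampling clock offset (illustrative)

def sfo_phase(k, l):
    """Phase rotation (radians) induced by SFO on subcarrier k, symbol l."""
    return 2 * np.pi * k * delta * (N_total / N) * l

# A pilot-aided estimator fits the slope of phase vs. subcarrier index at
# a given symbol, then divides out the known constants to recover delta.
k = np.arange(1, 27)                       # pilot subcarrier indices
l = 10
slope = np.polyfit(k, sfo_phase(k, l), 1)[0]
delta_hat = slope / (2 * np.pi * (N_total / N) * l)
```

In this noise-free sketch the fitted slope returns the offset exactly; in a real DD-OOFDM receiver the pilot phases are noisy and the same linear fit averages that noise out.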
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion...
Effect of Sampling Period on Flood Frequency Distributions in the Susquehanna Basin
Kargar, M.; Beighley, R. E.
2010-12-01
Flooding is a devastating natural hazard that claims many human lives and significantly impacts regional economies each year. Given the magnitude of flooding impacts, significant resources are dedicated to the development of forecasting models for early warning and evacuation planning, construction of flood defenses (levees/dams) to limit flooding, and the design of civil infrastructure (bridges, culverts, storm sewers) to convey flood flows without failing. In all these cases, it is particularly important to understand the potential flooding risk in terms of both recurrence interval (i.e., return period) and magnitude. Flood frequency analysis (FFA) is a form of risk analysis used to extrapolate the return periods of floods beyond the gauged record. The technique involves using observed annual peak flow discharge data to calculate statistical information such as mean values, standard deviations, skewness, and recurrence intervals. Since discharge data for most catchments have been collected for periods of time less than 100 years, the estimation of the design discharge requires a degree of extrapolation. This study focuses on the assessment and modification of flood frequency based discharges for sites with limited sampling periods. Here, limited sampling period is intended to capture two issues: (1) a limited number of observations to adequately capture the flood frequency signal (i.e., the minimum number of annual peaks needed) and (2) climate variability (i.e., a sampling period containing primarily “wet” or “dry” periods only). A total of 34 gauges (more than 70 years of data) spread throughout the Susquehanna River basin (71,000 sq km) were used to investigate the impact of sampling period on flood frequency distributions. Data subsets ranging from 10 years to the total number of years available were created from the data for each gauging station. To estimate the flood frequency, the Log Pearson Type III distribution was fit to the logarithms of instantaneous
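The Log Pearson Type III procedure mentioned above can be sketched with SciPy: fit a Pearson III distribution to the base-10 logarithms of the annual peaks, then read off design flows at chosen return periods. The discharge data below are synthetic stand-ins, not the Susquehanna gauge records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical annual peak discharges (m^3/s); a real analysis would use
# the gauged record for each station.
peaks = rng.lognormal(mean=6.0, sigma=0.5, size=70)

# Log Pearson Type III: fit a Pearson III distribution to log10 of the peaks.
logq = np.log10(peaks)
skew, loc, scale = stats.pearson3.fit(logq)

def design_flow(T):
    """Discharge with return period T: quantile at non-exceedance 1 - 1/T."""
    p = 1.0 - 1.0 / T
    return 10 ** stats.pearson3.ppf(p, skew, loc=loc, scale=scale)

q10, q100 = design_flow(10), design_flow(100)
print(q10 < q100)  # a longer return period gives a larger design flow
```

Extrapolating to, say, the 100-year flood from a 70-year record is exactly the kind of extrapolation whose sensitivity to record length the study investigates.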
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
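A minimal sketch of such a successive-approximations iteration, for a stand-in problem the abstract does not specify: estimating the mixing proportion of a two-component normal mixture with known components. The fixed-point update below is the unit-step case of the steepest-ascent family described above; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated mixture data: component 0 (mean 0) with proportion 0.3,
# component 1 (mean 4) with proportion 0.7. Both have unit variance.
n = 5000
true_p = 0.3
z = rng.random(n) < true_p
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))

def dens(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

f0, f1 = dens(x, 0.0), dens(x, 4.0)

# Successive approximations: the unit-step steepest-ascent fixed point for
# the likelihood equation in the mixing proportion p.
p = 0.5
for _ in range(200):
    w = p * f0 / (p * f0 + (1 - p) * f1)  # posterior weight of component 0
    p = w.mean()                          # likelihood-equation update
```

With well-separated components the iteration settles quickly near the true proportion; the abstract's result is that step sizes anywhere in (0, 2) also converge locally.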
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min, and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed that there was a significant difference between the means of MAWL in terms of frequencies of lifts (p = 0.02). Tukey's post hoc test indicated that there was a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference between the mean heart rates in terms of frequencies of lifts (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference between the mean of MAWL and the mean heart rate in terms of lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean of MAWL and the mean heart rate in terms of the sizes of the two boxes (p < 0.001). Based on the results of
Karlsson, J S; Ostlund, N; Larsson, B; Gerdle, B
2003-10-01
Frequency analysis of myoelectric (ME) signals, using the mean power spectral frequency (MNF), has been widely used to characterize peripheral muscle fatigue during isometric contractions assuming constant force. However, during repetitive isokinetic contractions performed with maximum effort, output (force or torque) will decrease markedly during the initial 40-60 contractions, followed by a phase with little or no change. MNF shows a similar pattern. In situations where there exists a significant relationship between MNF and output, part of the decrease in MNF may per se be related to the decrease in force during dynamic contractions. This study estimated force effects on the MNF shifts during repetitive dynamic knee extensions. Twenty healthy volunteers participated in the study, and both surface ME signals (from the right vastus lateralis, vastus medialis, and rectus femoris muscles) and the biomechanical signals (force, position, and velocity) of an isokinetic dynamometer were measured. Two tests were performed: (i) 100 repetitive maximum isokinetic contractions of the right knee extensors, and (ii) five gradually increasing static knee extensions before and after (i). The corresponding ME signal time-frequency representations were calculated using the continuous wavelet transform. Compensation of the MNF variables of the repetitive contractions was performed with respect to the individual MNF-force relation based on an average of five gradually increasing contractions. Whether or not compensation was necessary was based on the shape of the MNF-force relationship. A significant compensation of the MNF was found for the repetitive isokinetic contractions. In conclusion, when investigating maximum dynamic contractions, decreases in MNF can be due to mechanisms similar to those found during sustained static contractions (force-independent component of fatigue) and in some subjects due to a direct effect of the change in force (force-dependent component of fatigue).
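The MNF statistic at the centre of this abstract is simply the power-weighted mean frequency of the signal's spectrum. A minimal sketch on synthetic band-limited noise (a stand-in for surface EMG; the sampling rate and band centres are assumed, and the study's wavelet time-frequency analysis is replaced here by a plain Welch spectrum):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 1024  # Hz; an assumed surface-EMG sampling rate

def synth_emg(centre):
    """Band-limited Gaussian noise around `centre` Hz (a crude EMG stand-in)."""
    n = fs * 2
    spec = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec *= np.exp(-0.5 * ((f - centre) / 30) ** 2)  # Gaussian passband
    return np.fft.irfft(spec, n)

def mnf(x):
    """Mean power spectral frequency: power-weighted average frequency."""
    f, pxx = welch(x, fs=fs, nperseg=512)
    return np.sum(f * pxx) / np.sum(pxx)

# A fatigue-like spectral shift: the band centre moves down, so MNF drops.
fresh, fatigued = mnf(synth_emg(90.0)), mnf(synth_emg(60.0))
print(fresh > fatigued)
```

The study's point is that part of such an MNF drop during dynamic contractions can come from falling force rather than fatigue, which is why the individual MNF-force relation is used for compensation.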
Reeves, Steven; Wang, Weijin; Salter, Barry; Halpin, Neil
2016-07-01
Nitrous oxide (N2O) emissions from soil are often measured using the manual static chamber method. Manual gas sampling is labour intensive, so a minimal sampling frequency that maintains the accuracy of measurements would be desirable. However, the high temporal (diurnal, daily and seasonal) variabilities of N2O emissions can compromise the accuracy of measurements if not addressed adequately when formulating a sampling schedule. Assessments of sampling strategies to date have focussed on relatively low emission systems with high episodicity, where a small number of the highest emission peaks can be critically important in the measurement of whole season cumulative emissions. Using year-long, automated sub-daily N2O measurements from three fertilised sugarcane fields, we undertook an evaluation of the optimum gas sampling strategies in high emission systems with relatively long emission episodes. The results indicated that sampling in the morning between 09:00-12:00, when soil temperature was generally close to the daily average, best approximated the daily mean N2O emission within 4-7% of the 'actual' daily emissions measured by automated sampling. Weekly sampling with biweekly sampling for one week after >20 mm of rainfall was the recommended sampling regime. It resulted in no extreme (>20%) deviations from the 'actuals', had a high probability of estimating the annual cumulative emissions within 10% precision, with practicable sampling numbers in comparison to other sampling regimes. This provides robust and useful guidance for manual gas sampling in sugarcane cropping systems, although further adjustments by the operators in terms of expected measurement accuracy and resource availability are encouraged. By implementing these sampling strategies together, labour inputs and errors in measured cumulative N2O emissions can be minimised. Further research is needed to quantify the spatial variability of N2O emissions within sugarcane cropping and to develop
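The recommended strategy above, morning sampling as a proxy for the daily mean, can be illustrated on a synthetic flux series. The diel cycle, noise levels, and the 09:00-12:00 window matching the daily average are assumptions chosen to mimic the abstract's reasoning, not the sugarcane data.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(24 * 365)   # one year of hourly N2O fluxes (synthetic)
hod = hours % 24

# Day-to-day variation times a diel cycle peaking at ~16:00, plus hourly
# noise; illustrative only.
daily_base = np.repeat(rng.lognormal(0.0, 0.8, 365), 24)
diel = 1.0 + 0.4 * np.sin((hod - 10) / 24 * 2 * np.pi)
flux = daily_base * diel * rng.lognormal(0.0, 0.1, hours.size)

daily_mean = flux.reshape(365, 24).mean(axis=1)
# Grab sampling in the 09:00-12:00 morning window, as recommended.
morning = flux.reshape(365, 24)[:, 9:12].mean(axis=1)

# Relative error of the cumulative emission estimated from morning sampling.
bias = abs(morning.sum() - daily_mean.sum()) / daily_mean.sum()
print(round(bias, 3))
```

Because the morning window here brackets the time when the diel cycle crosses its daily average, the cumulative estimate lands close to the "actual" total, which is the mechanism the abstract describes.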
Yamaura, Yuichi; Connor, Edward F; Royle, J Andrew; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio
2016-07-01
Models and data used to describe species-area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species-area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species-area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density-area relationships and occurrence probability-area relationships can alter the form of species-area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied to a
Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio
2016-01-01
Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied
Vibration Frequencies Extraction of the Forth Road Bridge Using High Sampling GPS Data
Jian Wang
2016-01-01
This paper proposes a scheme for vibration frequency extraction of the Forth Road Bridge in Scotland from high sampling GPS data. The interaction between the dynamic response and the ambient loadings is carefully analysed. A bilinear Chebyshev high-pass filter is designed to isolate the quasistatic movements, the FFT algorithm and peak-picking approach are applied to extract the vibration frequencies, and a GPS data accumulation counter is suggested for real-time monitoring applications. To understand the change in the structural characteristics under different loadings, the deformation results from three different loading conditions are presented, that is, the ambient circulation loading, the strong wind under abrupt wind speed change, and the specific trial with two 40 t lorries passing the bridge. The results show that GPS not only can capture absolute 3D deflections reliably, but also can be used to extract the frequency response accurately. It is evident that the frequencies detected using the filtered deflection time series in different directions show quite different characteristics, and more stable results can be obtained from the height displacement time series. The frequency responses of 0.105 and 0.269 Hz extracted from the lateral displacement time series correlate well with the data using height displacement time series.
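The high-pass-then-peak-pick pipeline can be sketched on a synthetic displacement record. The 10 Hz rate, the modal amplitudes, and the moving-average high-pass (substituted for the paper's Chebyshev filter) are all assumptions; the two modal frequencies are taken from the abstract.

```python
import numpy as np

fs = 10.0                     # Hz; an assumed high-rate GPS sampling frequency
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic height displacement (m): two modal vibrations near the reported
# 0.105 and 0.269 Hz, slow quasistatic drift, and receiver noise.
x = (0.003 * np.sin(2 * np.pi * 0.105 * t)
     + 0.002 * np.sin(2 * np.pi * 0.269 * t)
     + 0.01 * np.sin(2 * np.pi * 0.001 * t)    # quasistatic movement
     + 0.0005 * rng.normal(size=t.size))

# Crude high-pass: subtract a ~50 s moving average to remove the drift
# (the paper designs a bilinear Chebyshev high-pass filter instead).
kernel = np.ones(501) / 501
x_hp = x - np.convolve(x, kernel, mode="same")

# FFT and peak picking over the plausible vibration band.
spec = np.abs(np.fft.rfft(x_hp * np.hanning(x_hp.size)))
freqs = np.fft.rfftfreq(x_hp.size, 1 / fs)
band = (freqs > 0.05) & (freqs < 1.0)
peak = freqs[band][np.argmax(spec[band])]
print(round(peak, 3))
```

With a 600 s record the frequency resolution is 1/600 Hz, enough to separate the two modes; the dominant pick lands on the stronger 0.105 Hz component.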
Computationally efficient algorithm for high sampling-frequency operation of active noise control
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the length of the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of a long-order ANC system using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, the partitioned block ANC algorithm is newly proposed, where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different orders of filter and partition size are presented. Systematic computer simulations are carried out for both the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
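For reference, the conventional time-domain FXLMS baseline whose per-sample cost motivates the block algorithms can be sketched in a few lines. The primary path, secondary path, filter length, and step size below are toy assumptions, and the secondary path is taken as known rather than estimated; this is not the paper's FPBFXLMS algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4000
x = rng.normal(size=n)               # reference noise signal
s = np.array([0.6, 0.3, 0.1])        # assumed (known) secondary path
p = np.array([0.0, 0.8, 0.4, 0.2])   # assumed primary path

d = np.convolve(x, p)[:n]            # disturbance at the error microphone
xf = np.convolve(x, s)[:n]           # filtered-x signal (reference through s)

L = 16                               # ANC filter length (short, for the toy)
w = np.zeros(L)
mu = 0.01
xbuf, fbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(s))
err = np.zeros(n)

for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                      # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] - s @ ybuf               # residual after the secondary path
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[i]
    w += mu * e * fbuf                # FXLMS weight update
    err[i] = e
```

Each sample costs O(L) multiply-adds for the update alone; at high sampling frequencies with long filters this per-sample cost is what the frequency-domain partitioned block approach amortizes with FFTs.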
Frequency-time coherence for all-optical sampling without optical pulse source
Preussler, Stefan; Schneider, Thomas
2016-01-01
Sampling is the first step to convert an analogue optical signal into a digital electrical signal. The latter can be further processed and analysed by well-known electrical signal processing methods. Optical pulse sources like mode-locked lasers are commonly incorporated for all-optical sampling, but have several drawbacks. A novel approach for a simple all-optical sampling is to utilise the frequency-time coherence of each signal. The method is based on only using two coupled modulators driven with an electrical sine wave, allowing simple integration in appropriate platforms, such as Silicon Photonics. The presented method grants all-optical sampling with electrically tunable bandwidth, repetition rate and time shift.
Frequency-time coherence for all-optical sampling without optical pulse source
Preußler, Stefan; Raoof Mehrpoor, Gilda; Schneider, Thomas
2016-09-01
Sampling is the first step to convert an analogue optical signal into a digital electrical signal. The latter can be further processed and analysed by well-known electrical signal processing methods. Optical pulse sources like mode-locked lasers are commonly incorporated for all-optical sampling, but have several drawbacks. A novel approach for a simple all-optical sampling is to utilise the frequency-time coherence of each signal. The method is based on only using two coupled modulators driven with an electrical sine wave. Since no optical source is required, a simple integration in appropriate platforms, such as Silicon Photonics might be possible. The presented method grants all-optical sampling with electrically tunable bandwidth, repetition rate and time shift.
Frequency of word occurrence in communication samples produced by adult communication aid users.
Beukelman, D R; Yorkston, K M; Poblete, M; Naranjo, C
1984-11-01
Communication samples generated by five nonspeaking adults using Canon Communicators were collected for 14 consecutive days. Samples were analyzed to determine frequency of word occurrence. A core vocabulary of the 500 most frequently occurring words was analyzed further to determine spelling level and proportion of complete communication samples represented by subsets of the core vocabulary list. The 500 core vocabulary words represented 80% of the total words in the combined communication samples for the 5 subjects. Of all messages generated by the subjects, 33% could be communicated in their entirety using words from the core vocabulary list. The communication of the remaining messages required one or more words in addition to the core vocabulary. The spelling grade level of the words in the core vocabulary list did not exceed the seventh grade. The implications of the results for designing and customizing communication aids and for potential user training are discussed.
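The core-vocabulary coverage statistic reported above (the fraction of all words accounted for by the most frequent words) is straightforward to compute. The tiny corpus and core size below are illustrative stand-ins for the 14-day samples and the 500-word core list.

```python
from collections import Counter

# Toy corpus standing in for the collected communication samples.
corpus = ("i want to go home now please can you help me i need water "
          "thank you very much i want more please").split()

counts = Counter(corpus)
core_size = 5                 # the study used the 500 most frequent words
core = {w for w, _ in counts.most_common(core_size)}

# Proportion of all running words covered by the core vocabulary.
coverage = sum(c for w, c in counts.items() if w in core) / len(corpus)
print(round(coverage, 2))
```

The same calculation over the study's combined samples gave 80% coverage for the 500-word core, the figure that motivates preloading core vocabulary into communication aids.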
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
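A minimal example of the 1D transfer-matrix technique the abstract builds on: multiply unimodular matrices for delta scatterers and free propagation, and read the transmission off the total matrix. The scatterer strength and spacings are assumed values, and this computes transmission beyond the sample, not the interior intensity or maximum-entropy statistics of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def delta_scatterer(beta):
    """Unimodular transfer matrix of a single delta scatterer of strength beta."""
    return np.array([[1 + 1j * beta, 1j * beta],
                     [-1j * beta, 1 - 1j * beta]])

def free(k, d):
    """Free propagation of a wave with wavenumber k over a distance d."""
    return np.diag([np.exp(1j * k * d), np.exp(-1j * k * d)])

def transmission(n_scatterers, k=1.0, beta=0.3):
    m = np.eye(2, dtype=complex)
    for _ in range(n_scatterers):
        m = delta_scatterer(beta) @ free(k, rng.uniform(0.5, 1.5)) @ m
    return 1.0 / abs(m[1, 1]) ** 2     # T = 1 / |M22|^2

# Ensemble-averaged transmission for short vs long disordered samples.
t_short = np.mean([transmission(5) for _ in range(200)])
t_long = np.mean([transmission(50) for _ in range(200)])
print(t_long < t_short)   # transmission decays with sample length
```

The exponential decay of transmission with length (localization) is the standard output of this technique; the paper's extension splits the waveguide at an interior point into two such matrix products.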
Jafari, AA. (PhD
2014-05-01
Background and Objective: Laboratory personnel are always at risk of accidental exposure to clinical samples, which can cause the transmission of infection. This threat can be prevented and controlled by education in the use of safety instruments. The purpose was to determine the frequency of accidental exposure to laboratory samples among Yazd laboratory personnel in 2011. Material and Methods: This descriptive cross-sectional study was conducted on 100 of Yazd clinical laboratory personnel. The data were collected, using a valid and reliable questionnaire, via interview and analyzed by means of SPSS software. Results: Eighty-six percent of the subjects reported an experience of accidental exposure to clinical samples, such as blood, serum and urine. The causes were carelessness (41%) and work overload (29%). Needle-stick was the most prevalent injury (52%), particularly in sampling workers (51%) and in their hands (69%). There was no significant relationship between accidental exposure to laboratory samples and variables such as private versus governmental laboratories (p=0.517), kind of employment (p=0.411), record of service (p=0.439) and academic degree (p=0.454). The subjects aged 20-29 (p=0.034) and those working in the sampling unit had the highest accidental exposure. Conclusion: Based on the results, inexperience of the personnel, especially in the sampling room, overload at work and failure to apply safety instruments are the most important reasons for accidental exposure to clinical samples. Keywords: Contamination; Accidental exposure; Infectious agents; Laboratory; Personnel
Drought frequency analysis using stochastic simulation with maximum entropy model
张明; 金菊良; 王国庆; 周润娟
2013-01-01
This paper develops a model for drought frequency analysis using maximum entropy distribution simulation to improve the reliability of the analysis. The model adopts three steps. First, it eliminates the dependent components of the annual runoff series with an autoregressive model, calculates the sample moments of the residual series, and obtains the maximum entropy probability density function of the residuals with an accelerated genetic algorithm, yielding a maximum entropy stochastic model of the annual runoff series for the study region. Second, it uses Monte Carlo simulation with a rejection technique to generate long and short annual runoff series and compares the simulation results. Third, it calculates the negative run lengths of the simulated long-term annual runoff series, so that a frequency curve of these lengths is obtained and used in drought frequency analysis. Application to runoff at the Wenjiachuan station in the Kuye river basin shows that the statistical properties of the maximum entropy simulations are close to those of a P-III distribution, confirming the accuracy of the maximum entropy simulation. Run analysis of a simulated 10,000-year annual runoff series for the Wenjiachuan station indicates that the probability of a severe drought lasting 12 consecutive years is 2.6%, with a return period of 203 years. Because no theoretical distribution type is assumed in advance when simulating the residual series, the maximum entropy results are more widely applicable and well suited to the simulation and analysis of precipitation, runoff and other variables in water resources systems.
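The negative-run-length step, counting consecutive below-threshold years in a long simulated runoff series, can be sketched as follows. The lognormal series and the median truncation level are illustrative assumptions, standing in for the maximum entropy simulations of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic 10,000-year annual runoff series (a lognormal stand-in for the
# maximum-entropy simulated series).
runoff = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)

threshold = np.median(runoff)   # truncation level for the run analysis

# Negative run lengths: lengths of consecutive years below the threshold.
below = runoff < threshold
runs, current = [], 0
for b in below:
    if b:
        current += 1
    elif current:
        runs.append(current)
        current = 0
if current:
    runs.append(current)

runs = np.array(runs)
# Empirical frequency of droughts lasting at least 5 consecutive years.
p5 = np.mean(runs >= 5)
print(len(runs), round(p5, 3))
```

Tabulating these run lengths gives the frequency curve from which drought probabilities and return periods (like the 2.6% / 203-year figure for a 12-year drought) are read off.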
Automation of high-frequency sampling of environmental waters for reactive species
Kim, H.; Bishop, J. K.; Wood, T.; Fung, I.; Fong, M.
2011-12-01
Trace metals, particularly iron and manganese, play a critical role in some ecosystems as a limiting factor determining primary productivity, in geochemistry, especially redox chemistry, as important electron donors and acceptors, and in aquatic environments as carriers of contaminant transport. The dynamics of trace metals are closely related to various hydrologic events such as rainfall. Storm flow triggers dramatic changes in both dissolved and particulate trace metal concentrations and affects other important environmental parameters linked to trace metal behavior such as dissolved organic carbon (DOC). To improve our understanding of the behavior of trace metals and the underlying processes, water chemistry information must be collected for an adequately long period of time at higher frequency than conventional manual sampling (e.g. weekly, biweekly). In this study, we developed an automated sampling system to document the dynamics of trace metals, focusing on Fe and Mn, and DOC for a multiple-year high-frequency geochemistry time series in a small catchment, called Rivendell, located at the Angelo Coast Range Reserve, California. We are sampling ground and streamwater using the automated sampling system at daily frequency, and the condition of the site varies substantially from season to season. The pH ranges of ground and streamwater are pH 5 - 7 and pH 7.8 - 8.3, respectively. DOC is usually sub-ppm, but during rain events, it increases by an order of magnitude. The automated sampling system focuses on two aspects: (1) a modified sampler design to improve sample integrity for trace metals and DOC, and (2) a remote control system to update sampling volume and timing according to hydrological conditions. To maintain sample integrity, the developed method employed gravity filtering using large volume syringes (140 mL) and syringe filters connected to a set of polypropylene bottles and a borosilicate bottle via Teflon tubing. Without filtration, in a few days, the
High frequency of parvovirus B19 DNA in bone marrow samples from rheumatic patients
Lundqvist, Anders; Isa, Adiba; Tolfvenstam, Thomas
2005-01-01
BACKGROUND: Human parvovirus B19 (B19) polymerase chain reaction (PCR) is now a routine analysis and serves as a diagnostic marker as well as a complement or alternative to B19 serology. The clinical significance of a positive B19 DNA finding is however dependent on the type of tissue or body fluid analysed and on the immune status of the patient. OBJECTIVES: To analyse the clinical significance of B19 DNA positivity in bone marrow samples from rheumatic patients. STUDY DESIGN: Parvovirus B19 DNA was analysed in paired bone marrow and serum samples by nested PCR technique. Serum was also analysed... negative group. A high frequency of parvovirus B19 DNA was thus detected in bone marrow samples in rheumatic patients. The clinical data does not support a direct association between B19 PCR positivity and rheumatic disease manifestation. Therefore, the clinical significance of B19 DNA positivity in bone...
Understanding Zipf's law of word frequencies through sample-space collapse in sentence formation
Thurner, Stefan; Hanel, Rudolf; Liu, Bo; Corominas-Murtra, Bernat
2015-01-01
The formation of sentences is a highly structured and history-dependent process. The probability of using a specific word in a sentence strongly depends on the ‘history’ of word usage earlier in that sentence. We study a simple history-dependent model of text generation assuming that the sample-space of word usage reduces along sentence formation, on average. We first show that the model explains the approximate Zipf law found in word frequencies as a direct consequence of sample-space reduction. We then empirically quantify the amount of sample-space reduction in the sentences of 10 famous English books, by analysis of corresponding word-transition tables that capture which words can follow any given word in a text. We find a highly nested structure in these transition tables and show that this ‘nestedness’ is tightly related to the power law exponents of the observed word frequency distributions. With the proposed model, it is possible to understand that the nestedness of a text can be the origin of the actual scaling exponent and that deviations from the exact Zipf law can be understood by variations of the degree of nestedness on a book-by-book basis. On a theoretical level, we are able to show that in the case of weak nesting, Zipf's law breaks down in a fast transition. Unlike previous attempts to understand Zipf's law in language the sample-space reducing model is not based on assumptions of multiplicative, preferential or self-organized critical mechanisms behind language formation, but simply uses the empirically quantifiable parameter ‘nestedness’ to understand the statistics of word frequencies. PMID:26063827
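The sample-space reducing process at the heart of this model is easy to simulate: each "sentence" starts from the full vocabulary and every drawn word restricts the next draw to lower-ranked words. The vocabulary size and number of sentences below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
W = 1000                        # vocabulary size (illustrative)
visits = np.zeros(W, dtype=int)

# Sample-space reducing (SSR) process: the sample space collapses along
# each sentence, as in the model described above.
for _ in range(20_000):
    n = W
    while n > 1:
        k = rng.integers(0, n)  # draw uniformly from the current space
        visits[k] += 1
        n = k                   # next draw restricted to words below k

# SSR visiting frequencies fall off as a power law with exponent near -1,
# i.e. the Zipf law. Fit the exponent over an interior range of ranks.
ranks = np.arange(1, W)
slope = np.polyfit(np.log(ranks[9:200]), np.log(visits[9:200] + 1.0), 1)[0]
print(round(slope, 1))
```

The fitted log-log slope comes out near -1, the Zipf exponent, with no multiplicative or preferential-attachment mechanism needed, which is the paper's central point.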
Variation in drug injection frequency among out-of-treatment drug users in a national sample.
Singer, M; Himmelgreen, D; Dushay, R; Weeks, M R
1998-05-01
This article analyzes data on drug injection frequency in a sample of more than 13,000 out-of-treatment drug injectors interviewed across 21 U.S. cities and Puerto Rico through the National Institute on Drug Abuse (NIDA) Cooperative Agreement for AIDS Community-Based Outreach/Intervention Research Program. The goals of the article are to present findings on injection frequency and to predict variation in terms of a set of variables suggested by previous research, including location, ethnicity, gender, age, educational attainment, years since first use of alcohol and marijuana, income, living arrangement, homelessness, drugs injected, and duration of injection across drugs. Three models were tested. Significant intersite differences were identified in injection frequency, although most of the other predictor variables we tested accounted for little of the variance. Ethnicity and drugs injected, however, were found to be significant. Taken together, location, ethnicity, and type of drug injected provide a configuration that differentiated and (for the variables available for the analysis) best predicted injection frequency. The public health implications of these findings are presented.
Skeffington, R. A.; Halliday, S. J.; Wade, A. J.; Bowes, M. J.; Loewenthal, M.
2015-01-01
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to one of 3 or 4 WFD classes with 95% confidence, whereas with weekly sampling this was 1 or 2 classes for the same cases. In the most extreme case, random sampling effects could result in the same water body being assigned to any of the 5 WFD quality classes. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Confining sampling to the working week compared to all seven days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value
ECCM scheme against interrupted sampling repeater jammer based on time-frequency analysis
Shixian Gong; Xizhang Wei; Xiang Li
2014-01-01
The interrupted sampling repeater jamming (ISRJ) is an effective deception jamming method for coherent radar, especially for the wideband linear frequency modulation (LFM) radar. An electronic counter-countermeasure (ECCM) scheme is proposed to remove the ISRJ-based false targets from the pulse compression result of the de-chirping radar. Through the time-frequency (TF) analysis of the radar echo signal, it can be found that the TF characteristics of the ISRJ signal are discontinuous in the pulse duration because the ISRJ jammer needs short durations to receive the radar signal. Based on the discontinuous characteristics, a particular band-pass filter can be generated by two alternative approaches to retain the true target signal and suppress the ISRJ signal. The simulation results prove the validity of the proposed ECCM scheme for the ISRJ.
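A minimal stand-in for the idea: after de-chirping, a true target is a continuous tone while ISRJ energy appears only during the jammer's transmit slots. This sketch locates the bursts from short-time energy and blanks them before spectral analysis; the signal model, duty cycle, and threshold are all illustrative, not the paper's filter design.

```python
import numpy as np

fs = 10000
t = np.arange(0, 0.1, 1 / fs)
# True target: continuous tone. ISRJ false target: strong tone present only
# during jammer transmit slots (the jammer must pause to intercept).
true_echo = 1.0 * np.cos(2 * np.pi * 1000 * t)
gate = (np.floor(t * 100) % 2 == 0)             # 50% duty-cycle jammer slots
jammer = 4.0 * np.cos(2 * np.pi * 2000 * t) * gate
rx = true_echo + jammer

# ECCM sketch: flag high-energy frames as jammer bursts and zero them.
frame = 50
energy = np.array([np.sum(rx[i:i + frame] ** 2)
                   for i in range(0, rx.size, frame)])
mask = np.repeat(energy < np.mean(energy), frame)[:rx.size]
cleaned = rx * mask

def peak_ratio(x, f_true=1000, f_false=2000):
    """Spectral amplitude of the false-target line relative to the true one."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return (spec[np.argmin(np.abs(freqs - f_false))]
            / spec[np.argmin(np.abs(freqs - f_true))])

before = peak_ratio(rx)        # false target dominates
after = peak_ratio(cleaned)    # false-target line strongly suppressed
```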
Gupta, N.; Saro, A.; Mohr, J. J.; Benson, B. A.; Bocquet, S.; Capasso, R.; Carlstrom, J. E.; Chiu, I.; Crawford, T. M.; de Haan, T.; Dietrich, J. P.; Gangkofner, C.; Holzapfel, W. L.; McDonald, M.; Rapetti, D.; Reichardt, C. L.
2017-01-01
We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the Meta-Catalog of X-ray detected Clusters of galaxies (MCXC; = 0.14) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg^2 SPT-SZ survey maps at the locations of SUMSS sources, producing a multi-frequency catalog of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev-Zel'dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogs. We find that the high frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. We use the 150 GHz LF to estimate the impact of cluster radio galaxies on an SPT-SZ like survey. The radio galaxy flux typically produces a small bias on the SZE signal and has negligible impact on the observed scatter in the SZE mass-observable relation. If we assume there is no redshift evolution in the radio galaxy LF then 1.8 ± 0.7 percent of the clusters with detection significance ξ ≥ 4.5 would be lost from the sample. Allowing for redshift evolution of the form (1 + z)^2.5 increases the incompleteness to 5.6 ± 1.0 percent. Improved constraints on the evolution of the cluster radio galaxy LF require a larger cluster sample extending to higher redshift.
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Wang Yanqing
2014-04-01
During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. At last, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM has superiority for estimating dynamic signals with small samples and the estimated reliability is proved to be 100% at the given confidence level.
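The bootstrap ingredient of GBM can be sketched in isolation (the grey-model component is not reproduced here, and the sample values are synthetic): resampling with replacement yields an interval estimate even from a very small sample, with no distributional assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
# Small sample of vibration amplitudes (synthetic, illustrative only).
sample = rng.normal(10.0, 2.0, 8)

# Plain percentile bootstrap: resample with replacement, recompute the
# statistic, and read off an interval from the resampling distribution.
boots = np.array([np.mean(rng.choice(sample, sample.size, replace=True))
                  for _ in range(2000)])
lo, hi = np.percentile(boots, [2.5, 97.5])
# (lo, hi) is a 95% bootstrap interval for the mean vibration level.
```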
Lashkari, Bahman; Mandelis, Andreas
2011-09-01
In this work, a detailed theoretical and experimental comparison between various key parameters of the pulsed and frequency-domain (FD) photoacoustic (PA) imaging modalities is developed. The signal-to-noise ratios (SNRs) of these methods are theoretically calculated in terms of transducer bandwidth, PA signal generation physics, and laser pulse or chirp parameters. Large differences between maximum (peak) SNRs were predicted. However, it is shown that in practice the SNR differences are much smaller. Typical experimental SNRs were 23.2 dB and 26.1 dB for FD-PA and time-domain (TD)-PA peak responses, respectively, from a subsurface black absorber. The SNR of the pulsed PA can be significantly improved with proper high-pass filtering of the signal, which minimizes but does not eliminate baseline oscillations. On the other hand, the SNR of the FD method can be enhanced substantially by increasing laser power and decreasing chirp duration (exposure) correspondingly, so as to remain within the maximum permissible exposure guidelines. The SNR crossover chirp duration is calculated as a function of transducer bandwidth and the conditions yielding higher SNR for the FD mode are established. Furthermore, it was demonstrated that the FD axial resolution is affected by both signal amplitude and limited chirp bandwidth. The axial resolution of the pulse is, in principle, superior due to its larger bandwidth; however, the bipolar shape of the signal is a drawback in this regard. Along with the absence of baseline oscillation in cross-correlation FD-PA, the FD phase signal can be combined with the amplitude signal to yield better axial resolution than pulsed PA, and without artifacts. The contrast of both methods is compared both in depth-wise (delay-time) and fixed delay time images. It was shown that the FD method possesses higher contrast, even after contrast enhancement of the pulsed response through filtering.
A 20 GHz Bright Sample for Delta > 72 deg - II. Multi-frequency Follow-up
Ricci, R; Verma, R; Prandoni, I; Carretti, E; Mack, K -H; Massardi, M; Procopio, P; Zanichelli, A; Gregorini, L; Mantovani, F; Gawronski, M P; Peel, M W
2013-01-01
We present follow-up observations at 5, 8 and 30 GHz of the K-band Northern Wide Survey (KNoWS) 20 GHz Bright Sample, performed with the 32-m Medicina Radio Telescope and the 32-m Torun Radio Telescope. The KNoWS sources were selected in the Northern Polar Cap (Delta > 72 deg) and have a flux density limit S(20GHz) = 115 mJy. We include NVSS 1.4 GHz measurements to derive the source radio spectra between 1.4 and 30 GHz. Based on optical identifications, 68 per cent of the sources are QSOs, and 27 per cent are radio galaxies. A redshift measurement is available for 58 per cent of the sources. The radio spectral properties of the different source populations are found to be in agreement with those of other high-frequency selected samples.
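Radio spectra like these are usually summarized by a two-point spectral index. A minimal helper, using the S ∝ ν^α convention (the flux values in the example are made up, not KNoWS measurements):

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point radio spectral index alpha, with the convention
    S proportional to nu**alpha (some authors use nu**-alpha)."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Example: a source falling from 200 mJy at 1.4 GHz to 80 mJy at 20 GHz
# is steep-spectrum under the usual alpha < -0.5 to -0.3 cutoffs.
alpha = spectral_index(200.0, 1.4, 80.0, 20.0)
```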
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.
E. Contreras
2012-08-01
Estuaries are complex systems in which long water quality data series are not always available at the proper scale. Data proceeding from several water quality networks, with different measuring frequencies (monthly, weekly and 15 min) and different numbers of sampling points, were compared throughout the main channel of the Guadalquivir estuary. A higher frequency of turbidity sampling in the upper estuary is required. In the lower estuary, sampling points help to locate the ETM, and higher frequency sampling of EC is required because of the effect of the tidal and river components. This could provide feedback for the implementation of monitoring networks in estuaries.
Multi-frequency polarimetry of a complete sample of PACO radio sources
Galluzzi, V; Bonaldi, A; Casasola, V; Gregorini, L; Trombetti, T; Burigana, C; De Zotti, G; Ricci, R; Stevens, J; Ekers, R D; Bonavera, L; Alighieri, S di Serego; Liuzzo, E; Lopez-Caniego, M; Mignano, A; Paladino, R; Toffolatti, L; Tucci, M
2016-01-01
We present high sensitivity polarimetric observations in 6 bands covering the 5.5-38 GHz range of a complete sample of 53 compact extragalactic radio sources brighter than 200 mJy at 20 GHz. The observations, carried out with the Australia Telescope Compact Array (ATCA), achieved a 91% detection rate (at 5 sigma). Within this frequency range the spectra of about 95% of sources are well fitted by double power laws, both in total intensity and in polarisation, but the spectral shapes are generally different in the two cases. Most sources were classified as either steep- or peaked-spectrum but less than 50% have the same classification in total and in polarised intensity. No significant trends of the polarisation degree with flux density or with frequency were found. The mean variability index in total intensity of steep-spectrum sources increases with frequency for a 4-5 year lag, while no significant trend shows up for the other sources and for the 8 year lag. In polarisation, the variability index, that could...
Frequency of fungi in respiratory samples from Turkish cystic fibrosis patients.
Güngör, Ozge; Tamay, Zeynep; Güler, Nermin; Erturan, Zayre
2013-03-01
An increased isolation of fungi from the respiratory tract of patients with cystic fibrosis (CF) has been reported. The prevalence of different fungi in CF patients from Turkey is not known. Our aim was to determine the frequency of fungi in the respiratory tract of Turkish CF patients. We investigated a total of 184 samples from 48 patients. Samples were inoculated on Medium B+ and CHROMagar Candida. Candida albicans was the predominant yeast isolated [30 patients (62.5%)], followed by C. parapsilosis [6 (12.5%)] and C. dubliniensis 5 (10.4%). Aspergillus fumigatus was the most common filamentous fungus [5 (10.4%)] and non-fumigatus Aspergillus species were isolated from four (8.3%) patients. Staphylococcus aureus was the most frequently detected bacterium in C. albicans positive samples (53.57%). A. fumigatus and Pseudomonas aeruginosa or S. aureus were detected together in 75% of A. fumigatus positive samples each. No statistically significant relationship was detected between growth of yeast and moulds and age, gender, the use of inhaled corticosteroids or tobramycin. No significant correlation was found between the isolation of C. albicans, A. fumigatus and P. aeruginosa, Stenotrophomonas maltophilia or S. aureus, and the isolation of C. albicans and Haemophilus influenzae. Other factors which may be responsible for the increased isolation of fungi in CF need to be investigated.
Yang, Yongheng; Zhou, Keliang; Blaabjerg, Frede
2016-01-01
…the instantaneous grid information (e.g., frequency and phase of the grid voltage) for the current control, which is commonly performed by a Phase-Locked-Loop (PLL) system. Hence, harmonics and deviations in the estimated frequency by the PLL could lead to current tracking performance degradation, especially… for the periodic signal controllers (e.g., PR and RC) with a fixed sampling rate. In this paper, the impacts of frequency deviations induced by the PLL and/or the grid disturbances on the selected current controllers are investigated by analyzing the frequency adaptability of these current controllers.… Subsequently, strategies to enhance the frequency adaptability of the current controllers are proposed for the power converters to produce high quality feed-in currents even in the presence of grid frequency deviations. Specifically, by feeding back the PLL estimated frequency to update the center frequencies…
Gallach, Xavi; Ogier, Christophe; Ravanel, Ludovic; Deline, Philip; Carcaillet, Julien
2017-04-01
Rockfalls and rock avalanches are active processes in the Mont Blanc massif, putting infrastructure and alpinists at risk. Thanks to a network of observers (hut keepers, mountain guides, alpinists) set up in 2007, present rockfalls are well surveyed and documented. Rockfall frequency over the past 150 years has been studied by comparison of historical photographs, showing that it strongly increased during the three last decades, especially during hot periods like the summers of 2003 and 2015, due to permafrost degradation driven by climate change. In order to decipher the possible relationship between rockfall occurrence and the warmest periods of the Lateglacial and the Holocene, we have started to study the morphodynamics of some selected high-elevation (>3000 m a.s.l.) rockwalls of the massif on a long timescale. Contrary to low-altitude, deglaciated sites, where the study of large rockfall deposits allows the frequency and magnitude of the process to be quantified, rockfalls that detached from high-elevation rockwalls are no longer noticeable, as their debris were absorbed and evacuated by the glaciers. Therefore, our study focuses on the rockfall scars. Their 10Be dating gives us rock surface exposure ages from the present to far beyond the Last Glacial Maximum, interpreted as the rockfall ages. TCN dating of rockfalls was carried out at the Aiguille du Midi in 2007 (Boehlert et al., 2008), and at three other sites in the Mont Blanc massif in 2011 (Gallach et al., submitted). Here we present a new data set of rockfall dating carried out in 2015 that improves on the 2007 and 2011 data. Furthermore, a relationship between the colour of the Mont Blanc granite and its exposure age has been shown: fresh rock surface is light grey (e.g. in recent rockfall scars) whereas weathered rock surface is in the range grey to orange/red: the redder a rock surface, the older its age. Here, reflectance spectroscopy is used to quantify the granite surface colour. Böhlert, R., Gruber, S., Egli, M., Maisch, M
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on a five serum-PK sampling in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements, while improving the accuracy of results. Retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation based on the MAP approach of individual PK parameters was accomplished to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different than the MAP approach and averaged 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows an accurate individualized dosing of CY, with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
Metzke Christa
2008-07-01
Background: Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data of adolescents. Methods: Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two times, with average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comparisons included a control group without indicators of school fear or truancy. The three groups were compared across questionnaires measuring emotional and behavioural problems, life events, self-related cognitions, perceived parental behaviour, and perceived school environment. Results: The frequency of self-reported school fear decreased over time (6.9 vs. 3.6%) whereas there was an increase in truancy (5.0 vs. 18.4%). Subjects with school fear displayed a pattern of associated internalizing problems, and truants were characterized by associated delinquent behaviour. Among other associated psychosocial features, the distress coming from the perceived school environment in students with school fear is most noteworthy. Conclusion: These findings from a community study show that school fear and truancy are frequent and display different developmental trajectories. Furthermore, previous results based on smaller and selected clinical samples are corroborated, indicating that the two groups display distinct types of school-related behaviour.
Wang, Wei; Zhuge, Qunbi; Morsy-Osman, Mohamed; Gao, Yuliang; Xu, Xian; Chagnon, Mathieu; Qiu, Meng; Hoang, Minh Thang; Zhang, Fangyuan; Li, Rui; Plant, David V
2014-11-03
We propose a decision-aided algorithm to compensate the sampling frequency offset (SFO) between the transmitter and receiver for reduced-guard-interval (RGI) coherent optical (CO) OFDM systems. In this paper, we first derive the cyclic prefix (CP) requirement for preventing OFDM symbols from SFO induced inter-symbol interference (ISI). Then we propose a new decision-aided SFO compensation (DA-SFOC) algorithm, which shows a high SFO tolerance and reduces the CP requirement. The performance of DA-SFOC is numerically investigated for various situations. Finally, the proposed algorithm is verified in a single channel 28 Gbaud polarization division multiplexing (PDM) RGI CO-OFDM experiment with QPSK, 8 QAM and 16 QAM modulation formats, respectively. Both numerical and experimental results show that the proposed DA-SFOC method is highly robust against the standard SFO in optical fiber transmission.
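A standard model behind SFO estimation (on which decision-aided schemes build) is a per-subcarrier phase rotation that grows linearly with both subcarrier index and symbol index. This sketch recovers the offset from that phase slope by least squares; the pilot layout, noise level, and parameter values are assumptions for illustration, and this is not the paper's DA-SFOC algorithm.

```python
import numpy as np

# For FFT size N and cyclic prefix Ng, an SFO of delta (fraction of the
# sample rate) rotates subcarrier k of symbol m by approximately
#   phi(m, k) = 2*pi * delta * k * m * (N + Ng) / N.
N, Ng = 256, 16
delta = 40e-6                             # 40 ppm sampling frequency offset
pilots = np.array([-80, -40, 40, 80])     # pilot subcarrier indices (assumed)
symbols = np.arange(1, 21)

phase = 2 * np.pi * delta * np.outer(symbols, pilots) * (N + Ng) / N
phase += np.random.default_rng(3).normal(0, 0.01, phase.shape)  # phase noise

# phase is linear in the product m*k with a known scale factor, so a
# one-parameter least-squares fit recovers delta.
x = np.outer(symbols, pilots).ravel()
y = phase.ravel()
delta_hat = (x @ y) / (x @ x) / (2 * np.pi * (N + Ng) / N)
```

The estimate lands within a fraction of a ppm of the true offset, which is why even simple phase-slope estimators work well before the decision-aided refinement is applied.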
Keiderling, Michael C.; Kojima, Harry
2009-03-01
We have extended our studies of the non-classical behavior of solid ^4He contained in a compound torsional oscillator (TO) cell below 1 K. Our unique TO design allows observations on the identical sample at two distinct frequencies (f1 = 493 and f2 = 1165 Hz). The sample was grown by the blocked capillary method in an annular cell (id = 8.0 mm, od = 10.0 mm, height = 9.0 mm). We focus here on experiments in which the two modes are excited simultaneously. While keeping the drive of the f2 mode at a very low level, the drive of the f1 mode was varied from high to low levels to produce substantial variations in the non-classical rotational inertia fraction (NCRIf). When the NCRIf seen by the f1 mode is reduced by 89, 91 and 94% at 9.7, 23.5 and 56.5 mK, respectively, the NCRIf seen by the f2 mode (driven at a low level) is reduced by 62, 68 and 80%. The discrepancies and their temperature dependence in the observed reductions in NCRIf are not yet understood. Similar measurements with the roles of the drive levels of the modes reversed, as well as the changes in the dissipation of the torsional oscillator during the simultaneous drive, will be reported.
High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning
Sun, Wen; Patil, Sachin; Alterovitz, Ron
2015-01-01
As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645
Bhattacharya, A.; Lora, J. M.; Pollen, A.; Vollmer, T.; Thomas, M.; Leithold, E. L.; Mitchell, J.; Tripati, A.
2016-12-01
contribution. Most importantly, we find that during the Last Glacial Maximum (LGM) the Great Plains may not have witnessed an increase in the incidence of tornado frequency. Acknowledgements: James Sigman, Jacob Ashford, Jason Neff and Amato Evan
Optimization of frequency quantization
Tibabishev, V N
2011-01-01
We obtain a functional defining the cost and quality of sampled readings of the generalized velocities. It is shown that the optimal sampling frequency, in the sense of minimizing this quality-and-cost functional, depends on the upper cutoff frequency of the analog signal of the order of the generalized velocities measured by the generalized coordinates, the frequency properties of the analog input filter, and the maximum sampling rate of the analog-to-digital converter (ADC). An example of calculating the quantization frequency for a two-tier ADC with an input RC filter is given.
WAVEPAL: A Software for Frequency and Wavelet Analysis of Irregularly Sampled Time Series
Lenoir, Guillaume; Crucifix, Michel
2017-04-01
WAVEPAL is based on a general theory that we have developed for the frequency and wavelet analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, that we extend to algebraic operators accommodating for the presence of a polynomial trend in the model for the data, in addition to the periodic component and the background noise. Special care is devoted to the correlation between the trend and the periodic component. This new tool is then cast into the formalism of the Welch overlapping segment averaging (WOSA) method, which is used to reduce the variance of the periodogram/scalogram. We also design a test of significance against a background noise which is a continuous autoregressive-moving-average (CARMA) Gaussian process. This widens the traditional choice of a Gaussian white or red noise process as the background noise. Estimation of CARMA parameters is performed in a Bayesian framework and relies on state of the art algorithms. We then provide algorithms computing the confidence levels for the periodogram/scalogram that fully take into account the uncertainty on the CARMA noise parameters. Alternatively, if one opts for the traditional choice of a unique set of parameters for the CARMA background noise, we develop a theory providing analytical confidence levels, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram/scalogram. The estimated signal amplitude also gives access to ridge filtering or filtering in a frequency band. Our results generalize and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. WAVEPAL is written in python2.X and is available at https://github.com/guillaumelenoir/WAVEPAL
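The Lomb-Scargle periodogram that WAVEPAL builds on can be illustrated with SciPy's basic implementation (no trend handling, WOSA averaging, or CARMA significance testing, all of which are WAVEPAL's additions). The irregular sampling times and signal below are synthetic.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 100, 200))        # irregular sampling times
y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.3, t.size)
y -= y.mean()                                 # lombscargle expects zero mean

freqs = np.linspace(0.01, 0.5, 500)           # trial frequencies in Hz
power = lombscargle(t, y, 2 * np.pi * freqs)  # scipy takes angular frequencies

best = freqs[np.argmax(power)]                # recovers the 0.1 Hz component
```

Despite the uneven time grid, the periodogram peaks at the injected 0.1 Hz frequency, which is the basic capability that the CARMA null model and confidence levels in WAVEPAL then make statistically rigorous.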
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
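The steepest-ascent baseline the paper improves on can be sketched for a tiny pairwise Ising model by exact enumeration: the log-likelihood gradient in each parameter is simply the data moment minus the model moment. The rectification and Gibbs-sampling machinery of the paper is not reproduced; target statistics below come from a known model standing in for data.

```python
import itertools
import numpy as np

n = 3
states = np.array(list(itertools.product([-1, 1], repeat=n)), float)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

def moments(p):
    """First moments <s_i> and pair moments <s_i s_j> under distribution p."""
    h_m = states.T @ p
    j_m = np.array([(states[:, i] * states[:, j]) @ p for i, j in pairs])
    return h_m, j_m

def model_p(h, J):
    """Boltzmann distribution of the pairwise Ising model, by enumeration."""
    E = states @ h + sum(Jij * states[:, i] * states[:, j]
                         for Jij, (i, j) in zip(J, pairs))
    w = np.exp(E)
    return w / w.sum()

# Target statistics from a known model (playing the role of the dataset).
h_true = np.array([0.3, -0.2, 0.1])
J_true = np.array([0.5, -0.4, 0.2])
m_true = moments(model_p(h_true, J_true))

# Plain gradient ascent of the log-likelihood: step along (data - model).
h, J = np.zeros(n), np.zeros(len(pairs))
for _ in range(5000):
    m = moments(model_p(h, J))
    h += 0.1 * (m_true[0] - m[0])
    J += 0.1 * (m_true[1] - m[1])
```

The slowness the paper addresses comes from the curvature (the susceptibility matrix) of this very iteration; for three spins it converges quickly, but for a retinal population the rectified dynamics is needed.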
Determination of necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting the tracer test. To facilitate tracer-mass estimation, 33 mass-estima...
Nakagawa, S.
2011-04-01
Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.
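The principle behind resonant bar testing can be stated in two lines: for a free-free bar, the fundamental extension-mode resonance is f = c/(2L) with c = sqrt(E/rho), so a measured f gives the Young's modulus directly. The SHRB test generalizes this by adding long metal bars around a short sample; that correction is not included here, and the numbers below are illustrative (roughly acrylic-like), not from the paper.

```python
# Free-free bar, fundamental longitudinal (extension) mode:
#   f = c / (2 * L),  c = sqrt(E / rho)  =>  E = rho * (2 * L * f)**2
rho = 1180.0   # density, kg/m^3 (assumed, ~acrylic)
L = 0.30       # bar length, m (assumed)
f = 2690.0     # measured fundamental extension resonance, Hz (assumed)

c = 2 * L * f          # rod wave speed, m/s
E = rho * c ** 2       # Young's modulus, Pa (~3 GPa here, plausible for acrylic)
```

Attaching a short sample between long extension bars lowers this resonance substantially, which is exactly what lets the SHRB method probe sonic frequencies with a small core.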
Nakagawa, Seiji
2011-04-01
Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver—the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described split Hopkinson resonant bar test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples, and a natural rock sample.
Hovmøller, M.S.; Munk, L.; Østergård, Hanne
1995-01-01
Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap...... plants in three periods within 1 week at a distance of more than 1000 m from the nearest barley field. At four dates within the same 8-day period, other samples were collected by a mobile spore trap along four sampling routes of a total distance of 130 km around the stationary stand of exposure...... by the stationary technique will mainly reflect the source varieties present in the local area, whereas samples collected by the mobile spore trap will mainly reflect sources close to the sampling route. Therefore, sampling sites as well as sampling routes should be defined such that source varieties......
Kim, Chung Ho; O, Joo Hyun; Chung, Yong An; Yoo, Le Ryung; Sohn, Hyung Sun; Kim, Sung Hoon; Chung, Soo Kyo; Lee, Hyoung Koo [Catholic University of Korea, Seoul (Korea, Republic of)
2006-02-15
To determine the appropriate sampling frequency and times for the multiple-blood-sample dual exponential method with {sup 99m}Tc-DTPA for calculating glomerular filtration rate (GFR). Thirty-four patients were included in this study. Three mCi of {sup 99m}Tc-DTPA was intravenously injected and blood samples, 5 ml each, were drawn at 9 different times. Using the radioactivity of serum, measured by gamma counter, the GFR was calculated using the dual exponential method and corrected for body surface area. Using arbitrarily chosen pairs of serum radioactivity data points, 15 combinations of 2-sample GFR were calculated; 10 combinations of 3-sample GFR and 12 combinations of 4-sample GFR were also calculated. Using the 9-sample GFR as a reference value, the degree of agreement was analyzed with Kendall's {tau} correlation coefficients, mean difference and standard deviation. Although some of the 2-sample GFRs showed high correlation coefficients, over- or underestimation evolved as the renal function changed. The 10-120-240 min 3-sample GFR showed a high correlation coefficient ({tau}=0.93), minimal difference (Mean{+-}SD= -1.784{+-}3.972), and no over- or underestimation as the renal function changed. The 4-sample GFR showed no better accuracy than the 3-sample GFR. In the wide spectrum of renal function, the 10-120-240 min 3-sample GFR could be the best choice for estimating the patients' renal function.
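The dual exponential calculation described above can be sketched as follows (an illustrative reconstruction, not the authors' code: the sample times, dose and rate constants below are invented, and GFR is taken as dose divided by the area under the fitted biexponential curve):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    # two-compartment plasma disappearance curve
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

def gfr_dual_exponential(t_min, conc, dose):
    # clearance = dose / AUC, with AUC = a1/l1 + a2/l2 from the fit
    p0 = (conc[0], 0.1, conc[0] / 10.0, 0.01)
    (a1, l1, a2, l2), _ = curve_fit(biexp, t_min, conc, p0=p0, maxfev=10000)
    auc = a1 / l1 + a2 / l2
    return dose / auc

# synthetic 9-sample experiment (counts/ml vs minutes)
t = np.array([10, 20, 30, 60, 90, 120, 180, 240, 300], float)
conc = biexp(t, 50.0, 0.08, 10.0, 0.008)
gfr = gfr_dual_exponential(t, conc, dose=1.0e5)  # ml/min, before BSA correction
```

A body-surface-area correction would then scale this clearance to the standard 1.73 m², as in the study.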
唐治德; 徐阳阳; 赵茂; 彭一灵
2015-01-01
By applying lumped-parameter circuit theory and coupled-mode theory, the efficiency of a wireless power transfer system via magnetic resonant coupling was researched, and the concept of a transfer-efficiency-maximum frequency, the frequency at which transfer efficiency is maximum, was proposed. The influence of system parameters and load on the transfer-efficiency-maximum frequency and on the transfer efficiency was analyzed. A two-coil transfer system was set up, and the relationship between frequency and transfer efficiency, the relationship between load and the transfer-efficiency-maximum frequency and transfer efficiency, and the relationship between distance and the transfer-efficiency-maximum frequency and transfer efficiency were studied in experiments and simulations. Experiments and simulation prove that there is a transfer-efficiency-maximum frequency in a wireless power transfer system; this frequency is approximately proportional to the load and inversely proportional to the mutual inductance; it increases with increasing distance; and when the system works at the transfer-efficiency-maximum frequency and the load resistance is much greater than the coil resistance, the transfer efficiency of the wireless power transfer system is maximum.
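The lumped-parameter model behind these statements can be sketched for a two-coil, series-resonant link (all component values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# two magnetically coupled series-RLC meshes; sweep frequency and find
# the transfer-efficiency-maximum frequency
L1 = L2 = 20e-6        # coil inductances [H]
C1 = C2 = 1.27e-9      # tuning capacitors [F] -> resonance near 1 MHz
R1 = R2 = 0.5          # coil resistances [ohm]
RL = 10.0              # load resistance [ohm]
M = 2e-6               # mutual inductance [H]

f = np.linspace(0.5e6, 1.5e6, 2001)
w = 2 * np.pi * f
Z1 = R1 + 1j * w * L1 + 1 / (1j * w * C1)
Z2 = R2 + RL + 1j * w * L2 + 1 / (1j * w * C2)

V = 1.0                                   # source voltage [V]
I1 = V / (Z1 + (w * M) ** 2 / Z2)         # primary mesh current
I2 = -1j * w * M * I1 / Z2                # induced secondary current
eta = (np.abs(I2) ** 2 * RL) / np.real(V * np.conj(I1))  # P_load / P_in

f_eta_max = f[np.argmax(eta)]             # transfer-efficiency-maximum frequency
```

Re-running the sweep with a larger load or a smaller mutual inductance (greater coil separation) shifts f_eta_max in the directions the abstract describes.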
Li, M.; Jiang, Y. S.
2014-11-01
Micro-Doppler effect is induced by the micro-motion dynamics of the radar target itself or any structure on the target. In this paper, a simplified cone-shaped model of a ballistic missile warhead with micro-nutation is established, and the theoretical formula of micro-nutation is derived. It is confirmed that the theoretical results are identical to simulation results obtained using the short-time Fourier transform. We then propose a new method for nutation period extraction via maximum-energy signature fitting based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance-approximation approach valid for small wobble angles, combined with maximum likelihood estimation. Simulation studies show that these two feature extraction methods are both valid even at low signal-to-noise ratio.
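The short-time Fourier transform stage can be illustrated on a toy echo whose phase is sinusoidally modulated by the micro-motion (the signal model and parameters are assumptions for illustration, not the paper's warhead model):

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                                  # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
f0, fm, beta = 100.0, 2.0, 20.0              # carrier, nutation rate, mod. depth
sig = np.exp(1j * (2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t)))

# the time-frequency ridge oscillates around f0 with period 1/fm,
# which is what a nutation-period extractor would fit
f, tau, Z = stft(sig, fs=fs, nperseg=128, return_onesided=False)
ridge = f[np.argmax(np.abs(Z), axis=0)]      # instantaneous Doppler estimate
```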
Groszko, Marian
2003-01-01
Electric and magnetic fields of 50 Hz from electric power devices affect not only workers, but also the general population, as these devices are also located in populated areas, hence the duality of regulations on maximum admissible intensities. This paper presents these regulations and discusses in detail the changes of 2001. Based on the Polish regulations, hygienic evaluation of electric power devices has been attempted. The Polish regulations on the 50 Hz electromagnetic fields were compared with relevant international regulations of CENELEC and the European Union recommendations. Our maximum admissible intensities have been found to conform with the international standards.
Aerts, Hugo J W L [Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht (Netherlands); Jaspers, K; Backes, Walter H, E-mail: w.backes@mumc.nl [Department of Radiology, Cardiovascular Research Institute Maastricht (CARIM), Maastricht University Medical Center (MUMC), Maastricht (Netherlands)
2011-09-07
Dynamic contrast-enhanced magnetic resonance imaging is increasingly applied for tumour diagnosis and early evaluation of therapeutic responses over time. However, the reliability of pharmacokinetic parameters derived from DCE-MRI is highly dependent on the experimental settings. In this study, the effect of sampling frequency (f{sub s}) and duration on the precision of pharmacokinetic parameters was evaluated based on system identification theory and computer simulations. Both theoretical analysis and simulations showed that a higher value of the pharmacokinetic parameter K{sup trans} required an increasing sampling frequency. For instance, for similar results, a relatively low f{sub s} of 0.2 Hz was sufficient for a low K{sup trans} of 0.1 min{sup -1}, compared to a high f{sub s} of 3 Hz for a high K{sup trans} of 0.5 min{sup -1}. For the parameter v{sub e}, a decreasing value required a higher sampling frequency. A sampling frequency below 0.1 Hz systematically resulted in imprecise estimates for all parameters. For the K{sup trans} and v{sub e} parameters, the sampling duration should be above 2 min, but durations of more than 7 min do not further improve parameter estimates.
Aerts, Hugo J. W. L.; Jaspers, K.; Backes, Walter H.
2011-09-01
Dynamic contrast-enhanced magnetic resonance imaging is increasingly applied for tumour diagnosis and early evaluation of therapeutic responses over time. However, the reliability of pharmacokinetic parameters derived from DCE-MRI is highly dependent on the experimental settings. In this study, the effect of sampling frequency (fs) and duration on the precision of pharmacokinetic parameters was evaluated based on system identification theory and computer simulations. Both theoretical analysis and simulations showed that a higher value of the pharmacokinetic parameter Ktrans required an increasing sampling frequency. For instance, for similar results, a relatively low fs of 0.2 Hz was sufficient for a low Ktrans of 0.1 min-1, compared to a high fs of 3 Hz for a high Ktrans of 0.5 min-1. For the parameter ve, a decreasing value required a higher sampling frequency. A sampling frequency below 0.1 Hz systematically resulted in imprecise estimates for all parameters. For the Ktrans and ve parameters, the sampling duration should be above 2 min, but durations of more than 7 min do not further improve parameter estimates.
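The interaction between Ktrans and sampling frequency can be sketched with the standard Tofts model (a noise-free toy with an assumed gamma-variate arterial input function; the parameters and fitting details are not those of the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, cp):
    # Ct(t) = Ktrans * Cp(t) convolved with exp(-(Ktrans/ve) t)
    dt = t[1] - t[0]
    irf = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

t = np.arange(0, 300, 0.25)                    # dense reference grid [s]
cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)   # assumed gamma-variate AIF
ct = tofts(t, 0.5 / 60.0, 0.2, cp)             # Ktrans = 0.5 /min, ve = 0.2

def fit_ktrans(fs_hz):
    step = int(round(1.0 / (fs_hz * 0.25)))    # subsample the dense grid
    ts, cs, cps = t[::step], ct[::step], cp[::step]
    popt, _ = curve_fit(lambda _, k, v: tofts(ts, k, v, cps), ts, cs,
                        p0=(0.01, 0.3))
    return popt[0] * 60.0                      # back to min^-1

k_fast = fit_ktrans(1.0)   # 1 Hz sampling
k_slow = fit_ktrans(0.1)   # 0.1 Hz sampling: coarser grid degrades the estimate
```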
Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-24
Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated, and a TGA-MS analysis temperature that reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.
Wiwanitkit Viroj; Waenlor Weerachit
2004-01-01
Toxocara species are the most common roundworms of Canidae and Felidae. Human toxocariasis develops through ingestion of embryonated eggs in contaminated soil. There is no previous report of Toxocara contamination in soil samples from the public areas in Bangkok. For this reason, our study was carried out to examine the frequency of Toxocara eggs in public yards in Bangkok, Thailand. A total of 175 sand and clay samples were collected and examined for parasite eggs. According to this study, T...
The use of Radio Frequency Identification to track samples in bio-repositories
GRIMSON, JANE BARCLAY
2008-01-01
Bio-repositories are resources for storing biological samples and data to support the discovery of biomarkers, therapeutic targets, and the underlying causes of diseases. The success of this knowledge discovery process depends critically on the quality of samples and their associated data. Biological samples are expensive to collect and store. The samples and their associated data pass through a number of processes, generally in multiple locations, from data collection from the p...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Technology (Indonesia); BMKG (Indonesia)]
2012-06-20
A new approach to determine magnitude using the displacement amplitude (A), the epicentral distance ({Delta}) and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the moment magnitude of the P wave using teleseismic seismogram data in the 10-60 second range. In this research, a new technique has been developed to determine the displacement amplitude and the duration of high-frequency radiation from near earthquakes. The duration of the high-frequency radiation is determined using half of the period of the P waves on the displacement seismograms. This is because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate out or determine the end of the P wave. Applying the method to 68 earthquakes recorded by station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log ({Delta}) + 0.69 log (t) + 6.46, with A (m), {Delta} (km) and t (seconds). The moment magnitude from this new approach is quite reliable and faster to compute, making it useful for early warning.
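The regression above can be applied directly; the only inputs are the displacement amplitude, epicentral distance and high-frequency duration (the example values below are arbitrary, chosen only to exercise the formula):

```python
import math

def moment_magnitude(a_m, delta_km, t_s):
    # Mw = 0.78 log10(A) + 0.83 log10(Delta) + 0.69 log10(t) + 6.46,
    # with A in metres, Delta in kilometres, t in seconds
    return (0.78 * math.log10(a_m)
            + 0.83 * math.log10(delta_km)
            + 0.69 * math.log10(t_s)
            + 6.46)

mw = moment_magnitude(1e-4, 100.0, 10.0)  # -> about 5.69
```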
Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/Cl2 Plasma
Janardan Upadhyay, Larry Phillips, Anne-Marie Valente
2011-09-01
Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk Niobium (Nb) removal rate and the surface roughness after the plasma etchings are equal to or better than wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve the purpose of diagnostic ports for the measurement of the plasma parameters and for the holding of the Nb sample to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside of this cavity. The niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.
Gupta, N; Mohr, J J; Benson, B A; Bocquet, S; Carlstrom, J E; Capasso, R; Chiu, I; Crawford, T M; de Haan, T; Dietrich, J P; Gangkofner, C; Holzapfel, W L; McDonald, M; Rapetti, D; Reichardt, C L
2016-01-01
We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the Meta-Catalog of X-ray detected Clusters of galaxies (MCXC; $\\langle z \\rangle = 0.14$) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg$^2$ SPT-SZ survey maps at the locations of SUMSS sources, producing a multi-frequency catalog of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev-Zel'dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogs. We find that the high frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. ...
Batey, G.; Chappell, S.; Cuthbert, M. N.; Erfani, M.; Matthews, A. J.; Teleberg, G.
2014-03-01
Researchers attempting to study quantum effects in the solid-state have a need to characterise samples at very low-temperatures, and frequently in high magnetic fields. Often coupled with this extreme environment is the requirement for high-frequency signalling to the sample for electrical control or measurements. Cryogen-free dilution refrigerators allow the necessary wiring to be installed to the sample more easily than their wet counterparts, but the limited cooling power of the closed cycle coolers used in these systems means that the experimental turn-around time can be longer. Here we shall describe a sample loading arrangement that can be coupled with a cryogen-free refrigerator and that allows samples to be loaded from room temperature in a matter of minutes. The loaded sample is then cooled to temperatures ∼10 mK in ∼7 h. This apparatus is compatible with systems incorporating superconducting magnets and allows multiple high-frequency lines to be connected to the cold sample.
The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
Morohashi, Isao; Kirigaya, Mayu; Kaneko, Yuta; Katayama, Ikufumi; Sakamoto, Takahide; Sekine, Norihiko; Kasamatsu, Akifumi; Hosako, Iwao
2016-02-01
In the recent progress in terahertz (THz) devices, various kinds of source devices, such as resonant tunneling diodes, quantum cascade lasers and so forth, have been developed. Frequency measurement of THz radiation that can operate at high speed and at room temperature is important for the development of high-performance THz source devices. Recently, frequency measurement using optical combs has been demonstrated by several groups. In these techniques, mode-locked lasers (MLLs) are used as the optical comb source, so that phase-locking techniques are required to stabilize the repetition frequency of the MLLs. On the other hand, a modulator-based optical comb generator has high accuracy and stability in the comb spacing, comparable to that of the microwave signal driving the modulator, and is thus suitable for frequency measurement of THz waves. In this paper, we demonstrate frequency measurement of THz waves using a Mach-Zehnder-modulator-based flat comb generator (MZ-FCG). The frequency measurement was carried out by an electro-optic (EO) sampling method, where an optical two-tone signal extracted from the optical comb generated by the MZ-FCG was used as the probe light. A 100 GHz signal generated by a W-band frequency multiplier and the probe beam collinearly traveled through an EO crystal, and beat signals between them were measured by a combination of a balanced photodetector and a spectrum analyzer. As a result, frequency measurement of the 100 GHz wave was successfully demonstrated, with a beat-signal linewidth of less than 1 Hz.
von Freyberg, Jana; Studer, Bjørn; Kirchner, James
2016-04-01
Studying rapidly changing hydrochemical signals in catchments can help to improve our mechanistic understanding of their water flow pathways and travel times. For these purposes, stable water isotopes (18O and 2H) are commonly used as natural tracers. However, high-frequency isotopic analyses of liquid water samples are challenging. One must capture highly dynamic behavior with high precision and accuracy, but the lab workload (and sample storage artifacts) involved in collecting and analyzing thousands of bottled samples should also be avoided. Therefore, we have tested Picarro, Inc.'s newly developed Continuous Water Sampler Module (CoWS), which is coupled to their L2130-i Cavity Ring-Down Spectrometer to enable real-time on-line measurements of 18O and 2H in liquid water samples. We coupled this isotope analysis system to a dual-channel ion chromatograph (Metrohm AG, Herisau, Switzerland) for analysis of major cations and anions, as well as a UV-Vis spectroscopy system (s::can Messtechnik GmbH, Vienna, Austria) and electrochemical probes for characterization of basic water quality parameters. The system was run unattended for up to a week at a time in the laboratory and at a small catchment. At the field site, stream-water and precipitation samples were analyzed, alternating at sub-hourly intervals. We observed that measured isotope ratios were highly sensitive to the liquid water flow rate in the CoWS, and thus to the hydraulic head difference between the CoWS and the samples from which water was drawn. We used a programmable high-precision dosing pump to control the injection flow rate and eliminate this flow-rate artifact. Our experiments showed that the precision of the CoWS-L2130-i-system for 2-minute average values was typically better than 0.06‰ for δ18O and 0.16‰ for δ2H. Carryover effects were 1% or less between isotopically contrasting water samples for 30-minute sampling intervals. Instrument drift could be minimized through periodic analysis of
Haberstick, Brett C.; Smolen, Andrew; Williams, Redford B.; Bishop, George D.; Foshee, Vangie A.; Thornberry, Terence P; Conger, Rand; Siegler, Ilene C.; Zhang, Xiaodong; Boardman, Jason D; Frajzyngier, Zygmunt; Stallings, Michael C.; Donnellan, M. Brent; Halpern, Carolyn T.; Harris, Kathleen Mullan
2015-01-01
Genetic differences between populations are potentially an important contributor to health disparities around the globe. As differences in gene frequencies influence study design, it is important to have a thorough understanding of the natural variation of the genetic variant(s) of interest. Along these lines, we characterized the variation of the 5HTTLPR and rs25531 polymorphisms in six samples from North America, Southeast Asia, and Africa (Cameroon) that differ in their racial and ethnic composition. Allele and genotype frequencies were determined for 24,066 participants. Results indicated higher frequencies of the rs25531 G-allele among Black and African populations as compared with White, Hispanic and Asian populations. Further, we observed a greater number of ‘extra-long’ (‘XL’) 5HTTLPR alleles than have previously been reported. Extra-long alleles occurred almost entirely among Asian, Black and Non-White Hispanic populations as compared with White and Native American populations, where they were completely absent. Lastly, when considered jointly, we observed between-sample differences in the genotype frequencies within racial and ethnic populations. Taken together, these data underscore the importance of characterizing the L-G allele to avoid misclassification of participants by genotype and for further studies of the impact XL alleles may have on the transcriptional efficiency of SLC6A4. PMID:25564228
Vandermeulen, Ryan A.; Mannino, Antonio; Neeley, Aimee; Werdell, Jeremy; Arnone, Robert
2017-01-01
Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse remote sensing reflectance and phytoplankton absorbance spectra to describe how data points are correlated with distance across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid river filaments, coastal waters, shelf waters, a dense Microcystis bloom, and oligotrophic waters), as well as individual and mixed phytoplankton functional types (PFTs; diatoms, chlorophytes, cyanobacteria, coccolithophores). Results show that a continuous spectrum of 5 to 7 nm spectral resolution is optimal to resolve the variability across mixed reflectance and absorbance spectra. In addition, the impact of uncertainty on subsequent derivative analysis is assessed, showing that a limit of 3 Gaussian noise (SNR 66) is tolerated without smoothing the spectrum, and 13 (SNR 15) noise is tolerated with smoothing.
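The core geostatistical step (an empirical variogram of a spectrum's first derivative) can be sketched on a synthetic reflectance curve (illustrative only, not the authors' implementation):

```python
import numpy as np

wl = np.arange(400, 701, 1.0)               # wavelengths [nm]
rrs = np.exp(-((wl - 550.0) / 40.0) ** 2)   # synthetic reflectance feature
d1 = np.gradient(rrs, wl)                   # first derivative of the spectrum

def empirical_variogram(y, max_lag):
    # gamma(h) = 0.5 * mean squared difference between points at lag h
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((y[h:] - y[:-h]) ** 2) for h in lags])
    return lags, gamma

lags, gamma = empirical_variogram(d1, 30)   # rises with lag for a smooth spectrum
```

The lag at which the variogram stops gaining information reflects the spectral correlation scale, which is the quantity the resolution analysis above exploits.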
Haberstick, Brett C.; Smolen, Andrew; Williams, Redford B.; Bishop, George D.; Foshee, Vangie A.; Terence P. Thornberry; Conger, Rand; Siegler, Ilene C; Zhang, Xiaodong; Boardman, Jason D; Frajzyngier, Zygmunt; Stallings, Michael C.; Donnellan, M. Brent; Halpern, Carolyn T.; Harris, Kathleen Mullan
2015-01-01
Genetic differences between populations are potentially an important contributor to health disparities around the globe. As differences in gene frequencies influence study design, it is important to have a thorough understanding of the natural variation of the genetic variant(s) of interest. Along these lines, we characterized the variation of the 5HTTLPR and rs25531 polymorphisms in six samples from North America, Southeast Asia, and Africa (Cameroon) that differ in their racial and ethnic...
Modelling of the dielectric properties of trabecular bone samples at microwave frequency
Irastorza, Ramiro M; Carlevaro, Carlos M; Vericat, Fernando
2013-01-01
In this paper the dielectric properties of human trabecular bone are evaluated under physiological conditions in the microwave range. Assuming a two-component medium, simulation and experimental data are presented and discussed. A special experimental setup is developed in order to deal with inhomogeneous samples. Simulation data are obtained using finite difference time domain modeling of a realistic sample. The bone mineral density of the samples is also measured. The simulation and experimental results of the present study suggest that there is a negative relation between bone volume fraction (BV/TV) and permittivity (conductivity): the higher the BV/TV, the lower the permittivity (conductivity). This is in agreement with recently published in vivo data. Keywords: bone dielectric properties, microwave tomography, finite difference time domain.
Fernando-Juan García-Diego
2016-08-01
Monitoring temperature and relative humidity of the environment to which artefacts are exposed is fundamental in preventive conservation studies. The common approach in setting measuring instruments is the choice of a high sampling rate to detect short fluctuations and increase the accuracy of statistical analysis. However, in recent cultural heritage standards the evaluation of variability is based on moving average and short fluctuations and therefore massive acquisition of data in slowly-changing indoor environments could end up being redundant. In this research, the sampling frequency to set a datalogger in a museum room and inside a microclimate frame is investigated by comparing the outcomes obtained from datasheets associated with different sampling conditions. Thermo-hygrometric data collected in the Sorolla room of the Pio V Museum of Valencia (Spain) were used and the widely consulted recommendations issued in UNI 10829:1999 and EN 15757:2010 standards and in the American Society of Heating, Air-Conditioning and Refrigerating Engineers (ASHRAE) guidelines were applied. Hourly sampling proved effective in obtaining highly reliable results. Furthermore, it was found that in some instances daily means of data sampled every hour can lead to the same conclusions as those of high frequency. This allows us to improve data logging design and manageability of the resulting datasheets.
García-Diego, Fernando-Juan; Verticchio, Elena; Beltrán, Pedro; Siani, Anna Maria
2016-08-15
Monitoring temperature and relative humidity of the environment to which artefacts are exposed is fundamental in preventive conservation studies. The common approach in setting measuring instruments is the choice of a high sampling rate to detect short fluctuations and increase the accuracy of statistical analysis. However, in recent cultural heritage standards the evaluation of variability is based on moving average and short fluctuations and therefore massive acquisition of data in slowly-changing indoor environments could end up being redundant. In this research, the sampling frequency to set a datalogger in a museum room and inside a microclimate frame is investigated by comparing the outcomes obtained from datasheets associated with different sampling conditions. Thermo-hygrometric data collected in the Sorolla room of the Pio V Museum of Valencia (Spain) were used and the widely consulted recommendations issued in UNI 10829:1999 and EN 15757:2010 standards and in the American Society of Heating, Air-Conditioning and Refrigerating Engineers (ASHRAE) guidelines were applied. Hourly sampling proved effective in obtaining highly reliable results. Furthermore, it was found that in some instances daily means of data sampled every hour can lead to the same conclusions as those of high frequency. This allows us to improve data logging design and manageability of the resulting datasheets.
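The comparison of sampling schemes can be sketched on a synthetic indoor record (assumed daily cycle and noise level, not the Sorolla-room dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 30 * 24 * 60, 1.0)                   # one month at 1-min steps
temp = (20.0
        + 1.5 * np.sin(2 * np.pi * t / (24 * 60))     # daily thermal cycle
        + 0.1 * rng.standard_normal(t.size))          # sensor noise

hourly = temp[::60]                                    # 1-h sampling
daily_means = hourly.reshape(30, 24).mean(axis=1)      # daily means of hourly data

# daily means suppress the short fluctuations that moving-average-based
# standards (e.g. EN 15757) evaluate, while preserving the monthly level
level_error = abs(daily_means.mean() - temp.mean())
```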
García-Diego, Fernando-Juan; Verticchio, Elena; Beltrán, Pedro; Siani, Anna Maria
2016-01-01
Monitoring temperature and relative humidity of the environment to which artefacts are exposed is fundamental in preventive conservation studies. The common approach in setting measuring instruments is the choice of a high sampling rate to detect short fluctuations and increase the accuracy of statistical analysis. However, in recent cultural heritage standards the evaluation of variability is based on moving average and short fluctuations and therefore massive acquisition of data in slowly-changing indoor environments could end up being redundant. In this research, the sampling frequency to set a datalogger in a museum room and inside a microclimate frame is investigated by comparing the outcomes obtained from datasheets associated with different sampling conditions. Thermo-hygrometric data collected in the Sorolla room of the Pio V Museum of Valencia (Spain) were used and the widely consulted recommendations issued in UNI 10829:1999 and EN 15757:2010 standards and in the American Society of Heating, Air-Conditioning and Refrigerating Engineers (ASHRAE) guidelines were applied. Hourly sampling proved effective in obtaining highly reliable results. Furthermore, it was found that in some instances daily means of data sampled every hour can lead to the same conclusions as those of high frequency. This allows us to improve data logging design and manageability of the resulting datasheets. PMID:27537886
Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme
Pierzchlewski, Jacek; Arildsen, Thomas
2015-01-01
by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e. g. due to ADC saturation. An experiment which tests the proposed method in practice...
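A toy version of this setting (with an invented signal, dictionary and sparsity level, not the paper's sampling scheme) reconstructs a frequency-sparse signal from uniform samples with random losses via orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
F = np.fft.ifft(np.eye(N)) * np.sqrt(N)          # unitary inverse-DFT dictionary

x_freq = np.zeros(N, complex)
x_freq[[7, 40, 91]] = [3.0, 2.0, 1.5]            # 3-sparse spectrum
x = F @ x_freq                                    # uniformly sampled signal

keep = np.sort(rng.choice(N, size=180, replace=False))  # 76 samples "lost"
A, y = F[keep, :], x[keep]

def omp(A, y, k):
    # orthogonal matching pursuit: greedily pick k atoms, least-squares refit
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    out = np.zeros(A.shape[1], complex)
    out[support] = coef
    return out

x_hat = omp(A, y, 3)
rel_err = np.linalg.norm(x_hat - x_freq) / np.linalg.norm(x_freq)
```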
Bias in estimating animal travel distance : the effect of sampling frequency
Rowcliffe, J. Marcus; Carbone, Chris; Kays, Roland; Kranstauber, Bart; Jansen, Patrick A.
2012-01-01
1. The distance travelled by animals is an important ecological variable that links behaviour, energetics and demography. It is usually measured by summing straight-line distances between intermittently sampled locations along continuous animal movement paths. The extent to which this approach under
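The underestimation is easy to reproduce with a toy correlated random walk (an illustrative simulation, not the authors' data): summing straight lines between subsampled fixes shortens the measured path.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
headings = np.cumsum(rng.normal(0.0, 0.3, n))          # correlated turning
steps = np.column_stack([np.cos(headings), np.sin(headings)])
path = np.cumsum(steps, axis=0)                        # unit steps: true length ~ n

def travel_distance(path, every):
    # sum straight-line distances between every-th sampled locations
    pts = path[::every]
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

d_fine = travel_distance(path, 1)      # recovers essentially the full path length
d_coarse = travel_distance(path, 100)  # coarse sampling cuts corners: shorter
```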
Bias in estimating animal travel distance: the effect of sampling frequency
Rowcliffe, J.M.; Carbone, C.; Kays, R.; Kranstauber, B.; Jansen, P.A.
2012-01-01
1. The distance travelled by animals is an important ecological variable that links behaviour, energetics and demography. It is usually measured by summing straight-line distances between intermittently sampled locations along continuous animal movement paths. The extent to which this approach under
Frequency of Haemophilus spp. in urinary and genital tract samples
Tatjana Marijan
2010-02-01
Aim To determine the prevalence and antibiotic susceptibility of Haemophilus influenzae and H. parainfluenzae isolated from the urinary and genital tracts. Methods Identification of Haemophilus spp. strains was carried out using the API NH identification system, and antibiotic susceptibility testing was performed by the Kirby-Bauer disk diffusion method. Results A total of 50 (0,03%) H. influenzae and 14 (0,01%) H. parainfluenzae (out of 180,415 samples) were isolated from the genitourinary tract. From urine samples of girls under 15 years of age these bacteria were isolated in 13 (0,88%) and two (0,13%) cases, respectively, and in only one case (0,11%) of UTI in boys (H. influenzae). In persons of fertile age, only H. influenzae was found, in urine samples of five women (0,04%) and three men (0,22%). As a cause of vulvovaginitis, H. influenzae was isolated in four (5,63%) and H. parainfluenzae in two (2,82%) girls. In persons of fertile age, H. influenzae was isolated from 10 (0,49%) smears of the cervix and from nine (1,74%) male samples; H. parainfluenzae was isolated from seven (1,36%) male samples (p<0.01). Susceptibility testing of H. influenzae and H. parainfluenzae revealed that both pathogens were significantly resistant only to cotrimoxazole (26.0% and 42.9%, respectively). Conclusion In the etiology of genitourinary infections of girls during childhood, genital infections of women of fertile age (especially pregnant women), and men with epididymitis and/or orchitis, it is important to consider this rare bacterium, which is demanding in terms of cultivation.
Jalón-Rojas, Isabel; Schmidt, Sabine; Sottolichio, Aldo; Bertier, Christine
2016-04-01
A unique dataset of turbidity from 7 years of continuous monitoring at six stations, distributed evenly along a 62-km long transect, is presented to discuss, for the first time, the present-day dynamics of the turbidity maximum zone (TMZ) in the Loire Estuary. This system is considered one of the largest macrotidal, hyper-turbid estuaries of the European coast, mainly as the result of intense engineering works in the last two centuries. Besides accurate TMZ tracking, from tidal to multi-annual time scales, the high temporal and spatial resolution of measurements allows us to address TMZ aspects scarcely reported in the literature on estuarine sedimentary dynamics. In the Loire Estuary, the TMZ moves upstream during periods of low discharge and its upstream boundary may reach up to 62 km from the mouth. The TMZ displacement is faster during its downstream flushing by river floods than during its upstream migration by tidal pumping (respectively 1.6 km day⁻¹ and 0.9 km day⁻¹ during 2011). However, the expulsion of the TMZ from the upper reaches requires higher discharge levels than its installation (respective discharge thresholds of 497-1034 m³ s⁻¹ and 300-360 m³ s⁻¹). This is due to the presence of mobile mud remaining after the TMZ presence, as confirmed by clockwise turbidity-discharge hysteresis patterns. While the installation threshold barely varies over years, the expulsion threshold is higher during years with a more concentrated and persistent TMZ. The interannual variability of the TMZ concentration and persistence is explained by the water volume transported during the previous high discharge period and the duration of the low discharge period, respectively, as recently shown for the Gironde Estuary, leading to a better understanding of TMZ features in macrotidal estuaries. The summer-averaged river flow is introduced as a hydrological indicator of the upstream boundary of the TMZ. In the context of global change, these three discharge-based indicators of TMZ
High frequencies of antibiotic resistance genes in infants’ meconium and early fecal samples
Gosalbes, M. J.; Vallès, Y.; Jiménez-Hernández, N.
2016-01-01
The gastrointestinal tract (GIT) microbiota has been identified as an important reservoir of antibiotic resistance genes (ARGs) that can be horizontally transferred to pathogenic species. Maternal GIT microbes can be transmitted to the offspring, and recent work indicates that such transfer starts...... before birth. We have used culture-independent genetic screenings to explore whether ARGs are already present in the meconium accumulated in the GIT during fetal life and in feces of 1-week-old infants. We have analyzed resistance to β-lactam antibiotics (BLr) and tetracycline (Tcr), screening...... fecal samples and colostrum. Our results reveal a high prevalence of BLr and Tcr in both meconium and early fecal samples, implying that the GIT resistance reservoir starts to accumulate even before birth. We show that ARGs present in the mother may reach the meconium and colostrum and establish...
Takahashi, Tsuyoshi; Kawano, Yoichi; Makiyama, Kozo; Shiba, Shoichi; Sato, Masaru; Nakasha, Yasuhiro; Hara, Naoki
2017-02-01
A maximum frequency of oscillation (f_max) of 1.3 THz was achieved using an extended drain-side recess structure in InAlAs/InGaAs high-electron-mobility transistors (HEMTs), even though the gate length was relatively long at 75 nm. The high f_max resulted from reducing the drain output conductance (g_d). The use of an asymmetric gate recess structure and double-side doping above and below the channel region were effective in reducing g_d. Further improvements in transconductance (g_m) and g_d were achieved by reducing the distance between the source and gate electrodes.
Wiwanitkit, Viroj; Waenlor, Weerachit
2004-01-01
Toxocara species are the most common roundworms of Canidae and Felidae. Human toxocariasis develops after ingestion of embryonated eggs in contaminated soil. There is no previous report of Toxocara contamination in soil samples from public areas in Bangkok. For this reason, our study was carried out to examine the frequency of Toxocara eggs in public yards in Bangkok, Thailand. A total of 175 sand and clay samples were collected and examined for parasite eggs. Toxocara eggs were detected in 10 (5.71%) of the 175 soil samples. The high contamination rate found in this study underlines the importance of controlling this potential zoonosis; in particular, control of the abandonment of dogs and cats is still necessary.
Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus
1992-12-01
The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. Estimates of the intradiurnal variance of accumulated
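The subsampling comparison described in the abstract above can be sketched with a synthetic series (a toy stand-in, not ECMWF output; the diurnal amplitude and noise level are assumptions):

```python
import math
import random

random.seed(1)

# Synthetic "hourly" series: a diurnal cycle plus noise, 30 days long.
hourly = [10 + 5 * math.sin(2 * math.pi * t / 24) + random.gauss(0, 1)
          for t in range(24 * 30)]

def stats(series):
    """Mean and (population) variance of a series."""
    m = sum(series) / len(series)
    return m, sum((x - m) ** 2 for x in series) / len(series)

best_mean, best_var = stats(hourly)     # "best" estimate from hourly sampling
for step in (2, 6, 12, 24):             # coarser sampling intervals, in hours
    m, v = stats(hourly[::step])
    print(step, abs(m - best_mean) / abs(best_mean),
          abs(v - best_var) / best_var)
```

Sampling every 24 h always hits the same phase of the diurnal cycle, so the variance estimate collapses toward the noise variance alone, illustrating the aliasing-related errors the abstract discusses.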
Skeffington, R. A.; Halliday, S. J.; Wade, A. J.; Bowes, M. J.; Loewenthal, M.
2015-05-01
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. Confining sampling to the working week compared to all 7 days made little difference, but a modest
Morteza Sadinejad
2015-01-01
Background: This study aims to explore the frequency of aggressive behaviors in a nationally representative sample of Iranian children and adolescents. Methods: This nationwide study was performed on a multi-stage sample of 6-18-year-old students living in 30 provinces of Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying and being bullied, in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. Results: In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); 49.2% were girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). Physical fighting was more prevalent among boys than girls (48% vs. 31%, P < 0.001). The two other behaviors, being bullied and bullying classmates, also had a higher frequency among boys than girls (29% vs. 25%, P < 0.001 for being bullied; 20% vs. 14%, P < 0.001 for bullying others). Physical fighting was slightly more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was slightly more common among urban students (27% vs. 26%, P = 0.69). Conclusions: Although the frequency of aggressive behaviors in this study was lower than in many other populations, these findings emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression-prevention programming are discussed.
Waade, Ragnhild Birkeland; Molden, Espen; Martinsen, Mette Irene; Hermann, Monica; Ranhoff, Anette Hylen
2017-07-01
To determine the use of psychotropic drugs and weak opioids in hip fracture patients by analysing plasma samples at admission, and to compare detected drug frequencies with prescription registry data and drug records. Plasma samples from 250 hip fracture patients aged ≥65 years, collected at hospital admission, were analysed by ultra-performance liquid chromatography-tandem mass spectrometry for detection of psychotropic drugs and weak opioid analgesics (alcohol was also determined). Odds ratios for drugs detected in plasma of hip fracture patients vs. prescription frequencies of the same drugs in an age-, time- and region-matched reference population were calculated. Moreover, recorded and measured drugs were compared. Psychotropic drugs and/or weak opioid analgesics were detected in 158 (63%) of the patients (median age 84 years; 76% female), while alcohol was found in 19 patients (7.6%). The occurrence of diazepam (odds ratio 1.6; 95% confidence interval 1.1-2.4), nitrazepam (2.3; 1.3-4.1), selective serotonin reuptake inhibitors (1.9; 1.3-2.9) and mirtazapine (2.3; 1.2-4.3) was significantly higher in plasma samples of hip fracture patients than in prescription data from the reference population. Poor consistency between recorded and measured drugs was disclosed for z-hypnotics and benzodiazepines; e.g. diazepam was detected in 29 (11.6%) but recorded in only six (2.4%) of the patients. Plasma analysis shows that use of antidepressants and benzodiazepines in hip fracture patients is significantly more frequent than the respective prescription frequencies in the general elderly population. Moreover, consistency between recorded and actual use of psychotropic fall-risk drugs is poor at hospital admission of hip fracture patients. © 2017 The British Pharmacological Society.
Moraetis, Daniel, E-mail: moraetis@mred.tuc.gr [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece); Stamati, Fotini; Kotronakis, Manolis; Fragia, Tasoula; Paranychnianakis, Nikolaos; Nikolaidis, Nikolaos P. [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece)
2011-06-15
Highlights: > Identification of hydrological and geochemical pathways within a complex watershed. > Water increased N-NO3 concentration and E.C. values during flash flood events. > Soil degradation and impact on water infiltration within the Koiliaris watershed. > Analysis of Rare Earth Elements in water bodies for identification of karstic water. - Abstract: The Koiliaris River watershed is a Critical Zone Observatory that represents severely degraded soils due to intensive agricultural activities and biophysical factors. It has typical Mediterranean soils under the imminent threat of desertification, which is expected to intensify due to projected climate change. High-frequency hydro-chemical monitoring with targeted sampling for Rare Earth Element (REE) analysis of different water bodies and geochemical characterization of soils were used for the identification of hydrologic and geochemical pathways. The high-frequency monitoring of water chemical data highlighted the chemical alterations of water in the Koiliaris River during flash flood events. Soil physical and chemical characterization surveys were used to identify erodibility patterns within the watershed and the influence of soils on surface and ground water chemistry. The methodology presented can be used to identify the impacts of degraded soils on surface and ground water quality as well as in the design of methods to minimize the impacts of land use practices.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Alejandro Said
2016-08-01
In this work, we present a simple algorithm to calculate automatically the Fourier spectrum of a Sinusoidal Pulse Width Modulation (SPWM) signal. Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating-current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm is capable of estimating half of the period of the fundamental frequency, thus allowing one to estimate the number of samples necessary to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. The simulation shows that the algorithm is not affected by either the magnitude of the switching pulses or the acquisition noise.
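The period-detection step the abstract above relies on can be sketched as follows (the sampling rate and fundamental frequency are illustrative assumptions, not values from the paper): the first minimum of a periodic signal's autocorrelation falls at half the fundamental period, which then fixes the number of samples needed for an accurate Fourier spectrum.

```python
import math

# Illustrative signal: a pure 50 Hz fundamental sampled at 1 kHz,
# i.e. a 20-sample period.
fs = 1000.0                       # assumed sampling rate, Hz
f0 = 50.0                         # assumed fundamental frequency, Hz
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(400)]

def autocorr(sig, lag):
    """Biased-normalized sample autocorrelation at a given lag."""
    n = len(sig) - lag
    return sum(sig[i] * sig[i + lag] for i in range(n)) / n

r = [autocorr(x, k) for k in range(1, 200)]        # r[k-1] is lag k
# The first local minimum of the autocorrelation sits at half the period.
half = next(k + 1 for k in range(1, len(r) - 1)
            if r[k - 1] > r[k] < r[k + 1])
print(half, half / fs)    # 10 samples, i.e. 0.01 s = half the 20 ms period
```

Doubling this lag gives the fundamental period, from which a whole number of periods (and hence the sample count for the FFT) can be chosen.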
Ahmad, Suhail; Dalwai, Ajmal; Al-Nakib, Widad
2013-07-01
This study investigated the role of enteroviruses in sepsis-like illness among neonates in Kuwait. Serum samples from 139 consecutive neonates who presented with sepsis-like illness during a three-and-a-half-year period and whose blood cultures were negative for bacterial pathogens were tested. Enterovirus RNA was detected by single-step reverse-transcription PCR (RT-PCR). Specific genotypes were identified by direct DNA sequencing of the enteroviral genome. Serotype-specific antibodies in serum samples from some selected patients were detected by virus neutralization test using coxsackievirus B types (CBVs). All 139 neonates presented with sepsis-like illness and blood samples were uniformly negative for aerobic/anaerobic bacterial cultures. Fifty-six (40%) neonates had further complications of sepsis, including carditis (n = 34) and multi-organ involvement (n = 22). Enterovirus RNA was detected by RT-PCR in 34 of 139 (24%) serum samples, which is among the highest frequencies reported so far in non-epidemic settings. Genotyping identified CBVs as the most common enteroviruses, causing 19 of 34 (56%) enteroviral sepsis episodes in neonates. Of 34 carditis cases, 18 were positive for CBVs by serotyping, including all 10 enterovirus RNA-positive samples. Only one fatality was observed, due to liver failure in a neonate with hepatitis. Our data show that enteroviruses are responsible for 24% of neonatal sepsis cases due to non-bacterial causes in Kuwait. The data indicate that enteroviruses should be considered in the differential diagnosis of sepsis-like illness among neonates, particularly those with negative blood cultures for bacterial pathogens. Copyright © 2013 Wiley Periodicals, Inc.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
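The robustness argument above can be made concrete with a toy calculation (a sketch of how the Maximum Correntropy Criterion behaves, not the paper's learning algorithm; all numbers are invented): correntropy's Gaussian kernel bounds each sample's contribution, so an outlying label barely moves the objective, whereas squared error is dominated by it.

```python
import math

def correntropy(pred, true, sigma=1.0):
    # Gaussian-kernel similarity averaged over samples; each term lies in
    # (0, 1], so a single wildly wrong label contributes at most ~0 instead
    # of a huge penalty.
    return sum(math.exp(-(p - t) ** 2 / (2 * sigma ** 2))
               for p, t in zip(pred, true)) / len(pred)

def mse(pred, true):
    # Squared-error loss, applied equally to all samples.
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

clean = [1.0, -1.0, 1.0, -1.0]
noisy = [1.0, -1.0, 1.0, 100.0]   # one outlying label
pred  = [0.9, -0.8, 1.1, -0.9]

print(mse(pred, noisy))           # dominated by the single outlier
print(correntropy(pred, noisy))   # outlier contributes almost nothing
```

Maximizing correntropy (rather than minimizing squared error) therefore down-weights samples whose labels disagree wildly with the prediction, which is the intuition behind the MCC framework.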
Mullen Michael P
2012-01-01
Background: The central role of the somatotrophic axis in animal post-natal growth, development and fertility is well established. Therefore, the identification of genetic variants affecting quantitative traits within this axis is an attractive goal. However, large sample numbers are a pre-requisite for the identification of genetic variants underlying complex traits, and although technologies are improving rapidly, high-throughput sequencing of large numbers of complete individual genomes remains prohibitively expensive. Therefore, using a pooled-DNA approach coupled with target enrichment and high-throughput sequencing, the aim of this study was to identify polymorphisms and estimate allele frequency differences across 83 candidate genes of the somatotrophic axis in 150 Holstein-Friesian dairy bulls divided into two groups divergent for genetic merit for fertility. Results: In total, 4,135 SNPs and 893 indels were identified during the resequencing of the 83 candidate genes. Nineteen percent (n = 952) of variants were located within 5' and 3' UTRs. Seventy-two percent (n = 3,612) were intronic and 9% (n = 464) were exonic, including 65 indels and 236 SNPs resulting in non-synonymous substitutions (NSS). Significant (P ® MassARRAY. No significant differences (P > 0.1) were observed between the two methods for any of the 43 SNPs across both pools (i.e., 86 tests in total). Conclusions: The results of the current study support previous findings on the use of DNA sample pooling and high-throughput sequencing as a viable strategy for polymorphism discovery and allele frequency estimation. Using this approach we have characterised the genetic variation within genes of the somatotrophic axis and related pathways, central to mammalian post-natal growth and development and subsequent lactogenesis and fertility. We have identified a large number of variants segregating at significantly different frequencies between cattle groups divergent for calving
Good, Jacob T.; Holland, Daniel B.; Finneran, Ian A.; Carroll, P. Brandon; Kelley, Matthew J.; Blake, Geoffrey A.
2015-10-01
We present the design and capabilities of a high-resolution, decade-spanning ASynchronous OPtical Sampling (ASOPS)-based TeraHertz Time-Domain Spectroscopy (THz-TDS) instrument. Our system employs dual mode-locked femtosecond Ti:Sapphire oscillators with repetition rates offset locked at 100 Hz via a Phase-Locked Loop (PLL) operating at the 60th harmonic of the ˜80 MHz oscillator repetition rates. The respective time delays of the individual laser pulses are scanned across a 12.5 ns window in a laboratory scan time of 10 ms, supporting a time delay resolution as fine as 15.6 fs. The repetition rate of the pump oscillator is synchronized to a Rb frequency standard via a PLL operating at the 12th harmonic of the oscillator repetition rate, achieving milliHertz (mHz) stability. We characterize the timing jitter of the system using an air-spaced etalon, an optical cross correlator, and the phase noise spectrum of the PLL. Spectroscopic applications of ASOPS-THz-TDS are demonstrated by measuring water vapor absorption lines from 0.55 to 3.35 THz and acetonitrile absorption lines from 0.13 to 1.39 THz in a short pathlength gas cell. With 70 min of data acquisition, a 50 dB signal-to-noise ratio is achieved. The achieved root-mean-square deviation is 14.6 MHz, with a mean deviation of 11.6 MHz, for the measured water line center frequencies as compared to the JPL molecular spectroscopy database. Further, with the same instrument and data acquisition hardware, we use the ability to control the repetition rate of the pump oscillator to enable THz frequency comb spectroscopy (THz-FCS). Here, a frequency comb with a tooth width of 5 MHz is generated and used to fully resolve the pure rotational spectrum of acetonitrile with Doppler-limited precision. The oscillator repetition rate stability achieved by our PLL lock circuits enables sub-MHz tooth width generation, if desired. This instrument provides unprecedented decade-spanning, tunable resolution, from 80 MHz down to sub
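The instrument's headline timing numbers are mutually consistent, as a quick back-of-the-envelope check shows (this is arithmetic on figures quoted in the abstract, not instrument control code):

```python
# ASOPS timing relations: with pump and probe repetition rates f and f + df,
# successive pulse pairs slip in relative delay by df / (f * (f + df)),
# sweeping the full 1/f delay window once every 1/df seconds of lab time.
f_rep = 80e6       # oscillator repetition rate, Hz (abstract: ~80 MHz)
df = 100.0         # repetition-rate offset, Hz (abstract: 100 Hz)

dt = df / (f_rep * (f_rep + df))   # time-delay step: ~15.6 fs
window = 1.0 / f_rep               # scanned delay window: 12.5 ns
scan_time = 1.0 / df               # laboratory time per full scan: 10 ms

print(dt, window, scan_time)
```

These reproduce the abstract's 15.6 fs delay resolution, 12.5 ns window and 10 ms laboratory scan time.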
Mohajeri, Parviz; Sharbati, Saba; Farahani, Abbas; Rezaei, Zhaleh
2016-01-01
Acinetobacter baumannii, a Gram-negative bacterium, can cause several different infections. The appearance of carbapenemase-producing A. baumannii in recent years has made treatment more difficult. The identification of virulence factors (VFs), such as nonadhesive factors, in A. baumannii helps in fighting the related infections. A total of 104 samples from teaching hospitals in Kermanshah, Iran, were collected during a 24-month period (2011-2013). Sample identification was first carried out by biochemical tests, and susceptibility to carbapenems was then determined using the Kirby-Bauer method. For confirmation of carbapenemase-producing A. baumannii, polymerase chain reaction (PCR) was performed for carbapenemase-encoding genes. In addition, the frequency of nonadhesive VFs in carbapenemase-producing isolates was determined by PCR. Fifty isolates were identified as carbapenemase-producing A. baumannii. The PCR results showed 40 isolates (80%) positive for traT, 17 (34%) for cvaC, and 8 (16%) for iutA; these genes encode serum resistance, colicin V and aerobactin, respectively. No significant correlation was observed among these three genes. The mechanism of A. baumannii virulence has always been in question. The role of VFs has also been recognized in other Gram-negative bacteria. Given the prevalence of traT, cvaC and iutA as nonadhesive VFs, we suggest that they could be a main mechanism of carbapenemase-producing A. baumannii pathogenesis.
Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.
Hiroko H Dodge
walking speed collected at baseline, 262 subjects are required. Similarly for computer use, 26 subjects are required.Individual-specific thresholds of low functional performance based on high-frequency in-home monitoring data distinguish trajectories of MCI from NC and could substantially reduce sample sizes needed in dementia prevention RCTs.
程刘胜
2015-01-01
In mine UWB high-accuracy positioning systems, underground multipath, non-line-of-sight propagation and limited network time synchronization accuracy cause large deviations in the estimated arrival times. On the basis of a rational layout of underground wireless network base stations, a maximum likelihood TOA (Time of Arrival) estimation algorithm based on multi-carrier time-frequency iteration is proposed: fractional delays are iterated to narrow the estimation error and a suitable search step length is determined, yielding an accurate TOA estimate of the signal. Simulation results show that the time-frequency iterative maximum likelihood TOA estimation algorithm converges faster than the non-iterative algorithm and, at low signal-to-noise ratios, improves estimation accuracy relative to classic TOA estimation algorithms.
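The general coarse-to-fine idea behind iterative delay estimation can be sketched as follows (the pulse shape, sample rate and search schedule are invented for illustration; this is not the paper's multi-carrier algorithm): repeatedly shrink the delay search step around the best correlation peak.

```python
import math

fs = 1e9                                   # assumed sample rate, Hz
true_delay = 37.4e-9                       # assumed true arrival delay, s

def template(t):
    """Hypothetical received pulse shape: a Gaussian centered at 50 ns."""
    return math.exp(-((t - 50e-9) ** 2) / (2 * (5e-9) ** 2))

# Received waveform: the template delayed by true_delay.
rx = [template(n / fs - true_delay) for n in range(200)]

def correlate(delay):
    """Correlation of rx with the template shifted by a candidate delay."""
    return sum(rx[n] * template(n / fs - delay) for n in range(200))

step, est = 10e-9, 0.0
for _ in range(6):                         # halve the search step each round
    candidates = [est + k * step for k in range(-5, 6)]
    est = max(candidates, key=correlate)
    step /= 2
print(est)                                 # converges near 37.4 ns
```

Each iteration narrows the search grid around the current best estimate, the same refine-and-shrink pattern the abstract describes for its fractional-delay iteration.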
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
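The "frequency sampling" baseline the abstract compares against can be sketched in a minimal two-state example (the transition matrix and series length are assumptions; the MaxEnt reconstruction itself needs more machinery than a few lines):

```python
import random

random.seed(0)

# Assumed true two-state Markov chain: P[a][b] = Pr(next = b | current = a).
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# Simulate an empirical series from the chain.
state, series = 0, []
for _ in range(5000):
    series.append(state)
    state = 0 if random.random() < P[state][0] else 1

# Frequency sampling: estimate transition probabilities by counting
# observed transitions in the historical sample.
counts = {0: [0, 0], 1: [0, 0]}
for a, b in zip(series, series[1:]):
    counts[a][b] += 1
est = {a: [c / sum(cs) for c in cs] for a, cs in counts.items()}
print(est)   # rows close to (0.9, 0.1) and (0.2, 0.8)
```

The point of the paper is that, in low dimension, a MaxEnt reconstruction can reach the same accuracy as this counting estimator from a shorter historical sample, which matters for smoothly non-stationary series.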
Default Bayesian Estimation of the Fundamental Frequency
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
real- and complex-valued discrete-time signals which may have missing samples or may have been sampled at a non-uniform sampling frequency. The observation model and prior distributions corresponding to the prior information are derived in a consistent fashion using maximum entropy and invariance arguments...
7 CFR 58.336 - Frequency of sampling for quality control of cream, butter and related products.
2010-01-01
... of sampling for quality control of cream, butter and related products. (a) Microbiological. Samples... microbiological control. (b) Composition. Sampling and testing for product composition shall be made on churns or... lipase activity. (2) Free fatty acid. This test should be made on churnings or batches from samples...
Sampling rate and aliasing on a virtual laboratory
Mihai Bogdan
2009-10-01
The sampling frequency determines the quality of the analog signal that is converted: a higher sampling frequency achieves a better conversion of the analog signal. The minimum sampling frequency required to represent the signal should be at least twice the maximum frequency of the analog signal under test (this is called the Nyquist rate). In the following virtual instrument, an example of sampling is shown. If the sampling frequency is equal to or less than twice the frequency of the input signal, a signal of lower frequency is generated by such a process (this is called aliasing). The goal of this paper is to teach students the basic concepts of sampling rate and aliasing and to make them familiar with these concepts.
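The aliasing effect described above is easy to reproduce numerically (the frequencies here are illustrative): a tone above the Nyquist rate fs/2 is indistinguishable, once sampled, from a lower-frequency alias folded back into the 0..fs/2 band.

```python
import math

fs = 100.0                     # sampling frequency, Hz
f_in = 120.0                   # input tone, well above fs/2 = 50 Hz
f_alias = abs(f_in - fs)       # folds back to 20 Hz

s_in    = [math.sin(2 * math.pi * f_in * n / fs) for n in range(10)]
s_alias = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(10)]

# The two sample sequences coincide: sampled at 100 Hz, a 120 Hz sine
# cannot be told apart from a 20 Hz sine.
print(max(abs(a - b) for a, b in zip(s_in, s_alias)))   # ~0
```

Sampling above the Nyquist rate (here, anything over 240 Hz for the 120 Hz tone) removes the ambiguity.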
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Sanchez-Brea, Luis Miguel; Siegmann, Philip
2008-12-01
Kriging is an estimation technique that has been proved useful in image processing since it behaves, under regular sampling, as a convolution. The uncertainty obtained with kriging has also been shown to behave as a convolution for the case of regular sampling. The convolution kernel for the uncertainty exclusively depends on the spatial correlation properties of the image. In this work we obtain, first, analytical expressions for the uncertainty of 1D images with noise using this convolution procedure. Then, we use this uncertainty to propose a new criterion for determining whether a 1D image with noise is correctly sampled.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
2006-03-01
Figure 3.22 shows an equivalent plot for the secondary case (100 MHz sampled at 1 GHz), in which only bit-widths above 8 bits can be used for phase imbalance ... phase imbalance, but only above 7 bits for amplitude imbalance. For the secondary case in Figure 3.24, the use of bit-widths above 8 bits for phase
Hohoff, Carsten; Schürenkamp, Marianne; Brinkmann, Bernd
2009-05-01
Allele frequencies for the 16 short tandem repeat (STR) loci D2S1338, D3S1358, D5S818, D7S820, D8S1179, D13S317, D16S539, D18S51, D19S433, D21S11, ACTBP2, CSF1PO, FGA, TH01, TPOX and VWA were determined for 337 immigrants from Nigeria. All loci were in Hardy-Weinberg equilibrium. More than 6,000 meiotic transfers were investigated and ten mutations were observed. Single mutations were observed in the STR systems D2S1338, D3S1358, D7S820, D8S1179, D16S539 and FGA, whereas two mutations were observed in the systems D21S11 and VWA.
YANG Jun; LUO Xiao-Liang; LIU Guo-Fu; LIN Cun-Bao; WANG Yan-Ling; HU Qing-Qing; PENG Jin-Xian
2012-01-01
The feasibility of using frequency gradient analysis (FGA), a digital method based on the Fourier transform, to discriminate neutrons and γ rays in the environment of an 8-bit sampling system has been investigated. The performance of most pulse shape discrimination methods in a scintillation detection system using the time-domain features of the photomultiplier tube anode signal will be lower or non-effective in this low-resolution sampling system. However, the FGA method, using the frequency-domain features of the anode signal, exhibits a strong insensitivity to noise and can be used to discriminate neutrons and γ rays in the above sampling system. A detailed study of the quality of the FGA method in BC501A liquid scintillators is presented using a 5 G samples/s 8-bit oscilloscope and a 14.1 MeV neutron generator. A comparison with the discrimination results of the time-of-flight and conventional charge comparison (CC) methods proves the applicability of this technique. Moreover, FGA has the potential to be implemented in current embedded electronics systems to provide real-time discrimination in standalone instruments.
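The core of FGA is a comparison of low-order DFT magnitudes of the digitized anode pulse. The sketch below uses one common formulation (the gradient between the DC bin and the first nonzero frequency bin) on toy exponential pulses standing in for γ-like (fast) and neutron-like (slow) scintillation tails; the pulse shapes and sample rate are illustrative, not real detector data:

```python
import cmath
import math

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of a real sequence x."""
    n = len(x)
    s = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
    return abs(s)

def fga_gradient(pulse, sample_rate):
    """Frequency-gradient discrimination parameter: slope between the
    DFT magnitudes at DC and at the first nonzero frequency bin."""
    n = len(pulse)
    f1 = sample_rate / n  # first nonzero frequency (Hz)
    return (dft_mag(pulse, 0) - dft_mag(pulse, 1)) / f1

# Two toy 8-bit pulses: a fast and a slow exponential decay.
fast = [int(200 * math.exp(-t / 2.0)) for t in range(32)]
slow = [int(200 * math.exp(-t / 6.0)) for t in range(32)]
print(fga_gradient(slow, 5e9) > fga_gradient(fast, 5e9))  # True: slower tail -> larger gradient
```

Because the parameter depends only on coarse spectral shape rather than fine time-domain features, it tolerates the quantization noise of a low-resolution (8-bit) digitizer.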
Rose, Martin Høyer; Bandholm, Thomas; Jensen, Bente Rona
2009-01-01
healthy young subjects (13+/-3 years, mean+/-1 S.D.) performed attempted steady isometric submaximal contractions with the ankle dorsal- and plantarflexors on two different days. Relative (ICC(3.1)) and absolute (standard error of measurement [S.E.M.] and S.E.M.%) test-retest reliability was assessed ... for the ApEn values calculated for torque time-series down-sampled to 30 and 100 Hz, respectively. The relative reliability was generally moderate (0.360...
Paparó, M; Hareter, M; Guzik, J A
2016-01-01
A sequence search method was developed for searching for regular frequency spacing in delta Scuti stars by visual inspection and algorithmic search. The sample contains 90 delta Scuti stars observed by CoRoT. An example is given to represent the visual inspection. The algorithm (SSA) is described in detail. The data treatment of the CoRoT light curves, the criteria for frequency filtering and the spacings derived by two methods (three approaches: VI, SSA and FT) are given for each target. Echelle diagrams are presented for 77 targets, for which at least one sequence of regular spacing was identified. Comparing the spacing and the shifts between pairs of echelle ridges revealed that at least one pair of echelle ridges is shifted to midway between the spacing for 22 stars. The estimated rotational frequencies compared to the shifts revealed rotationally split doublets, triplets and multiplets not only for single frequencies, but for the complete echelle ridges in 31 delta Scuti stars. Using several possible ass...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Ellis, Robert J; Zhu, Bilei; Koenig, Julian; Thayer, Julian F; Wang, Ye
2015-09-01
As the literature on heart rate variability (HRV) continues to burgeon, so too do the challenges faced with comparing results across studies conducted under different recording conditions and analysis options. Two important methodological considerations are (1) what sampling frequency (SF) to use when digitizing the electrocardiogram (ECG), and (2) whether to interpolate an ECG to enhance the accuracy of R-peak detection. Although specific recommendations have been offered on both points, the evidence used to support them can be seen to possess a number of methodological limitations. The present study takes a new and careful look at how SF influences 24 widely used time- and frequency-domain measures of HRV through the use of a Monte Carlo-based analysis of false positive rates (FPRs) associated with two-sample tests on independent sets of healthy subjects. HRV values from the first sample were calculated at 1000 Hz, and HRV values from the second sample were calculated at progressively lower SFs (and either with or without R-peak interpolation). When R-peak interpolation was applied prior to HRV calculation, FPRs for all HRV measures remained very close to 0.05 (i.e. the theoretically expected value), even when the second sample had an SF well below 100 Hz. Without R-peak interpolation, all HRV measures held their expected FPR down to 125 Hz (and far lower, in the case of some measures). These results provide concrete insights into the statistical validity of comparing datasets obtained at (potentially) very different SFs; comparisons which are particularly relevant for the domains of meta-analysis and mobile health.
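The Monte Carlo logic behind the false-positive-rate analysis can be sketched with a plain two-sample z-test applied to identically distributed samples; since both samples come from the same population, rejections are false positives and their rate should sit near the nominal alpha. The Gaussian "HRV-like" values and sample sizes below are illustrative assumptions, not the study's actual measures or design:

```python
import random
import statistics

def false_positive_rate(n_trials=2000, n=50, alpha_z=1.96, seed=1):
    """Monte Carlo estimate of the false-positive rate of a two-sample
    z-test when both samples are drawn from the same distribution."""
    rng = random.Random(seed)
    fp = 0
    for _ in range(n_trials):
        a = [rng.gauss(60.0, 5.0) for _ in range(n)]  # toy HRV-like values
        b = [rng.gauss(60.0, 5.0) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.fmean(a) - statistics.fmean(b)) / se
        fp += abs(z) > alpha_z
    return fp / n_trials

print(false_positive_rate())  # close to the nominal 0.05
```

In the study, the second sample's HRV values are recomputed at progressively lower sampling frequencies; an FPR staying near 0.05 indicates the SF reduction does not bias the comparison.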
WU Yan-min; YAN Jie; CHEN Li-li; SUN Wei-lian; GU Zhi-yuan
2006-01-01
Objective: To detect the infection frequencies of different genotypes of Epstein-Barr virus (EBV) in subgingival samples from chronic periodontitis (CP) patients, and to discuss the correlation between EBV infection and clinical parameters. Methods: A nested-PCR assay was used to detect EBV-1 and EBV-2 in subgingival samples from 65 CP patients, 65 gingivitis patients and 24 periodontally healthy individuals. The amplicons were further identified by restriction fragment length polymorphism (RFLP) analysis with endonucleases Afa I and Stu I. Clinical parameters mainly included bleeding on probing (BOP), probing depth (PD) and attachment loss (AL). Results: In CP patients, gingivitis patients and periodontally healthy individuals, the infection frequencies were 47.7%, 24.6% and 16.7% for EBV-1, and 15.4%, 7.7% and 0% for EBV-2, respectively. In 2 of the 65 CP patients, co-infection of EBV-1 and EBV-2 was found. The positive rate of EBV-1 in CP patients was higher than that in gingivitis patients (P=0.01) and periodontally healthy individuals (P=0.01). No significant difference was found in EBV-2 frequency among the three groups (P>0.05). In CP patients, a higher mean BOP value was found in EBV-1 or EBV-2 positive patients than in negative ones (P<0.01), but there was no statistical difference in the mean PD or AL value between EBV positive and negative patients (P>0.05). After initial periodontal treatment, 12 of the 21 EBV-1 positive CP patients did not ... a sensitive, specific and stable method to detect EBV-1 and EBV-2 in subgingival samples. Subgingival infection with EBV-1 is closely associated with chronic periodontitis and correlated with BOP.
Li, Jun; He, Hao; Bi, Meihua; Hu, Weisheng
2014-05-01
We propose a physical-layer energy-efficient receiving method based on selective sampling in an orthogonal frequency division multiplexing access passive optical network (OFDMA-PON). By using a specially designed frame head, the receiver within an optical network unit (ONU) can identify the destination of the incoming frame. The receiver only samples when the destination agrees with the ONU, while it stays in standby the rest of the time. We clarify its feasibility through an experiment and analyze the downstream traffic delay by simulation. The results indicate that under limited delay conditions, ~60% energy can be saved compared with the traditional receiving method in an OFDMA-PON system with 512 ONUs.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Connord, V; Mehdaoui, B; Tan, R P; Carrey, J; Respaud, M
2014-09-01
A setup for measuring the high-frequency hysteresis loops of magnetic samples is described. An alternating magnetic field in the range 6-100 kHz with amplitude up to 80 mT is produced by a Litz wire coil. The latter is air-cooled using a forced-air approach so no water flow is required to run the setup. High-frequency hysteresis loops are measured using a system of pick-up coils and numerical integration of signals. Reproducible measurements are obtained in the frequency range of 6-56 kHz. Measurement examples on ferrite cylinders and on iron oxide nanoparticle ferrofluids are shown. Comparison with other measurement methods of the hysteresis loop area (complex susceptibility, quasi-static hysteresis loops, and calorific measurements) is provided and shows the coherency of the results obtained with this setup. This setup is well adapted to the magnetic characterization of colloidal solutions of magnetic nanoparticles for magnetic hyperthermia applications.
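The loop area (proportional to the hysteresis losses relevant for hyperthermia) can be computed directly from sampled H and M values over one period using the shoelace formula. The sketch below omits the pick-up-coil integration step and starts from a synthetic lagging response; the field amplitude and phase lag are illustrative values, not measurements from this setup:

```python
import math

def hysteresis_area(h, m):
    """Area enclosed by a closed hysteresis loop via the shoelace formula,
    with H and M sampled at equal time steps over exactly one period."""
    n = len(h)
    return 0.5 * abs(sum(h[i] * m[(i + 1) % n] - h[(i + 1) % n] * m[i]
                         for i in range(n)))

# Synthetic 50 kHz cycle: M lags H by a phase angle, enclosing a loop.
f, n = 50e3, 400
dt = 1.0 / (f * n)
t = [i * dt for i in range(n)]
h = [80e-3 * math.sin(2 * math.pi * f * ti) for ti in t]      # field (T), 80 mT amplitude
m = [math.sin(2 * math.pi * f * ti - 0.3) for ti in t]        # lagging magnetization (a.u.)
area = hysteresis_area(h, m)
print(area > 0)  # True: a lossy (lagging) response encloses nonzero area
```

For this phase-lag model the analytic area is pi * 0.08 * sin(0.3) ≈ 0.074, which the discrete shoelace sum reproduces closely at 400 points per cycle.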
RF and Surface Properties of Superconducting Samples
Junginger, T; Weingarten, W; Welsch, C
2011-01-01
At CERN a compact Quadrupole Resonator has been developed for the RF characterization of superconducting samples at different frequencies. In this paper, results from measurements on bulk niobium and niobium film on copper substrate samples are presented. We show how different contributions to the surface resistance depend on temperature, applied RF magnetic field and frequency. Furthermore, measurements of the maximum RF magnetic field as a function of temperature and frequency in pulsed and CW operation are presented. The study is accompanied by measurements of the surface properties of the samples by various techniques.
Krishnamurthy, Krish; Hari, Natarajan
2017-09-15
The recently published CRAFT (Complete Reduction to Amplitude Frequency Table) technique converts the raw FID data (i.e., time-domain data) into a table of frequencies, amplitudes, decay rate constants and phases. It offers an alternate approach to decimate time-domain data, with a minimal pre-processing step. It has been shown that applying the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution, by its ability to analyze the data without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1 max dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1 max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to severe apodization. NUS and CRAFT are thus complementary techniques to improve intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1 max to generate an indirect-DEPT spectrum that rivals its direct-observe counterpart. This article is protected by copyright. All rights reserved.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Frequency estimation algorithm of small sample single channel signal
粘朋雷; 李国林; 于静
2014-01-01
A frequency estimation algorithm based on MUSIC is proposed for the problem of estimating the frequency of a single-channel, small-sample signal under noisy conditions. By analyzing the single-channel received signal in combination with array signal processing methods, and exploiting the relation between the discrete sampling interval and the element spacing of a uniform linear array (ULA), a new construction of the observation data matrix is presented. A Toeplitz matrix is built from the sampled data, and its eigenvalue decomposition yields the signal subspace and the noise subspace, so that the MUSIC algorithm can accurately estimate the signal frequency from a small number of single-channel samples. Computer simulations and comparison with the fast Fourier transform (FFT) algorithm verify the effectiveness and superiority of the proposed algorithm.
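The delay-line trick above can be sketched end to end: stack overlapping windows of the single-channel samples into a Toeplitz-structured snapshot matrix (each delay playing the role of an array element), eigendecompose the sample covariance, and scan the noise-subspace MUSIC pseudospectrum. The sketch assumes one complex tone; the tone frequency, subarray size, and noise level are illustrative choices:

```python
import numpy as np

def music_peak_freq(x, m=8, grid=4096):
    """Single-channel MUSIC frequency estimate (cycles/sample) for one
    complex tone, via a delay-line (Toeplitz) snapshot matrix."""
    cols = len(x) - m + 1
    X = np.array([x[j:j + m] for j in range(cols)]).T        # m x cols snapshots
    R = X @ X.conj().T / cols                                # sample covariance
    w, v = np.linalg.eigh(R)                                 # ascending eigenvalues
    En = v[:, :m - 1]                                        # noise subspace (1 tone)
    freqs = np.arange(grid) / grid
    k = np.arange(m)
    A = np.exp(2j * np.pi * np.outer(k, freqs))              # steering vectors
    p = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)   # MUSIC pseudospectrum
    return freqs[np.argmax(p)]

rng = np.random.default_rng(0)
f0 = 0.21                                                    # true frequency (cycles/sample)
n = np.arange(64)                                            # small sample size
x = np.exp(2j * np.pi * f0 * n) + 0.1 * (rng.standard_normal(64)
                                         + 1j * rng.standard_normal(64))
print(abs(music_peak_freq(x) - f0) < 0.01)  # True: recovered from 64 samples
```

Unlike a 64-point FFT (bin width ~0.016 cycles/sample), the pseudospectrum peak can be located on an arbitrarily fine grid, which is the advantage the abstract claims over FFT for small samples.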
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
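The quantity the regularizer maximizes can be estimated from paired samples with a plug-in contingency-table estimate of I(response; label). A minimal sketch for discrete responses (the continuous-response entropy estimation used in the paper is a separate step not shown here):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in estimate (in nats) of I(response; label) from paired samples:
    sum over the joint distribution of p(r,l) * log(p(r,l) / (p(r) p(l)))."""
    n = len(responses)
    pr, pl = Counter(responses), Counter(labels)
    joint = Counter(zip(responses, labels))
    return sum((c / n) * math.log((c / n) / ((pr[r] / n) * (pl[l] / n)))
               for (r, l), c in joint.items())

# A perfect classifier's response determines the label, so I = H(label) = ln 2.
labels = [0, 0, 1, 1]
print(round(mutual_information(labels, labels), 3))  # 0.693
```

A response independent of the label scores 0, so pushing this quantity up during training forces the classifier output to carry information about the true class.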
Vaidya, Manushka V.; Collins, Christopher M.; Sodickson, Daniel K.; Brown, Ryan; Wiggins, Graham C.; Lattanzi, Riccardo
2016-01-01
In high field MRI, the spatial distribution of the radiofrequency magnetic (B1) field is usually affected by the presence of the sample. For hardware design and to aid interpretation of experimental results, it is important both to anticipate and to accurately simulate the behavior of these fields. Fields generated by a radiofrequency surface coil were simulated using dyadic Green’s functions, or experimentally measured over a range of frequencies inside an object whose electrical properties were varied to illustrate a variety of transmit (B1+) and receive (B1−) field patterns. In this work, we examine how changes in polarization of the field and interference of propagating waves in an object can affect the B1 spatial distribution. Results are explained conceptually using Maxwell’s equations and intuitive illustrations. We demonstrate that the electrical conductivity alters the spatial distribution of distinct polarized components of the field, causing “twisted” transmit and receive field patterns, and asymmetries between |B1+| and |B1−|. Additionally, interference patterns due to wavelength effects are observed at high field in samples with high relative permittivity and near-zero conductivity, but are not present in lossy samples due to the attenuation of propagating EM fields. This work provides a conceptual framework for understanding B1 spatial distributions for surface coils and can provide guidance for RF engineers.
Vaidya, Manushka V; Collins, Christopher M; Sodickson, Daniel K; Brown, Ryan; Wiggins, Graham C; Lattanzi, Riccardo
2016-02-01
In high field MRI, the spatial distribution of the radiofrequency magnetic (B1) field is usually affected by the presence of the sample. For hardware design and to aid interpretation of experimental results, it is important both to anticipate and to accurately simulate the behavior of these fields. Fields generated by a radiofrequency surface coil were simulated using dyadic Green's functions, or experimentally measured over a range of frequencies inside an object whose electrical properties were varied to illustrate a variety of transmit (B1+) and receive (B1−) field patterns. In this work, we examine how changes in polarization of the field and interference of propagating waves in an object can affect the B1 spatial distribution. Results are explained conceptually using Maxwell's equations and intuitive illustrations. We demonstrate that the electrical conductivity alters the spatial distribution of distinct polarized components of the field, causing "twisted" transmit and receive field patterns, and asymmetries between |B1+| and |B1−|. Additionally, interference patterns due to wavelength effects are observed at high field in samples with high relative permittivity and near-zero conductivity, but are not present in lossy samples due to the attenuation of propagating EM fields. This work provides a conceptual framework for understanding B1 spatial distributions for surface coils and can provide guidance for RF engineers.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
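For the Mean Energy Model mentioned above, the maximum entropy distribution under a mean-energy constraint is the Gibbs form p_i proportional to exp(-beta * e_i), with beta chosen so the constraint holds. A minimal numerical sketch, finding beta by bisection on the (monotone) mean-energy curve; the three-level energy spectrum is an illustrative toy:

```python
import math

def gibbs(energies, beta):
    """Gibbs distribution exp(-beta * e) / Z over the given energy levels."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution subject to E[energy] = target_mean.
    The mean energy decreases monotonically in beta, so bisection works."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mean = sum(p * e for p, e in zip(gibbs(energies, mid), energies))
        if mean > target_mean:
            lo = mid          # mean too high -> need larger beta
        else:
            hi = mid
    return gibbs(energies, 0.5 * (lo + hi))

# Constraining the mean to the mid-level recovers the uniform distribution (beta = 0).
p = maxent_mean_energy([0.0, 1.0, 2.0], 1.0)
print([round(pi, 3) for pi in p])  # [0.333, 0.333, 0.333]
```

Lowering the target mean below the midpoint yields beta > 0 and a distribution weighted toward low energies, the familiar thermodynamic picture.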
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
赵拥军; 赵勇胜; 赵闯
2016-01-01
This paper investigates the joint estimation of Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA) in a passive location system, where the true value of the reference signal is unknown. A novel Maximum Likelihood (ML) estimator of TDOA and FDOA is constructed, and a Markov Chain Monte Carlo (MCMC) method is applied to find the global maximum of the likelihood function by generating realizations of TDOA and FDOA. Unlike the Cross Ambiguity Function (CAF) algorithm or the Expectation Maximization (EM) algorithm, the proposed algorithm can also estimate TDOA and FDOA values that are not integer multiples of the sampling interval, and it has no dependence on the initial estimate. The Cramer-Rao Lower Bound (CRLB) is also derived. Simulation results show that the proposed algorithm outperforms the CAF and EM algorithms under different SNR conditions, with higher accuracy and lower computational complexity.
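The estimator's mechanism, drawing realizations of the parameter from the likelihood with a Metropolis sampler and averaging them, can be sketched in one dimension. The toy Gaussian delay likelihood below (peaked at a non-integer delay of 2.35 samples) stands in for the paper's actual TDOA/FDOA likelihood, which is not reproduced here:

```python
import math
import random

def metropolis_mean(log_lik, init, step, n_iter=20000, burn=2000, seed=7):
    """Metropolis sampler: draw realizations of a parameter from its
    likelihood and return their average as the estimate."""
    rng = random.Random(seed)
    x, samples = init, []
    lx = log_lik(x)
    for i in range(n_iter):
        y = x + rng.gauss(0.0, step)                 # random-walk proposal
        ly = log_lik(y)
        if rng.random() < math.exp(min(0.0, ly - lx)):
            x, lx = y, ly                            # accept
        if i >= burn:
            samples.append(x)
    return sum(samples) / len(samples)

# Toy delay likelihood peaked at tau = 2.35 samples (a non-integer delay).
true_tau = 2.35
log_lik = lambda t: -0.5 * ((t - true_tau) / 0.1) ** 2
est = metropolis_mean(log_lik, init=0.0, step=0.5)
print(abs(est - true_tau) < 0.05)  # True: recovers a sub-sample delay
```

Note how the chain is started far from the peak (init=0.0) and still converges, illustrating the claimed insensitivity to the initial estimate; the sample mean is also not restricted to the sampling grid, unlike a CAF peak search.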
Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun
2016-05-31
The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed, based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency component of the IMFs, long-window TFPF is employed for the high-frequency component, and the noise component is discarded directly; finally, the signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
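The sample entropy used to classify the IMFs measures signal regularity: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates that match within tolerance r and A counts the corresponding length-(m+1) matches. A minimal sketch (the sine/noise series below are illustrative, not gyroscope IMFs):

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts template
    matches of length m and A of length m+1 (Chebyshev distance < r).
    Default tolerance r = 0.2 * standard deviation, a common convention."""
    if r is None:
        mu = sum(x) / len(x)
        r = 0.2 * (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5
    def count(mm):
        tpl = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) < r
                   for i in range(len(tpl)) for j in range(i + 1, len(tpl)))
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular (periodic) series scores lower than a noisy one.
periodic = [math.sin(0.5 * i) for i in range(200)]
rng = random.Random(3)
noisy = [rng.gauss(0.0, 1.0) for _ in range(200)]
print(sample_entropy(periodic) < sample_entropy(noisy))  # True
```

Low-entropy IMFs are treated as signal-dominated (short-window TFPF to preserve amplitude), high-entropy IMFs as noise-dominated, which is the classification step the abstract describes.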
Mosbeh R. Kaloop
2016-11-01
Global Positioning System (GPS) structural health monitoring data collection is one of the important systems in structural movement monitoring. However, GPS measurement error and noise limit the application of such systems. Many attempts have been made to adjust GPS measurements and eliminate their errors. Comparing common nonlinear methods used in the adjustment of GPS positioning for the monitoring of structures is the main objective of this study. Nonlinear Adaptive Recursive Least Squares (RLS), the extended Kalman filter (EKF), and wavelet principal component analysis (WPCA) are presented and applied to improve the quality of GPS time-series observations. Two real monitoring observation systems, for the Mansoura railway and long-span Yonghe bridges, are utilized to examine suitable methods for assessing bridge behavior under different load conditions. From the analysis of the results, it is concluded that the wavelet principal component is the best method to smooth low and high GPS sampling frequency observations. The evaluation of the bridges reveals the ability of GPS systems to detect the behavior and damage of structures in both the time and frequency domains.
Huckenbeck, W; Scheil, H G; Kuntze, K
1999-12-01
Frequency data for the STR system FGA (HumFibra) were obtained from a Caucasoid German population sample (Düsseldorf area) of 424 unrelated individuals. PCR products were detected by horizontal polyacrylamide gel electrophoresis and a total of 16 alleles was identified by side-by-side comparison with a commercially available sequenced ladder. The observed genotype distribution showed no significant deviation from Hardy-Weinberg equilibrium. The high information content (pooled German data: rate of heterozygosity = 0.8626; probability of match = 0.0344; mean exclusion chance = 0.7240) renders this system a useful tool not only in forensic casework (criminal and paternity cases) but in population genetics too.
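The summary statistics quoted above follow directly from the allele frequencies under Hardy-Weinberg equilibrium: expected heterozygosity is 1 - sum(p_i^2), and the random-match probability is the sum of squared genotype frequencies. A sketch with a hypothetical 4-allele locus (not the published FGA figures):

```python
def expected_heterozygosity(freqs):
    """Expected heterozygosity under Hardy-Weinberg: 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def probability_of_match(freqs):
    """Random-match probability: sum of squared HWE genotype frequencies,
    over homozygotes (p_i^2) and heterozygotes (2 p_i p_j)."""
    n = len(freqs)
    pm = sum((freqs[i] ** 2) ** 2 for i in range(n))
    pm += sum((2 * freqs[i] * freqs[j]) ** 2
              for i in range(n) for j in range(i + 1, n))
    return pm

# Hypothetical 4-allele locus, for illustration only:
f = [0.4, 0.3, 0.2, 0.1]
print(round(expected_heterozygosity(f), 3), round(probability_of_match(f), 4))  # 0.7 0.1446
```

With the 16 observed FGA alleles and their more even frequency spread, the same formulas yield the much higher heterozygosity (0.8626) and much lower match probability (0.0344) reported in the abstract.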
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in ...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Roy, Alexis T; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J
2015-01-01
Med-El cochlear implant (CI) patients are typically programmed with either the fine structure processing (FSP) or high-definition continuous interleaved sampling (HDCIS) strategy. FSP is the newer-generation strategy and aims to provide more direct encoding of fine structure information compared with HDCIS. Since fine structure information is extremely important in music listening, FSP may offer improvements in musical sound quality for CI users. Despite widespread clinical use of both strategies, few studies have assessed the possible benefits in music perception for the FSP strategy. The objective of this study is to measure the differences in musical sound quality discrimination between the FSP and HDCIS strategies. Musical sound quality discrimination was measured using a previously designed evaluation, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this evaluation, participants were required to detect sound quality differences between an unaltered real-world musical stimulus and versions of the stimulus in which varying amounts of bass (low-frequency) information were removed via a high-pass filter. Eight CI users, currently using the FSP strategy, were enrolled in this study. In the first session, participants completed the CI-MUSHRA evaluation with their FSP strategy. Patients were then programmed with the clinical-default HDCIS strategy, which they used for 2 months to allow for acclimatization. After acclimatization, each participant returned for the second session, during which they were retested with HDCIS, and then switched back to their original FSP strategy and tested acutely. Sixteen normal-hearing (NH) controls completed a CI-MUSHRA evaluation for comparison, in which NH controls listened to music samples under normal acoustic conditions, without CI stimulation. Sensitivity to high-pass filtering more closely resembled that of NH controls when CI users were programmed with the clinical-default FSP strategy
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, termed the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
R.C. Clipes
2005-02-01
The esophageal extrusa and hand-plucking sampling methods were compared in the evaluation of elephant grass and mombaça grass pastures under rotational grazing. The chemical composition, the fractions of nitrogenous compounds and carbohydrates, and the in vitro dry matter digestibility were evaluated. Fifteen and 13 paddocks of elephant grass and mombaça grass, respectively, were used, with a three-day occupation period, and samples were taken on the third, second and first days of the occupation period. The sampling methodologies were compared within forage species by Student's t test, in a paired arrangement. The contents of total carbohydrates, neutral detergent fiber, acid detergent fiber, cellulose, lignin, and the slowly degradable and undegradable fractions of carbohydrates were higher (P<.05) when esophageal extrusa was used, for both grasses. The non-fibrous carbohydrates were higher (P<.05) in hand-plucked samples. Higher values (P<.05) were found for
RF Characterization of Superconducting Samples
Junginger, T; Welsch, C
2009-01-01
At CERN, a compact Quadrupole Resonator has been re-commissioned for the RF characterization of superconducting materials at 400 MHz. In addition, the resonator can be excited at integer multiples of this frequency. Besides the surface resistance Rs, it enables determination of the maximum RF magnetic field, the thermal conductivity and the penetration depth of the attached samples, at different temperatures. The features of the resonator are compared with those of similar RF devices, and first results are presented.
Kelishadi, Roya; Qorbani, Mostafa; Motlagh, Mohammad Esmaeel; Ardalan, Gelayol; Moafi, Mohammad; Mahmood-Arabi, Minoosadat; Heshmat, Ramin; Jari, Mohsen
2014-01-01
Background: This study aims to assess the frequency, causes, and places of injuries in a nationally representative sample of Iranian children and adolescents, as well as the referral places for injured individuals. Methods: This nationwide study was conducted in 2011-2012 among 13486 elementary, secondary and high-school students who were selected by stratified multistage random cluster sampling from 30 provinces in Iran. The Global School-based Health Survey questionnaire of the World Health Organization was used. Results: The study participants consisted of 50.8% boys and 75.6% urban residents, with a mean age of 12.5 years. Overall, 20.25% of participants reported that they were injured at least once in the last 12 months; this prevalence was higher in boys than in girls (25.74% vs. 14.58%, respectively, P < 0.001), without significant difference between urban (20.11%) and rural (20.69%) areas. Most injuries (39.92%) occurred at homes or house yards, with higher prevalence in girls than in boys (48.61% vs. 35.17%, respectively, P < 0.001) and in rural than in urban areas (27.30% vs. 20.89%, respectively, P < 0.001). Schools were reported as the second most prevalent site of injury occurrence (22.50%). Emergency departments and physician offices were the most prevalent referral places for injured individuals (32.31% and 22.38%, respectively). Most of the school injuries occurred during play or sport activities (45.92%). Conclusions: Prevention of unintentional injuries should be considered a health priority. Appropriate preventive strategies should be enhanced at homes and schools. PMID:25400879
Kannan, Srimathi; Ni, Yu-Ming; Gennings, Chris; Ganguri, Harish B.; Wright, Rosalind J.
2016-01-01
Objective(s) To validate the Block98 food frequency questionnaire (FFQ) for estimating antioxidant, methyl-nutrient and polyunsaturated fatty acids (PUFA) intakes in a pregnant sample of ethnic/racial minority women in the United States (US). Methods Participants (n = 42) were from the Programming of Intergenerational Stress Mechanisms study. Total micronutrient intakes from food and supplements was ascertained using the modified Block98 FFQ and two 24-h dietary recalls collected at random on nonconsecutive days subsequent to completion of the FFQ in mid-pregnancy. Correlation coefficients (r) corrected for attenuation from within-person variation in the recalls were calculated for antioxidants (n = 7), methyl-nutrients (n = 8), and PUFAs (n = 2). Result(s) The sample was largely ethnic minorities (38 % Black, 33 % Hispanic) with 21 % being foreign born and 41 % having less than or equal to a high school degree. Significant and adequate deattenuated correlations (r ≥ 0.40) for total dietary intakes of antioxidants were observed for vitamin C, vitamin E, magnesium, and zinc. Reasonable deattenuated correlations were also observed for methyl-nutrient intakes of vitamin B6, betaine, iron, and n:6 PUFAs; however, they did not reach significance. Most women were classified into the same or adjacent quartiles (≥70 %) for total (dietary + supplements) estimates of antioxidants (5 out of 7) and methyl-nutrients (4 out of 5). Conclusions The Block98 FFQ is an appropriate dietary method for evaluating antioxidants in pregnant ethnic/minorities in the US; it may be less efficient in measuring methyl-nutrient and PUFA intakes. PMID:26511128
Gable, Sara; Chang, Yiting; Krull, Jennifer L
2007-01-01
To identify eating and activity factors associated with school-aged children's onset of overweight and persistent overweight. Data were gathered at four time points between kindergarten entry and spring of third grade. Children were directly weighed and measured and categorized as not overweight (<95th percentile body mass index) or overweight (≥95th percentile body mass index); parents were interviewed by telephone or in person. Subjects were participants in the Early Childhood Longitudinal Study-Kindergarten Cohort, a nationally representative sample of children who entered kindergarten during 1998-1999. Children who ate fewer family meals (OR 1.08) were more likely to be overweight for the first time at spring semester of third grade. Children who watched more television (OR 1.03), ate fewer family meals (OR 1.08), and lived in neighborhoods perceived by parents as less safe for outdoor play (OR 1.32) were more likely to be persistently overweight. Child aerobic exercise and opportunities for activity were not associated with a greater likelihood of weight problems. This study supports theories regarding the contributions of television watching, family meals, and neighborhood safety to childhood weight status. When working with families to prevent and treat childhood weight problems, food and nutrition professionals should attend to children's time spent with screen media, the frequency of family mealtimes, and parents' perceptions of neighborhood safety for children's outdoor play.
胡欣; 王健康; 刘飞; 欧连军; 梁君; 王刚; 罗积润
2016-01-01
To improve digital predistortion of a traveling-wave tube amplifier (TWTA) under sub-Nyquist sampling, engineering practice generally combines a look-up table (LUT) with an indirect learning architecture, which increases system complexity to some degree. To address this, a sub-sampled digital predistortion technique based on compressed sensing is proposed; it simplifies the implementation and improves operating stability while achieving good linearization of the nonlinear distortion.
郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞
2014-01-01
To reach a compromise between efficient dynamic performance and high tracking accuracy of the carrier tracking loop in high-dynamic circumstances, which produce a large Doppler frequency and Doppler frequency rate-of-change, a fast maximum likelihood estimation method for the Doppler frequency rate-of-change is proposed in this paper, and the estimate is used to aid the carrier tracking loop. First, it is pointed out that maximum likelihood estimation of the Doppler frequency and its rate-of-change is equivalent to the fractional Fourier transform (FrFT). Second, to avoid the large computational load of a two-dimensional search over the Doppler frequency and its rate-of-change, an estimation method combining instantaneous self-correlation with a segmental discrete Fourier transform (DFT) is proposed, and the resulting coarse estimate is used to narrow the search range. Finally, the estimate is used in the carrier tracking loop to reduce the dynamic stress and improve the tracking accuracy. Theoretical analysis and computer simulation show that the search computation falls to 5.25 percent of the original amount at a signal-to-noise ratio (SNR) of -30 dB, the root mean square error (RMSE) of the tracked frequency rate-of-change is only 8.46 Hz/s, and, compared with the traditional carrier tracking method, the tracking sensitivity is improved by more than 3 dB.
Isogeometric design of elastic arches for maximum fundamental frequency
Nagy, A.P.; Abdalla, M.M.; Gürdal, Z.
2010-01-01
The isogeometric paradigm is aimed at unifying the geometric and analysis descriptions of engineering problems. This unification is brought about by employing the same basis functions that describe the geometry to approximate the physical response. Non-uniform rational B-splines (NURBS) are commonly used…
Abazar Pournajaf
2013-09-01
Background: Listeria monocytogenes is a facultative intracellular pathogen that causes listeriosis, which has extensive clinical manifestations. Infections with L. monocytogenes are a serious threat to immunocompromised persons. The aim of this study was to determine the frequency of L. monocytogenes strains recovered from clinical and non-clinical samples using phenotypic methods confirmed by PCR. Materials and Methods: In this study, 617 specimens were analyzed. All specimens were cultured on selective PALCAM agar. Colonies were initially identified by routine biochemical tests. Finally, PCR assays using primers specific for the inlA gene were performed. Results: In all, 46 (8.2%) L. monocytogenes isolates were recovered from the 617 specimens. Fourteen (8.2%) strains, including 4 (7.5%), 2 (5.7%), 5 (14.2%) and 3 (8.5%) isolates, were obtained from placental tissue, urine, vaginal and rectal swabs, respectively. In addition, 9 (7.4%) strains of L. monocytogenes were isolated from 107 different dairy products, originating from cheese (5; 7.1%), cream (2; 10%) and kashk (2; 11.7%), respectively. Among 11 (5.2%) strains isolated from 210 different meat products, 5 (5.5%), 4 (7.2%) and 2 (3%) strains belonged to sausage, meat and poultry extracts, respectively. Finally, 12 (9.2%) Listeria strains were recovered from 130 animal specimens, including 6 (10%), 4 (8%) and 2 (10%) strains from goat, sheep and cattle, respectively. Furthermore, all Listeria isolates (100%) were found to carry the inlA gene in the PCR assay. Conclusion: The present study showed that the clinical and non-clinical specimens were contaminated with L. monocytogenes, so it seems necessary to use a simple and standard technique such as PCR for rapid detection of this organism from various sources.
Sundaram, B.; Gross, B.H.; Oh, E.; Mueller, N.; Myles, J.D.; Kazerooni, E.A. (Dept. of Radiology, Michigan Institute for Clinical Health Research, Univ. of Michigan Health System, Ann Arbor, Michigan (United States))
2008-10-15
Background: The number of high-resolution computed tomography (HRCT) images necessary to diagnose diffuse lung disease (DLD) accurately is not well established. Purpose: To evaluate the impact of HRCT sampling frequency on reader confidence and accuracy for diagnosing DLD. Material and Methods: HRCT images of 100 consecutive patients with proven DLD were reviewed. They comprised: 48 usual interstitial pneumonia, 22 sarcoidosis, six hypersensitivity pneumonitis, five each of desquamative interstitial pneumonitis, eosinophilic granulomatosis, and lymphangioleiomyomatosis, and nine others. Inspiratory images at 1-cm increments throughout the lungs and at three specified levels formed the complete and limited examinations, respectively. In random order, three experts (readers 1, 2, and 3) ranked their top three diagnoses and rated confidence for their top diagnosis, independently and blinded to clinical information. Results: Using the complete versus limited examinations for correct first-choice diagnosis, accuracy for reader 1 (R1) was 81% versus 80%, respectively, for reader 2 (R2) 70% versus 70%, and for reader 3 (R3) 64% versus 59%. Reader accuracy within the top three choices for complete versus limited examinations was: R1 91% versus 91% of cases, respectively, R2 84% versus 83%, and R3 79% versus 72% of cases. No statistically significant differences were found between the diagnosis methods (P=0.28 for first diagnosis and P=0.17 for top three choices). The confidence intervals for individual raters showed considerable overlap, and the point estimates are almost identical. The mean interreader agreement for complete versus limited HRCT for both top and top three diagnoses was the same (moderate and fair, respectively). The mean intrareader agreement between complete and limited HRCT for top and top three diagnoses was substantial and moderate, respectively. Conclusion: Overall reader accuracy and confidence in diagnosis did not significantly differ when fewer or more HRCT images were used.
Kinoshita, Rumiko; Shimizu, Shinichi; Taguchi, Hiroshi; Katoh, Norio; Fujino, Masaharu; Onimaru, Rikiya; Aoyama, Hidefumi; Katoh, Fumi; Omatsu, Tokuhiko; Ishikawa, Masayori; Shirato, Hiroki
2008-03-01
To evaluate the three-dimensional intrafraction motion of the breast during tangential breast irradiation using a real-time tracking radiotherapy (RT) system with a high sampling frequency. A total of 17 patients with breast cancer who had received breast conservation RT were included in this study. A 2.0-mm gold marker was placed on the skin near the nipple of the breast for RT. A fluoroscopic real-time tumor-tracking RT system was used to monitor the marker. The range of motion of each patient was calculated in three directions. The mean +/- standard deviation of the range of respiratory motion was 1.0 +/- 0.6 mm (median, 0.9; 95% confidence interval [CI] of the marker position, 0.4-2.6), 1.3 +/- 0.5 mm (median, 1.1; 95% CI, 0.5-2.5), and 2.6 +/- 1.4 mm (median, 2.3; 95% CI, 1.0-6.9) for the right-left, craniocaudal, and anteroposterior directions, respectively. No correlation was found between the range of motion and the body mass index or respiratory function. The mean +/- standard deviation of the absolute value of the baseline shift in the right-left, craniocaudal, and anteroposterior directions was 0.2 +/- 0.2 mm (range, 0.0-0.8 mm), 0.3 +/- 0.2 mm (range, 0.0-0.7 mm), and 0.8 +/- 0.7 mm (range, 0.1-1.8 mm), respectively. Both the range of motion and the baseline shift were within a few millimeters in each direction. As long as the conventional wedge-pair technique and proper immobilization are used, the intrafraction three-dimensional change in the breast surface does not greatly influence the dose distribution.
Conelea, Christine A; Ramanujam, Krishnapriya; Walther, Michael R; Freeman, Jennifer B; Garcia, Abbe M
2014-03-01
Stress is the contextual variable most commonly implicated in tic exacerbations. However, research examining associations between tics, stressors, and the biological stress response has yielded mixed results. This study examined whether tics occur at a greater frequency during discrete periods of heightened physiological arousal. Children with co-occurring tic and anxiety disorders (n = 8) completed two stress-induction tasks (discussion of family conflict, public speech). Observational (tic frequencies) and physiological (heart rate [HR]) data were synchronized using The Observer XT, and tic frequencies were compared across periods of high and low HR. Tic frequencies across the entire experiment did not increase during periods of higher HR. During the speech task, tic frequencies were significantly lower during periods of higher HR. Results suggest that tic exacerbations may not be associated with heightened physiological arousal and highlight the need for further tic research using integrated measurement of behavioral and biological processes.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algorithm.
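The underlying chain can be sketched concretely. The following is an illustrative sketch of Glauber dynamics on matchings (the monomer-dimer chain the abstract refers to), not the authors' $O(m \log^2 n)$ algorithm: at each step a random edge is proposed and toggled in or out of the current matching when legal, with a fugacity $\lambda$ biasing the chain toward larger matchings. The graph, step count, and fugacity value are arbitrary choices for demonstration.

```python
import random

def glauber_matching(edges, n_steps=20000, lam=50.0, seed=0):
    """Run Glauber dynamics on matchings of the graph given by `edges`."""
    rng = random.Random(seed)
    matched = {}        # vertex -> partner under the current matching
    matching = set()    # current set of matched edges
    for _ in range(n_steps):
        u, v = rng.choice(edges)
        if (u, v) in matching:
            # propose removing the edge; accepted with prob 1/(1+lam)
            if rng.random() < 1.0 / (1.0 + lam):
                matching.discard((u, v))
                del matched[u], matched[v]
        elif u not in matched and v not in matched:
            # both endpoints free: propose adding, accepted with prob lam/(1+lam)
            if rng.random() < lam / (1.0 + lam):
                matching.add((u, v))
                matched[u], matched[v] = v, u
    return matching

# A 4-cycle has maximum matching size 2; at high fugacity the chain spends
# most of its time at (near-)maximum matchings.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
m = glauber_matching(edges)
print(len(m))  # typically 2 (the maximum) at this fugacity
```

At high fugacity the stationary distribution, proportional to $\lambda^{|M|}$, concentrates on large matchings, which is the intuition the paper's faster algorithm exploits.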
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female bones, and 453.35 mm and 420.44 mm for left male and female bones, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
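The demarking-point rule above is a simple threshold classifier. The sketch below is an illustration (not the authors' code) using the right-side demarking points reported in the abstract; bones falling between the two thresholds are left indeterminate, which is why the rule identifies only a minority of specimens.

```python
# Demarking points for right femora, in mm, as reported in the study.
RIGHT_DP_MALE = 476.70    # longer bones are classified as definitely male
RIGHT_DP_FEMALE = 379.99  # shorter bones are classified as definitely female

def classify_right_femur(max_length_mm: float) -> str:
    """Return a sex call for a right femur from its maximum length (mm)."""
    if max_length_mm > RIGHT_DP_MALE:
        return "male"
    if max_length_mm < RIGHT_DP_FEMALE:
        return "female"
    return "indeterminate"  # overlap zone: most specimens fall here

print(classify_right_femur(480.0))  # -> male
print(classify_right_femur(375.0))  # -> female
print(classify_right_femur(430.0))  # -> indeterminate
```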
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μi and variances σi² > 0, i = 1, 2, assume that there is a semi-order restriction between the ratios of means to standard deviations and that the sample sizes of the two populations differ. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restriction is proposed. For the case of three populations, some related results and simulations are given.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
Influence of maximum decking charge on intensity of blasting vibration
[Anonymous]
2006-01-01
Given the short-duration, non-stationary random character of blasting vibration signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. First, the characteristics of the wavelet transform and wavelet packet analysis were described. Second, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the energy-distribution curves for the different frequency bands were obtained. Finally, the way the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with increasing decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of the blasting vibration signals tend toward low frequencies, and blasting vibration does not depend on the maximum decking charge.
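The per-band energy ratio at the heart of this analysis can be illustrated in a few lines. The paper uses wavelet packets in MATLAB; as a numpy-only stand-in, the sketch below splits an FFT spectrum into equal-width bands and reports the fraction of signal energy in each band, applied to a synthetic two-tone signal (the sampling rate, tone frequencies, and band count are arbitrary choices for illustration).

```python
import numpy as np

def band_energy_ratios(signal, fs, n_bands=8):
    """Fraction of signal energy in each of n_bands equal-width frequency bands."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.linspace(0.0, fs / 2.0, n_bands + 1)
    energy = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return energy / energy.sum()

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# strong 50 Hz component plus a weaker 400 Hz component
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
ratios = band_energy_ratios(x, fs)
print(ratios.argmax())  # band 0 (0-62.5 Hz) holds the dominant 50 Hz component
```

Tracking how such ratios shift between low- and high-frequency bands as the decking charge changes is the kind of comparison the paper performs with wavelet packet coefficients.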
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence…
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature (PTAT) sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
José Antonio Luna Vera
2013-04-01
A regional frequency analysis of annual maximum daily rainfall series for an area with scarce information is presented. The complex orography of mountains and highlands in a region of the Cordillera de Los Andes, Bolivia, produces different patterns of daily rainfall. The combination of L-moments and cluster analysis proves adequate for identifying homogeneous regions for the annual maximum series. The work defines 4 homogeneous regions. Region 1 includes the stations located in the highlands and the south-east. Region 2 covers the central highlands and the La Paz River basin, consisting of inter-Andean basins. Region 3 clearly delimits the stations of the tropical Amazonian zone, and region 4 is composed of stations located in the northern mountains. Several distributions were tested for the regional frequency analysis using the station-year technique; the best results were obtained with the Gumbel and double Gumbel functions. Finally, the regional equations are presented and compared with some at-site series from each region, in order to verify the applicability of the proposed methodology for hydrological design purposes.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed from the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximizing solution of the model is exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when its genotype entropy reaches the maximal possible value, and that the maximum entropy frequency distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law for one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however. The maximum entropy frequency distribution is equivalent only to the Hardy-Weinberg equilibrium distribution with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we discuss an example where the maximum entropy principle is not equivalent to other genetic equilibria.
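The one-locus equivalence can be checked numerically. The sketch below is an illustration of the idea (not the cited model): for a biallelic locus with allele frequency p, maximizing the genotype entropy, weighted by the multiplicity 2 of the heterozygote, over all genotype distributions consistent with p recovers the Hardy-Weinberg frequencies p², 2pq, q². A brute-force grid search stands in for the analytical maximization.

```python
import math

def hwe_from_entropy(p, n_grid=20001):
    """Brute-force entropy maximization subject to allele frequency p."""
    lo, hi = max(0.0, 2.0 * p - 1.0), p     # feasible range for f_AA
    best_x, best_h = lo, float("-inf")
    for i in range(n_grid):
        x = lo + (hi - lo) * i / (n_grid - 1)
        f_AA, f_Aa, f_aa = x, 2.0 * (p - x), 1.0 - 2.0 * p + x
        if min(f_AA, f_Aa, f_aa) <= 0.0:
            continue
        # genotype entropy with weight 2 for the heterozygote's multiplicity
        h = -(f_AA * math.log(f_AA)
              + f_Aa * math.log(f_Aa / 2.0)
              + f_aa * math.log(f_aa))
        if h > best_h:
            best_h, best_x = h, x
    return best_x   # frequency of genotype AA at the entropy maximum

print(round(hwe_from_entropy(0.3), 3))  # -> 0.09, i.e. p**2
```

Setting the weighted-entropy derivative to zero gives (f_Aa/2)² = f_AA·f_aa, which is exactly the Hardy-Weinberg relation (pq)² = p²·q².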
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
叶丰; 罗景青; 俞志富
2013-01-01
To overcome the low computation speed and high cost imposed by A/D converters and subsequent processors in the high-frequency band, a method for estimating the frequencies of multiple high-frequency sinusoidal signals based on sub-Nyquist sampling is proposed. The signal is split into two channels, each sampled at a sub-Nyquist rate. A new signal is constructed from the two channels to obtain a virtual Nyquist-rate sample, and a Fourier transform yields the frequency estimate. When only one signal is present, the frequency is obtained directly. With multiple signals, the Fourier-transform results of the two channels are used to eliminate the false frequencies caused by frequency crossing. Simulation results verify the effectiveness of the proposed algorithm.
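The two-channel idea can be sketched numerically: each undersampled channel yields an aliased FFT peak, each peak unfolds into a set of candidate true frequencies, and the frequency the two candidate sets (nearly) share is the estimate. This is an illustrative toy, not the authors' implementation; the sampling rates, signal, and search range below are assumptions:

```python
import numpy as np

def aliased_peak(x, fs):
    """Frequency (Hz) of the strongest rfft bin: the aliased frequency."""
    spec = np.abs(np.fft.rfft(x))
    k = np.argmax(spec[1:]) + 1            # skip the DC bin
    return k * fs / len(x)

def unfold(f_alias, fs, f_max):
    """All frequencies in [0, f_max] that fold onto f_alias at rate fs."""
    cands, k = [], 0
    while k * fs - f_alias <= f_max:
        for c in (k * fs + f_alias, k * fs - f_alias):
            if 0 <= c <= f_max:
                cands.append(c)
        k += 1
    return cands

def estimate(f_true, f_max=1000.0, fs1=120.0, fs2=130.0, n=1200):
    """Recover a tone frequency above both sampling rates from two
    sub-Nyquist channels by intersecting their unfolded candidate sets."""
    x1 = np.cos(2 * np.pi * f_true * np.arange(n) / fs1)
    x2 = np.cos(2 * np.pi * f_true * np.arange(n) / fs2)
    c1 = unfold(aliased_peak(x1, fs1), fs1, f_max)
    c2 = unfold(aliased_peak(x2, fs2), fs2, f_max)
    # the true frequency is the candidate the two channels (nearly) share
    _, best = min((abs(a - b), (a + b) / 2) for a in c1 for b in c2)
    return best
```

The two rates must be chosen so that false candidate coincidences are pushed above the search range (here the ambiguity spacing is lcm(120, 130) = 1560 Hz > 1000 Hz).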
Dabiri, Sasan; Ghadimi, Fatemeh; Firouzifar, Mohammadreza; Yazdani, Nasrin; Mohammad-Amoli, Mahsa; Vakili, Varasteh; Mahvi, Zahra
2016-07-01
Several lines of evidence support the contribution of autoimmune mechanisms to the pathogenesis of Meniere's disease. The aim of this study was to determine the association of HLA-Cw alleles with definite and probable Meniere's disease relative to a control group. HLA-Cw genotyping was performed in 23 patients with definite Meniere's disease, 24 with probable Meniere's disease, and 91 healthy normal subjects, using the sequence-specific-primer polymerase chain reaction technique. The statistical analysis was performed using Stata 8 software. There was a significant association of HLA-Cw*04 and HLA-Cw*16 with both definite and probable Meniere's disease compared to normal healthy controls. We observed a significant difference in HLA-Cw*12 frequency between patients with definite and probable Meniere's disease (P=0.04). The frequency of HLA-Cw*18 was significantly higher in healthy controls (P=0.002). Our findings support the role of HLA-Cw alleles in both definite and probable Meniere's disease. In addition, the difference in HLA-Cw*12 frequency between definite and probable Meniere's disease in our study population might indicate distinct immune and inflammatory mechanisms involved in each condition.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty in optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions, because the frequencies are nonlinear in the observed signal.
Somayeh Vafaei
2013-07-01
Background and objectives: Acinetobacter baumannii is nowadays one of the problematic opportunistic pathogens, especially in intensive care, because of the worldwide incidence of drug-resistant strains. The purpose of the current study was to define the antibiotic susceptibility patterns and detect the prevalence of extended-spectrum β-lactamase (ESBL)-producing strains among A. baumannii isolates from clinical samples using the combined disk test. Materials and methods: This study was conducted in 3 major hospitals in Tehran on 500 clinical samples over 6 months. After identification of isolates at the species level using cultural and biochemical methods, the susceptibility of 100 A. baumannii isolates to 11 antibiotics was determined according to CLSI guidelines using the disk diffusion method. The minimum inhibitory concentrations (MICs) of cefepime and ceftazidime were also determined, and finally the phenotypic combined disk method was applied to identify ESBL-producing strains. Results: In this survey, 100 A. baumannii strains, 30 A. lwoffii strains and other Acinetobacter species were isolated from patients. The majority of isolates were from blood specimens. The A. baumannii isolates showed the highest resistance to cefepime, ceftriaxone, amikacin, imipenem, piperacillin-tazobactam, meropenem, gentamicin, tobramycin and tetracycline, respectively. Ampicillin-sulbactam and polymyxin B were the effective drugs in this study. Multi-drug resistance in these strains was 70%. Among the isolates studied, the MIC of ceftazidime was ≥ 128 μg/ml in 84% of samples and the MIC of cefepime was ≥ 128 μg/ml in 91% of samples. According to the results of the combined disk test, 20% of all samples were ESBL positive. Conclusion: Regarding the production of ESBL in this bacterium and possibility of
Matejek, Michael S
2012-01-01
We present initial results from the first systematic survey for MgII quasar absorption lines at z > 2.5. Using infrared spectra of 46 high-redshift quasars, we discovered 111 MgII systems over a path covering 1.9 5, with a maximum of z = 5.33 - the most distant MgII system now known. The comoving MgII line density for weaker systems (Wr < 1.0A) is statistically consistent with no evolution from z = 0.4 to z = 5.5, while that for stronger systems increases three-fold until z \\sim 3 before declining again towards higher redshifts. The equivalent width distribution, which fits an exponential, reflects this evolution by flattening as z approaches 3 before steepening again. The rise and fall of the strong absorbers suggests a connection to the star formation rate density, as though they trace galactic outflows or other byproducts of star formation. The weaker systems' lack of evolution does not fit within this interpretation, but may be reproduced by extrapolating low redshift scaling relations between host ga...
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as of duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views, and in the second step we impose the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Hamed Molaabaszadeh
2014-01-01
Background and objective: Today, resistance to antibiotics among pathogenic bacteria is one of the main concerns of physicians all around the world. In view of differing reports on the sensitivity of Staphylococcus aureus, this study was done to examine the pattern of sensitivity and antibiotic resistance of Staphylococcus aureus strains collected from clinical samples of patients hospitalized in Tehran's Araad hospital. Materials and methods: In this descriptive study, after isolating Staphylococcus aureus from clinical samples (urine, catheter, phlegm, wound, bronchial and blood), sensitivity was measured using the standard Kirby-Bauer test against the following antibiotics: amikacin, ciprofloxacin, vancomycin, imipenem, sulfamethoxazole-trimethoprim, tetracycline, oxacillin, ceftriaxone and penicillin. Results: In this study 260 samples of Staphylococcus aureus were isolated from clinical specimens over three years. The greatest sensitivity was to vancomycin and the greatest resistance was to penicillin and oxacillin. Conclusion: The results of this study indicate that the resistance of Staphylococcus aureus strains against penicillin and oxacillin has increased, presumably due to excessive consumption of these antibiotics. Given the increasing consumption of antibiotics and the consequent rise of antibacterial resistance, control of this resistance factor is necessary and inevitable, so it is recommended to avoid unnecessary use of antibiotics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Arpaia, Pasquale; De Vito, Luca; Girone, Mario; Pezzetti, Marco
2016-01-01
A model-based method for fault detection and early-stage isolation, applicable when unfaulty conditions can be identified only by a reduced number of trials (even only one), is presented. The basic idea is to model analytically the uncertainty of the unfaulty frequency response and express the fault condition in terms of the noise power variance. A preliminary fault isolation is carried out by sensitivity analysis in order to identify the most influencing model parameters and assess their influence on the estimated noise. Then, during maintenance tests, the noise power is checked to detect the faulty condition. This technique is conceived to check the quality of a critical component in an experimental installation (fault detection and early-stage isolation), as well as to detect its faulty dynamic behaviors over a long horizon maintenance test campaign (condition monitoring). The method was applied to four cold compressors with active magnetic bearings at CERN by proving to be able to detect an actual faulty condition in one of such compressors.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimation of the parameters of a complex 2-D sinusoidal, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type, if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The fact is that the complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
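The two-step structure described (FFT for a coarse location, then Newton refinement of the ML cost) can be illustrated for the simplest case, a single complex tone, where the ML frequency estimate is the periodogram maximizer. This is a generic sketch of that structure, not the paper's barycentric method; the zero-padding factor and iteration count are arbitrary choices:

```python
import numpy as np

def ml_tone_frequency(x, iters=8):
    """ML frequency (rad/sample) of a single complex tone in noise:
    coarse search on a zero-padded FFT, then Newton refinement of the
    periodogram J(w) = |sum_n x[n] e^{-jwn}|^2."""
    N = len(x)
    n = np.arange(N)
    pad = 4 * N                              # coarse grid fine enough for Newton
    k = np.argmax(np.abs(np.fft.fft(x, pad)))
    w = 2 * np.pi * k / pad
    for _ in range(iters):
        e = np.exp(-1j * w * n)
        s0 = np.sum(x * e)                   # DTFT at w
        s1 = np.sum(-1j * n * x * e)         # first derivative of the DTFT
        s2 = np.sum(-(n ** 2) * x * e)       # second derivative
        j1 = 2 * np.real(s1 * np.conj(s0))   # J'(w)
        j2 = 2 * np.real(s2 * np.conj(s0)) + 2 * np.abs(s1) ** 2  # J''(w)
        w -= j1 / j2                         # Newton step (j2 < 0 near the peak)
    return w
```

The point the paper exploits is that the refinement cost is independent of the data size, so the overall complexity is dominated by the FFT.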
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
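Computing the Kirchhoff index of any given graph is straightforward via the standard spectral identity $Kf(G) = n \sum_i 1/\lambda_i$ over the nonzero Laplacian eigenvalues. A small sketch of that identity (not from the paper, whose contribution is the combinatorial characterization):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index via the Laplacian spectrum:
    Kf(G) = n * sum of 1/lambda over nonzero Laplacian eigenvalues,
    which equals the sum of resistance distances over all vertex pairs."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    lam = np.linalg.eigvalsh(L)
    return len(A) * np.sum(1.0 / lam[lam > 1e-9])
```

For a triangle the three pairwise resistance distances are each 2/3, so Kf = 2; a single edge gives Kf = 1.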
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Acoustic resonance frequency locked photoacoustic spectrometer
Pilgrim, Jeffrey S.; Bomse, David S.; Silver, Joel A.
2003-09-09
A photoacoustic spectroscopy method and apparatus for maintaining an acoustic source frequency on a sample cell resonance frequency comprising: providing an acoustic source to the sample cell, the acoustic source having a source frequency; repeatedly and continuously sweeping the source frequency across the resonance frequency at a sweep rate; and employing an odd-harmonic of the source frequency sweep rate to maintain the source frequency sweep centered on the resonance frequency.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into classification or regression type analyses. A featured method for low dimensional representation of multivariate datasets is Hotelling's principal components transform. We extend principal components analysis by incorporating new information into the algorithm: the autocorrelation along the wavelength axis. As in a Fourier decomposition, the new variables are thus located in frequency as well as in wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
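In the standard formulation (Switzer and Green), maximum autocorrelation factors solve a generalized eigenproblem between the covariance of the data and the covariance of its successive differences, and each factor's lag-one autocorrelation follows as ρ = 1 − λ/2. A generic sketch of that formulation, not the paper's implementation:

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of X (rows = ordered samples,
    columns = variables). Solves the generalized eigenproblem of the
    difference covariance against the data covariance by whitening.
    Returns projection directions (columns) and their autocorrelations,
    highest-autocorrelation factor first."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                    # data covariance
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)  # lag-1 difference covariance
    w, U = np.linalg.eigh(S)
    W = U @ np.diag(w ** -0.5) @ U.T                # S^(-1/2) whitening matrix
    lam, V = np.linalg.eigh(W @ Sd @ W)             # ascending lam: smoothest first
    return W @ V, 1.0 - lam / 2.0                   # autocorrelation rho = 1 - lam/2
```

On data mixing a smooth sinusoidal component with white noise, the first factor isolates the smooth (high-autocorrelation) direction.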
黄鹂
2013-01-01
There are many problems with sample-number (or frequency) distribution histograms in Chinese sci-tech journals, such as unclear coordinate labels, failure to distinguish frequency from sample number, unclear grouping intervals, vague figure titles, and nonstandard figure shapes. Based on GB/T 3358.1-2009 "Statistics - Vocabulary and symbols - Part 1: General statistical terms and terms used in probability" and relevant literature, this paper provides standardized methods and worked examples for editing such histograms.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
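A minimal two-daisy model of the Watson-Lovelock type can be sketched as a simple forward integration; the parameter values below (solar flux constant, conduction parameter, death rate, growth parabola) follow commonly quoted choices and are assumptions here, not taken from this paper:

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
S_FLUX = 917.0     # solar flux constant often used for daisyworld (W/m^2)
Q = 2.06e9         # heat-conduction parameter q' (K^4)
GAMMA = 0.3        # daisy death rate

def daisyworld(L=1.0, steps=4000, dt=0.05):
    """Minimal two-daisy Watson-Lovelock model integrated toward steady
    state. Returns (white cover, black cover, planetary temperature in K)."""
    aw = ab = 0.01                    # initial daisy covers
    A_W, A_B, A_G = 0.75, 0.25, 0.5   # albedos: white, black, bare ground
    Te4 = 0.0
    for _ in range(steps):
        ag = 1.0 - aw - ab
        A = aw * A_W + ab * A_B + ag * A_G          # planetary albedo
        Te4 = S_FLUX * L * (1.0 - A) / SIGMA        # emission temperature^4

        def growth(albedo):
            # local temperature: warmer than Te for dark patches, cooler for light
            T_local = (Q * (A - albedo) + Te4) ** 0.25
            return max(0.0, 1.0 - 0.003265 * (295.5 - T_local) ** 2)

        aw += dt * aw * (ag * growth(A_W) - GAMMA)
        ab += dt * ab * (ag * growth(A_B) - GAMMA)
        aw, ab = max(aw, 1e-3), max(ab, 1e-3)       # keep seed populations
    return aw, ab, Te4 ** 0.25
```

At L = 1 the daisies regulate the planetary temperature near the growth optimum; sweeping L reveals the hysteresis and feedback behavior the abstract describes.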
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
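The standard route to the maximum likelihood estimate for a two-component normal mixture is the EM algorithm. A minimal sketch follows; the quartile-based initialisation and the synthetic usage data are illustrative choices, not the paper's data or code.

```python
import numpy as np

def fit_two_normal_mixture(x, iters=300):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    x = np.asarray(x, dtype=float)
    # crude but deterministic initialisation from the sample quartiles
    mu = np.array([np.percentile(x, 25.0), np.percentile(x, 75.0)])
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])                       # mixing proportions
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w[:, None] * np.exp(-0.5 * ((x - mu[:, None]) / sigma[:, None]) ** 2) \
               / (sigma[:, None] * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=0)
        # M-step: weighted maximum likelihood updates of all parameters
        n_k = resp.sum(axis=1)
        w = n_k / x.size
        mu = (resp * x).sum(axis=1) / n_k
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k) + 1e-12
    return w, mu, sigma
```

On data drawn from two well-separated normals, the fitted means recover the true component means, illustrating the consistency property the abstract invokes.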
The robustness of recombination frequency estimates in intercrosses with dominant markers.
Säll, T; Nilsson, N O
1994-06-01
The robustness of the maximum likelihood estimates of recombination frequencies has been investigated in double intercrosses with complete dominance at both loci. The robustness was investigated with respect to bias in the recombination frequency estimates due to: (1) limited sample sizes, (2) heterogeneity in recombination frequencies between sexes or among meioses and (3) factors that distort the segregation-misclassification or differential viability. In the coupling phase, the recombination frequency estimates are quite robust with respect to most of the investigated factors. Potentially, the most serious cause of a bias is misclassifications, which tend to increase the recombination frequency estimates. In the repulsion phase, misclassifications are particularly serious, leading to extreme discrepancies between true and observed values. In addition, limited sample size and sex differences in recombination can also bias recombination frequency estimates in repulsion. These effects may pose serious problems in genetic mapping with random amplified polymorphic DNA (RAPD) markers.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Variable frequency iteration MPPT for resonant power converters
Zhang, Qian; Bataresh, Issa; Chen, Lin
2015-06-30
A method of maximum power point tracking (MPPT) uses an MPPT algorithm to determine a switching frequency for a resonant power converter, including initializing by setting an initial boundary frequency range that is divided into initial frequency sub-ranges bounded by initial frequencies including an initial center frequency and first and second initial bounding frequencies. A first iteration includes measuring initial powers at the initial frequencies to determine a maximum power initial frequency that is used to set a first reduced frequency search range centered or bounded by the maximum power initial frequency including at least a first additional bounding frequency. A second iteration includes calculating first and second center frequencies by averaging adjacent frequent values in the first reduced frequency search range and measuring second power values at the first and second center frequencies. The switching frequency is determined from measured power values including the second power values.
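The iterative range-narrowing described in the claim can be sketched as a shrinking five-point grid search over switching frequency. Here `measure_power` is a hypothetical callback standing in for a converter power measurement, and the bracketing logic is a plausible reading of the claim rather than the patented implementation.

```python
def mppt_frequency_search(measure_power, f_lo, f_hi, iters=20):
    """Narrow a switching-frequency range toward the maximum-power point.

    measure_power(f) is assumed to return converter output power at switching
    frequency f, and to be unimodal over [f_lo, f_hi].
    """
    lo, mid, hi = f_lo, 0.5 * (f_lo + f_hi), f_hi
    for _ in range(iters):
        # new centre frequencies by averaging adjacent frequency values
        c1, c2 = 0.5 * (lo + mid), 0.5 * (mid + hi)
        grid = [lo, c1, mid, c2, hi]
        powers = [measure_power(f) for f in grid]
        k = powers.index(max(powers))
        k = min(max(k, 1), 3)          # keep a bracket around the best point
        # reduced search range centred on the maximum-power frequency
        lo, mid, hi = grid[k - 1], grid[k], grid[k + 1]
    return mid
```

Each iteration halves the search range, so for a unimodal power curve the returned frequency converges to the maximum-power point geometrically.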
Dual frequency echo data acquisition system for sea-floor classification
Navelkar, G.S.; Desai, R.G.P.; Chakraborty, B.
An echo data acquisition system is designed to digitize echo signal from a single beam shipboard echo-sounder for use in sea-floor classification studies using a 12 bit analog to digital (A/D) card with a maximum sampling frequency of 1 MHz. Both 33...
Gabriela V. Müller
2009-03-01
The maximum frequency of Generalized Frost (GF) occurrence over the center-east of Argentina, known as the Wet Pampa (WP), acts as a Rossby wave source which generates waves that propagate towards the South American continent, favoring frost events. The wave propagation pattern obtained from simulations using a Global Baroclinic Model shows a wavenumber-3 dominance. Additionally, upper- and lower-level meridional wind correlations during the selected GF events were analyzed. The 250 hPa global meridional wind shows a significant correlation (0.9) with the meridional wind at the WP region. The wave propagation pattern observed in this case agrees with that simulated by the model when a heating source is located over the tropical Pacific Ocean. Significant correlation values were also found for the low-level southern winds at the WP region. The simulated wave pattern shows a good correlation between the hemispheric meridional wind at upper levels and the air temperature on the day of GF events.
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
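The arbitrage-only part of the calculation can be sketched as a small linear program over hourly charge/discharge decisions with a state-of-charge constraint. The power rating, energy capacity, one-way efficiency and prices below are illustrative values, not the paper's device or CAISO data.

```python
import numpy as np
from scipy.optimize import linprog

def max_arbitrage_revenue(prices, p_max=1.0, e_max=4.0, eta=0.9, soc0=0.0):
    """Upper bound on energy-arbitrage revenue from hourly prices via an LP.

    Decision variables are hourly charge/discharge energies, bounded by the
    power rating p_max; the state of charge must stay within [0, e_max].
    A one-way charging efficiency eta is applied. Parameters are illustrative.
    """
    T = len(prices)
    p = np.asarray(prices, dtype=float)
    # x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
    cost = np.concatenate([p, -p])        # linprog minimises, so negate revenue
    L = np.tril(np.ones((T, T)))          # cumulative-sum operator
    A_soc = np.hstack([eta * L, -L])      # soc_t - soc0 = (A_soc @ x)_t
    A_ub = np.vstack([A_soc, -A_soc])     # enforce 0 <= soc_t <= e_max
    b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * (2 * T))
    assert res.success, res.message
    return -res.fun                       # maximum revenue
```

For a two-hour example with prices 10 and 50 and a 1 MWh/h, 90%-efficient device, the optimum is to charge fully in hour 1 and discharge the stored 0.9 MWh in hour 2, for a revenue of 50·0.9 − 10·1 = 35.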
徐清华; 林茂六; 张亦弛
2011-01-01
This paper describes an algorithm for determining the minimum phase response of equivalent-time sampling oscilloscopes from measurements of their magnitude response. The procedure is based on the Kramers-Kronig relation in combination with the harmonic phase response measurement of an NTN calibration. Although truncation of the Kramers-Kronig transform gives rise to large errors in the estimated phase, these errors can be fitted using a small number of basis functions. As an example, the complex frequency response of a sampling oscilloscope was reconstructed from NTN and swept-sine calibration data, with a frequency resolution of 1 MHz over the range 1 MHz to 50 GHz; the standard uncertainty of the reconstructed phase response is less than 0.6°. These results show that the algorithm can reconstruct the complex frequency response of equivalent-time sampling oscilloscopes on an arbitrary frequency grid.
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06 cm^-1. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06 cm^-1. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45 cm^-1 region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
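The Burg recursion at the heart of the MEM estimate can be sketched as follows. This is the textbook form of the algorithm (forward/backward prediction errors, reflection coefficients, Levinson-style coefficient update), not the authors' code, and the AR spectrum evaluation assumes a unit sample rate.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's maximum entropy estimate of AR(order) prediction coefficients."""
    x = np.asarray(x, dtype=float)
    f = x.copy()                     # forward prediction errors
    b = x.copy()                     # backward prediction errors
    a = np.array([1.0])              # prediction-error filter, a[0] = 1
    E = float(np.mean(x ** 2))       # prediction error power
    for m in range(order):
        fp, bp = f[m + 1:], b[m:-1]  # aligned forward / lagged-backward errors
        k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]  # Levinson-style coefficient update
        f_new, b_new = fp + k * bp, bp + k * fp
        f[m + 1:], b[m + 1:] = f_new, b_new
        E *= 1.0 - k * k             # error power shrinks at each order
    return a, E

def burg_psd(a, E, freqs):
    """MEM power spectral density implied by the AR fit (unit sample rate)."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return E / np.abs(z @ a) ** 2
```

On a simulated AR(1) series the recursion recovers the generating coefficient, and the resulting MEM spectrum shows the expected low-pass shape.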
New downshifted maximum in stimulated electromagnetic emission spectra
Sergeev, Evgeny; Grach, Savely
A new spectral maximum in spectra of stimulated electromagnetic emission of the ionosphere (SEE, [1]) was detected in experiments at the SURA facility in 2008 for pump frequencies f0 = 4.4-4.5 MHz, most stably for f0 = 4.3 MHz, the lowest possible pump frequency at the SURA facility. The new maximum is situated at frequency shifts ∆f ≈ -6 kHz from the pump wave frequency f0 (where ∆f = fSEE - f0), somewhat closer to f0 than the well-known [2,3] Downshifted Maximum (DM) in the SEE spectrum at ∆f ≈ -9 kHz. The detection and detailed study of the new feature (which we tentatively call the New Downshifted Maximum, NDM) became possible due to the high frequency resolution of the spectral analysis. The following properties of the NDM are established. (i) The NDM appears in the SEE spectra simultaneously with the DM and UM features after the pump turn-on (recall that the less intensive Upshifted Maximum, UM, is situated at ∆f ≈ +(6-8) kHz [2,3]). The NDM cannot be attributed to the 1 DM [4] or Narrow Continuum Maximum (NCM, 2 [5]) SEE features, nor to the split DM near gyroharmonics [2]. (ii) The NDM is observed as a prominent feature at the maximum pump power of the SURA facility, P ≈ 120 MW ERP, for which the DM is almost covered by the Broad Continuum SEE feature [2,3]. For P ~ 30-60 MW ERP the DM and NDM have comparable intensities. For lower pump power the DM prevails in the SEE spectrum, while the NDM becomes invisible, being covered by the thermal Narrow Continuum feature [2]. (iii) The NDM is exactly symmetrical to the UM relative to f0 when the latter is observed, although the UM frequency offset increases up to ∆fUM ≈ +9 kHz as the pump power decreases to P ≈ 4 MW ERP. The DM formation in the SEE spectrum is attributed to a three-wave interaction between the upper and lower hybrid waves in the ionosphere, and the lower hybrid frequency (≈7 kHz) determines the frequency offset of the DM high-frequency flank [2,6]. The detection of the NDM with
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where ``medical history`` is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
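For the single complex tone, the ML frequency estimate is the maximiser of the periodogram, which is exactly the multimodal cost the abstract refers to. A common practical approximation (not the paper's semidefinite relaxation) is a dense FFT grid for the global coarse maximum followed by local refinement:

```python
import numpy as np

def ml_tone_frequency(y, fs=1.0, refine=20):
    """Approximate ML frequency estimate for a single complex tone in noise.

    The periodogram is multimodal, so a zero-padded FFT locates the global
    coarse maximum; a shrinking three-point search then refines it locally.
    """
    n = len(y)
    nfft = 8 * n                                  # zero-padded coarse grid
    spec = np.abs(np.fft.fft(y, nfft)) ** 2
    f = np.fft.fftfreq(nfft, d=1.0 / fs)
    f_hat = f[np.argmax(spec)]                    # global coarse maximum
    step = fs / nfft
    t = np.arange(n) / fs
    for _ in range(refine):                       # local hill-climb, halving step
        cands = f_hat + np.array([-step, 0.0, step])
        power = [np.abs(np.exp(-2j * np.pi * fc * t) @ y) ** 2 for fc in cands]
        f_hat = cands[int(np.argmax(power))]
        step /= 2.0
    return f_hat
```

The coarse grid must be dense enough that the global maximum falls inside the mainlobe of the periodogram peak; the refinement then converges geometrically.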
Riehle, Fritz
2006-01-01
Of all measurement units, frequency is the one that may be determined with the highest degree of accuracy. It equally allows precise measurements of other physical and technical quantities, whenever they can be measured in terms of frequency. This volume covers the central methods and techniques relevant for frequency standards developed in physics, electronics, quantum electronics, and statistics. After a review of the basic principles, the book looks at the realisation of commonly used components. It then continues with the description and characterisation of important frequency standards.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a way to estimate the receiver function, with maximum entropy as the criterion for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy extrapolation of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
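The Toeplitz/Levinson step can be sketched with the classic Levinson-Durbin recursion (the textbook form, not the authors' implementation). The reflection coefficients it produces are the quantities whose magnitude below 1 guarantees the stability mentioned above.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence r[0..order]. Returns the filter a (a[0] = 1),
    the reflection coefficients, and the final prediction error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = r[0]                                      # zeroth-order error power
    ks = []
    for m in range(1, order + 1):
        # reflection coefficient from the current filter and correlations
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / E
        ks.append(k)
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]  # order-update of the filter
        E *= 1.0 - k * k                            # error power shrinks
    return a, np.array(ks), E
```

For the autocorrelation of an AR(1) process with coefficient 0.9, the recursion returns the filter [1, -0.9, 0] and all reflection coefficients inside the unit circle.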
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
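The differentiation step can be illustrated with a single-diode panel model: the maximum power point is the root of dP/dV. All model parameters below are hypothetical, and the derivative is taken numerically rather than from the project's own equations.

```python
import math

# illustrative single-diode panel model (hypothetical parameters)
I_L = 5.0        # photocurrent (A)
I_0 = 1e-9       # diode saturation current (A)
V_T = 0.925      # lumped thermal voltage for a 36-cell panel (V)

def current(v):
    """Panel current from the single-diode I-V relation."""
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def dpower_dv(v, h=1e-6):
    """Central-difference derivative of P(V); its root is the MPP."""
    return (power(v + h) - power(v - h)) / (2.0 * h)

def maximum_power_point(v_lo=0.0, v_hi=None, tol=1e-9):
    """Bisection on dP/dV, which falls from positive at V=0 to negative at Voc."""
    if v_hi is None:
        v_hi = V_T * math.log(I_L / I_0 + 1.0)   # open-circuit voltage
    while v_hi - v_lo > tol:
        mid = 0.5 * (v_lo + v_hi)
        if dpower_dv(mid) > 0.0:
            v_lo = mid
        else:
            v_hi = mid
    v_mp = 0.5 * (v_lo + v_hi)
    return v_mp, current(v_mp), power(v_mp)
```

With these parameters the open-circuit voltage is about 21 V and the maximum power point sits a few volts below it, as expected for a silicon panel.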
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high-throughput imaging systems can be used to estimate mesozooplankton community size and taxonomic descriptors that can form the basis for consistent large-scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol was applied to a high-frequency time series (on average every second day from April 2003 to April 2004) of zooplankton obtained at a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred eighty-four mesozooplankton net-collected samples were analysed with a ZooScan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification, with more than 10,000 objects, enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and of 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Total copepod size spectra also underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high-throughput imaging systems is of great interest for extracting relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative
Drago, Salvatore; Sebastiano, Fabio; Leenaerts, Dominicus Martinus Wilhelmus; Breems, Lucien Johannes; Nauta, Bram
2016-01-01
A low power frequency synthesiser circuit (30) for a radio transceiver, the synthesiser circuit comprising: a digital controlled oscillator configured to generate an output signal having a frequency controlled by an input digital control word (DCW); a feedback loop connected between an output and an
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimate.
Breakfast frequency among adolescents
Pedersen, Trine Pagh; Holstein, Bjørn E; Damsgaard, Mogens Trab
2016-01-01
OBJECTIVE: To investigate (i) associations between adolescents' frequency of breakfast and family functioning (close relations to parents, quality of family communication and family support) and (ii) if any observed associations between breakfast frequency and family functioning vary… (n 3054) from a random sample of forty-one schools. RESULTS: Nearly one-quarter of the adolescents had low breakfast frequency. Low breakfast frequency was associated with low family functioning measured by three dimensions: close relations to parents, quality of family communication and family support. The OR (95 % CI) of low breakfast frequency was 1·81 (1·40, 2… Further, analyses suggested that the associations were more pronounced among girls, immigrants and adolescents from other family structures than the traditional. The study highlights the importance of the family setting in promoting regular breakfast…
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time-variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure-based technique uses the fact that the NBI signal is sparse compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data-aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation
Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin
2016-12-01
If risk-based criteria are used in the design of high hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km^2 and 15,280 km^2) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10^-7 and 10^-6, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogeneous region to investigate the nature of the relationship between the AEP of PMP and catchment area.
Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.
2016-03-01
Characterization of thermoelectric generators (TEGs) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe of hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEGs integrated in several applications were evaluated; the results showed temperature variations of up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady state for different input temperatures and a maximum temperature of 401°C. Using electrical data from the characterization of the oxide module, a solar array simulator was configured to emulate the TEG. A trapezoidal temperature profile with different gradients was applied to the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty tracking accurately under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG
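The standard P&O loop evaluated in the paper can be sketched generically. Here `measure_power` is a hypothetical stand-in for the TEG power measurement, and the increment-versus-sampling-frequency compromise discussed above appears as the step size `dv` and the number of control-loop iterations.

```python
def perturb_and_observe(measure_power, v0, dv=0.1, steps=100):
    """Standard P&O hill-climbing for maximum power point tracking.

    Keep perturbing the operating point in the direction that increased
    power; reverse when power drops. The step dv trades tracking speed
    against steady-state oscillation around the maximum power point.
    """
    v = v0
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv                 # perturb
        p = measure_power(v)                # observe
        if p < p_prev:
            direction = -direction          # power dropped: reverse direction
        p_prev = p
    return v
```

On a static unimodal power curve the loop climbs to the peak and then oscillates within roughly ±2·dv of it, which is the steady-state ripple the compromise refers to.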
Maximum likelihood characterization of rotationally symmetric distributions on the sphere
Duerinckx, Mitia; Ley, Christophe
2012-01-01
A classical characterization result, which can be traced back to Gauss, states that the maximum likelihood estimator (MLE) of the location parameter equals the sample mean for any possible univariate samples of any possible sizes n if and only if the samples are drawn from a Gaussian population. A similar result, in the two-dimensional case, is given in von Mises (1918) for the Fisher-von Mises-Langevin (FVML) distribution, the equivalent of the Gaussian law on the unit circle. Half a century...
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by generator functions. Any divergence measure in the class separates into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
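For the classical Boltzmann-Gibbs-Shannon case mentioned in the abstract, the duality can be sketched as follows (a standard textbook derivation, not reproduced from the paper itself):

```latex
% Maximize Shannon entropy subject to moment constraints E[t(x)] = \bar{t}:
%   \max_p \; -\sum_x p(x)\log p(x)
%   \text{s.t.}\; \sum_x p(x)\,t(x) = \bar{t}, \quad \sum_x p(x) = 1.
% Introducing Lagrange multipliers \theta yields the exponential family
\[
  p_\theta(x) = \exp\!\bigl(\theta^\top t(x) - \psi(\theta)\bigr),
  \qquad
  \psi(\theta) = \log \sum_x \exp\!\bigl(\theta^\top t(x)\bigr),
\]
% and maximum likelihood under this model is equivalent to minimizing the
% Kullback--Leibler divergence from the empirical distribution \hat{p}:
\[
  \hat{\theta}_{\mathrm{ML}}
  = \arg\max_\theta \frac{1}{n}\sum_{i=1}^n \log p_\theta(x_i)
  = \arg\min_\theta \mathrm{KL}\bigl(\hat{p} \,\|\, p_\theta\bigr).
\]
```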
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient
Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech;
2015-01-01
to overcome the above-mentioned issues. The high-speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sine tone of known frequency and phase. The proposed algorithm utilizes vibration signals...
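The correlation-based estimate described above can be sketched as follows; the sampling rate, candidate-frequency grid, and the quadrature-correlation detail (which removes the need for a known phase) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def estimate_mesh_frequency(signal, fs, candidates):
    """Pick the candidate frequency whose pure sine tone has the largest
    correlation with the measured vibration signal. Correlating against
    quadrature (cos/sin) tones makes the statistic phase-independent."""
    t = np.arange(len(signal)) / fs
    best_f, best_r = None, -1.0
    for f in candidates:
        c = np.cos(2 * np.pi * f * t)
        s = np.sin(2 * np.pi * f * t)
        # combined magnitude of the two correlation coefficients
        r = np.hypot(np.corrcoef(signal, c)[0, 1], np.corrcoef(signal, s)[0, 1])
        if r > best_r:
            best_f, best_r = f, r
    return best_f

fs = 5000.0                       # assumed sampling rate (Hz)
t = np.arange(int(fs)) / fs       # 1 second of data
true_f = 123.0                    # synthetic "gear mesh" frequency (Hz)
rng = np.random.default_rng(0)
vib = np.sin(2 * np.pi * true_f * t + 0.7) + 0.5 * rng.standard_normal(t.size)
cands = np.arange(100.0, 150.0, 1.0)
print(estimate_mesh_frequency(vib, fs, cands))  # → 123.0
```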
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
Saarinen, Juha J.; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Evans, Alistair R.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Sibly, Richard M.; Stephens, Patrick R.; Theodor, Jessica; Uhen, Mark D.; Smith, Felisa A.
2014-01-01
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of maximum size observed within individual orders globally and on separate continents. While the maximum size of individual orders of large land mammals show differences and comprise several families, the times at which orders reach their maximum size over time show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low, it is statistically the most robust one and it is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing. PMID:24741007
Tsai, Tsung-Han; Zhou, Chao; Adler, Desmond C; Fujimoto, James G
2009-11-09
We demonstrate a frequency comb (FC) swept laser and a frequency comb Fourier domain mode locked (FC-FDML) laser for applications in optical coherence tomography (OCT). The fiber-based FC swept lasers operate at sweep rates of 1 kHz and 120 kHz, respectively, over a 135 nm tuning range centered at 1310 nm with average output powers of 50 mW. A 25 GHz free spectral range frequency comb filter in the swept lasers causes the lasers to generate a series of well-defined frequency steps. The narrow bandwidth (0.015 nm) of the frequency comb filter enables an approximately -1.2 dB sensitivity roll-off over an approximately 3 mm range, compared to conventional swept source and FDML lasers, which have -10 dB and -5 dB roll-offs, respectively. Measurements at very long ranges are possible with minimal sensitivity loss; however, reflections from outside the principal measurement range of 0-3 mm appear aliased back into the principal range. In addition, the frequency comb output from the lasers is equally spaced in frequency (linear in k-space). The filtered laser output can be used to self-clock the OCT interference signal sampling, enabling direct fast Fourier transformation of the fringe signals without the need for fringe recalibration procedures. The design and operation principles of FC swept lasers are discussed, and designs for short cavity lasers for OCT and interferometric measurement applications are proposed.
Implementation of the Maximum Entropy Method for Analytic Continuation
Levy, Ryan; Gull, Emanuel
2016-01-01
We present $\\texttt{Maxent}$, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.
Implementation of the maximum entropy method for analytic continuation
Levy, Ryan; LeBlanc, J. P. F.; Gull, Emanuel
2017-06-01
We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv3 and extensively documented. This paper shows the use of the programs in detail.
2016-01-26
This software provides a means for computing only the largest few entries of the product of two matrices, both exactly and approximately (using randomized sampling techniques). The purpose of the code is to demonstrate both the time it takes to solve the problem as well as the accuracy of the approximate approach. It is also meant to serve as a foundation to test the applicability of the sampling technique to related problems in data mining, including maximum inner product search, nearest neighbor search, and maximum cosine similarity.
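The exact part of the computation described above can be sketched as follows; the matrix values and function name are illustrative, and the randomized-sampling approximation the software also provides is not reproduced here:

```python
import numpy as np

def top_k_product_entries(A, B, k):
    """Exact computation of the k largest entries of C = A @ B, returned
    as (value, row, col) triples in descending order. For large matrices
    the referenced software approximates this step by randomized sampling
    instead of forming C explicitly."""
    C = A @ B
    flat_idx = np.argsort(C, axis=None)[::-1][:k]   # indices of largest entries
    rows, cols = np.unravel_index(flat_idx, C.shape)
    return [(float(C[i, j]), int(i), int(j)) for i, j in zip(rows, cols)]

A = np.array([[1.0, 0.0], [0.0, 3.0]])
B = np.array([[2.0, 1.0], [1.0, 4.0]])
# C = [[2, 1], [3, 12]]; the two largest entries are 12 at (1,1) and 3 at (1,0)
print(top_k_product_entries(A, B, 2))  # → [(12.0, 1, 1), (3.0, 1, 0)]
```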
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Bittel, R.; Mancel, J. [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire
1968-10-01
The most important carriers of radioactive contamination of man are foodstuffs as a whole, not only ingested water or inhaled air. For this reason, in keeping with the spirit of the recent ICRP recommendations, it is proposed to replace the MPC with the notion of maximum levels of contamination of water. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels for the two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements, and waste-disposal formulae are considered in the same spirit, with the greatest possible weight given to local situations. (authors)
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the constraint conditions. MENT can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and non-equilibrium states, as well as states far from thermodynamic equilibrium.
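A minimal illustration of the MaxEnt construction discussed above, using a mean constraint on a finite support (Jaynes' loaded-die example); the bisection solver and parameter names are this sketch's own choices, not the paper's method:

```python
import numpy as np

def maxent_given_mean(values, target_mean):
    """Maximum-entropy distribution on a finite support subject to a mean
    constraint: p_i ∝ exp(-lam * x_i), with the Lagrange multiplier lam
    found by bisection so the distribution reproduces the required mean."""
    x = np.asarray(values, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * (x - x.mean()))   # shift improves numerical stability
        p = w / w.sum()
        return float(p @ x), p

    lo, hi = -50.0, 50.0                    # bracket for the multiplier
    p = None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        m, p = mean_for(mid)
        if m > target_mean:
            lo = mid                        # larger lam lowers the mean
        else:
            hi = mid
    return p

# Die faces 1..6 constrained to average 4.5: probabilities tilt toward high faces
p = maxent_given_mean([1, 2, 3, 4, 5, 6], 4.5)
print(np.round(p, 4))
```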
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Vector control structure of an asynchronous motor at maximum torque
Chioncel, C. P.; Tirian, G. O.; Gillich, N.; Raduca, E.
2016-02-01
Vector control methods offer the possibility of high performance and are widely used. Certain applications require optimum control under limit operating conditions, such as operation at maximum torque, which is not always satisfied. The paper presents how the voltage and the frequency for an asynchronous machine (ASM) operating at variable speed are determined, with emphasis on the method that keeps the rotor flux constant. The simulation analyses consider three load types: variable torque and speed, variable torque and constant speed, and constant torque and variable speed. The final values of frequency and voltage are obtained through the proposed control schemes with one controller, using the simulation language based on the Maple module. The dynamic analysis of the system is carried out for the cases with P and PI controllers and allows conclusions on the proposed method, which can have various applications, such as the ASM in wind turbines.
Maximum bandwidth snapshot channeled imaging polarimeter with polarization gratings
LaCasse, Charles F.; Redman, Brian J.; Kudenov, Michael W.; Craven, Julia M.
2016-05-01
Compact snapshot imaging polarimeters using polarization gratings have been demonstrated in the literature to provide Stokes-parameter estimates for spatially varying scenes. However, the demonstrated system does not employ aggressive modulation frequencies that take full advantage of the bandwidth available to the focal plane array. A snapshot imaging Stokes polarimeter is described and demonstrated through simulation results. The simulation studies the challenges of using a maximum-bandwidth configuration for a snapshot polarization-grating-based polarimeter, such as the fringe-contrast attenuation that results from higher modulation frequencies. Similar simulation results are generated and compared for a microgrid polarimeter. Microgrid polarimeters, another type of spatially modulated polarimeter, superimpose pixelated polarizers onto a focal plane array; the most common design uses a 2x2 super pixel of polarizers, which maximally uses the available bandwidth of the focal plane array.
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Acoustic Imaging Frequency Dynamics of Ferroelectric Domains by Atomic Force Microscopy
ZHAO Kun-Yu; Shunji Takekawa; Kenji Kitamura; ZENG Hua-Rong; SONG Hong-Zhang; HUI Sen-Xing; LI Guo-Rong; YIN Qing-Rui; Kiyoshi Shimamura; Chinna Venkadasamy Kannan; Encarnacion Antonia Garcia Villora
2008-01-01
We report the acoustic imaging frequency dynamics of ferroelectric domains by low-frequency acoustic probe microscopy based on commercial atomic force microscopy (AFM). It is found that ferroelectric domains can first be visualized at frequencies as low as 0.h kHz by AFM-based acoustic microscopy. The frequency-dependent acoustic signal reveals a strong acoustic response in the frequency range from 7 kHz to 10 kHz, reaching a maximum at 8.1 kHz. The acoustic contrast mechanism can be ascribed to the different elastic responses of ferroelectric microstructures to local elastic stress fields, which are induced by the acoustic wave transmitting through the sample while the piezoelectric transducer vibrates and excites the acoustic wave under ac electric fields via the normal piezoelectric effect.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
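The consistency notion used above can be illustrated for rooted triplets; the nested-tuple tree encoding and function names are this sketch's own conventions, not from the paper:

```python
def leaf_set(tree):
    """Leaves of a rooted tree given as nested tuples, e.g. (('a','b'),'c')."""
    if isinstance(tree, tuple):
        return set().union(*(leaf_set(c) for c in tree))
    return {tree}

def clades(tree):
    """Leaf sets of all internal nodes (clusters) of the rooted tree."""
    if not isinstance(tree, tuple):
        return []
    out = [leaf_set(tree)]
    for child in tree:
        out.extend(clades(child))
    return out

def triplet_consistent(triplet, tree):
    """A rooted triplet xy|z (x and y closer to each other than to z) is
    consistent with the tree iff some clade contains x and y but not z.
    Assumes all three leaves occur in the tree."""
    x, y, z = triplet
    return any(x in c and y in c and z not in c for c in clades(tree))

tree = ((('a', 'b'), 'c'), 'd')
print(triplet_consistent(('a', 'b', 'c'), tree))  # ab|c holds: True
print(triplet_consistent(('a', 'c', 'b'), tree))  # ac|b conflicts: False
```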
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
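The bound stated in the abstract can be turned into a back-of-the-envelope calculation; the rigidity value (3e10 Pa, a typical crustal figure), the example injection volume, and the moment-to-magnitude conversion are assumptions for illustration, not numbers from the paper:

```python
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound sketched in the abstract: maximum seismic moment
    M0 <= G * dV (modulus of rigidity times injected volume), converted
    to moment magnitude via the standard relation
    Mw = (log10 M0 - 9.1) / 1.5, with M0 in N*m."""
    m0 = rigidity_pa * injected_volume_m3      # maximum seismic moment, N*m
    return (math.log10(m0) - 9.1) / 1.5

# e.g. 100,000 m^3 of injected wastewater bounds Mw at about 4.25
print(round(max_induced_magnitude(1.0e5), 2))  # → 4.25
```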
Broadband antenna with frequency scanning
A. A. Shekaturin
2014-06-01
Relevance of this study. The main advantage of frequency scanning is its simplicity of implementation. At present, multifunctional use of microwave modules is a pressing task, as is making them as simple and cheap as possible. Antenna design and operation. The study aims to provide electrical beam steering of an antenna by frequency scanning. It is based on the log-periodic antenna because of its wide bandwidth and good matching over the entire operating frequency range. For this, the distribution line is bent into a circular arc in the plane of the blade, while the vibrators are arranged along the radius. Computer modeling of antennas with frequency scanning. An antenna with non-mechanical beam motion, representing a system for receiving a radio-frequency signal on mobile objects, was modeled for 1.8 GHz ... 4.2 GHz. The simulation was performed in the electromagnetic numerical-modeling software «Feko 5.5». The analysis of the radiation interaction is based on the method of moments. Findings. The result of this work is a new antenna design with frequency scanning that is matched over a wide frequency range. In the studied technical solution, rotation of the antenna beam over the frequency range is provided while the matching of the antenna to the feed line is maintained. Application of antennas of this type, based on the proposed technical solution, in communication systems will improve communication reliability by maintaining matching over the frequency range.
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is highlighted as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image and how to design better PET scanners.
Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.
Knobles, D P
2015-12-01
The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (the coefficient α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The multi-step statistical inference approach developed for ship-radiated noise is then tested by processing towed-source data over the same bandwidth and source track to estimate continuous-wave source levels that were measured independently with a reference hydrophone on the tow body.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
HF Detecting Radar and Communication Frequency Selection System
无
2000-01-01
The real-time communication (RTC) frequency-selecting system is used to determine the maximum usable frequency (MUF) between two communication points and then find the best frequency between 0.85 MUF and 1.0 MUF. The determination of radio-wave delay is introduced first; the determination of MUF values, the form of the frequency-controlling code, and the relevant interface circuits in the frequency-selecting system are then described in detail.
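The selection window described here (the best frequency lies between 0.85 MUF and 1.0 MUF) can be sketched as follows; the highest-available-channel heuristic is hypothetical and stands in for the system's actual selection logic:

```python
def working_band(muf_mhz):
    """Recommended operating band described above: 0.85*MUF to 1.0*MUF."""
    return 0.85 * muf_mhz, muf_mhz

def pick_frequency(muf_mhz, available_mhz):
    """Pick the highest available channel inside the band; a hypothetical
    heuristic standing in for the system's actual selection logic."""
    lo, hi = working_band(muf_mhz)
    in_band = [f for f in available_mhz if lo <= f <= hi]
    return max(in_band) if in_band else None

best = pick_frequency(20.0, [12.0, 16.5, 18.2, 21.0])   # 18.2 MHz
```

With a 20 MHz MUF, the usable band is 17-20 MHz, so 18.2 MHz is chosen and 21.0 MHz is rejected as above the MUF.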
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or minimize the number or total size of accepted items. We consider off-line and on-line variants of the problems. For the off-line variant, we analyze two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
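The two off-line heuristics named here are plain First-Fit applied to sorted inputs. A small illustrative sketch (not the paper's analysis) showing that the increasing order tends to open more bins, which is the goal in the maximum resource variant:

```python
def first_fit(items, capacity):
    """Place each item in the first open bin with room; open a new bin otherwise."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_increasing(items, capacity):
    """Sort ascending first: tends to open MORE bins (the maximum resource goal)."""
    return first_fit(sorted(items), capacity)

def first_fit_decreasing(items, capacity):
    """Sort descending first: the classical bin-minimizing heuristic."""
    return first_fit(sorted(items, reverse=True), capacity)

items, cap = [6, 6, 3, 3], 10
bins_ffi = first_fit_increasing(items, cap)   # 3 bins: [3, 3], [6], [6]
bins_ffd = first_fit_decreasing(items, cap)   # 2 bins: [6, 3], [6, 3]
```

Note that First-Fit automatically keeps the packing maximal: an item placed in a later bin did not fit in any earlier bin, and earlier bins only fill up further afterwards.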
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
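The contrast between maximum-likelihood (ground-state) and maximum-entropy (finite-temperature Boltzmann) decoding can be sketched on a toy Ising model by exhaustive enumeration. The fields and couplings below are illustrative, and on this easy instance both decoders agree; they differ only in noisier settings, which is where the paper finds the entropy-based decoder's advantage:

```python
import itertools, math

def energy(spins, h, J):
    """Ising energy E(s) = -sum_i h_i*s_i - sum_(i,j) J_ij*s_i*s_j."""
    e = -sum(hi * si for hi, si in zip(h, spins))
    for (i, j), jij in J.items():
        e -= jij * spins[i] * spins[j]
    return e

def decode(h, J, n, temperature=None):
    """temperature=None: maximum-likelihood decoding (ground state).
    temperature>0: maximum-entropy decoding (sign of Boltzmann marginals)."""
    states = list(itertools.product([-1, 1], repeat=n))
    if temperature is None:
        return min(states, key=lambda s: energy(s, h, J))
    w = [math.exp(-energy(s, h, J) / temperature) for s in states]
    z = sum(w)
    marginals = [sum(wi * s[i] for wi, s in zip(w, states)) / z
                 for i in range(n)]
    return tuple(1 if m >= 0 else -1 for m in marginals)

h = [0.9, -0.1, 0.5]             # noisy local fields (toy received signal)
J = {(0, 1): 0.4, (1, 2): 0.4}   # couplings encoding the code constraints
ml = decode(h, J, 3)             # ground-state decoding
me = decode(h, J, 3, 1.0)        # finite-temperature decoding
```

The annealer in the paper effectively samples the Boltzmann weights that this brute-force enumeration computes exactly; enumeration is only feasible for tiny n.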
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
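For the buck topology, the load-matching condition behind the paper's observation can be sketched from the ideal continuous-conduction relation R_in = R_load / D^2. This is a textbook idealization, not the paper's full analysis:

```python
import math

def buck_duty_for_mpp(r_load, v_mpp, i_mpp):
    """Duty ratio that places an ideal buck converter's input at the array MPP.

    In continuous conduction an ideal buck reflects its load to the input as
    R_in = R_load / D**2, so matching R_in to R_mpp = V_mpp / I_mpp requires
    D = sqrt(R_load / R_mpp). This is feasible only when R_load <= R_mpp,
    mirroring the paper's point that some load values fall outside the range
    where true maximum power point operation is possible.
    """
    r_mpp = v_mpp / i_mpp
    d = math.sqrt(r_load / r_mpp)
    return d if d <= 1.0 else None   # None: MPP unreachable with this load

d_ok = buck_duty_for_mpp(r_load=2.5, v_mpp=17.0, i_mpp=3.4)    # ~0.707
d_bad = buck_duty_for_mpp(r_load=20.0, v_mpp=17.0, i_mpp=3.4)  # None
```

With a module MPP at 17 V / 3.4 A (R_mpp = 5 ohm), a 2.5 ohm load is matched at D of about 0.71, while a 20 ohm load would need D = 2 and so cannot reach the true MPP with a buck stage.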
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
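Plugging rough standard values into the quoted relation reproduces the claimed scale. The inputs below are common ballpark figures, not taken from the abstract:

```python
# Rough standard inputs (assumptions, not from the abstract), in GeV units:
t_bbn = 1.0e-3   # Big Bang nucleosynthesis temperature, ~1 MeV
m_pl = 1.2e19    # Planck mass
y_e = 2.9e-6     # electron Yukawa coupling

v_h = t_bbn**2 / (m_pl * y_e**5)   # ~400 GeV, consistent with O(300 GeV)
```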
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
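The reported gains in reliability with more trials and more days behave like aggregates of parallel measurements. As a consistency check (my reconstruction, not the authors' stated method), the Spearman-Brown prophecy formula reproduces the published figures closely:

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k parallel measurements, given the
    reliability r1 of a single measurement (Spearman-Brown prophecy formula)."""
    return k * r1 / (1 + (k - 1) * r1)

r_five_trials = spearman_brown(0.939, 5)  # ~0.987 (reported: 0.987)
r_two_days = spearman_brown(0.836, 2)     # ~0.911 (reported: 0.911)
r_three_days = spearman_brown(0.836, 3)   # ~0.939 (reported: 0.935)
```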
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with a minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than the sample size $n$, but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs
Desjardins, Guillaume; Bengio, Yoshua
2010-01-01
Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show, on a synthetic dataset, that this results in better likelihood ...
MaxOcc: a web portal for maximum occurrence analysis.
Bertini, Ivano; Ferella, Lucio; Luchinat, Claudio; Parigi, Giacomo; Petoukhov, Maxim V; Ravera, Enrico; Rosato, Antonio; Svergun, Dmitri I
2012-08-01
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc . It can be used freely upon registration to the grid with a digital certificate.
Malaria haplotype frequency estimation.
Wigger, Leonore; Vogt, Julia E; Roth, Volker
2013-09-20
We present a Bayesian approach for estimating the relative frequencies of multi-single nucleotide polymorphism (SNP) haplotypes in populations of the malaria parasite Plasmodium falciparum by using microarray SNP data from human blood samples. Each sample comes from a malaria patient and contains one or several parasite clones that may genetically differ. Samples containing multiple parasite clones with different genetic markers pose a special challenge. The situation is comparable with a polyploid organism. The data from each blood sample indicates whether the parasites in the blood carry a mutant or a wildtype allele at various selected genomic positions. If both mutant and wildtype alleles are detected at a given position in a multiply infected sample, the data indicates the presence of both alleles, but the ratio is unknown. Thus, the data only partially reveals which specific combinations of genetic markers (i.e. haplotypes across the examined SNPs) occur in distinct parasite clones. In addition, SNP data may contain errors at non-negligible rates. We use a multinomial mixture model with partially missing observations to represent this data and a Markov chain Monte Carlo method to estimate the haplotype frequencies in a population. Our approach addresses both challenges, multiple infections and data errors.
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of the Prony relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
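The pole-to-frequency correspondence described here is easy to demonstrate: fit an AR(2) model to a sampled sinusoid and read the frequency off the pole angle. The signal parameters below are illustrative:

```python
import numpy as np

fs = 100.0      # sampling rate, Hz (illustrative)
f_true = 5.0    # sinusoid frequency, Hz (illustrative)
n = np.arange(200)
x = np.cos(2 * np.pi * f_true * n / fs)

# Least-squares fit of the AR(2) recurrence x[t] = a1*x[t-1] + a2*x[t-2]
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Poles of z**2 - a1*z - a2: a conjugate pair on the unit circle whose
# angle encodes the sinusoid's frequency, as the abstract describes.
poles = np.roots([1.0, -a1, -a2])
f_est = abs(np.angle(poles[0])) * fs / (2 * np.pi)   # ~5.0 Hz
```

For a pure sinusoid the fitted coefficients are a1 = 2 cos(2*pi*f/fs) and a2 = -1, so the poles sit exactly on the unit circle; damped exponentials would pull them inside it.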
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio links. The performance of the proposed scheme is evaluated under different system parameters and compared with that of the conventional method via computer simulations, assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Rakesh R. Pathak
2012-02-01
Full Text Available Based on the law of large numbers, which is derived from probability theory, we tend to increase the sample size to the maximum. The central limit theorem is another inference from the same probability theory, which favours the largest possible number as the sample size for better validity of measures of central tendency like the mean and median. Sometimes an increase in sample size yields only negligible improvement, or there is no increase at all in statistical relevance, due to strong dependence or systematic error. If we can afford a little larger sample, a statistical power of 0.90 is taken as acceptable with a medium Cohen's d (< 0.5); for that, a sample size of 175 can be taken very safely, and considering the problem of attrition, 200 samples would suffice. [Int J Basic Clin Pharmacol 2012; 1(1): 43-44]
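The quoted sample size of about 175 is consistent with the textbook normal-approximation formula for comparing two means. A quick check (my reconstruction, with alpha = 0.05 assumed, since the abstract does not state it):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_d, power=0.90, alpha=0.05):
    """Per-group sample size for a two-sample comparison of means,
    normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)**2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for power = 0.90
    return ceil(2 * ((z_a + z_b) / effect_d) ** 2)

n = n_per_group(0.5)   # 85 per group, ~170 in total
```

With d = 0.5 and power 0.90 this gives 85 per group, roughly 170 in total, close to the 175 quoted before the attrition allowance.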
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. mp(T) can be computed using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
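Of the processes listed, the Ornstein-Uhlenbeck model is the simplest to simulate. A sketch using its exact discretization (parameters illustrative), whose stationary variance sigma^2/(2*theta) reflects the fluctuation-dissipation constraint mentioned above:

```python
import math, random

# theta: mean-reversion rate, sigma: noise intensity, dt: time step (illustrative)
theta, sigma, dt = 1.0, 1.0, 0.1
rng = random.Random(42)

a = math.exp(-theta * dt)                              # exact AR(1) coefficient
s = math.sqrt(sigma**2 / (2 * theta) * (1 - a * a))    # conditional std dev
x, sq_sum, n_steps = 0.0, 0.0, 100_000
for _ in range(n_steps):
    x = a * x + s * rng.gauss(0.0, 1.0)
    sq_sum += x * x

var = sq_sum / n_steps   # should approach sigma**2 / (2 * theta) = 0.5
```

The exact discretization avoids the bias of an Euler step; the dissipation (theta) and the fluctuation (sigma) jointly fix the stationary spread, which is the balance the paper's fluctuation-dissipation argument formalizes.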
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
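First-Fit takes only a few lines, and sorting its input gives the two algorithms named above. The sketch below is an illustrative reimplementation (not the paper's analysis): on the same item list, First-Fit-Increasing opens more bins than First-Fit-Decreasing, which is the sense in which it suits the maximum resource objective. The item sizes are arbitrary.

```python
def first_fit(items, bin_capacity=1.0):
    """Place each item in the first bin with room; open a new bin otherwise."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= bin_capacity + 1e-12:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]
# First-Fit-Increasing sorts ascending (tends to use MORE bins, the objective
# in the maximum resource variant); First-Fit-Decreasing sorts descending.
ffi = first_fit(sorted(items))
ffd = first_fit(sorted(items, reverse=True))
print(len(ffi), len(ffd))   # FFI opens 3 bins here, FFD packs into 2
```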
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
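The single-constraint argument can be checked numerically: maximizing Shannon entropy subject only to a fixed mean of ln k gives p_k proportional to k^(-alpha), with alpha the Lagrange multiplier. The sketch below (with an arbitrary truncation at N = 1000 and arbitrary exponents, chosen only for illustration) builds a competitor distribution with exactly the same mean of ln k and confirms that its entropy is lower.

```python
import numpy as np

N = 1000
k = np.arange(1, N + 1)

def powerlaw(alpha):
    p = k.astype(float) ** (-alpha)
    return p / p.sum()

def entropy(p):
    return -(p * np.log(p)).sum()

def mean_log(p):
    return (p * np.log(k)).sum()

# Maximum entropy subject only to a fixed <ln k> yields p_k ∝ k^(-alpha)
p = powerlaw(1.5)
c = mean_log(p)

# Competitor with the SAME <ln k>: a mixture of two other power laws,
# with the mixture weight solved from the linear constraint
p1, p2 = powerlaw(1.2), powerlaw(2.0)
w = (c - mean_log(p2)) / (mean_log(p1) - mean_log(p2))
q = w * p1 + (1 - w) * p2

print(abs(mean_log(q) - c) < 1e-9, entropy(p) > entropy(q))
```

Because the maximum entropy distribution under this constraint is unique, any distinct distribution satisfying the same constraint must have strictly lower entropy.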
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
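The Estrada index is straightforward to compute from the definition above. As a minimal sketch (the choice of graph is purely illustrative), the path graph P3 has adjacency eigenvalues -sqrt(2), 0, sqrt(2), so its index is e^sqrt(2) + 1 + e^(-sqrt(2)):

```python
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i), over eigenvalues of the adjacency matrix."""
    eigvals = np.linalg.eigvalsh(adj)   # symmetric, so eigvalsh is appropriate
    return np.exp(eigvals).sum()

# Path graph P3: vertices 1-2-3
P3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
ee = estrada_index(P3)
print(round(ee, 4))
```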
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
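The prediction equation reported above is a one-line computation once the strain-specific quantities are known. The sketch below wraps it in a function; the input values are hypothetical placeholders for illustration only (real values of the inoculum biomass X0, the yield Y_X/P, and the MIC C would come from the kinds of measurements the study describes).

```python
def predicted_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """Xmax - X0 = k * Y_X/P * C, with k = 0.59 +/- 0.02 as reported.
    x0: initial biomass, yield_per_lactate: Y_X/P, mic_lactate: MIC of lactate (C)."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical numbers, chosen only to exercise the formula
xmax = predicted_max_biomass(x0=0.1, yield_per_lactate=0.2, mic_lactate=180.0)
print(round(xmax, 2))
```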
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Nielsen, Morten Ø.; Frederiksen, Per Houmann
2005-01-01
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all ... the time domain parametric methods, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
Estimation of recombination frequency in genetic linkage studies.
Nordheim, E V; O'Malley, D M; Guries, R P
1983-09-01
A binomial-like model is developed that may be used in genetic linkage studies when data are generated by a testcross with parental phase unknown. Four methods of estimation for the recombination frequency are compared for data from a single group and also from several groups; these methods are maximum likelihood, two Bayesian procedures, and an ad hoc technique. The Bayes estimator using a noninformative prior usually has a lower mean squared error than the other estimators and because of this it is the recommended estimator. This estimator appears particularly useful for estimation of recombination frequencies indicative of weak linkage from samples of moderate size. Interval estimates corresponding to this estimator can be obtained numerically by discretizing the posterior distribution, thereby providing researchers with a range of plausible recombination values. Data from a linkage study on pitch pine are used as an example.
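The discretized-posterior idea mentioned above can be sketched in a simplified setting. The code below assumes the parental phase is known, so the likelihood reduces to a plain binomial in the recombination fraction r on (0, 0.5]; the paper's binomial-like model for unknown phase is more involved. The counts are invented for illustration.

```python
import numpy as np

def recomb_posterior(k, n, grid_size=1000):
    """Discretized posterior for recombination fraction r on (0, 0.5]
    under a flat (noninformative) prior and binomial likelihood."""
    r = np.linspace(1e-6, 0.5, grid_size)
    log_post = k * np.log(r) + (n - k) * np.log(1 - r)   # log-likelihood = log-posterior + const
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return r, post

# Hypothetical testcross: 30 recombinants out of 100 offspring
r, post = recomb_posterior(k=30, n=100)
mean = (r * post).sum()
cdf = post.cumsum()
lo, hi = r[np.searchsorted(cdf, 0.025)], r[np.searchsorted(cdf, 0.975)]
print(round(mean, 3), round(lo, 3), round(hi, 3))
```

The cumulative sums give the kind of interval estimate of plausible recombination values the abstract describes.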
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows one (1) to use a physically consistent rationale to select a particular probability density function (pdf), (2) to provide an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) to develop a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
Rolling element bearing faults diagnosis based on kurtogram and frequency domain correlated kurtosis
Gu, Xiaohui; Yang, Shaopu; Liu, Yongqiang; Hao, Rujiang
2016-12-01
Envelope analysis is one of the most useful methods in localized fault diagnosis of rolling element bearings. However, there is a challenge in selecting the optimal resonance band. In this paper, a novel method based on kurtogram and frequency domain correlated kurtosis is proposed. To obtain the correct relationship between the node and frequency band in wavelet packet transform, a vital process named frequency ordering is conducted to solve the frequency folding problem due to down sampling. Correlated kurtosis of envelope spectrum instead of correlated kurtosis of envelope signal or kurtosis of envelope spectrum is utilized to generate the kurtogram, in which the maximum value can indicate the optimal band for envelope analysis. Several cases of experimental bearing fault signals are used to evaluate the immunity of the proposed method to strong noise interference. The improved performance has also been compared with two previous developed methods. The results demonstrate the effectiveness and robustness of the method in fault diagnosis of rolling element bearings.
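Envelope analysis itself is easy to demonstrate. The sketch below (a generic illustration, not the paper's kurtogram or correlated-kurtosis machinery) computes an envelope spectrum via the analytic signal and recovers the modulation rate of a toy amplitude-modulated "fault" signal; the carrier, modulation frequency, and sampling rate are all invented.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope spectrum via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)                     # assumed even below
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2                # double positive frequencies
    h[n // 2] = 1
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)
    env -= env.mean()              # remove DC before looking for envelope lines
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs, spec

fs = 10_000
t = np.arange(0, 1, 1 / fs)
# Toy bearing-like signal: a 3 kHz resonance amplitude-modulated at a 120 Hz fault rate
x = (1 + 0.8 * np.cos(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
peak_freq = freqs[spec[1:].argmax() + 1]   # skip the (removed) DC bin
print(peak_freq)
```

The dominant envelope line lands at the modulation rate, which is how envelope analysis exposes a localized fault frequency hidden inside a high-frequency resonance band.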
Tillé, Yves
2006-01-01
Important progress in sampling methods has been achieved. This book draws up an inventory of methods that can be useful for selecting samples. Forty-six sampling methods are described in the framework of general theory. This book is suitable for experienced statisticians who are familiar with the theory of survey sampling.
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Rainfall Maximum Intensities for Urban Hydrological Design in Mexican Republic
Campos–Aranda D.F.
2010-04-01
First, the difficulties and general approach of urban flood estimation, based on Intensity-Duration-Frequency (IDF) curves, are established in terms of the urban hydrosystem concept and urbanization. Next, a procedure for estimating IDF curves, which uses the Chen formula and the information available in the Mexican Republic on rainfall-intensity isohyets and annual daily maximum rainfall, is tested at 10 recording gauges located in very different geographic zones. Then, having verified its accuracy in reproducing the IDF curves, the procedure was applied at 45 important locations across the country, and the results are presented. Finally, conclusions are drawn, highlighting the accuracy and simplicity of the proposed procedure.
Neal, R M
2000-01-01
Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such `slice sampling' methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive f...
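The alternation described above (a uniform vertical draw under the density, then a uniform horizontal draw from the resulting slice) is short to implement. The sketch below follows the standard stepping-out and shrinkage procedure for a univariate target; the target, initial width, and sample count are arbitrary choices for illustration.

```python
import math
import random

def slice_sample(logf, x0, n_samples, w=1.0, seed=1):
    """Univariate slice sampling with stepping-out and shrinkage."""
    rng = random.Random(seed)
    samples, x = [], x0
    for _ in range(n_samples):
        # Vertical step: uniform level under the (log) density at x
        log_y = logf(x) + math.log(rng.random())
        # Step out an interval of width w until it brackets the slice
        left = x - w * rng.random()
        right = left + w
        while logf(left) > log_y:
            left -= w
        while logf(right) > log_y:
            right += w
        # Horizontal step: uniform draw, shrinking the interval on rejection
        while True:
            x1 = left + (right - left) * rng.random()
            if logf(x1) > log_y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        samples.append(x)
    return samples

# Standard normal target (log-density up to a constant)
xs = slice_sample(lambda x: -0.5 * x * x, 0.0, 20_000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
print(round(mean, 2), round(var, 2))
```

The adaptive shrinkage is what lets the method choose its own step scale, the property the abstract highlights over simple Metropolis updates.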
Liarte, Danilo B; Transtrum, Mark K; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P
2016-01-01
We review our work on theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces. These limits are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. We summarize our calculations of the so-called superheating field $H_{\mathrm{sh}}$, beyond which flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and disorder. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. T...
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-T_c materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
Millisecond Pulsar Ages: Implications of Binary Evolution and a Maximum Spin Frequency
Kiziltan, Bulent
2009-01-01
In the absence of constraints from the binary companion or supernova remnant, the standard method for estimating pulsar ages is to infer an age from the rate of spin-down. While the generic spin-down age may give realistic estimates for normal pulsars, it can fail for pulsars with very short periods. Details of the spin-up process during the low mass X-ray binary phase pose additional constraints on the period (P) and spin-down rates (Pdot) that may consequently affect the age estimate. Here, we propose a new recipe to estimate millisecond pulsar (MSP) ages that parametrically incorporates constraints arising from binary evolution and limiting physics. We show that the standard method can be improved by this approach to achieve age estimates closer to the true age whilst the standard spin-down age may over- or under-estimate the age of the pulsar by more than a factor of ~10 in the millisecond regime. We use this approach to analyze the population on a broader scale. For instance, in order to understand the d...
Kinematic analysis of sprinting pickup acceleration versus maximum sprinting speed
S. MANZER
2016-10-01
Pickup acceleration and maximum sprinting speed are two essential phases of the 100-m sprint that differ in sprinting speed, step length, step frequency, and technique. The aim of the study was to describe and compare the kinematic parameters of both sprint variants. It was hypothesized that differences would be found in sprinting speed, step length, flight and contact times, as well as between the body angles at different key positions. For 8 female and 8 male (N=16) junior track and field athletes, a double stride of each sprint variant was filmed (200 Hz) from a sagittal position, and the 10-m sprint time was measured using triple light barriers. Kinematic data for sprinting speed and the knee, hip, and ankle angles were compared with a repeated-measures analysis of variance. Sprinting speed was 7.7 m/s and 8.0 m/s (female) and 8.4 m/s and 9.2 m/s (male), with significantly greater step length and flight time and shorter ground contact time during maximum sprinting speed. Because of the longer flight time, it is possible to place the foot closer to the body but with a more extended knee at ground contact. These characteristics can be used as orientation for technique training.
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures to various aspects of PMP and PMF.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space, where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The linear programming approach allows the equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
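The core trick, maximizing the entropy objective by linearizing it into segments, can be sketched as a small linear program. This is only an illustration of the piecewise-linear idea, not the paper's implementation; the breakpoints and problem sizes are arbitrary choices, and `scipy.optimize.linprog` stands in for the revised simplex code:

```python
# Sketch: maximum-entropy estimation as a linear program.
# h(x) = -x ln x is concave, so it equals the pointwise minimum of its
# tangent lines; auxiliary variables t_i <= tangent_k(x_i) linearize it.
import numpy as np
from scipy.optimize import linprog

def maxent_lp(n_vars, n_segments=20):
    """Maximize sum_i h(x_i) subject to sum_i x_i = 1, 0 <= x_i <= 1."""
    # Tangent line of h at breakpoint p: t = (-ln p - 1) x + p
    ps = np.linspace(0.01, 1.0, n_segments)
    slopes = -np.log(ps) - 1.0
    intercepts = ps

    # Variables: [x_1..x_n, t_1..t_n]; linprog minimizes, so use -sum t_i.
    c = np.concatenate([np.zeros(n_vars), -np.ones(n_vars)])

    # Inequalities: t_i - slope_k * x_i <= intercept_k for every i, k
    A_ub, b_ub = [], []
    for i in range(n_vars):
        for a, b in zip(slopes, intercepts):
            row = np.zeros(2 * n_vars)
            row[i] = -a
            row[n_vars + i] = 1.0
            A_ub.append(row)
            b_ub.append(b)

    A_eq = np.zeros((1, 2 * n_vars))
    A_eq[0, :n_vars] = 1.0  # probabilities sum to one
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, 1)] * (2 * n_vars))
    return res.x[:n_vars]

x = maxent_lp(4)
print(np.round(x, 3))  # with only the normalization constraint, near-uniform
```

With no constraints beyond normalization, the maximum-entropy solution is uniform, so the LP should return values close to 0.25 each (up to the piecewise-linear approximation error).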
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of the increase in a ship's draft and trim due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
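The Poisson-likelihood fitting idea can be illustrated with a minimal sketch. This is not CORA's code; the Gaussian-line-plus-background model, the simulated spectrum, and all parameter values are illustrative assumptions, with `scipy.optimize.minimize` standing in for CORA's fixed-point iteration:

```python
# Sketch of Poisson maximum-likelihood line fitting on low-count data.
# Model, data, and starting values are illustrative, not CORA's.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x, counts):
    """Poisson -log L for a Gaussian line on a flat background,
    up to a constant that does not depend on the parameters."""
    amp, center, sigma, bg = params
    model = bg + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)
    model = np.maximum(model, 1e-12)  # keep the log well defined
    return np.sum(model - counts * np.log(model))

# Simulated low-count spectrum: a line at x = 20 on a weak background
rng = np.random.default_rng(0)
x = np.arange(40.0)
truth = 1.0 + 8.0 * np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)
counts = rng.poisson(truth)

fit = minimize(neg_log_likelihood, x0=[5.0, 18.0, 3.0, 0.5],
               args=(x, counts), method="Nelder-Mead")
amp, center, sigma, bg = fit.x
print(round(center, 2))  # recovered line position, near 20
```

The key point of the Poissonian treatment is visible in the objective: the likelihood uses the counts directly, with no Gaussian-error approximation, which is what makes the method valid at low count numbers.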
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
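The statement of Zipf's law above is easy to check numerically: for sizes following a Pareto law with unit exponent, the rank-size plot is a log-log line of slope -1. The sketch below uses a deterministic quantile sample rather than real firm data:

```python
# Numerical check: Pareto(1) sizes give a rank-size log-log slope of -1.
# Deterministic quantile sample; illustrative, not real firm data.
import numpy as np

n = 10_000
u = (np.arange(n) + 0.5) / n   # uniform grid on (0, 1)
sizes = 1.0 / u                # Pareto(1) quantiles: P(S > s) ~ 1/s
sizes = np.sort(sizes)[::-1]   # rank 1 = largest firm
ranks = np.arange(1, n + 1)

slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
print(round(slope, 3))  # close to -1
```

Deviations of the fitted slope from -1 in real data are exactly what the balance condition in the abstract is meant to explain.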
Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine
El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick
2015-01-01
We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full-parallel) selected from the design space are prototyped on an Altera Cyclone III FPGA platform; in both cases the FPGA resource usage is less than 1% and the maximum frequency is 250 MHz.
Adjust or Synchronize LM2586/88 Switching Frequency
2003-01-01
INTRODUCTION
Switching frequency is a very important parameter in switching power converters. As the switching frequency increases, the physical size of magnetic elements and other components in the circuit reduces significantly. Switching frequency also plays a great role in control loop gain and compensation design; it determines the maximum allowable bandwidth of the control loop. Switching frequency is also an important parameter for EMI and noise issues, as the EMI spectrum is a direct function of the switching frequency.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Maximum SINR Synchronization Strategies in Multiuser Filter Bank Schemes
Pecile Francesco
2010-01-01
We consider synchronization in a multiuser filter bank uplink system with single-user detection. Contrary to what intuition would suggest, perfect user synchronization is not the optimal choice. To maximize performance, the synchronization parameters have to be chosen to maximize the signal-to-interference-plus-noise ratio (SINR) at each equalizer subchannel output. However, the resulting filter bank receiver structure becomes complex. Therefore, we consider two simplified synchronization metrics that are based on the maximization of the average SINR of a given user or the aggregate SINR of all users. Furthermore, a relaxation of the aggregate SINR metric allows implementing an efficient multiuser analysis filter bank. This receiver deploys two fractionally spaced analysis stages. Each analysis stage is efficiently implemented via a polyphase filter bank, followed by an extended discrete Fourier transform that allows the user frequency offsets to be partly compensated. Then, sub-channel maximum SINR equalization is used. We discuss the application of the proposed solution to Orthogonal Frequency Division Multiple Access (OFDMA) and multiuser Filtered Multitone (FMT) systems.
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are inherently versatile optical transmitters. Through direct modulation (DM), intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through digital coherent detection, simultaneous intensity and angle modulation (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of the frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
Maximum length sequence and Bessel diffusers using active technologies
Cox, Trevor J.; Avis, Mark R.; Xiao, Lejun
2006-02-01
Active technologies can enable room acoustic diffusers to operate over a wider bandwidth than passive devices, by extending the bass response. Active impedance control can be used to generate surface impedance distributions which cause wavefront dispersion, as opposed to the more normal absorptive or pressure-cancelling target functions. This paper details the development of two new types of active diffusers which are difficult, if not impossible, to make as passive wide-band structures. The first type is a maximum length sequence diffuser where the well depths are designed to be frequency dependent to avoid the critical frequencies present in the passive device, and so achieve performance over a finite-bandwidth. The second is a Bessel diffuser, which exploits concepts developed for transducer arrays to form a hybrid absorber-diffuser. Details of the designs are given, and measurements of scattering and impedance used to show that the active diffusers are operating correctly over a bandwidth of about 100 Hz to 1.1 kHz. Boundary element method simulation is used to show how more application-realistic arrays of these devices would behave.
A linear temperature-to-frequency converter
Løvborg, Leif
1965-01-01
The possibility of converting temperature into a frequency signal by means of a thermistor which is part of the frequency-determining network of an RC oscillator is investigated. It is shown that a temperature-frequency characteristic which has a point of inflection may be realized, and that the maximum value of the temperature-frequency coefficient β at this point is −α/3, where α is the temperature coefficient of the thermistor at the corresponding temperature. Curves showing the range in which the converter is expected to be linear to within ±0.1 °C are given. A laboratory...
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the stringent requirements on multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period through updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
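The period-estimation idea described above, reading the fault period from the autocorrelation of the envelope signal, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the simulated signal, decay constants, and lag window are assumptions:

```python
# Sketch: estimate an impulse period from the envelope autocorrelation,
# in the spirit of IMCKD's period initialization (illustrative only).
import numpy as np
from scipy.signal import hilbert

def estimate_period(x, min_lag=10):
    """Lag of the dominant autocorrelation peak of the signal envelope."""
    env = np.abs(hilbert(x))           # analytic-signal envelope
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Skip the zero-lag main lobe, search the first half of the lags
    return int(np.argmax(ac[min_lag:len(ac) // 2]) + min_lag)

# Simulated bearing-like signal: decaying resonances every 100 samples
n, period = 2000, 100
x = np.zeros(n)
for k in range(0, n, period):
    seg = np.arange(n - k)
    x[k:] += np.exp(-seg / 15.0) * np.sin(2 * np.pi * 0.2 * seg)

print(estimate_period(x))  # near the true period of 100 samples
```

In the full method this estimate would seed the iteration and be refreshed after every deconvolution step, so an imprecise initial period still converges toward the true fault period.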
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Mesfin Dema
2014-05-01
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
Detection of Ochratoxin A in bread samples in Shahrekord city, Iran, 2011-2012
Mehran Erfani
2013-12-01
Results: Ochratoxin A was detected in 45 out of the 86 bread samples (52.3%). Levels of OTA in positive samples ranged between 0.19 and 10.37 ng/g, and the average contamination of all positive samples was 3.04 ng/g. The highest frequency of positive samples was related to machine-made Taftoon (88.8%) and Lavash bread (81.8%). The most contaminated sample (5.39 ng/g) was found in the Iranian Lavash bread. Fifteen of the positive samples exceeded the maximum level of 5 ng/g set by European regulations for OTA in cereal and bread. Conclusion: The results of this study indicated that contamination levels of ochratoxin A were high in part of the samples (17.4%). Bread and cereals are considered to be the main and predominant ingredients of Iranian food; therefore, their contamination can have a long-term negative impact on people's health.
High-frequency EEG activity in epileptic encephalopathy with suppression-burst.
Toda, Yoshihiro; Kobayashi, Katsuhiro; Hayashi, Yumiko; Inoue, Takushi; Oka, Makio; Endo, Fumika; Yoshinaga, Harumi; Ohtsuka, Yoko
2015-02-01
We explored high-frequency activity in the suppression-burst (SB) pattern of interictal electroencephalogram (EEG) in early infantile epileptic encephalopathy, including Ohtahara syndrome (OS) and early myoclonic encephalopathy (EME), to investigate the pathophysiological characteristics of SB. Subjects included six patients with the SB EEG pattern related to OS or EME (Group SB). The results were evaluated in comparison to tracé alternant (TA) observed during the neonatal period in nine patients, to rule out possible nonspecific relationships between high-frequency activity and periodic EEG patterns (Group TA). EEG was digitally recorded with a sampling rate of 500 Hz and the analysis was performed in each of the particular bipolar channel-pairs. We visually selected 20 typical consecutive burst sections and 160 inter-burst sections for comparison from the sleep record of each patient and performed time-frequency analysis. We investigated the maximum frequencies of power enhancement in each derivation in both groups. In Group SB, a significant increase in power at a frequency of 80-150 Hz was observed in association with the bursts, particularly in the bilateral parieto-occipital derivations, in all patients. In Group TA, on the contrary, no significant increase in high-frequency power was found. The maximum frequencies of power enhancement were significantly higher in Group SB than in Group TA. High frequencies of up to 150 Hz were detected in the suppression-burst EEG patterns in epileptic encephalopathy in early infancy. Further studies will be necessary to identify the role of the interictal high-frequency activity in the pathophysiology of such early epileptic encephalopathy. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Frequency position modulation using multi-spectral projections
Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory
2012-10-01
In this paper we present an approach to harness multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension of MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate the use of commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which requires a full Nyquist rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates the ability to operate in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum likelihood based FPM demodulator and demonstrate its performance in interference-laden conditions.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different transfer laws affect the model through the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
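The two solutions mentioned above can be sketched directly, the cubic-time specification and the linear-time fold it refines to (Kadane's algorithm). This is a plain imperative sketch, not the paper's datatype-generic or monadic derivation:

```python
# Maximum segment sum: cubic-time specification vs. linear-time fold.

def mss_spec(xs):
    """Cubic time: maximum over all contiguous segments (empty segment = 0)."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_linear(xs):
    """Linear time (Kadane): fold tracking the best segment ending here."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

xs = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
print(mss_spec(xs), mss_linear(xs))  # → 187 187
```

The program-construction exercise is precisely to derive `mss_linear` from `mss_spec` by calculation rather than by checking they agree on examples.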
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt = vt dt + √2 dBt, we set St = max0≤s≤t Xs and consider the optimal control problem supv E(Sτ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying vt ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | Xt ∉ (ℓ0, ℓ1)}, with μ0 < 0 < μ1. The optimal control is of bang-bang type, switching between μ0 and μ1 across the curve Xt = g∗(St), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
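The spectral luminous efficacy being bounded here is the luminance-weighted average of a source spectrum, K = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ. The sketch below illustrates the idea for a flat spectrum truncated to the visible band, using a crude Gaussian stand-in for the CIE photopic curve V(λ); the width of 42 nm is an assumption, not the tabulated CIE data the paper would use.

```python
import numpy as np

# Wavelength grid over an assumed visible bandpass of 400-700 nm.
lam = np.arange(400.0, 701.0, 1.0)

# Crude Gaussian approximation to the photopic sensitivity V(lambda),
# peaking at 555 nm (assumed sigma ~ 42 nm; the real curve is CIE-tabulated).
V = np.exp(-0.5 * ((lam - 555.0) / 42.0) ** 2)

# Flat (equal-energy) spectrum truncated to the band.
S = np.ones_like(lam)

# Spectral luminous efficacy in lm/W: 683 lm/W at the 555 nm peak,
# weighted by V(lambda) and averaged over the source spectrum.
K = 683.0 * np.sum(V * S) / np.sum(S)
```

Even this rough model lands in the few-hundred lm/W regime, well below 683 lm/W: averaging over a broad band necessarily dilutes the peak sensitivity, which is the spectrally-imposed limit the paper quantifies as a function of bandpass, color temperature, and color rendering index.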
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
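A pairwise maximum entropy model of this kind assigns each economy a binary state (expansion +1, recession -1) and takes P(s) ∝ exp(Σᵢ hᵢsᵢ + Σᵢ<ⱼ Jᵢⱼsᵢsⱼ), the least-structured distribution consistent with individual rates and pairwise synchronization. The sketch below evaluates this distribution exactly for three units with made-up h and J values (not fitted G7 parameters) and reads off a model-implied pairwise synchronization.

```python
import itertools
import numpy as np

# Illustrative fields and couplings for 3 economies (NOT fitted G7 values).
h = np.array([0.2, -0.1, 0.0])
J = np.array([[0.0, 0.5, 0.1],
              [0.5, 0.0, 0.3],
              [0.1, 0.3, 0.0]])   # symmetric, zero diagonal

# Enumerate all 2^3 joint states s in {-1, +1}^3 (feasible for small systems).
states = np.array(list(itertools.product([-1, 1], repeat=3)))

# Ising-type log-weights: sum_i h_i s_i + sum_{i<j} J_ij s_i s_j
# (the 0.5 factor compensates for counting each symmetric pair twice).
logw = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
P = np.exp(logw)
P /= P.sum()

# Model-implied synchronization of economies 0 and 1: E[s_0 s_1].
sync01 = float(np.sum(P * states[:, 0] * states[:, 1]))
```

In practice h and J are fitted so the model's means and pairwise correlations match the data; the paper's 45% figure measures how much of the observed interaction structure such a pairwise fit captures, with exact enumeration only feasible because the G7 system is small.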
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
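The per-pixel codebook idea mentioned above can be sketched in a minimal form. The code below reproduces only the feature-collection step, not the ME layer model: each pixel accumulates a set of intensity codewords during training, and a pixel in a new frame is labeled foreground when it matches none of them. The tolerance value and single-channel intensities are simplifying assumptions.

```python
import numpy as np

def build_codebook(frames, tol=10.0):
    """Per-pixel codebook: a new intensity opens a codeword only if
    no existing codeword for that pixel lies within tol of it."""
    h, w = frames[0].shape
    books = [[[] for _ in range(w)] for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                v = float(f[y, x])
                cb = books[y][x]
                if not any(abs(v - c) <= tol for c in cb):
                    cb.append(v)
    return books

def segment(frame, books, tol=10.0):
    """Label a pixel foreground (1) if it matches no background codeword."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            v = float(frame[y, x])
            if not any(abs(v - c) <= tol for c in books[y][x]):
                mask[y, x] = 1
    return mask

# Train on a static background, then present a frame with one changed pixel.
train = [np.full((4, 4), 100.0) for _ in range(5)]
books = build_codebook(train)
probe = np.full((4, 4), 100.0)
probe[1, 1] = 200.0
mask = segment(probe, books)
```

A full system would feed such codeword matches, among other features, into the ME model to assign pixels to layers rather than to a hard background/foreground split.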