Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
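The kind of blinded reassessment rule this abstract analyzes can be sketched numerically; below is a minimal illustration using a generic textbook formula and made-up interim data (not the authors' worst-case rule; `reassess_n` is a hypothetical helper):

```python
import math

def reassess_n(blinded_values, delta):
    """Recompute the per-group sample size from blinded interim data.

    Uses the pooled (one-sample) variance of the blinded data in the
    standard two-sample normal formula, with z-quantiles fixed for
    two-sided alpha = 0.05 and power = 0.80.
    """
    n = len(blinded_values)
    mean = sum(blinded_values) / n
    var = sum((x - mean) ** 2 for x in blinded_values) / (n - 1)
    z_alpha, z_beta = 1.959964, 0.841621  # N(0,1) quantiles at 0.975 and 0.80
    return math.ceil(2 * var * (z_alpha + z_beta) ** 2 / delta ** 2)

# Hypothetical blinded interim data and a target difference of 1.0.
print(reassess_n([1.0, 3.0, 5.0, 3.0, 1.0, 5.0], delta=1.0))  # 51
```

The paper's point is precisely that such a rule preserves the type I error rate only when it is pre-planned and fully algorithmic like this one, not chosen ad hoc after inspecting the blinded data.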
Rezaeian Mahdi
2015-01-01
Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, the releasable activity and the maximum permissible volumetric leakage rate within the cask containing fuel samples of the Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask, and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Comparable increases in whales took as little as half as long (1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Universal Sampling Rate Distortion
Boda, Vinay Praneeth; Narayan, Prakash
2017-01-01
We examine the coordinated and universal rate-efficient sampling of a subset of correlated discrete memoryless sources followed by lossy compression of the sampled sources. The goal is to reconstruct a predesignated subset of sources within a specified level of distortion. The combined sampling mechanism and rate distortion code are universal in that they are devised to perform robustly without exact knowledge of the underlying joint probability distribution of the sources. In Bayesian as well...
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned Martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those, heating during sample acquisition (drilling) and heating while cached on the Martian surface, potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) ⁴He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently, multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (e.g., a 3D experiment in one-ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models, in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results are related to guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known almost-sure rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (15% sampling), would lack (512-75)×2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm, requiring only two FTs, is ∼450 times faster in this case. This reduces the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where calculations that required days of CPU time on high-performance computing clusters with the original algorithm take only a few minutes on a regular laptop with the new algorithm.
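The speedup quoted in this abstract is a simple matter of operation counting; a minimal sketch (the function name `fm_gradient_cost` is a hypothetical label, and "one FT per missing point" is taken from the abstract's description of the original scheme):

```python
def fm_gradient_cost(n_total, n_sampled):
    """Return (FT count of the original per-missing-point FM gradient,
    FT count of the two-FT scheme) for one indirect-dimension slice."""
    missing = (n_total - n_sampled) * 2  # real + imaginary components
    return missing, 2

original, fast = fm_gradient_cost(512, 75)
print(original)          # 874 FTs per slice in the original scheme
print(original // fast)  # 437, i.e. the ~450-fold reduction quoted above
```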
The tropical lapse rate steepened during the Last Glacial Maximum.
Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted.
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft²-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T, we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T, is independent of genome size, a relationship which is observed across broad groups of real organisms.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
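The prediction step described in this abstract is ordinary linear regression of the maximum on the rising rate; a minimal sketch with made-up numbers (not the paper's cycle data; `ols_fit` is a hypothetical helper):

```python
def ols_fit(xs, ys):
    """Ordinary least squares: return (intercept, slope) of y regressed on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical rising rates and solar maxima lying exactly on Rmax = 20 + 5*beta.
beta = [8.0, 10.0, 12.0, 16.0]
rmax = [20 + 5 * b for b in beta]
intercept, slope = ols_fit(beta, rmax)
print(round(intercept, 6), round(slope, 6))  # 20.0 5.0
```

A forecast for a new cycle is then `intercept + slope * beta_observed`, with interval bounds of the kind the abstract quotes (±33 at the 90% level) coming from the regression's prediction error.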
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (M˙O2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10-fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent within species. Given the role of MMR in determining aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined, and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10⁴-10⁷ years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling finite thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
Securing maximum diversity of Non Pollen Palynomorphs in palynological samples
Enevold, Renée; Odgaard, Bent Vad
2015-01-01
Palynology is no longer synonymous with analysis of pollen with the addition of a few fern spores. A wide range of Non Pollen Palynomorphs (NPPs) are now described and are potential palaeoenvironmental proxies in palynological surveys, and their contribution has proven important to interpretation (e.g. Schulz & Shumilovskikh 2013). Increasingly it has become customary for palynologists to quantify at least some of the NPPs appearing on the pollen slides (e.g. Strother et al. 2015, Odgaard 1994). But are these samples representative of the initial NPP assemblages? The usual sample preparation method for pollen analysis is based on acetolysis (Erdtman 1969) and HF treatment, which are of variable destructiveness to NPPs. Some NPPs might completely vanish, and the prepared sample might hold less NPP diversity than the initial NPP assemblage. Consequently, it may be advisable to consider ...
47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds - Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds - Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The tests showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m3; the fastest rate of pressure rise, 68 bar/s, was also observed at 450 g/m3.
A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation
Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.
2011-01-01
Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank degenerate, is reduced ...
78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans
2013-03-04
... have removed the term. Comment: Don't remove the ``average agricultural loan customer'' definition. The... the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow the.... Comment: FSA should let the market dictate what interest rate lenders charge guaranteed borrowers, rather...
Sampling rate and aliasing on a virtual laboratory
Mihai Bogdan
2009-10-01
The sampling frequency determines the quality of the analog signal that is converted: a higher sampling frequency achieves a better conversion of the analog signal. The minimum sampling frequency required to represent the signal must be at least twice the maximum frequency of the analog signal under test (this is called the Nyquist rate). In the following virtual instrument, an example of sampling is shown. If the sampling frequency is equal to or less than twice the frequency of the input signal, a signal of lower frequency is generated by such a process (this is called aliasing). The goal of this paper is to teach students the basic concepts of sampling rate and aliasing, so that they become familiar with these concepts.
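The aliasing described in this abstract follows from the fact that sampling at fs cannot distinguish a tone at f from one at f - k·fs; a minimal sketch (pure Python, hypothetical frequencies in Hz):

```python
def alias_frequency(f_signal, f_sample):
    """Apparent (absolute) frequency of a sampled tone.

    After sampling at f_sample, a tone at f_signal is indistinguishable
    from the member of {f_signal - k * f_sample} closest to zero.
    """
    k = round(f_signal / f_sample)
    return abs(f_signal - k * f_sample)

print(alias_frequency(300.0, 1000.0))  # 300.0 -> below the 500 Hz Nyquist limit, no aliasing
print(alias_frequency(900.0, 1000.0))  # 100.0 -> above the Nyquist limit, folds down to 100 Hz
```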
9 CFR 381.68 - Maximum inspection rates-New turkey inspection system.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Maximum inspection rates-New turkey inspection system. 381.68 Section 381.68 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... Procedures § 381.68 Maximum inspection rates—New turkey inspection system. (a) The maximum inspection...
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and its variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with the constraint condition on the quantile function taken into account, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based variant being the most stable and accurate on very small samples (10 samples).
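As a hedged, minimal illustration of the building block these methods share (the generic unbiased sample estimator of probability weighted moments b_r = E[X·F(X)^r], not the paper's CMEQFM or LSMEQFM procedures themselves), the sketch below estimates the first few PWMs from order statistics and checks them against the known uniform-distribution values b_r = 1/(r+2):

```python
import random

def pwm_estimates(sample, max_r=2):
    """Unbiased sample PWMs b_r = E[X * F(X)^r] from order statistics."""
    x = sorted(sample)
    n = len(x)
    b = []
    for r in range(max_r + 1):
        total = 0.0
        for i in range(1, n + 1):          # x[i-1] is the i-th order statistic
            w = 1.0
            for j in range(r):             # weight (i-1)...(i-r) / (n-1)...(n-r)
                w *= (i - 1 - j) / (n - 1 - j)
            total += w * x[i - 1]
        b.append(total / n)
    return b

random.seed(1)
b = pwm_estimates([random.random() for _ in range(50000)])
# For U(0,1) the population values are 1/2, 1/3, 1/4.
print([round(v, 3) for v in b])
```

The weight product makes each b_r an unbiased estimate, which is why PWM-based fits remain usable at sample sizes where raw moment fits already degrade.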
Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling
Saeed Mian Qaisar
2009-01-01
Recent sophistication in mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the Level Crossing Sampling Scheme (LCSS), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. The principle is to intelligently exploit the signal's local characteristics (which are usually never considered) to filter only the relevant signal parts, employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, compared to classical techniques.
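A minimal sketch of the level-crossing idea (illustrative only; the paper's LCSS-based filters are more elaborate): a sample is emitted only when the signal crosses one of a set of uniformly spaced quantization levels, so sampling activity concentrates where the signal actually varies.

```python
import math

def level_crossing_sample(signal, delta):
    """Emit (index, level) each time the signal enters a new quantization
    bin of width delta, so samples concentrate where the signal varies."""
    samples = []
    last = math.floor(signal[0] / delta)
    for i, v in enumerate(signal):
        level = math.floor(v / delta)
        if level != last:
            samples.append((i, level * delta))
            last = level
    return samples

# Flat signal with a burst of activity in the middle:
sig = [0.0] * 100 + [math.sin(0.3 * i) for i in range(100)] + [0.0] * 100
s = level_crossing_sample(sig, delta=0.25)
in_burst = [i for i, _ in s if 100 <= i < 200]
print(len(s), len(in_burst))
```

Essentially all samples land in the burst: the flat segments cross no levels and therefore cost no processing at all, which is the source of the power saving described above.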
Daniel L. Rabosky
2006-01-01
Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
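LASER itself is an R package; as a rough Python sketch of the core idea (an assumed simplification: a pure-birth model in which the waiting time with k lineages is exponential with rate kλ, and a likelihood-ratio comparison of a constant-rate model against a one-shift model), one can contrast the two fits on simulated waiting times:

```python
import math, random

def loglik(waits, lam):
    """Pure-birth log-likelihood: with k lineages, the waiting time t is
    exponential with rate k*lam."""
    return sum(math.log(k * lam) - k * lam * t for k, t in waits)

def mle(waits):
    # d/d(lam) loglik = m/lam - sum(k*t) = 0  =>  lam_hat = m / sum(k*t)
    return len(waits) / sum(k * t for k, t in waits)

random.seed(7)
# Simulated waiting times whose speciation rate doubles once 20 lineages exist.
waits = [(k, random.expovariate(k * (0.5 if k < 20 else 1.0)))
         for k in range(2, 60)]

l_const = loglik(waits, mle(waits))                  # constant-rate fit
early, late = waits[:18], waits[18:]                 # one-shift fit
l_shift = loglik(early, mle(early)) + loglik(late, mle(late))
lr = 2 * (l_shift - l_const)   # likelihood-ratio statistic, 1 extra parameter
print(round(lr, 2))
```

The shift model nests the constant-rate model, so the statistic is never negative; a large value favors temporal rate variation, which is the comparison LASER automates for a family of richer birth-death models.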
13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.
2010-01-01
Jones, R.B.; Cogan, J.D. Jr.
1991-09-19
An investigation was done to determine the maximum credible event value for samples of explosives and disassembled components up to 1.2 g when stored in conductive plastic vials as packaged and handled, stored, or transported at Mound. The test was performed at Test Firing, with photographs taken before and after the test. The standard propagation test setup was used; a vial containing 1.2 g of PETN (pentaerythritol tetranitrate) was surrounded by other like vials containing 1.2-g samples of PETN. The 1.2-g PETN pellet was then ignited by an EX-12 detonator. The test showed that there was no propagation and that the maximum credible event value for the handling tray is 1.2 g. The test also showed that when the tray is placed in a metal container the MCE value will still be 1.2 g. 9 figs.
The Scaling of Maximum and Basal Metabolic Rates of Mammals and Birds
Barbosa, Lauro A.; Garcia, Guilherme J. M.; Silva, Jafferson K. L. da
2004-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as $M^{6/7}$, maximum heart rate as $M^{-1/7}$, and muscular capillary density as $M^{-1/7}$, in agreement with data.
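The two exponents can be combined directly: if basal metabolic rate scales as M^(3/4) and maximum metabolic rate as M^(6/7), the factorial aerobic scope (max/basal) must itself scale as M^(6/7 - 3/4) = M^(3/28). A quick arithmetic check:

```python
from fractions import Fraction

basal = Fraction(3, 4)      # basal metabolic rate exponent
maximal = Fraction(6, 7)    # maximum metabolic rate exponent
scope = maximal - basal     # factorial aerobic scope exponent
print(scope)                # 3/28
# A 100-fold increase in body mass raises max/basal scope by about 1.64:
print(round(100 ** float(scope), 2))
```

The weakly positive scope exponent is the model's way of saying larger animals have proportionally more headroom between basal and maximal rates.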
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin...
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was applied to provide guidance to regulatory agencies on setting maximum ozone emission rates from consumer appliances. Measured distributions of air exchange rates, ozone decay rates and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h⁻¹) as a function of the percentage of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from -1.1 to 2.3 mg h⁻¹ for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, on 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission rate standard is prudent for Los Angeles, Elizabeth, NJ and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, the higher emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission rate standard.
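A minimal sketch of the kind of mass-balance inversion described above (assumptions: penetration factor of 1, an arbitrary 300 m³ volume, and illustrative input distributions; the unit conversion to mg/h is omitted):

```python
import random

def allowable_emission(lam, k, c_out, c_target=20.0, volume=300.0):
    """Steady-state mass balance  C_in = (lam*c_out + E/V) / (lam + k),
    solved for the emission rate E that holds C_in at c_target (ppb).
    lam: air exchange rate (1/h); k: ozone decay rate (1/h); V in m^3.
    The result is in ppb*m^3/h; a unit conversion would map it to mg/h."""
    return volume * ((lam + k) * c_target - lam * c_out)

random.seed(3)
rates = []
for _ in range(10000):
    lam = random.lognormvariate(-0.7, 0.6)   # assumed air-exchange distribution
    k = random.uniform(1.0, 5.0)             # assumed decay-rate distribution
    c_out = random.uniform(10.0, 60.0)       # assumed outdoor ozone (ppb)
    rates.append(allowable_emission(lam, k, c_out))

rates.sort()
print(round(rates[int(0.2 * len(rates))], 1))   # rate protecting 80% of cases
```

The formula makes the paper's central finding visible: once infiltration alone (λ·C_out) exceeds what decay can remove at the target level, the allowable emission rate goes negative, and no positive appliance standard can protect the home.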
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.
17 CFR 148.7 - Rulemaking on maximum rates for attorney fees.
2010-04-01
The 220-age equation does not predict maximum heart rate in children and adolescents
Verschuren, Olaf; Maltais, Desiree B.; Takken, Tim
2011-01-01
Our primary purpose was to provide maximum heart rate (HR(max)) values for ambulatory children with cerebral palsy (CP). The secondary purpose was to determine the effects of age, sex, ambulatory ability, height, and weight on HR(max). In 362 ambulatory children and adolescents with CP (213 males an
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
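The iterative "most dissimilar site" step can be sketched with a simple greedy stand-in (illustrative only: a max-min distance rule in standardized environmental space, not the MaxEnt model the authors actually fit):

```python
def select_sites(candidates, n_sites):
    """Greedily pick sites: start anywhere, then repeatedly add the candidate
    with the largest minimum distance (in standardized environmental space)
    to the sites already chosen."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chosen = [candidates[0]]
    while len(chosen) < n_sites:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# Toy 20 km x 20 km candidates: (temperature, precipitation, elevation, veg class)
sites = [(0.1, 0.2, 0.1, 0), (0.9, 0.8, 0.7, 1), (0.5, 0.5, 0.5, 0),
         (0.0, 0.9, 0.2, 1), (0.95, 0.1, 0.9, 0)]
chosen = select_sites(sites, 3)
print(chosen)
```

Each new site is the one farthest from everything already sampled, so a few iterations cover most of the environmental envelope; the middle-of-the-envelope candidate is picked last, if at all.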
Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability
Dell, Z. R.; Pandian, A.; Bhowmick, A. K.; Swisher, N. C.; Stanic, M.; Stellingwerf, R. F.; Abarzhi, S. I.
2017-09-01
We focus on the classical problem of how the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) depends on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics simulations to describe the simulation data with statistical confidence in a broad parameter regime. For given values of the shock strength, fluid density ratio, and wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio and is a characteristic quantity of RMI dynamics. We discover an exponential decay of the ratio of the initial and linear growth-rates of RMI with the initial perturbation amplitude, in excellent agreement with available data.
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results quantify the probability of obtaining a correct estimate of b, at a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
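The estimator in question is simple enough to reproduce the sample-size effect directly. A hedged sketch (illustrative simulation parameters, not the paper's exact experimental design), using the Aki-Utsu formula with Utsu's half-bin correction for rounded magnitudes:

```python
import math, random

def b_value(mags, m_min, delta_m=0.1):
    """Aki-Utsu maximum-likelihood b-value, with Utsu's half-bin correction
    for magnitudes rounded to bins of width delta_m."""
    m_bar = sum(mags) / len(mags)
    return math.log10(math.e) / (m_bar - (m_min - delta_m / 2))

random.seed(42)
b_true, m_min, dm = 1.0, 2.0, 0.1
beta = b_true * math.log(10)

def catalog(n):
    # Continuous magnitudes complete down to the lower edge of the m_min bin,
    # then rounded to bin centres as in real catalogues.
    return [round((m_min - dm / 2 + random.expovariate(beta)) / dm) * dm
            for _ in range(n)]

results = {}
for n in (25, 2000):                       # a small and a large sample
    ests = [b_value(catalog(n), m_min, dm) for _ in range(400)]
    mean = sum(ests) / len(ests)
    sd = (sum((e - mean) ** 2 for e in ests) / len(ests)) ** 0.5
    results[n] = (mean, sd)
    print(n, round(mean, 2), round(sd, 2))
```

With the true b = 1.0, the small catalogs scatter widely (and sit slightly high, the usual small-sample bias of a rate MLE), while the large catalogs recover b almost exactly; this is the representativity requirement the abstract warns about.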
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Benício, Kadja; Dias, Fernando A. L.; Gualdi, Lucien P.; Aliverti, Andrea; Resqueti, Vanessa R.; Fregonezi, Guilherme A. F.
2015-01-01
OBJECTIVE: To assess the influence of diaphragmatic activation control (diaphC) on Sniff Nasal-Inspiratory Pressure (SNIP) and Maximum Relaxation Rate of inspiratory muscles (MRR) in healthy subjects. METHOD: Twenty subjects (9 male; age: 23 (SD=2.9) years; BMI: 23.8 (SD=3) kg/m2; FEV1/FVC: 0.9 (SD=0.1)] performed 5 sniff maneuvers in two different moments: with or without instruction on diaphC. Before the first maneuver, a brief explanation was given to the subjects on how to perform the sniff test. For sniff test with diaphC, subjects were instructed to perform intense diaphragm activation. The best SNIP and MRR values were used for analysis. MRR was calculated as the ratio of first derivative of pressure over time (dP/dtmax) and were normalized by dividing it by peak pressure (SNIP) from the same maneuver. RESULTS: SNIP values were significantly different in maneuvers with and without diaphC [without diaphC: -100 (SD=27.1) cmH2O/ with diaphC: -72.8 (SD=22.3) cmH2O; p<0.0001], normalized MRR values were not statistically different [without diaphC: -9.7 (SD=2.6); with diaphC: -8.9 (SD=1.5); p=0.19]. Without diaphC, 40% of the sample did not reach the appropriate sniff criteria found in the literature. CONCLUSION: Diaphragmatic control performed during SNIP test influences obtained inspiratory pressure, being lower when diaphC is performed. However, there was no influence on normalized MRR. PMID:26578254
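The normalization described (dP/dt_max divided by the peak pressure of the same maneuver) can be illustrated on a synthetic sniff trace (a hedged toy model: an exponential relaxation with an assumed time constant, not real SNIP data):

```python
import math

def normalized_mrr(pressure, dt):
    """Maximum relaxation rate (dP/dt_max on the relaxation limb) divided by
    the peak (most negative) sniff pressure of the same manoeuvre."""
    dpdt = [(pressure[i + 1] - pressure[i]) / dt
            for i in range(len(pressure) - 1)]
    return max(dpdt) / min(pressure)

# Synthetic sniff: a -100 cmH2O peak relaxing exponentially with tau = 0.1 s;
# the normalized MRR is then close to -1/tau = -10 s^-1.
dt, tau = 0.001, 0.1
trace = [-100.0 * math.exp(-i * dt / tau) for i in range(300)]
print(round(normalized_mrr(trace, dt), 2))
```

Dividing by the peak pressure makes the index depend only on the relaxation time constant, not on the absolute pressure generated, which is why normalized MRR can stay constant while raw SNIP changes with diaphragmatic control.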
Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.
Park, Bo Kyung; Um, In Chul
2017-02-01
Owing to the excellent cyto-compatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs strongly depends on the electro-spinning rate used, the electro-spinning rate is becoming more important. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning of SF, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field was increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12× by increasing the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of the fiber diameter on the electric field was not significant for less-concentrated SF solutions (7-9% SF). On the other hand, at higher SF concentrations the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs. Copyright © 2016 Elsevier B.V. All rights reserved.
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-07-30
In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate, it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on distinct application scenarios. Since a higher, or maximum, data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generating rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper data generation rate by maximizing it as an optimization problem for a network, formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology controlling scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
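The Lagrangian-plus-subgradient machinery can be sketched on a toy version of the problem (a hedged simplification: one shared energy budget instead of per-node and per-link constraints, so each "sensor" solves its subproblem independently given the multiplier):

```python
def max_collection_rate(costs, budget, r_max, steps=4000):
    """Toy version of the dual decomposition: maximize sum(r_i) subject to
    sum(c_i * r_i) <= budget and 0 <= r_i <= r_max, via a projected
    subgradient update on the single Lagrange multiplier mu."""
    mu = 0.0
    r = [0.0] * len(costs)
    for t in range(1, steps + 1):
        # For fixed mu each sensor maximizes (1 - mu*c_i) * r_i on its own.
        r = [r_max if 1.0 - mu * c > 0 else 0.0 for c in costs]
        slack = budget - sum(c * ri for c, ri in zip(costs, r))
        mu = max(0.0, mu - (1.0 / t) * slack)      # subgradient step
    return r, mu

rates, mu = max_collection_rate([1.0, 2.0, 4.0], budget=6.0, r_max=2.0)
print(rates, round(mu, 2))   # the two cheapest sensors saturate: [2.0, 2.0, 0.0]
```

The multiplier acts as an energy price: each node compares its marginal value (1) against its priced cost (μ·c_i) locally, which is what makes the distributed solution possible in the full formulation.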
On the rate of convergence of the maximum likelihood estimator of a K-monotone density
GAO FuChang; WELLNER Jon A
2009-01-01
Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.
Joint maximum likelihood estimation of carrier and sampling frequency offsets for OFDM systems
Kim, Y H
2010-01-01
In orthogonal-frequency division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, the results of Nguyen-Le's ML estimation show poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme.
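The Moose-style CFO estimate that the joint scheme extends is compact enough to demonstrate directly (a hedged sketch with assumed signal parameters: two identical random training symbols, a known rotation, and light noise; SFO estimation is omitted):

```python
import cmath, math, random

def moose_cfo(rx, n):
    """Moose's ML CFO estimate from two identical length-n training symbols:
    the phase of the correlation of the two halves over 2*pi, in units of
    the subcarrier spacing (unambiguous for |offset| < 0.5)."""
    corr = sum(rx[i].conjugate() * rx[i + n] for i in range(n))
    return cmath.phase(corr) / (2 * math.pi)

random.seed(0)
n, eps = 64, 0.21                     # true CFO, in subcarrier spacings
sym = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
rx = [s * cmath.exp(2j * math.pi * eps * i / n)
      + 0.01 * complex(random.gauss(0, 1), random.gauss(0, 1))
      for i, s in enumerate(sym + sym)]   # repeated symbol + rotation + noise
print(round(moose_cfo(rx, n), 3))
```

Because the second symbol is the first rotated by exactly 2πε, the correlation's phase recovers ε regardless of the data; the joint CFO/SFO extension adds a per-subcarrier phase slope on top of this common rotation.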
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies the maximum likelihood estimation technique to estimate the interval between R-R peaks. A parameter derived as a byproduct of maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm based on the numerical power method to realize the subspace filter, and apply the fast Fourier transform (FFT) technique to realize the correlation step, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
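The rectify-then-correlate core of such an estimator can be sketched on synthetic data (illustrative only: a direct autocorrelation search stands in for the paper's FFT-based correlation, and an impulse train stands in for real ECG):

```python
import random

def estimate_heart_rate(sig, fs, hr_range=(40, 180)):
    """Correlation step of the estimator: rectify the signal (the 'simple
    nonlinear absolute operation'), then pick the R-R lag that maximizes
    the autocorrelation over physiologically plausible lags."""
    x = [abs(v) for v in sig]
    m = sum(x) / len(x)
    x = [v - m for v in x]
    lo, hi = int(fs * 60 / hr_range[1]), int(fs * 60 / hr_range[0])
    best = max(range(lo, hi + 1),
               key=lambda L: sum(x[i] * x[i + L] for i in range(len(x) - L)))
    return 60.0 * fs / best

random.seed(5)
fs, hr = 250, 72                         # sampling rate (Hz), true rate (bpm)
period = int(fs * 60 / hr)               # about 208 samples per beat
ecg = [(1.0 if i % period == 0 else 0.0) + random.gauss(0, 0.05)
       for i in range(10 * fs)]
print(round(estimate_heart_rate(ecg, fs)))
```

Restricting the lag search to a plausible heart-rate band is what keeps the correlation peak from locking onto noise or harmonics; the FFT realization in the paper computes the same correlation in O(n log n) for the FPGA budget.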
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers and cones on the maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (micromol s^-1) varies allometrically with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW degrees C^-1) for spadices scales according to C = 18.5M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices with high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s^-1 g^-1 in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size in thermogenic flowers.
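The one scaling law the abstract gives in full, C = 18.5 M^0.73, is easy to evaluate directly. A quick illustrative computation (the 100 g spadix is an arbitrary example mass, not a value from the study):

```python
def thermal_conductance(mass_g):
    """Thermal conductance (mW per degree C) of a spadix of the given
    mass (g), using the allometric fit C = 18.5 * M**0.73 from the
    abstract."""
    return 18.5 * mass_g ** 0.73

# A 1 g spadix gives the prefactor itself; a 100 g spadix gives ~534 mW/degC.
c_small = thermal_conductance(1.0)
c_large = thermal_conductance(100.0)
```

Note the sublinear exponent (0.73 < 1): conductance per gram falls with size, consistent with larger structures retaining heat more effectively.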
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprising mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given for the distribution of cross-expectation estimation errors, and for the MinMI estimation bias, its variance and its distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested Maximum Likelihood, ML, method which takes the autocorrelation of the observations into account. For each bacteria strain, six wells of optical density, OD, measurements...... The model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model, including the covariance structure. The ML method is shown to be a strong tool as it enables......
Determination of zero-coupon and spot rates from treasury data by maximum entropy methods
Gzyl, Henryk; Mayoral, Silvia
2016-08-01
An interesting and important inverse problem in finance consists of the determination of spot rates or prices of zero-coupon bonds when the only information available consists of the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices, nor assumes a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the prices of the zero-coupon bonds into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
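The linear inverse problem at the heart of the method can be made concrete: stacking each coupon bond's cash flows into a matrix C, the observed prices satisfy C d = p for the vector of zero-coupon (discount) prices d. This sketch solves only the square, well-determined case (classic bootstrapping) with hypothetical bonds; the paper's maximum-entropy-in-the-mean machinery is what handles the underdetermined case, and is not implemented here.

```python
import numpy as np

# Hypothetical cash-flow matrix: row i holds the payments of coupon bond i
# at years 1..3 (annual coupons, face value 100).
C = np.array([
    [105.0,   0.0,   0.0],   # 5% 1-year bond
    [  6.0, 106.0,   0.0],   # 6% 2-year bond
    [  7.0,   7.0, 107.0],   # 7% 3-year bond
])
prices = np.array([101.0, 102.0, 103.0])   # observed (hypothetical) prices

# Solve C @ d = prices for the zero-coupon bond prices (discount factors).
d = np.linalg.solve(C, prices)

# Annually compounded spot rates r(t), from d(t) = (1 + r(t))**(-t).
t = np.arange(1, 4)
spot = d ** (-1.0 / t) - 1.0
```

When there are fewer bonds than cash-flow dates, C is wide and the system is underdetermined, which is precisely where a regularizing principle such as maximum entropy is needed to pick one admissible discount curve.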
Espino, Susana; Schenk, H Jochen
2011-01-01
The maximum specific hydraulic conductivity (k_max) of a plant sample is a measure of the ability of the plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of k_max are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of k_max are a common problem when measuring woody plant samples, and it is commonly observed that k_max declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in k_max under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of k_max were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause of declining k_max values. Our findings suggest that degassing of water is essential for achieving stable and
Michael D. Hare
2014-09-01
A field trial in northeast Thailand during 2011–2013 compared the establishment and growth of 2 Panicum maximum cultivars, Mombasa and Tanzania, sown at seeding rates of 2, 4, 6, 8, 10 and 12 kg/ha. In the first 3 months of establishment, higher sowing rates produced significantly more DM than sowing at 2 kg/ha, but thereafter there were no significant differences in total DM production between sowing rates of 2–12 kg/ha. Lower sowing rates produced fewer tillers/m2 than higher sowing rates, but these fewer tillers were significantly heavier than the more numerous smaller tillers produced by higher sowing rates. Mombasa produced 23% more DM than Tanzania in successive wet seasons (7,060 vs. 5,712 kg DM/ha from 16 June to 1 November 2011; and 16,433 vs. 13,350 kg DM/ha from 25 April to 24 October 2012). Both cultivars produced similar DM yields in the dry seasons (November–April), averaging 2,000 kg DM/ha in the first dry season and 1,750 kg DM/ha in the second dry season. Mombasa produced taller tillers (104 vs. 82 cm), longer leaves (60 vs. 47 cm), wider leaves (2 vs. 1.8 cm) and heavier tillers (1 vs. 0.7 g) than Tanzania, but fewer tillers/m2 (260 vs. 304). If farmers improve soil preparation and place more emphasis on sowing techniques, there is potential to dramatically reduce seed costs. Keywords: Guinea grass, tillering, forage production, seeding rates, Thailand. DOI: 10.17138/TGFT(2)246-253
Evangelia Karagianni
2016-04-01
By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelago over six recent years (2007–2012), the effect of weather on electromagnetic (EM) wave propagation is studied. EM wave propagation characteristics depend on atmospheric refractivity and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5-min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelago, with a view to improved accuracy as compared with more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out and the data compared with a prediction based on the standard ITU-R method.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and under some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as that specified in the LIL for iid partial sums, and thus cannot be improved.
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result, a necessary condition and a sufficient condition for asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of the maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Phylogenetic prediction of the maximum per capita rate of population growth.
Fagan, William F; Pearson, Yanthe E; Larsen, Elise A; Lynch, Heather J; Turner, Jessica B; Staver, Hilary; Noble, Andrew E; Bewick, Sharon; Goldberg, Emma E
2013-07-22
The maximum per capita rate of population growth, r, is a central measure of population biology. However, researchers can only directly calculate r when adequate time series, life tables and similar datasets are available. We instead view r as an evolvable, synthetic life-history trait and use comparative phylogenetic approaches to predict r for poorly known species. Combining molecular phylogenies, life-history trait data and stochastic macroevolutionary models, we predicted r for mammals of the Caniformia and Cervidae. Cross-validation analyses demonstrated that, even with sparse life-history data, comparative methods estimated r well and outperformed models based on body mass. Values of r predicted via comparative methods were in strong rank agreement with observed values and reduced mean prediction errors by approximately 68 per cent compared with two null models. We demonstrate the utility of our method by estimating r for 102 extant species in these mammal groups with unknown life-history traits.
Statistical properties of the maximum Lyapunov exponent calculated via the divergence rate method.
Franchi, Matteo; Ricci, Leonardo
2014-12-01
The embedding of a time series provides a basic tool to analyze dynamical properties of the underlying chaotic system. For this purpose, the choice of the embedding dimension and lag is crucial. Although several methods have been devised to tackle the issue of the optimal setting of these parameters, a conclusive criterion to make the most appropriate choice is still lacking. An accepted procedure to rank different embedding methods relies on the evaluation of the maximum Lyapunov exponent (MLE) from embedded time series that are generated by chaotic systems with an explicit analytic representation. The MLE is evaluated as the local divergence rate of nearby trajectories. Given a system, embedding methods are ranked according to how close such MLE values are to the true MLE, which is provided by the so-called standard method in a way that exploits the mathematical description of the system and does not require embedding. In this paper we study the dependence of the finite-time MLE evaluated via the divergence-rate method on the embedding dimension and lag for time series generated by four systems that are widely used as references in the scientific literature. We develop a completely automatic algorithm that provides the divergence rate and its statistical uncertainty. We show that the uncertainty can provide useful information about the optimal choice of the embedding parameters. In addition, our approach allows us to find which systems provide suitable benchmarks for the comparison and ranking of different embedding methods.
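The "standard method" the paper uses as ground truth can be illustrated on one classic benchmark (the logistic map is my example choice; the abstract does not name its four systems): when the map is known analytically, the MLE is the orbit average of ln|f'(x)|, with no embedding required. For x -> 4x(1-x) this converges to ln 2.

```python
import math

def logistic_mle(n_iter=100000, x0=0.2):
    """Standard-method MLE for the logistic map x -> 4x(1-x): the orbit
    average of ln|f'(x)| = ln|4 - 8x|. Converges to ln 2 ~ 0.6931.
    (Reference computation only; not the paper's divergence-rate
    algorithm, which works on embedded time series.)"""
    x, acc = x0, 0.0
    for _ in range(n_iter):
        # tiny additive guard keeps log() finite if x lands on 0.5
        acc += math.log(abs(4.0 - 8.0 * x) + 1e-300)
        x = 4.0 * x * (1.0 - x)
    return acc / n_iter

mle = logistic_mle(200000)   # ~0.693
```

A divergence-rate estimate from an embedded version of the same orbit can then be ranked by how close it comes to this reference value, which is the comparison the paper automates.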
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
Alvah C. Stahlnecker IV
2008-12-01
A percentage of either measured or predicted maximum heart rate is commonly used to prescribe and measure exercise intensity. However, maximum heart rate in athletes may be greater during competition or training than during laboratory exercise testing. Thus, the aim of the present investigation was to determine whether endurance-trained runners train and compete at or above laboratory measures of 'maximum' heart rate. Maximum heart rates were measured using a treadmill graded exercise test (GXT) in a laboratory setting in 10 female and 10 male National Collegiate Athletic Association (NCAA) Division 2 cross-country and distance event track athletes. Maximum training and competition heart rates were measured during a high-intensity interval training day (TR HR) and during competition (COMP HR) at an NCAA meet. TR HR (207 ± 5.0 b·min-1; means ± SEM) and COMP HR (206 ± 4 b·min-1) were significantly (p < 0.05) higher than maximum heart rates obtained during the GXT (194 ± 2 b·min-1). The heart rate at the ventilatory threshold measured in the laboratory occurred at 83.3 ± 2.5% of the heart rate at VO2 max, with no differences between the men and women. However, the heart rate at the ventilatory threshold measured in the laboratory was only 77% of the maximal COMP HR or TR HR. In order to optimize training-induced adaptation, training intensity for NCAA Division 2 distance event runners should not be based on laboratory assessment of maximum heart rate, but instead on maximum heart rate obtained either during training or during competition.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
......insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used......
Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele
2014-10-13
The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time basis and compressive sampling. In particular, the high-resolution time basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time basis, thus significantly improving on the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
Why does steady-state magnetic reconnection have a maximum local rate of order 0.1?
Liu, Yi-Hsin; Guo, F; Daughton, W; Li, H; Cassak, P A; Shay, M A
2016-01-01
Simulations suggest collisionless steady-state magnetic reconnection of Harris-type current sheets proceeds with a rate of order 0.1, independent of dissipation mechanism. We argue this long-standing puzzle is a result of constraints at the magnetohydrodynamic (MHD) scale. We perform a scaling analysis of the reconnection rate as a function of the opening angle made by the upstream magnetic fields, finding a maximum reconnection rate close to 0.2. The predictions compare favorably to particle-in-cell simulations of relativistic electron-positron and non-relativistic electron-proton reconnection. The fact that simulated reconnection rates are close to the predicted maximum suggests reconnection proceeds near the most efficient state allowed at the MHD-scale. The rate near the maximum is relatively insensitive to the opening angle, potentially explaining why reconnection has a similar fast rate in differing models.
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far beyond the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...... period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimation by means of a simple theoretical approach as well as measurements......
Jorge Cuadrado Reyes
2011-05-01
This research developed an algorithm for calculating the maximum heart rate (max HR) of players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rating of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to find a max HR prediction equation from the max HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
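The regression step can be sketched as an ordinary least-squares fit relating per-player session maxima to a reference max HR. All numbers below are hypothetical illustration data, not the study's measurements, and the fitted equation is not the authors' published one.

```python
import numpy as np

# Hypothetical per-player data (beats/min): mean of the max HR reached in
# the three highest-intensity sessions (x) vs. a reference max HR (y).
session_max = np.array([198.0, 202.0, 195.0, 205.0, 200.0, 193.0])
reference_max = np.array([190.0, 194.0, 188.0, 196.0, 192.0, 186.0])

# Least-squares line: reference_max ~ slope * session_max + intercept
slope, intercept = np.polyfit(session_max, reference_max, 1)

def predict_max_hr(session_hr):
    """Predicted max HR from training-session maxima (illustrative)."""
    return slope * session_hr + intercept
```

The study's point is the direction of use: once validated, the equation lets session maxima stand in for a laboratory or field test of max HR.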
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
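The "step size between 0 and 2" condition has a simple one-dimensional analogue: if the score is deflected (scaled) by the Fisher information, the update mu <- mu + step * score/I contracts exactly when 0 < step < 2. This toy sketch uses a normal mean with unit variance, not the paper's mixture-sampling setting.

```python
import numpy as np

def scaled_ascent_mle(x, step=1.0, n_iter=50, mu0=0.0):
    """Successive-approximations MLE of a normal mean (unit variance).
    Each update adds step * score / I, where the Fisher information is
    I = n; the error shrinks by |1 - step| per iteration, so the
    iteration converges precisely for 0 < step < 2."""
    mu = mu0
    n = len(x)
    for _ in range(n_iter):
        score = np.sum(x - mu)        # d/dmu of the log-likelihood
        mu = mu + step * score / n
    return mu

# Hypothetical data; here the MLE is simply the sample mean.
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
mu_hat = scaled_ascent_mle(x, step=1.5)   # converges even with step > 1
```

In this toy case the contraction factor |1 - step| is minimized at step = 1 (one Newton/Fisher-scoring step lands exactly on the MLE); the paper's result is the multiparameter analogue, where the optimal step lies between 1 and 2.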
Optimum poultry litter rates for maximum profit vs. yield in cotton production
Cotton lint yield responds well to increasing rates of poultry litter fertilization, but little is known of how optimum rates for yield compare with optimum rates for profit. The objectives of this study were to analyze cotton lint yield response to poultry litter application rates, determine and co...
Anonymous
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimate (MQLE) is obtained in QLNM. In an important case, this rate is O(n^-1/2 (log log n)^1/2), which is exactly the rate in the LIL of partial sums for i.i.d. variables, and thus cannot be improved.
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 + or - 30 or 185 + or - 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
Dose Rate Calculations for Rotary Mode Core Sampling Exhauster
Foust, D J
2000-01-01
This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.
Sampling Rate Independent Filtration Approach for Automatic ECG Delineation
Chereda, Hryhorii; Tymoshenko, Yury
2016-01-01
In this paper, different types of automatic ECG delineation approaches are reviewed. A combination of these approaches was used to create a sampling-rate-independent filtration algorithm for automatic ECG delineation that is capable of distinguishing different morphologies of T and P waves and QRS complexes. The created filtration algorithm was compared with the algorithme à trous. It was found that the continuous wavelet transform, with the proposed procedure for automatic adaptation to different sampling rates, can be used for the delineation problem.
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Ambarita, Himsar; Kishinami, Koki; Daimaruya, Mashashi; Tokura, Ikuo; Kawai, Hideki; Suzuki, Jun; Kobiyama, Mashayosi; Ginting, Armansyah
The present paper is a study of the optimum plate-to-plate spacing for maximum heat transfer rate in a flat-plate type heat exchanger. The heat exchanger consists of a number of parallel flat plates. The working fluids flow under the same operational conditions, either fixed pressure head or fixed fan power input. Parallel and counter flow directions of the working fluids were considered. While the volume of the heat exchanger is kept constant, the plate number was varied. Hence the spacing between plates, as well as the heat transfer rate, varies, and there exists a maximum heat transfer rate. The objective of this paper is to seek the optimum plate-to-plate spacing for maximum heat transfer rate. To solve the problem, analytical and numerical solutions have been carried out. In the analytical solution, correlations for the optimum plate-to-plate spacing as a function of non-dimensional parameters were developed. Furthermore, a numerical simulation was carried out to evaluate the correlations. The results show that the optimum plate-to-plate spacing for a counter-flow heat exchanger is smaller than for a parallel-flow one; on the other hand, the maximum heat transfer rate for a counter-flow heat exchanger is greater than for a parallel-flow one.
Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.
2014-01-01
and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate...
7 CFR 1.187 - Rulemaking on maximum rates for attorney fees.
2010-01-01
... the types of proceedings in which the rate should be used. It also should explain fully the reasons... certain types of proceedings), the Department may adopt regulations providing that attorney fees may be awarded at a rate higher than $125 per hour in some or all of the types of proceedings covered by...
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate, by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
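In the simplest, unstratified version of this design, the reporting-rate MLE is just the ratio of the ordinary-ring recovery rate to the reward-ring recovery rate, under the usual assumption that reward rings are always reported. The counts below are hypothetical, and this single-ratio sketch omits the paper's geographic/temporal stratification and likelihood-ratio tests.

```python
def reporting_rate(n_std, rec_std, n_rew, rec_rew):
    """Simple reporting-rate estimate: (ordinary-ring recovery rate) /
    (reward-ring recovery rate), assuming reward rings are reported
    with probability 1."""
    return (rec_std / n_std) / (rec_rew / n_rew)

# e.g. 1000 ordinary rings with 32 recoveries vs. 500 reward rings
# with 40 recoveries (hypothetical counts):
lam = reporting_rate(1000, 32, 500, 40)   # -> 0.4
```

A value of 0.4 would mean hunters report only 40% of the ordinary rings they find, which is the kind of correction factor needed before band recoveries can be used to estimate harvest rates.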
Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling
Ballal, Tarig
2014-09-01
In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
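The co-primality condition has a neat modular-arithmetic reading: sampling every P-th Nyquist instant across P symbol repetitions visits position k*P mod N within the symbol, and these positions cover all N residues exactly once iff gcd(P, N) = 1. A quick check of that property (N and P below are arbitrary illustrative values):

```python
from math import gcd

def covered_positions(n_symbol, p):
    """Within-symbol positions (in Nyquist samples) visited when taking
    every p-th Nyquist sample across p repetitions of a length-n symbol."""
    return {(k * p) % n_symbol for k in range(n_symbol)}

N, P = 16, 5                     # gcd(16, 5) = 1: co-prime case
assert gcd(N, P) == 1
full = covered_positions(N, P)   # all 16 positions are recovered
gaps = covered_positions(N, 4)   # gcd = 4: only positions 0, 4, 8, 12
```

This is why the receiver loses no fidelity despite sampling P times below the Nyquist rate: after reordering, the undersampled stream contains every Nyquist-rate sample of the symbol exactly once.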
McCarthy, C M; Taylor, M A; Dennis, M W
1987-01-01
Mycobacterium avium is a human pathogen which may cause either chronic or disseminated disease, and the organism exhibits a slow rate of growth. This study provides information on the growth rate of the organism in chronically infected mice and on its maximal growth rate in vitro. M. avium was grown in continuous culture, limited for nitrogen with 0.5 mM ammonium chloride, at dilution rates that ranged from 0.054 to 0.153 h^-1. The steady-state concentrations of ammonia nitrogen and of M. avium cells were determined for each dilution rate. The bacterial saturation constant for growth-limiting ammonia was 0.29 mM (4 micrograms nitrogen/ml) and, from this, the maximal growth rate for M. avium was estimated to be 0.206 h^-1, or a doubling time of 3.4 h. BALB/c mice were infected intravenously with 3 x 10^6 colony-forming units, and a chronic infection resulted, typical of virulent M. avium strains. During a period of 3 months, the number of mycobacteria remained constant in the lungs, but increased 30-fold and 8,900-fold, respectively, in the spleen and mesenteric lymph nodes. The latter increase appeared to be due to proliferation in situ. The generation time of M. avium in the mesenteric lymph nodes was estimated to be 7 days.
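The chemostat numbers in the abstract fit the standard Monod relation mu = mu_max * S / (K_s + S), and the quoted doubling time follows from t_d = ln 2 / mu_max. A quick check with the reported values (assuming Monod kinetics, which is the usual model for this kind of saturation-constant estimate):

```python
import math

def monod_growth_rate(s_mM, mu_max=0.206, k_s=0.29):
    """Monod specific growth rate (per hour) at ammonia concentration
    s_mM (mM), using mu_max and K_s as reported in the abstract."""
    return mu_max * s_mM / (k_s + s_mM)

# At S = K_s the growth rate is half-maximal by definition:
half_max = monod_growth_rate(0.29)          # -> 0.103 per hour

# Doubling time at the maximal growth rate, ln 2 / mu_max:
doubling_time = math.log(2.0) / 0.206       # ~3.4 h, as reported
```

The same arithmetic applied in vivo (7-day generation time) gives mu ~ 0.004 h^-1, illustrating how far below its in vitro maximum the organism grows in the mesenteric lymph nodes.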
Selection of sampling rate for digital control of aircraft
Katz, P.; Powell, J. D.
1974-01-01
The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.
Quinn, T Alexander; Kohl, Peter
2016-12-01
Mechanical stimulation (MS) represents a readily available, non-invasive means of pacing the asystolic or bradycardic heart in patients, but benefits of MS at higher heart rates are unclear. Our aim was to assess the maximum rate and sustainability of excitation by MS vs. electrical stimulation (ES) in the isolated heart under normal physiological conditions. Trains of local MS or ES at rates exceeding intrinsic sinus rhythm (overdrive pacing; lowest pacing rates 2.5±0.5 Hz) were applied to the same mid-left ventricular free-wall site on the epicardium of Langendorff-perfused rabbit hearts. Stimulation rates were progressively increased, with a recovery period of normal sinus rhythm between each stimulation period. Trains of MS caused repeated focal ventricular excitation from the site of stimulation. The maximum rate at which MS achieved 1:1 capture was lower than during ES (4.2±0.2 vs. 5.9±0.2 Hz, respectively). At all overdrive pacing rates for which repetitive MS was possible, 1:1 capture was reversibly lost after a finite number of cycles, even though same-site capture by ES remained possible. The number of MS cycles until loss of capture decreased with rising stimulation rate. If interspersed with ES, the number of MS to failure of capture was lower than for MS only. In this study, we demonstrate that the maximum pacing rate at which MS can be sustained is lower than that for same-site ES in isolated heart, and that, in contrast to ES, the sustainability of successful 1:1 capture by MS is limited. The mechanism(s) of differences in MS vs. ES pacing ability, potentially important for emergency heart rhythm management, are currently unknown, thus warranting further investigation. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
Yun, Dongfang
2016-01-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by probing the sharpness of estimates on the growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical, critical and supercritical regime. First, we obtain estimates on these rates of growth and then show that these estimates are sharp up to numerical prefactors. In particular, we conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. In addition, nontrivial be...
Compressive Sampling of EEG Signals with Finite Rate of Innovation
Poh Kok-Kiong
2010-01-01
Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording were tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
Riisgård, Hans Ulrik; Larsen, Poul Scheel; Pleissner, Daniel
2014-01-01
rate (F, l h-1), W (g), and L (mm) as described by the equations: FW = aWb and FL = cLd, respectively. This is done by using available and new experimental laboratory data on M. edulis obtained by members of the same research team using different methods and controlled diets of cultivated algal cells...
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high OLR from 3.7 to 12.9 kg-VS m^-3 day^-1 for 225 days. Periods without organic loading were arranged between each loading period. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m^-3 day^-1 was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS^-1). The cell density increased in the periods without organic loading, reaching 10.9×10^10 cells mL^-1 on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r^2 = 0.996, P<0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m^-3 day^-1 was obtained for mesophilic single-stage wet anaerobic digestion that is able to maintain stable operation with high methane yield and VS reduction.
Radon exhalation rates from some soil samples of Kharar, Punjab
Mehta, Vimal [Deptt of Physics, M. M. University, Mullana (Ambala)-133 207 (India); Deptt of Physics, Punjabi University, Patiala- 147 001 (India); Singh, Tejinder Pal, E-mail: tejinders03@gmail.com [Deptt of Physics, S.A. Jain (P.G.) College, Ambala City- 134 003 (India); Chauhan, R. P. [Deptt of Physics, National Institute of Technology, Kurukshetra- 136 119 (India); Mudahar, G. S. [Deptt of Physics, Punjabi University, Patiala- 147 001 (India)
2015-08-28
Radon and its progeny are major contributors to the radiation dose received by the general population of the world. Because radon is a noble gas, a large portion of it is free to migrate away from radium. The primary sources of radon in houses are emanation from soils and rocks, emanation from building materials, and entry of radon into a structure from outdoor air. With this in mind, the radon exhalation rate from soil samples of Kharar, Punjab has been measured using the can technique. The equilibrium radon concentration in various soil samples of the Kharar area of district Mohali varied from 12.7 Bq m^-3 to 82.9 Bq m^-3 with an average of 37.5 ± 27.0 Bq m^-3. The radon mass exhalation rates from the soil samples varied from 0.45 to 2.9 mBq/kg/h with an average of 1.4 ± 0.9 mBq/kg/h, and radon surface exhalation rates varied from 10.4 to 67.2 mBq/m^2/h with an average of 30.6 ± 21.8 mBq/m^2/h. The radon mass and surface exhalation rates of the soil samples of Kharar, Punjab were lower than the worldwide average.
DNA barcoding: error rates based on comprehensive sampling.
Christopher P Meyer
2005-12-01
DNA barcoding has attracted attention with promises to aid in species identification and discovery; however, few well-sampled datasets are available to test its performance. We provide the first examination of barcoding performance in a comprehensively sampled, diverse group (cypraeid marine gastropods, or cowries). We utilize previous methods for testing performance and employ a novel phylogenetic approach to calculate intraspecific variation and interspecific divergence. Error rates are estimated for (1) identifying samples against a well-characterized phylogeny, and (2) assisting in species discovery for partially known groups. We find that the lowest overall error for species identification is 4%. In contrast, barcoding performs poorly in incompletely sampled groups. Here, species delineation relies on the use of thresholds, set to differentiate between intraspecific variation and interspecific divergence. Whereas proponents envision a "barcoding gap" between the two, we find substantial overlap, leading to minimal error rates of approximately 17% in cowries. Moreover, error rates double if only traditionally recognized species are analyzed. Thus, DNA barcoding holds promise for identification in taxonomically well-understood and thoroughly sampled clades. However, the use of thresholds does not bode well for delineating closely related species in taxonomically understudied groups. The promise of barcoding will be realized only if based on solid taxonomic foundations.
FPGA realization of Farrow structure for sampling rate change
Marković Bogdan
2016-01-01
In numerous implementations of modern telecommunications and digital audio systems there is a need to change the sampling rate of the system input signal. When the ratio between the input and output sampling frequencies is a fraction of two large integers, Lagrange interpolation based on the Farrow structure can be used for an efficient realization of the resampling block. This paper highlights an efficient realization and an estimation of the necessary resources for cubic Lagrange polynomial interpolation in the case of a sampling-rate change by the factor 160/147 on a Field-Programmable Gate Array (FPGA) architecture. [Project of the Ministry of Science of the Republic of Serbia, nos. TR-32023 and TR-32028]
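As a software illustration of the interpolation the paper implements in hardware (a sketch under my own naming, not the authors' design), cubic Lagrange interpolation can be written in Farrow form: four fixed coefficient combinations of neighboring samples, evaluated by a Horner scheme in the fractional delay mu. A rational rate change by 160/147 then reduces to stepping the output time in increments of 147/160 input samples:

```python
def lagrange_cubic(x, n, mu):
    """Cubic Lagrange interpolation of sequence x at fractional position
    n + mu (0 <= mu < 1), using samples x[n-1..n+2]. The c0..c3 terms are
    the fixed sub-filter outputs a Farrow structure would compute."""
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    c0 = x0
    c1 = x1 - xm1 / 3 - x0 / 2 - x2 / 6
    c2 = (xm1 + x1) / 2 - x0
    c3 = (x0 - x1) / 2 + (x2 - xm1) / 6
    return ((c3 * mu + c2) * mu + c1) * mu + c0  # Horner evaluation in mu

def resample(x, L=160, M=147):
    """Change the sampling rate of x by L/M via cubic Lagrange interpolation."""
    out = []
    m = 0
    while True:
        t = m * M / L          # output time measured in input-sample units
        n = int(t)
        if n + 2 >= len(x):    # ran out of right-hand support samples
            break
        if n >= 1:             # need one left-hand support sample
            out.append(lagrange_cubic(x, n, t - n))
        m += 1
    return out
```

Cubic Lagrange interpolation reproduces any cubic polynomial exactly, which gives a convenient correctness check.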
Effects of systematic sampling on satellite estimates of deforestation rates
Steininger, M K; Godoy, F; Harper, G, E-mail: msteininger@conservation.or [Center for Applied Biodiversity Science-Conservation International, 2011 Crystal Drive Suite 500, Arlington, VA 22202 (United States)
2009-09-15
Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
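The sampling-error calculation described above can be mimicked on synthetic data. The toy 1-D sketch below is illustrative only (the simple SRS-style standard error and all names are my assumptions, not the authors' estimator, and real systematic grid samples need spatial variance corrections):

```python
import math, random
random.seed(0)

def systematic_sample(values, step, offset=0):
    """Take every `step`-th unit starting at `offset` (1-D analogue of a
    regular grid of sample squares)."""
    return values[offset::step]

def estimate_with_ci(sample):
    """Mean deforestation rate with a 95% CI half-width, using the
    simple-random-sampling standard error as an approximation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

# Hypothetical per-unit deforestation rates (synthetic, for illustration).
truth = [max(0.0, random.gauss(0.02, 0.02)) for _ in range(10000)]
full_coverage = sum(truth) / len(truth)
est, half = estimate_with_ci(systematic_sample(truth, 100))
print(f"full-coverage rate {full_coverage:.4f}, "
      f"sample estimate {est:.4f} +/- {half:.4f}")
```

Increasing sampling density (a smaller `step`) shrinks the CI half-width roughly as 1/sqrt(n), which is the effect the study observes when moving from a 1 deg. to a 0.25 deg. grid.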
Validity of heart rate based nomogram for estimation of maximum oxygen uptake in Indian population.
Kumar, S Krishna; Khare, P; Jaryal, A K; Talwar, A
2012-01-01
Maximal oxygen uptake (VO2max) during a graded maximal exercise test is the objective method to assess cardiorespiratory fitness. Maximal oxygen uptake testing is limited to only a few laboratories, as it requires trained personnel and strenuous effort by the subject. At the population level, submaximal tests have been developed to derive VO2max indirectly from heart rate based nomograms, or it can be calculated using anthropometric measures. These heart rate based prediction standards were developed for western populations and are used routinely to predict VO2max in the Indian population. In the present study, VO2max was directly measured by a maximal exercise test on a bicycle ergometer and was compared with VO2max derived from recovery heart rate in the Queen's College step test (QCST) (PVO2max I) and with VO2max derived from the Wasserman equation based on anthropometric parameters and age (PVO2max II), in a well-defined age group of healthy male adults from New Delhi. Directly measured VO2max showed no significant correlation either with VO2max estimated by QCST or with VO2max predicted by the Wasserman equation. The Bland-Altman approach revealed that the limits of agreement between directly measured VO2max and PVO2max I or PVO2max II were large, indicating inapplicability of the western prediction equations to the population under study. Thus there is an urgent need to develop nomograms for the Indian population, perhaps even for different ethnic sub-populations within the country.
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose To estimate the association of age with maximal heart rate (MHR). Methods Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusion 9622 eligible tests remained. Results In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years, eMHR = 179 + 0.29*age - 0.011*age^2. The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both year 0 and 7 tests (age range 18 to 37), eMHR = 189 - 0.35*age; and in 1574 people who completed both year 7 and 20 tests (age range 25 to 50), eMHR = 199 - 0.63*age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20; while in the highest baseline BMI quartile there was a linear rate of decline of approximately 0.7 beats/minute/year over the full age of 18 to 50 years. Conclusion Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower at young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with lowest BMI below age 40. PMID:20639723
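The fitted equations in this abstract can be evaluated directly. The sketch below (my wrapper names, not from the paper) shows how the pooled quadratic fit implies an accelerating decline of eMHR with age, which is the paper's central message:

```python
def emhr_pooled(age):
    """Pooled quadratic fit over all 9622 tests, ages 18-50."""
    return 179 + 0.29 * age - 0.011 * age ** 2

def emhr_year0_7(age):   # linear fit for the year 0/7 cohort, ages 18-37
    return 189 - 0.35 * age

def emhr_year7_20(age):  # linear fit for the year 7/20 cohort, ages 25-50
    return 199 - 0.63 * age

# The quadratic's derivative, 0.29 - 0.022*age, grows more negative with
# age: MHR loss is slow in young adulthood and faster later on.
for age in (20, 30, 40, 50):
    slope = 0.29 - 0.022 * age
    print(f"age {age}: eMHR = {emhr_pooled(age):.1f}, "
          f"slope = {slope:.2f} beats/min/yr")
```

Note that the two cohort-specific linear slopes (-0.35 and -0.63 beats/min/yr) bracket the quadratic's derivative over their respective age ranges, so the three fits tell a consistent story.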
Terrestrial Planet Occurrence Rates for the Kepler GK Dwarf Sample
Burke, Christopher J; Mullally, F; Seader, Shawn; Huber, Daniel; Rowe, Jason F; Coughlin, Jeffrey L; Thompson, Susan E; Catanzarite, Joseph; Clarke, Bruce D; Morton, Timothy D; Caldwell, Douglas A; Bryson, Stephen T; Haas, Michael R; Batalha, Natalie M; Jenkins, Jon M; Tenenbaum, Peter; Twicken, Joseph D; Li, Jie; Quintana, Elisa; Barclay, Thomas; Henze, Christopher E; Borucki, William J; Howell, Steve B; Still, Martin
2015-01-01
We measure planet occurrence rates using the planet candidates discovered by the Q1-Q16 Kepler pipeline search. This study examines planet occurrence rates for the Kepler GK dwarf target sample for planet radii, 0.75
Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling
Ballal, Tarig
2014-09-01
In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
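The co-primality condition in this scheme has a simple number-theoretic core: over P repeated pulses, the slow ADC visits sampling phases (k*N) mod P, where N is the inter-pulse interval in desired-rate periods, and these cover all P phases exactly when gcd(N, P) = 1. A small sketch (variable names are mine, not the paper's):

```python
from math import gcd

def covered_phases(N, P):
    """Sub-sampling phases (mod P) visited over P pulses when the
    inter-pulse interval is N desired-rate sampling periods and the
    ADC runs P times slower than the desired rate."""
    return sorted({(k * N) % P for k in range(P)})

# Co-prime case: all P phases are hit, so interleaving the P slow
# observation sets reconstructs the full desired-rate sampling grid.
print(covered_phases(7, 10))   # gcd(7, 10) = 1

# Non-co-prime case: phases repeat and part of the grid is never sampled.
print(covered_phases(4, 10))   # gcd(4, 10) = 2, only even phases
```

This is why clock drift matters in the paper: drift shifts the pulse locations, perturbing the effective N and breaking the clean interleaving, which motivates the BDU-based estimator.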
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
…are used for parameter estimation. The data is log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded…
Efficient calculation of rate constants: Downhill versus uphill sampling
Klenin, Konstantin V.
2014-08-01
The classical transition state theory (TST), together with the notion of transmission coefficient, provides a useful tool for calculation of rate constants for rare events. However, in complex biomolecular reactions, such as protein folding, it is difficult to find a good reaction coordinate, so the transition state is ill-defined. In this case, other approaches are more popular, such as the transition interface sampling (TIS) and the forward flux sampling (FFS). Here, we show that the algorithms developed within the frameworks of TIS and FFS can be successfully applied, after a modification, for calculation of the transmission coefficient. The new procedure (which we call "downhill sampling") is more efficient in comparison with the traditional TIS and FFS ("uphill sampling") even if the reaction coordinate is bad. We also propose a new computational scheme that combines the advantages of TST, TIS, and FFS.
Two dimensional eye tracking: Sampling rate of forcing function
Hornseth, J. P.; Monk, D. L.; Porterfield, J. L.; Mcmurry, R. L.
1978-01-01
A study was conducted to determine the minimum update rate of a forcing function display required for the operator to approximate the tracking performance obtained on a continuous display. In this study, frequency analysis was used to determine whether there was an associated change in the transfer function characteristics of the operator. It was expected that as the forcing function display update rate was reduced, from 120 to 15 samples per second, the operator's response to the high frequency components of the forcing function would show a decrease in gain, an increase in phase lag, and a decrease in coherence.
Snelling, Edward P; Seymour, Roger S; Matthews, Philip G D; Runciman, Sue; White, Craig R
2011-10-01
The hemimetabolous migratory locust Locusta migratoria progresses through five instars to the adult, increasing in size from 0.02 to 0.95 g, a 45-fold change. Hopping locomotion occurs at all life stages and is supported by aerobic metabolism and provision of oxygen through the tracheal system. This allometric study investigates the effect of body mass (Mb) on oxygen consumption rate (MO2, μmol h^-1) to establish resting metabolic rate (MRO2), maximum metabolic rate during hopping (MMO2) and maximum metabolic rate of the hopping muscles (MMO2,hop) in first instar, third instar, fifth instar and adult locusts. Oxygen consumption rates increased throughout development according to the allometric equations MRO2 = 30.1Mb^(0.83±0.02), MMO2 = 155Mb^(1.01±0.02), MMO2,hop = 120Mb^(1.07±0.02) and, if adults are excluded, MMO2,juv = 136Mb^(0.97±0.02) and MMO2,juv,hop = 103Mb^(1.02±0.02). Increasing body mass by 20-45% with attached weights did not increase mass-specific MMO2 significantly at any life stage, although mean mass-specific hopping MO2 was slightly higher (ca. 8%) when juvenile data were pooled. The allometric exponents for all measures of metabolic rate are much greater than 0.75, and therefore do not support West, Brown and Enquist's optimised fractal network model, which predicts that metabolism scales with a 3/4-power exponent owing to limitations in the rate at which resources can be transported within the body.
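Taken at face value, the fitted power laws imply that factorial aerobic scope (MMO2/MRO2) itself rises with body mass, because the maximal rate scales more steeply than the resting rate. A small sketch using the central estimates reported in the abstract (helper names are my own):

```python
def metabolic_rate(Mb, a, b):
    """Allometric power law MO2 = a * Mb**b (MO2 in umol/h, Mb in g)."""
    return a * Mb ** b

# Central estimates from the abstract's fitted equations.
def resting(Mb):      # MRO2 = 30.1 * Mb^0.83
    return metabolic_rate(Mb, 30.1, 0.83)

def hopping_max(Mb):  # MMO2 = 155 * Mb^1.01
    return metabolic_rate(Mb, 155, 1.01)

# Factorial aerobic scope across the 45-fold mass range: because the MMO2
# exponent exceeds the MRO2 exponent, scope grows as Mb^(1.01 - 0.83).
for Mb in (0.02, 0.95):
    print(f"Mb = {Mb} g: aerobic scope = {hopping_max(Mb) / resting(Mb):.1f}")
```

This illustrates, rather than reproduces, the study's conclusion: exponents near 1 (well above 0.75) mean maximal metabolism tracks muscle mass roughly proportionally.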
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, M_max is proportional to Sigma_H2^1.3, and M_max is proportional to Sigma_SFR^1.0. We rule out that these correlations result from the sample size; hence, the change of the maximum cluster mass must be due to physical causes.
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Kruse, Marcelo Lapa; Kruse, José Cláudio Lupi; Leiria, Tiago Luiz Luz; Pires, Leonardo Martins; Gensas, Caroline Saltz; Gomes, Daniel Garcia; Boris, Douglas; Mantovani, Augusto; Lima, Gustavo Glotz de
2014-12-01
Occurrences of asymptomatic atrial fibrillation (AF) are common. It is important to identify AF because it increases morbidity and mortality. 24-hour Holter has been used to detect paroxysmal AF (PAF). The objective of this study was to investigate the relationship between occurrence of PAF in 24-hour Holter and the symptoms of the population studied. Cross-sectional study conducted at a cardiology hospital. 11,321 consecutive 24-hour Holter tests performed at a referral service were analyzed. Patients with pacemakers or with AF throughout the recording were excluded. There were 75 tests (0.67%) with PAF. The mean age was 67 ± 13 years and 45% were female. The heart rate (HR) over the 24 hours was a minimum of 45 ± 8 bpm, mean of 74 ± 17 bpm and maximum of 151 ± 32 bpm. Among the tests showing PAF, only 26% had symptoms. The only factor tested that showed a correlation with symptomatic AF was maximum HR (165 ± 34 versus 147 ± 30 bpm) (P = 0.03). Use of beta blockers had a protective effect against occurrence of PAF symptoms (odds ratio: 0.24, P = 0.031). PAF is a rare event in 24-hour Holter. The maximum HR during the 24 hours was the only factor correlated with symptomatic AF, and use of beta blockers had a protective effect against AF symptom occurrence.
Conceptual Implementation of Sample Rate Convertors for DACs
GRAUR, A.
2010-05-01
One of the most common and difficult challenges when creating a single SoC with digital (sub)sections is caused by the various master clock (MCLK) frequencies that each individual IC had originally. There are several methods to solve this, but when constrained by price and power consumption, the design engineers must find the optimum one. Sample rate converters (SRC) are an example of a solution that can simplify the architecture in some of these cases. However, even for the SRCs themselves, we need to come up with novel and efficient architectures. This paper presents such an example from mobile phone chips: how to successfully mix on the same silicon an audio sigma-delta DAC which should support all the standard audio rates using a 13 MHz MCLK frequency imposed by the RF section incorporated inside the same chip. The document goes from the top-level digital signal processing down to the actual hardware implementation.
Karia Ritesh M
2012-04-01
Objective: The objective of this study is to study the effect of smoking on Peak Expiratory Flow Rate and Maximum Voluntary Ventilation in apparently healthy tobacco smokers and non-smokers, and to compare the results of both groups to assess the effects of smoking. Method: The present study was carried out with the computerized Pulmonary Function Test software 'Spiro Excel' on 50 non-smokers and 50 smokers. Smokers were divided into three groups. The full series of tests takes 4 to 5 minutes. Tests were compared between the smoker and non-smoker groups by the unpaired t test. Statistical significance was indicated by a 'p' value < 0.05. Results: The actual values of Peak Expiratory Flow Rate and Maximum Voluntary Ventilation were significantly lower in all smoker groups than in non-smokers. The difference in actual mean values increases as the degree of smoking increases. [National J of Med Res 2012; 2(2): 191-193]
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female (n = 10) subjects completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (F max) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration while maximal RFD (RFD max) was calculated as the greatest 10-ms average slope throughout that same contraction. F max declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
Larson, Eric D.; St. Clair, Joshua R.; Sumner, Whitney A.; Bannister, Roger A.; Proenza, Cathy
2013-01-01
An inexorable decline in maximum heart rate (mHR) progressively limits human aerobic capacity with advancing age. This decrease in mHR results from an age-dependent reduction in “intrinsic heart rate” (iHR), which is measured during autonomic blockade. The reduced iHR indicates, by definition, that pacemaker function of the sinoatrial node is compromised during aging. However, little is known about the properties of pacemaker myocytes in the aged sinoatrial node. Here, we show that depressed excitability of individual sinoatrial node myocytes (SAMs) contributes to reductions in heart rate with advancing age. We found that age-dependent declines in mHR and iHR in ECG recordings from mice were paralleled by declines in spontaneous action potential (AP) firing rates (FRs) in patch-clamp recordings from acutely isolated SAMs. The slower FR of aged SAMs resulted from changes in the AP waveform that were limited to hyperpolarization of the maximum diastolic potential and slowing of the early part of the diastolic depolarization. These AP waveform changes were associated with cellular hypertrophy, reduced current densities for L- and T-type Ca2+ currents and the “funny current” (If), and a hyperpolarizing shift in the voltage dependence of If. The age-dependent reduction in sinoatrial node function was not associated with changes in β-adrenergic responsiveness, which was preserved during aging for heart rate, SAM FR, L- and T-type Ca2+ currents, and If. Our results indicate that depressed excitability of individual SAMs due to altered ion channel activity contributes to the decline in mHR, and thus aerobic capacity, during normal aging. PMID:24128759
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
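The claim that the SER is convex in SNR for any one-dimensional constellation can be checked numerically for the simplest case, BPSK over AWGN, whose SER is Q(sqrt(2·SNR)); the SNR grid and finite-difference step below are arbitrary choices.

```python
# Numerical check of SER convexity in SNR for a one-dimensional
# constellation: BPSK over AWGN, SER(snr) = Q(sqrt(2*snr)).
import math

def q(x):  # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_bpsk(snr):  # snr is the linear (not dB) signal-to-noise ratio
    return q(math.sqrt(2.0 * snr))

# Second-order central differences are non-negative iff the sampled
# function is (discretely) convex on the grid.
h = 1e-2
snrs = [0.1 * k for k in range(1, 51)]  # linear SNR from 0.1 to 5.0
second_diffs = [ser_bpsk(s - h) - 2 * ser_bpsk(s) + ser_bpsk(s + h)
                for s in snrs]
print(all(d > 0 for d in second_diffs))  # convex over this grid
```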
Supernova rates from the SUDARE VST-Omegacam search II. Rates in a galaxy sample
Botticella, M T; Greggio, L; Pignata, G; Della Valle, M; Grado, A; Limatola, L; Baruffolo, A; Benetti, S; Bufano, F; Capaccioli, M; Cascone, E; Covone, G; De Cicco, D; Falocco, S; Haeussler, B; Harutyunyan, V; Jarvis, M; Marchetti, L; Napolitano, N R; Paolillo, M; Pastorello, A; Radovich, M; Schipani, P; Tomasella, L; Turatto, M; Vaccari, M
2016-01-01
This is the second paper of a series in which we present measurements of the Supernova (SN) rates from the SUDARE survey. In this paper, we study the trend of the SN rates with the intrinsic colours, the star formation activity and the mass of the parent galaxies. We have considered a sample of about 130000 galaxies and a SN sample of about 50 events. We found that the SN Ia rate per unit mass is higher by a factor of six in the star-forming galaxies with respect to the passive galaxies. The SN Ia rate per unit mass is also higher in the less massive galaxies that are also younger. These results suggest a distribution of the delay times (DTD) less populated at long delay times than at short delays. The CC SN rate per unit mass is proportional to both the sSFR and the galaxy mass. The trends of the Type Ia and CC SN rates as a function of the sSFR and the galaxy mass that we observed from SUDARE data are in agreement with literature results at different redshifts. The expected number of SNe Ia is in agreement ...
Software Radio Sampling Rate Selection, Design and Synchronization
Venosa, Elettra; Palmieri, Francesco A N
2012-01-01
Software Radio represents the future of communication devices. By moving a radio's hardware functionalities into software, SWR promises to change the communication devices creating radios that, built on DSP based hardware platforms, are multiservice, multiband, reconfigurable and reprogrammable. This book describes the design of Software Radio (SWR). Rather than providing an overview of digital signal processing and communications, this book focuses on topics which are crucial in the design and development of a SWR, explaining them in a very simple, yet precise manner, giving simulation results that confirm the effectiveness of the proposed design. Readers will gain in-depth knowledge of key issues so they can actually implement a SWR. Specifically the book addresses the following issues: proper low-sampling rate selection in the multi-band received signal scenario, architecture design for both software radio receiver and transmitter devices and radio synchronization. Addresses very precisely the most imp...
Increasing fMRI sampling rate improves Granger causality estimates.
Fa-Hsuan Lin
Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
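The core of the Granger analysis described above, testing whether the past of one series improves prediction of another beyond that series' own past, can be sketched for a single pair of time series. The order-1 model, coupling strength, and noise level are illustrative assumptions, not the paper's InI pipeline.

```python
# Sketch of pairwise Granger causality on synthetic data: x "Granger-causes"
# y when past x improves the prediction of y beyond y's own past.
# AR order (1) and coupling strength are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                 # y is driven by lagged x
    y[t] = 0.8 * x[t - 1] + 0.1 * y[t - 1] + 0.1 * rng.standard_normal()

def granger_stat(cause, effect):
    """Log ratio of restricted to full residual variance (order-1 model)."""
    e1, e0, c0 = effect[1:], effect[:-1], cause[:-1]
    # Restricted model: effect[t] ~ effect[t-1]
    A_r = np.column_stack([e0, np.ones_like(e0)])
    res_r = e1 - A_r @ np.linalg.lstsq(A_r, e1, rcond=None)[0]
    # Full model: effect[t] ~ effect[t-1] + cause[t-1]
    A_f = np.column_stack([e0, c0, np.ones_like(e0)])
    res_f = e1 - A_f @ np.linalg.lstsq(A_f, e1, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_f)))

print(granger_stat(x, y) > granger_stat(y, x))  # causal direction wins
```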
Space Weathering Rates in Lunar and Itokawa Samples
Keller, L. P.; Berger, E. L.
2017-01-01
Space weathering alters the chemistry, microstructure, and spectral properties of grains on the surfaces of airless bodies by two major processes: micrometeorite impacts and solar wind interactions. Investigating the nature of space weathering processes both in returned samples and in remote sensing observations provides information fundamental to understanding the evolution of airless body regoliths, improving our ability to determine the surface composition of asteroids, and linking meteorites to specific asteroidal parent bodies. Despite decades of research into space weathering processes and their effects, we still know very little about weathering rates. For example, what is the timescale to alter the reflectance spectrum of an ordinary chondrite meteorite to resemble the overall spectral shape and slope from an S-type asteroid? One approach to answering this question has been to determine ages of asteroid families by dynamical modeling and determine the spectral properties of the daughter fragments. However, large differences exist between inferred space weathering rates and timescales derived from laboratory experiments, analysis of asteroid family spectra and the space weathering styles; estimated timescales range from 5000 years up to 10^8 years. Vernazza et al. concluded that solar wind interactions dominate asteroid space weathering on rapid timescales of 10^4-10^6 years. Shestopalov et al. suggested that impact-gardening of regolith particles and asteroid resurfacing counteract the rapid progress of solar wind optical maturation of asteroid surfaces and proposed a space weathering timescale of 10^5-10^6 years.
Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N
2014-06-01
Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipox(max)) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg.m(-2)) were separated into two groups with either a low or high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipox(max) and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipox(max) and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO(2max) vs 54 ± 4% VO(2max), and 4.8 ± 0.6 mg min(-1) kg FFM(-1) vs 8.4 ± 0.9 mg min(-1) kg FFM(-1), respectively; P < …). In normal-weight women, a predominantly abdominal fat mass distribution, compared with a predominantly peripheral fat mass distribution, is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by lower Lipox(max) and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-02-23
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model.
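The "random sampling + ratio reweighting" idea described above can be sketched as follows: the auxiliary variable (gene-flow model output) is known for every location, the transgene indicator is observed only on the sample, and the ratio estimator rescales the sample by the known population mean of the auxiliary variable. The synthetic field and all numbers are illustrative assumptions, not the paper's data or model.

```python
# Sketch of ratio reweighting with a model-derived auxiliary variable.
# a[i]: modelled cross-pollination intensity at location i (known everywhere).
# y[i]: transgene presence indicator (observed only on the random sample).
import random

random.seed(1)
n_field = 10_000
a = [random.expovariate(1.0) for _ in range(n_field)]
# "True" presence, correlated with the auxiliary variable (assumed ~2% rate).
y = [1 if random.random() < min(0.02 * ai, 1.0) else 0 for ai in a]

sample_idx = random.sample(range(n_field), 500)
sample_y = [y[i] for i in sample_idx]
sample_a = [a[i] for i in sample_idx]

mean_a_pop = sum(a) / n_field                  # known from the gene-flow model
ratio_estimate = sum(sample_y) / sum(sample_a) * mean_a_pop
plain_estimate = sum(sample_y) / len(sample_y)
print(ratio_estimate, plain_estimate)          # both should land near 2%
```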
Luis Eduardo Cruz-Martínez
2014-10-01
Background. Formulas to predict maximum heart rate (MHR) have been used for many years in different populations. Objective. To verify the significance and the association of the Tanaka and 220-age formulas when compared to real maximum heart rate. Materials and methods. 30 subjects (22 men, 8 women) between 18 and 30 years of age were evaluated on a cycle ergometer and their real MHR values were statistically compared with the values of the formulas currently used to predict MHR. Results. The results demonstrate that neither Tanaka (p=0.0026) nor 220-age (p=0.000003) predicts real MHR, nor does a linear association exist between them. Conclusions. Because these formulas overestimate the real MHR value, we suggest a correction of 6 bpm to the final result; this value represents the median of the difference between the Tanaka value and the real MHR. Both Tanaka (r=0.272) and 220-age (r=0.276) are not adequate predictors of MHR during exercise at the elevation of Bogotá in subjects 18 to 30 years of age, although further study with a larger sample size is suggested.
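The two prediction formulas compared in the study, plus the suggested 6-bpm correction, are easy to state in code. This sketch assumes the standard Tanaka form (208 - 0.7·age) and reads the suggested correction as a simple subtraction from the predicted value.

```python
# The two MHR prediction formulas compared in the abstract, with the
# suggested 6-bpm downward correction (our reading: subtract 6 bpm from
# the formula's output, per the median overestimation they report).
def mhr_220(age):
    return 220 - age

def mhr_tanaka(age):
    return 208 - 0.7 * age

def mhr_corrected(predicted_bpm, correction=6):
    return predicted_bpm - correction

for age in (18, 24, 30):
    print(age, mhr_220(age), mhr_tanaka(age), mhr_corrected(mhr_tanaka(age)))
```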
A Novel Dual-Rate Sampling Switched-Capacitor Configuration and Its Application
YU Qi; YANG Mohua; CHENG Yu; WANG Xiangzhan; LIU Changxiao; LAN Jialong
2003-01-01
In order to realize an accurate bilinear transformation from the s- to the z-domain, a novel switched-capacitor configuration is proposed in light of the principles of dual-rate sampling and charge conservation, and it has been used to build a 5th-order elliptic lowpass filter. The filter is simulated and measured in a typical 0.34 μm/3.3 V Si CMOS process, with special fully differential operational amplifiers and CMOS transfer-gate switches, and achieves an 80 MHz sampling rate, 17.8 MHz cutoff frequency, 0.052 dB maximum passband ripple, 42.1 dB minimum stopband attenuation and 74 mW quiescent power dissipation. At the same time, the dual-rate sampling topology breaks the traditional restrictions on filters introduced by the unit-gain bandwidth and slew rate of operational amplifiers and effectively improves their performance in high-frequency applications. It has also been applied in the design of an anti-alias filter in the analog front-end of a video decoder IC with a 15 MHz signal frequency.
2010-07-01
... PREPARING TOMORROW'S TEACHERS TO USE TECHNOLOGY § 614.6 What is the maximum indirect cost rate for all... requirements; or (3) Charged by the grantee to another Federal award. (Authority: 20 U.S.C. 6832)...
Supernova rates from the SUDARE VST-Omegacam search II. Rates in a galaxy sample
Botticella, M. T.; Cappellaro, E.; Greggio, L.; Pignata, G.; Della Valle, M.; Grado, A.; Limatola, L.; Baruffolo, A.; Benetti, S.; Bufano, F.; Capaccioli, M.; Cascone, E.; Covone, G.; De Cicco, D.; Falocco, S.; Haeussler, B.; Harutyunyan, V.; Jarvis, M.; Marchetti, L.; Napolitano, N. R.; Paolillo, M.; Pastorello, A.; Radovich, M.; Schipani, P.; Tomasella, L.; Turatto, M.; Vaccari, M.
2017-02-01
Aims: This is the second paper of a series in which we present measurements of the supernova (SN) rates from the SUDARE survey. The aim of this survey is to constrain the core collapse (CC) and Type Ia SN progenitors by analysing the dependence of their explosion rate on the properties of the parent stellar population averaging over a population of galaxies with different ages in a cosmic volume and in a galaxy sample. In this paper, we study the trend of the SN rates with the intrinsic colours, the star formation activity and the masses of the parent galaxies. To constrain the SN progenitors we compare the observed rates with model predictions assuming four progenitor models for SNe Ia with different distribution functions of the time intervals between the formation of the progenitor and the explosion, and a mass range of 8-40 M⊙ for CC SN progenitors. Methods: We considered a galaxy sample of approximately 130 000 galaxies and a SN sample of approximately 50 events. The wealth of photometric information for our galaxy sample allows us to apply the spectral energy distribution (SED) fitting technique to estimate the intrinsic rest frame colours, the stellar mass and star formation rate (SFR) for each galaxy in the sample. The galaxies have been separated into star-forming and quiescent galaxies, exploiting both the rest frame U-V vs. V-J colour-colour diagram and the best fit values of the specific star formation rate (sSFR) from the SED fitting. Results: We found that the SN Ia rate per unit mass is higher by a factor of six in the star-forming galaxies with respect to the passive galaxies, identified as such both on the U-V vs. V-J colour-colour diagram and for their sSFR. The SN Ia rate per unit mass is also higher in the less massive galaxies that are also younger. These results suggest a distribution of the delay times (DTD) less populated at long delay times than at short delays. The CC SN rate per unit mass is proportional to both the sSFR and the galaxy
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Sada, H
1978-10-01
Effects of phentolamine (13.3, 26.5 and 53.0 μM), alprenolol (3.5, 7.0 and 17.5 μM) and prenylamine (2.4, 4.8 and 11.9 μM) on the transmembrane potential were studied in isolated guinea-pig papillary muscles superfused with Tyrode's solution. 1. Phentolamine, alprenolol and prenylamine reduced the maximum rate of rise of the action potential (V̇max) dose-dependently. Higher concentrations of phentolamine and prenylamine caused a loss of plateau in a majority of the preparations. Resting potential was not altered by any of the drugs. Readmission of drug-free Tyrode's solution reversed the changes induced by 13.3 μM phentolamine and all concentrations of alprenolol almost completely, but those induced by higher concentrations of phentolamine and all concentrations of prenylamine only slightly. 2. V̇max at steady state was increased at lower driving frequencies (0.5 and 0.25 Hz) and decreased at higher ones (2-5 Hz) in comparison with that at 1 Hz. These changes were all exaggerated by the above drugs, particularly by prenylamine. 3. Prenylamine and, to a lesser degree, phentolamine and alprenolol dose-dependently delayed the recovery process of V̇max in premature responses. 4. V̇max in the first response after interruption of stimulation recovered toward the predrug value in the presence of the three drugs. The time constants of the recovery process ranged between 10.5 and 15.0 s for phentolamine and between 4.5 and 15.5 s for alprenolol; the time constant of the main component was estimated to be approximately 2 s for the recovery process with prenylamine. 5. On the basis of the model recently proposed by Hondeghem and Katzung (1977), it is suggested that the drug molecules associate with open sodium channels and dissociate slowly from closed channels, and that the inactivation parameter of drug-associated channels is shifted in the hyperpolarizing direction.
Mazhar A. Memon
2016-04-01
ABSTRACT Objective: To evaluate the correlation between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This cross-sectional study was conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing informed consent. Qmax and voided volume were recorded from the uroflowmetry graph, and VPSS was assessed at the same time. Education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/sec (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative (Pearson) correlation between VPSS and Qmax (r=-0.848, p<0.001). In the multiple linear regression analyses there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis for the independent variables showed no significant correlation between VPSS and the independent factors age (p=0.27), LOE (p=0.941) and V.V (p=0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Even men with limited educational backgrounds can complete the VPSS without assistance.
Can compressed sensing beat the Nyquist sampling rate?
Yaroslavsky, L
2015-01-01
Data saving capability of "Compressed sensing (sampling)" in signal discretization is disputed and found to be far below the theoretical upper bound defined by the signal sparsity. On a simple and intuitive example, it is demonstrated that, in a realistic scenario for signals that are believed to be sparse, one can achieve a substantially larger saving than compressed sensing can. It is also shown that frequent assertions in the literature that "Compressed sensing" can beat the Nyquist sampling approach are a misleading substitution of terms rooted in a misinterpretation of sampling theory.
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations.
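A sketch of one common closed-form per-group sample size for comparing two negative binomial rates on the log scale, using the variance evaluated under the alternative. The paper derives several variants with different null-variance estimates, so treat this as the general shape of such a formula rather than the authors' exact result; the example rates, dispersion, and exposure are assumed values.

```python
# Sketch: per-group sample size for a two-sided test of two negative
# binomial event rates. Uses the delta-method variance of the log rate,
# Var(log rate-hat) ~ [1/(t*rate) + k] / n, for the NB parameterization
# Var = mu + k*mu^2. One common variant; not necessarily the paper's.
import math
from statistics import NormalDist

def nb_sample_size(rate1, rate2, dispersion, exposure,
                   alpha=0.05, power=0.9):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    v = (1 / (exposure * rate1) + dispersion) \
        + (1 / (exposure * rate2) + dispersion)
    effect = math.log(rate2 / rate1)
    return math.ceil((z_a + z_b) ** 2 * v / effect ** 2)

# Control rate 0.8 events/yr vs treated 0.6, dispersion k = 0.4,
# one year of exposure per subject (all assumed).
print(nb_sample_size(0.8, 0.6, 0.4, 1.0))
```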
AREA EFFICIENT FRACTIONAL SAMPLE RATE CONVERSION ARCHITECTURE FOR SOFTWARE DEFINED RADIOS
Latha Sahukar
2014-09-01
The modern software defined radios (SDRs) use complex signal processing algorithms to realize efficient wireless communication schemes. Several such algorithms require a specific symbol-to-sample ratio to be maintained. In this context the fractional rate converter (FRC) becomes a crucial block in the receiver part of an SDR. The paper presents an area-optimized dynamic FRC block for low-power SDR applications. The limitations of the conventional cascaded interpolator and decimator architecture for the FRC are also presented, along with an extension of the SINC-function interpolation based architecture towards high area optimization and run-time configuration via a time register. The area and speed analyses were carried out with Xilinx FPGA synthesis tools: only 15% area occupancy with a maximum clock speed of 133 MHz is reported on a Spartan-6 LX45 Field Programmable Gate Array (FPGA).
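The SINC-function interpolation that such an architecture builds on can be sketched in a few lines: each output sample at fractional input time m/ratio is a truncated-sinc weighted sum of nearby input samples. The tap count and test signal below are assumptions for illustration, not the paper's hardware design.

```python
# Minimal sketch of sinc-interpolation fractional sample rate conversion.
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def resample(x, ratio, taps=25):
    """Resample x by `ratio` (output rate / input rate) via truncated sinc."""
    y = []
    for m in range(int(len(x) * ratio)):
        t = m / ratio                      # fractional input-time position
        k0 = int(math.floor(t))
        acc = 0.0
        for k in range(max(0, k0 - taps), min(len(x), k0 + taps + 1)):
            acc += x[k] * sinc(t - k)
        y.append(acc)
    return y

# Resample a low-frequency sine by the fractional ratio 3/2.
f = 0.02  # cycles per input sample (well below Nyquist)
x = [math.sin(2 * math.pi * f * n) for n in range(400)]
y = resample(x, 1.5)
# Interior output samples should track sin(2*pi*f*m/1.5) closely;
# the residual is truncated-sinc ripple.
err = max(abs(y[m] - math.sin(2 * math.pi * f * m / 1.5))
          for m in range(100, 500))
print(err)
```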
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Brunelli, Davide; Caione, Carlo
2015-01-01
.... Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate...
Distribution Function Estimation of the Timing Jitter in Sample Rate Converter
Vipan Kakkar
2010-04-01
The aim of digital sample rate conversion is to bring a digital audio signal from one sample frequency to another. The distortion of the audio signal introduced by the sample rate converter should be as low as possible. The generation of the output samples from the input samples may be performed by various methods. In this paper, a new technique for digital sample-rate conversion is proposed, and we perform the analysis for distribution function estimation of the timing jitter in the proposed digital sample rate converter.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
VANSTEENIS, HG; TULEN, JHM; MULDER, LJM
1994-01-01
This paper compares two methods to estimate heart rate variability spectra, i.e., the spectrum of counts and the instantaneous heart rate spectrum. Contrary to Fourier techniques based on equidistant sampling of the interbeat intervals, the spectrum of counts or the instantaneous heart rate spectrum
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy (or particle) density of waves inside disordered 1D multiply scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
34 CFR 694.9 - What is the maximum indirect cost rate for an agency of a State or local government?
2010-07-01
... What is the maximum indirect cost rate for an agency of a State or local government? Notwithstanding 34 CFR 75.560-75.562 and 34 CFR 80.22, the maximum indirect cost rate that an agency of a State or local government receiving funds under...
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control, in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, we show that this algorithm removes the suboptimal behavior.
2010-01-01
... 9 CFR (Animals and Animal Products), 2010-01-01: Young chicken and squab slaughter... INSPECTION REGULATIONS Operating Procedures § 381.67 Young chicken and squab slaughter inspection rate... inspector per minute under the traditional inspection procedure for the different young chicken and...
Yabuki, Yoshinori; Nagai, Takashi; Inao, Keiya; Ono, Junko; Aiko, Nobuyuki; Ohtsuka, Nobutoshi; Tanaka, Hitoshi; Tanimori, Shinji
2016-10-01
Laboratory experiments were performed to determine the sampling rates of pesticides for the polar organic chemical integrative samplers (POCIS) used in Japan. The concentrations of pesticides in aquatic environments were estimated from the amounts of pesticide accumulated on the POCIS, and the effect of water temperature on the pesticide sampling rates was evaluated. Sampling rates of 48 pesticides at 18, 24, and 30 °C were obtained, and the study confirmed that for many pesticides the sampling rate increased with increasing water temperature.
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
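Power-law indices of the form M_3rd ∝ Σ^a are typically recovered as slopes of a linear fit in log-log space. A minimal sketch with synthetic data (the densities, normalization, and scatter below are invented for illustration, not the M51 measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HI surface densities and "third-ranked" cluster masses following
# M_3rd ∝ Σ_HI^0.4 with log-normal scatter; all numbers are invented.
sigma_hi = rng.uniform(1.0, 20.0, 200)
m3rd = 1.0e4 * sigma_hi**0.4 * rng.lognormal(0.0, 0.3, 200)

# The power-law index is the slope of a linear fit in log-log space.
slope, intercept = np.polyfit(np.log10(sigma_hi), np.log10(m3rd), 1)
print(f"recovered index: {slope:.2f} (true 0.4)")
```

With 200 points and 0.3 dex of scatter, the recovered index lands close to the input value; the quoted ±0.2 uncertainties in the abstract reflect far smaller and noisier samples.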
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum-likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
Hubbard, S. M.; Coutts, D. S.; Matthews, W.; Guest, B.; Bain, H.
2015-12-01
In basins adjacent to continually active arcs, detrital zircon geochronology can be used to establish a high-resolution chronostratigraphic framework for deep-time strata. Large-n U-Pb geochronological datasets can yield a statistically significant signature from the youngest sub-population of detrital zircons, from which we calculate maximum depositional ages (MDAs). MDA is determined through methods such as the mean age of three or more overlapping grain ages at 2σ error, favored in this analysis. Positive identification of the youngest detrital zircon population in a rock is the limiting factor on precision and resolution. The Campanian-Paleogene Nanaimo Group of B.C., Canada, was deposited in a forearc basin, outboard of the Coast Mountain Batholith. The record of a deep-water sediment-routing system is exhumed at Denman and Hornby islands; sandstone- and conglomerate-dominated strata compose a composite sedimentary unit 20 km across and 1.5 km thick, in strike section. Volcanic ashes are absent from the succession, which has been constrained biostratigraphically. Eleven detrital zircon samples are analyzed to define stratigraphic architecture and provide insight into sedimentation rates. Our dataset (n=3081) constrains the overall duration of channelization to ~18 Ma. A series of at least five distinct composite channel fills 3-6 km wide and 400-600 m thick are identified. The MDAs of these units are statistically distinct and constrained to better than 3% precision. Sedimentation rates amongst the channel fills increase upward, from 60-100 m/Ma to >500 m/Ma. This is likely linked to the tendency of a slope channel system to be dominated by sediment bypass early in its evolution, and later dominated by aggradation as large-scale levees develop. Channel processes were not continuous, with the longest hiatus ~6 Ma. The large-n detrital zircon dataset provides unprecedented insight into long-term sediment routing, evidence for which is
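The MDA convention described here (mean of three or more overlapping grain ages at 2σ) can be sketched as follows. The grain ages are hypothetical, and the overlap test is one simple reading of "overlapping at 2σ", not necessarily the authors' exact implementation:

```python
import numpy as np

def maximum_depositional_age(ages, two_sigma, min_overlap=3):
    """Illustrative MDA: weighted mean of the youngest cluster of
    >= min_overlap grain ages whose 2-sigma intervals mutually overlap.
    A sketch of one common convention, not the authors' exact code."""
    order = np.argsort(ages)
    ages = np.asarray(ages, dtype=float)[order]
    two_sigma = np.asarray(two_sigma, dtype=float)[order]
    for i in range(len(ages) - min_overlap + 1):
        grp_a = ages[i:i + min_overlap]
        grp_s = two_sigma[i:i + min_overlap]
        # All intervals overlap if the largest lower bound does not
        # exceed the smallest upper bound.
        if (grp_a - grp_s).max() <= (grp_a + grp_s).min():
            w = 1.0 / (grp_s / 2.0) ** 2      # weights from 1-sigma errors
            return float(np.sum(w * grp_a) / np.sum(w))
    return None

ages = [72.1, 72.4, 72.6, 75.0, 80.2]   # Ma, hypothetical grains
errs = [0.8, 0.9, 0.7, 1.0, 1.2]        # 2-sigma uncertainties (Ma)
print(maximum_depositional_age(ages, errs))   # weighted mean of youngest three
```

Here the three youngest grains overlap at 2σ, so the MDA is their error-weighted mean, ~72.4 Ma.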
Likelihood inference of non-constant diversification rates with incomplete taxon sampling.
Sebastian Höhna
Large-scale phylogenies provide a valuable source for studying background diversification rates and investigating whether the rates have changed over time. Unfortunately, most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test whether the true parameters and the sampling method can be recovered when the trees are small or medium-sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered, compared with a pure-birth model, if the extinction rate is large). Finally, I applied six different diversification rate models--ranging from a constant-rate pure-birth process to a birth-death process with decreasing speciation rate, but excluding any rate-shift models--to three large-scale empirical phylogenies (ants, mammals and snakes, with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear
N. Alavizadeh
2017-01-01
Aims: Apelin is an adipokine secreted from adipose tissue that has positive effects against insulin resistance. The aim of this study was to investigate the effect of 8 weeks of aerobic exercise on apelin levels and the insulin resistance index in sedentary men. Materials & Methods: In this semi-experimental study with a controlled pre/post-test design in 2015, 27 healthy sedentary men living in Mashhad City, Iran, were selected by convenience sampling. They were divided into two groups: an experimental group (n=14) and a control group (n=13). In the trained group, the volunteers participated in 8 weeks of aerobic exercise, 3 days/week (equivalent to 75-85% of maximum oxygen consumption, for 60 minutes per session). The research variables were assessed before and after the intervention in both groups. The collected data were analyzed in SPSS 20 using paired and independent-sample t-tests. Findings: The 8-week aerobic exercise significantly decreased weight, BMI, apelin, insulin and the insulin resistance index, and increased maximum oxygen consumption in the experimental group (p<0.05). Moreover, there were significant differences in FBS, insulin, apelin, insulin resistance index and maximum oxygen consumption between the experimental and control groups (p<0.05). Conclusion: 8 weeks of aerobic exercise reduces apelin levels and the insulin resistance index in sedentary men.
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on five serum PK samples in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesized that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements while improving the accuracy of results. We performed a retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation of individual PK parameters based on the MAP approach was used to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws, at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different from those of the MAP approach and averaged a 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows accurate individualized dosing of CY with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
An error criterion for determining sampling rates in closed-loop control systems
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
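The qualitative behavior such a criterion captures, the reconstruction error of a holding device shrinking as the sampling rate rises, can be illustrated numerically for a zero-order hold on a sine input (the signal frequency and sampling rates below are arbitrary choices, not values from the study):

```python
import numpy as np

def zoh_max_error(freq_hz, fs_hz):
    """Maximum absolute error between a sine wave and its zero-order-hold
    reconstruction sampled at fs_hz (a rough numerical sketch)."""
    t = np.linspace(0.0, 1.0, 100_000)
    x = np.sin(2 * np.pi * freq_hz * t)
    # Hold each sample value until the next sampling instant.
    held = np.sin(2 * np.pi * freq_hz * np.floor(t * fs_hz) / fs_hz)
    return float(np.max(np.abs(x - held)))

e_slow = zoh_max_error(5.0, 50.0)    # 10 samples per period
e_fast = zoh_max_error(5.0, 500.0)   # 100 samples per period
print(e_slow, e_fast)                # error falls roughly 10x with 10x rate
```

An error criterion of the kind described then inverts this relationship: given a tolerable relative error, it returns the slowest sampling rate that still meets it.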
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager reciprocity theorem and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
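The target of such relaxation, the maximum-entropy distribution under a linear constraint, is an exponential family whose Lagrange multiplier can be found numerically. A sketch for a single mean constraint (the six-point support and target mean are the classic illustrative die example, not from the paper):

```python
import numpy as np

def maxent_distribution(x, mean_target):
    """Maximum-entropy distribution on support x with a prescribed mean:
    p_i ∝ exp(-beta * x_i), with beta found by bisection (a sketch)."""
    x = np.asarray(x, dtype=float)
    def mean_of(beta):
        w = np.exp(-beta * (x - x.mean()))   # shift for numerical stability
        return np.dot(w / w.sum(), x)
    beta_lo, beta_hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (beta_lo + beta_hi)
        if mean_of(mid) > mean_target:
            beta_lo = mid   # the mean decreases as beta increases
        else:
            beta_hi = mid
    w = np.exp(-mid * (x - x.mean()))
    return w / w.sum()

# A die constrained to an average roll of 4.5: the least-biased
# distribution is geometric in the face value.
x = np.arange(1.0, 7.0)
p = maxent_distribution(x, 4.5)
print(np.dot(p, x))   # ~4.5
```

The rate equation discussed in the abstract describes how a distribution flows smoothly toward this endpoint along the steepest-entropy-ascent path, rather than jumping to it.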
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-05-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives.
Low Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Tur, Ronen; Friedman, Zvi
2010-01-01
Signals comprised of a stream of short pulses appear in many applications including bio-imaging, radar, and ultrawideband communication. Recently, a new framework, referred to as finite rate of innovation, has paved the way to low rate sampling of such pulses by exploiting the fact that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing approaches are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams of pulses from a minimal number of samples. This extends previous work which assumes that the sampling kernel is an ideal low-pass filter. A compactly supported class of filters, satisfying the mathematical condition, is then introduced, leading to a sampling framework based on compactly supported kernels. We then exte...
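The core finite-rate-of-innovation machinery for periodic pulse streams can be sketched in a few lines: 2K+1 Fourier coefficients of K Diracs are annihilated by a length-(K+1) filter whose roots encode the pulse locations. This is the textbook noiseless case with an ideal low-pass kernel, i.e., the baseline this paper improves on, and the pulse parameters below are invented for the demo:

```python
import numpy as np

# A periodic stream of K Dirac pulses (period 1) has 2K free parameters;
# 2K+1 Fourier coefficients suffice to recover them exactly (noiseless case).
t_true = np.array([0.21, 0.67])   # pulse locations, assumed for the demo
a_true = np.array([1.0, 0.5])     # pulse amplitudes
K = len(t_true)

m = np.arange(2 * K + 1)
u = np.exp(-2j * np.pi * t_true)
X = (a_true * u**m[:, None]).sum(axis=1)   # X[m] = sum_k a_k * u_k^m

# Annihilating filter h with h[0] = 1: sum_j h[j] X[i-j] = 0 for i = K..2K-1.
A = np.array([[X[i - j] for j in range(1, K + 1)] for i in range(K, 2 * K)])
h = np.concatenate(([1.0], np.linalg.solve(A, -X[K:2 * K])))

# Pulse locations are encoded in the angles of the filter's roots.
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))
print(t_hat)   # recovers [0.21, 0.67]
```

With noise or many closely spaced pulses this direct root-finding becomes numerically unstable, which is precisely the motivation for the stable, compactly supported sampling kernels developed in the paper.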
Juan Bolea
2016-11-01
The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample and approximate entropy), which arises because HR is the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database. The simulation study showed the HR-dependence of nonlinear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate the influence of mean HR. Analysis of the body position change database showed that the correlation dimension was reduced by around 21% in median values in standing with respect to supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR-correction, the correlation dimension decreased by around 18% in standing with respect to supine position, the decrease remaining significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
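Sample entropy, one of the indices studied, can be computed with a naive implementation; a regular signal should score lower than white noise. This is a plain SampEn sketch on synthetic signals, without the paper's HR-correction:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Naive O(n^2) sample entropy SampEn(m, r) of a 1-D series
    (a sketch, not the paper's HR-corrected variant)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        # Count ordered template pairs of length mm within Chebyshev distance r.
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            c += int(np.sum(d <= r)) - 1   # exclude the self-match
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(2)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # highly regular signal
noisy = rng.standard_normal(1000)                    # white noise
se_reg, se_noise = sample_entropy(regular), sample_entropy(noisy)
print(se_reg, se_noise)   # the regular signal scores lower
```

Because the RR series is sampled at the heart rate itself, changing mean HR changes the effective sampling of the template vectors, which is the bias the proposed corrections attenuate.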
Lovell, Dale I; Cuneo, Ross; Gass, Greg C
2010-06-01
This study examined the effect of strength training (ST) and short-term detraining on maximum force and rate of force development (RFD) in previously sedentary, healthy older men. Twenty-four older men (70-80 years) were randomly assigned to a ST group (n = 12) and a C group (control, n = 12). Training consisted of three sets of six to ten repetitions on an incline squat at 70-90% of one-repetition maximum, three times per week for 16 weeks, followed by 4 weeks of detraining. Regional muscle mass was assessed before and after training by dual-energy X-ray absorptiometry. Training increased RFD, maximum bilateral isometric force, force in 500 ms, upper leg muscle mass and strength above pre-training values (14, 25, 22, 7 and 90%, respectively; P < 0.05), indicating that ST increases the maximum force and RFD of older men. However, older individuals may lose some neuromuscular performance after a period of short-term detraining, and resistance exercise should be performed on a regular basis to maintain training adaptations.
Thermomagnetic behavior of magnetic susceptibility – heating rate and sample size effects
Diana eJordanova
2016-01-01
Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using the fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data on the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of the k(T) cycles and the observed differences between the curves for fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample sizes is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when a fast heating rate and large sample size are used. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by evolving gases released during transformations. The example from an archaeological brick reveals the effect of different sample sizes on the observed Curie temperatures on heating and cooling curves when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). A large sample size leads to bigger differences in Tc between heating and cooling, while a small sample size results in similar Tc values for both heating rates.
The effect of sampling on estimates of lexical specificity and error rates.
Rowland, Caroline F; Fletcher, Sarah L
2006-11-01
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
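The core sampling point, that small samples yield wildly variable error-rate estimates for infrequently produced constructions, is easy to illustrate with a binomial simulation (the 5% true error rate and the sample sizes are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate = 0.05   # assume 5% of uses of a rare construction contain an error

def estimated_rates(sample_size, n_corpora=2000):
    """Simulate n_corpora samples, each observing `sample_size` uses of
    the construction; return the per-sample estimated error rates."""
    errors = rng.binomial(sample_size, true_rate, n_corpora)
    return errors / sample_size

small, large = estimated_rates(20), estimated_rates(2000)
print(small.std(), large.std())   # small samples scatter about 10x more
```

With 20 observations, many simulated samples contain zero errors (underestimating the rate) while others show 10-15% (overestimating it), mirroring the over- and underestimation the study documents for small naturalistic samples.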
Passive sampling of perfluorinated chemicals in water: Flow rate effects on chemical uptake
Kaserzon, S.L.; Vermeirssen, E.L.M.; Hawker, D.W.; Kennedy, K.; Bentley, C.; Thompson, J.; Booij, K.; Mueller, J.F.
2013-01-01
A recently developed modified polar organic chemical integrative sampler (POCIS) provides a means for monitoring perfluorinated chemicals (PFCs) in water. However, changes in external flow rates may alter POCIS sampling behaviour and consequently affect estimated water concentrations of analytes. In
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
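The baseline this work improves on, plain steepest ascent on the log-likelihood of a pairwise Ising model, amounts to iterated moment matching. A tiny, exactly enumerable sketch (4 spins and synthetic moments; no Gibbs sampling, no curvature rectification, and not the paper's retina data):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n = 4  # small enough to enumerate all 2^n states exactly
states = np.array(list(product([-1.0, 1.0], repeat=n)))

def model_stats(J, h):
    """Exact means and pairwise correlations of the Ising model
    p(s) ∝ exp(h·s + 0.5 s·J·s)."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

# Synthetic "data" moments generated from a known model, then re-learned.
J_true = 0.2 * rng.standard_normal((n, n))
J_true = (J_true + J_true.T) / 2
np.fill_diagonal(J_true, 0.0)
h_true = 0.1 * rng.standard_normal(n)
data_mean, data_corr = model_stats(J_true, h_true)

# Plain steepest ascent on the log-likelihood = iterated moment matching.
J, h = np.zeros((n, n)), np.zeros(n)
for _ in range(5000):
    m, c = model_stats(J, h)
    h += 0.1 * (data_mean - m)
    dJ = 0.1 * (data_corr - c)
    np.fill_diagonal(dJ, 0.0)   # diagonal is fixed: s_i^2 = 1
    J += dJ
print(np.abs(model_stats(J, h)[0] - data_mean).max())   # ~0 at convergence
```

For realistic population sizes the model statistics must be estimated by Gibbs sampling and this plain dynamics crawls along ill-conditioned directions, which is exactly what the rectified, posterior-sampling algorithm in the abstract addresses.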
Polyphase Filter Banks for Embedded Sample Rate Changes in Digital Radio Front-Ends
Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter
2011-01-01
This paper presents efficient processing engines for software-defined radio (SDR) front-ends. These engines, based on a polyphase channelizer, perform arbitrary sample-rate changes, frequency selection, and bandwidth control. This paper presents an M-path polyphase filter bank based on a modified N...... in an SDR front-end based on a polyphase channelizer. They can also be used for translation to and from arbitrary center frequencies that are unrelated to the output sample rates....
Malekifarsani, A; Skachek, M A
2009-10-01
shown that the concentrations of the following radionuclides are limited by solubility and precipitate around the waste and buffer: U, Np, Ra, Sm, Zr, Se, Tc, and Pd. The sensitivity analysis of maximum release rates with respect to precipitation shows that some nuclides, such as Cs-135, Nb-94, Nb-93m, Zr-93, Sn-126, Th-230, Pu-240, Pu-242, Pu-239, Cm-245, Am-243, U-233, Ac-227, Pb-210, Pa-231 and Th-229, change very little in maximum release rate from the EBS when precipitation in the buffer material is eliminated. Other nuclides, such as Se-79, Tc-99, Pd-107, Th-232, U-236, U-233, Ra-226, Np-237, U-235, U-234, and U-238, change substantially in maximum release rate compared to the case that takes precipitation into account. In the sensitivity analysis with respect to stable isotopes (according to the inventory table), only some nuclides in the vitrified waste have stable isotopes, and the calculation shows that the release rates of Pd-107 and Se-79 increase greatly when stable isotopes are eliminated. The sensitivity analysis of maximum release rates with respect to retardation by sorption shows that some nuclides, such as Pu-240, Pu-241, Pu-239, Cm-245, Am-241, Cm-246, and Am-243, increase at some times in maximum release rate from the EBS when retardation in the buffer material is eliminated. Some nuclides, such as U-235, U-233 and U-236, decrease slightly in maximum release rate because their parents are short-lived and are released from the EBS before decaying to their daughters. If the characteristic time taken for a nuclide to diffuse across the buffer exceeds its half-life, then the release rate of that nuclide from the EBS will be attenuated by radioactive decay. Thus, the retardation of the diffusion process due to sorption tends to reduce the release rates of short-lived nuclides more effectively than those of long-lived ones. For example, the release rates of Pu-240, Cm-246 and Am-241, which are relatively short-lived and strongly sorbing, are very small
Application guide for source PM10 measurement with constant sampling rate
Farthing, W.E.; Dawes, S.S.
1989-05-01
The manual presents a method, Constant Sampling Rate (CSR), which allows determination of stationary source PM-10 emissions with hardware similar to that used for Methods 5 or 17. The operating principle of the method is to extract a multipoint sample so that errors due to spatial variation of particle size and anisokinetic sampling are kept within predetermined limits. The manual specifically addresses the use of the CSR methodology for determination of stationary source PM-10 emissions. Material presented in the manual includes calibration of sampling train components, pretest setup calculations, sample recovery, test data reduction, and routine equipment maintenance.
Contamination Rates of Three Urine-Sampling Methods to Assess Bacteriuria in Pregnant Women
Schneeberger, Caroline; van den Heuvel, Edwin R.; Erwich, Jan Jaap H. M.; Stolk, Ronald P.; Visser, Caroline E.; Geerlings, Suzanne E.
OBJECTIVE: To estimate and compare contamination rates of three different urine-sampling methods in pregnant women to assess bacteriuria. METHODS: In this cross-sectional study, 113 pregnant women collected three different midstream urine samples consecutively: morning (first void); midstream (void
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology, and to use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models proved better for describing denitrification kinetics than assuming a fixed K(NO3) at low nitrate concentrations. The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data from 56 denitrification rate tests were analyzed, and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentiles, respectively) with an average of 0.47 mgN/L. In contrast, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
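The two rate expressions at play, standard Monod kinetics and the study's empirical linear dependence of the half-saturation coefficient on the maximum rate, can be written down directly. The linear coefficients below are placeholders for illustration, not the values fitted from the 56 tests:

```python
def denitrification_rate(s_no3, k_max, k_half):
    """Monod kinetics: rate = k_max * S / (K + S)."""
    return k_max * s_no3 / (k_half + s_no3)

def k_half_linear(k_max, a=0.05, b=0.02):
    """Empirical linear model of the form used in the study, K = a + b*k_max.
    The coefficients a and b here are illustrative placeholders."""
    return a + b * k_max

# Faster-denitrifying sludges see a larger apparent (extant) K(NO3),
# so substrate limitation bites at higher nitrate concentrations.
for k_max in (5.0, 20.0):           # mgN/gVSS/h, hypothetical
    K = k_half_linear(k_max)
    print(k_max, K, denitrification_rate(1.0, k_max, K))
```

At high substrate the rate still approaches k_max regardless of K, which is why a variable K only matters, as the abstract notes, at low nitrate concentrations.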
Faraker, C A; Greenfield, J
2013-08-01
To investigate the sampling performance of individual cervical cytology practitioners using the transformation zone sampling rate (TZSR) as a performance indicator and to assess the impact of dedicated on site training for those identified with a low TZSR. The TZSR was calculated for all practitioners submitting ThinPrep(®) cervical cytology specimens to the Conquest laboratory between January 2010 and November 2011. After excluding those with less than 30 qualifying samples the 10th percentile of the TZSR was calculated. Practitioners with a TZSR below the 10th percentile were visited by a specialist cervical cytology screening facilitator after which the TZSR of these practitioners was closely monitored. After exclusions there were 175 practitioners who had collected 24 358 qualifying liquid-based cytology (LBC) samples. The average TZSR was 70% (range 12-96%). The 10th percentile was 44%; 18 scored below the 10th percentile. Failure to apply sufficient pressure when sampling was identified as the most common reason for a low TZSR. In some cases there was suspicion that the cervix was not always adequately visualized. Continuous monitoring after assessment identified improvement in the TZSRs of 13/18 practitioners. Identification of practitioners with low TZSRs compared with their peers allows these individuals to be selected for personalized observation and training by a specialist in cervical cytology which can lead to an improvement in TZSR. As previous studies show a significant correlation between the TZSR and the detection rate of cytological abnormality it is useful to investigate low TZSRs. © 2013 John Wiley & Sons Ltd.
2013-01-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of tr...
Data-driven soft sensor design with multiple-rate sampled data
Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.
2007-01-01
Multi-rate systems are common in industrial processes, where quality measurements have a slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples...... are implemented to design quality soft sensors for cement kiln processes using data collected from a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead predictions. The regularized data-lifting technique predicts the product quality of cement kiln systems...
A NONPARAMETRIC PROCEDURE OF THE SAMPLE SIZE DETERMINATION FOR SURVIVAL RATE TEST
Anonymous
2000-01-01
Objective This paper proposes a nonparametric procedure for sample size determination in survival rate tests. Methods Using the classical asymptotic normal procedure yields the required homogeneous effective sample size; applying the inverse operation with the prespecified value of the survival function of the censoring times then yields the required sample size. Results The procedure is matched with the rate test for censored data, does not involve survival distributions, and reduces to its classical counterpart when there is no censoring. The observed power of the test coincides with the prescribed power under usual clinical conditions. Conclusion The procedure can be used for planning survival studies of chronic diseases.
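A minimal numeric sketch of the two-step logic described here, under textbook assumptions: the per-group sample size comes from the usual two-proportion normal approximation (not necessarily the paper's exact formula), and it is then divided by the prespecified survival probability G(t) of the censoring times to obtain the required sample size.

```python
# Hedged sketch: (1) classical normal-approximation sample size for
# comparing two survival rates at a fixed time point (no censoring),
# then (2) inflation by the censoring-time survival probability G(t).
# The formula is the standard textbook choice, not taken from the paper.
from math import ceil, sqrt
from statistics import NormalDist

def n_effective(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-proportion z-test (no censoring)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def n_required(p1, p2, censor_surv, **kw):
    """Inflate the effective size by G(t): only a fraction censor_surv
    of subjects is still observable at the evaluation time."""
    return ceil(n_effective(p1, p2, **kw) / censor_surv)

# Compare survival rates 60% vs 75%, with 80% of censoring times
# exceeding the evaluation time:
print(n_required(0.6, 0.75, censor_surv=0.8))
```

With `censor_surv=1.0` the result reduces to the classical uncensored counterpart, mirroring the abstract's claim.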
A Method of Multi-channel Data Acquisition with Adjustable Sampling Rate
Su Shujing
2013-09-01
Full Text Available The sampling rates of existing signal acquisition systems are fixed. To address this shortcoming, a method of multi-channel data acquisition (DAQ) with an adjustable sampling rate is presented. The method realizes program control of the anti-aliasing filter's cut-off frequency with the help of switched capacitors; by independently pulsing the sampling signals of the different ADCs, the sampling rate of each of the 16 channels is adjustable among 50 ksps, 25 ksps, 10 ksps, 5 ksps, and 1 ksps. Theoretical analysis and experimental verification of the proposed method were carried out: theoretical analysis shows that the filter parameters meet the design requirements, and experimental results show that the cut-off frequency of the anti-aliasing filter tracks the variable sampling rate very well. Choosing an appropriate sampling rate according to the characteristics of the measured signal not only restores the measured signal well but also avoids wasting system resources. This method can meet the need to test various signals with different frequencies at the same time.
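The rate-selection idea can be illustrated with a small sketch. The five selectable rates are those listed in the abstract; the 2.5× Nyquist margin and the 0.4·fs filter-cutoff ratio are illustrative assumptions.

```python
# Sketch of the rate-selection logic the abstract describes: pick the
# smallest available per-channel rate that still satisfies Nyquist for the
# measured signal, and program the switched-capacitor anti-aliasing filter
# cutoff accordingly. Margin and cutoff ratio are hypothetical.

RATES_SPS = (1_000, 5_000, 10_000, 25_000, 50_000)  # the five selectable rates

def choose_rate(signal_bandwidth_hz, margin=2.5):
    """Lowest rate giving at least `margin` samples per signal period."""
    for fs in RATES_SPS:
        if fs >= margin * signal_bandwidth_hz:
            return fs
    raise ValueError("signal bandwidth exceeds the fastest available rate")

def antialias_cutoff(fs, ratio=0.4):
    """Program the filter cutoff below Nyquist (ratio is hypothetical)."""
    return ratio * fs

fs = choose_rate(1_800)          # 1.8 kHz signal -> 5 ksps channel
print(fs, antialias_cutoff(fs))  # 5000 2000.0
```

Matching the rate to each channel's signal is exactly the "prevents waste of system resources" point in the abstract: a 1.8 kHz signal does not need the 50 ksps setting.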
KAPTURE-2. A picosecond sampling system for individual THz pulses with high repetition rate
Müller, A.-S.
2017-01-01
This paper presents a novel data acquisition system for continuous sampling of ultra-short pulses generated by terahertz (THz) detectors. The Karlsruhe Pulse Taking Ultra-fast Readout Electronics (KAPTURE) system is able to digitize pulse shapes with a sampling time down to 3 ps and pulse repetition rates up to 500 MHz. KAPTURE has been integrated as a permanent diagnostic device at ANKA and is used for investigating the emitted coherent synchrotron radiation in the THz range. A second version of KAPTURE has been developed to improve performance and flexibility. The new version offers better sampling accuracy for pulse repetition rates up to 2 GHz. The higher data rate produced by the sampling system is processed in real time by a heterogeneous FPGA and GPU architecture operating at up to 6.5 GB/s continuously. Results in accelerator physics will be reported and the new design of KAPTURE will be discussed.
Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples
Harada, Koji; Sakai, Hideaki
In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, and then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed non-aliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method and provides approximately the same performance as the NLS method at a reduced sampling rate, even for closely spaced time delays. This enables, for a fixed observation time, a significant reduction in the required number of samples while maintaining the same level of estimation performance.
Preksedis M. Ndomba
2008-01-01
Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily midway and intermittent cross-section sediment samples from a depth-integrating sampler (D-74) were used to calibrate the subdaily automatic sediment pumping sampler (ISCO 6712) near-bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The results suggest that an excellent rating curve can be developed from one hydrological year of sediment sampling programme data. This study has also found that an uncorrected rating curve underestimates the sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology.
Locating the acoustic source in thin glass plate using low sampling rate data.
Hoseini Sabzevari, S Amir; Moavenian, Majid
2016-08-01
Acoustic source localization is an important step for structural health monitoring (SHM). Many research studies deal with localization based on high-sampling-rate data. In this paper, for the first time, an acoustic source is localized on an isotropic plate using low-sampling-rate data. Previous studies have mainly used a cluster of specific sensors to easily record high-sampling-rate signals containing qualitative time-domain features. This paper proposes a novel technique to localize the acoustic source on isotropic plates by simply implementing a combination of two simple electret microphones and the Loci of k-Tuple Distances (LkTD) from the two sensors, with low-sampling-rate data. In effect, the method substitutes the selection of LkTD for previous approaches based on solving systems of equations and increasing the number of sensors. Unlike most previous studies, estimation of the time difference of arrival (TDOA) is based on the frequency properties of the signal rather than its time properties. An experimental set-up was prepared and experiments were conducted to validate the proposed technique by predicting the acoustic source location. The experimental results show that TDOA estimations based on low-sampling-rate data can produce more accurate predictions in comparison with previous studies. It is also shown that the selection of LkTD on the plate has noticeable effects on the performance of this technique. Copyright © 2016 Elsevier B.V. All rights reserved.
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
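The co-prime restriction mentioned here is easy to verify numerically: if the interpulse interval (in units of the desired sampling period) shares a factor with P, the P observations revisit the same sampling phases instead of covering all of them. A minimal sketch:

```python
# Check of the condition stated in the abstract: the interpulse interval
# must be co-prime with the sub-sampling factor P, so that P low-rate
# observations interleave into all P distinct phases of the desired rate.
from math import gcd

def valid_interval(interval, P):
    return gcd(interval, P) == 1

def covered_phases(interval, P):
    """Phases of the desired-rate grid hit by P pulses spaced `interval` apart."""
    return sorted({(k * interval) % P for k in range(P)})

P = 8
print(valid_interval(5, P), covered_phases(5, P))  # co-prime: all 8 phases
print(valid_interval(6, P), covered_phases(6, P))  # gcd 2: only 4 phases
```

This also illustrates why clock drift is a problem: if drift shifts the effective interval to a value sharing a factor with P, some phases are never sampled, which is what motivates the BDU-based estimator.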
Ma, Jingxing; Mungoni, Lucy Jubeki; Verstraete, Willy; Carballa, Marta
2009-07-01
The maximum propionic acid (HPr) removal rate (R(HPr)) was investigated in two lab-scale Upflow Anaerobic Sludge Bed (UASB) reactors. Two feeding strategies were applied, by modifying the hydraulic retention time (HRT) in the UASB(HRT) and the influent HPr concentration in the UASB(HPr), respectively. The experiment was divided into three main phases: phase 1, influent with only HPr; phase 2, HPr with macro-nutrient supplementation; and phase 3, HPr with macro- and micro-nutrient supplementation. During phase 1, the maximum R(HPr) achieved was less than 3 g HPr-COD L(-1)d(-1) in both reactors. However, the subsequent supplementation of macro- and micro-nutrients during phases 2 and 3 made it possible to increase the R(HPr) up to 18.1 and 32.8 g HPr-COD L(-1)d(-1), respectively, corresponding to an HRT of 0.5 h in the UASB(HRT) and an influent HPr concentration of 10.5 g HPr-COD L(-1) in the UASB(HPr). Therefore, the high operational capacity of these reactor systems, specifically for converting HPr at high throughput and high influent HPr levels, was demonstrated. Moreover, the presence of macro- and micro-nutrients is clearly essential for stable and high HPr removal in anaerobic digestion.
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6 and M_3rd is proportional to Sigma_gas^0.5; there is no correlation with either Sigma_H2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Li, Ying; Yang, Cunman; Bao, Yijun; Ma, Xueru; Lu, Guanghua; Li, Yi
2016-08-01
A modified polar organic chemical integrative sampler (POCIS) could provide a convenient way of monitoring perfluorinated chemicals (PFCs) in water. In the present study, the modified POCIS was calibrated to monitor PFCs. The effects of water temperature, pH, and dissolved organic matter (DOM) on the sampling rate (Rs) of PFCs were evaluated with a static renewal system. During laboratory validation over a 14-day period, the uptake kinetics of PFCs with the POCIS was linear. DOM and water temperature slightly influenced POCIS uptake rates, which is consistent with the theory for uptake into POCIS. Therefore, within a narrow span of DOM and water temperatures, it was unnecessary to adjust the Rs value for POCIS. Laboratory experiments were conducted with water at pH 3, 7, and 9. The Rs values declined significantly for PFCs with increasing pH. Although pH affected the uptake of PFCs, the effect was less than twofold. Application of the Rs values to analyze PFCs with POCIS deployed in the field provided concentrations similar to those obtained from grab samples.
Passive sampling of perfluorinated chemicals in water: flow rate effects on chemical uptake.
Kaserzon, Sarit L; Vermeirssen, Etiënne L M; Hawker, Darryl W; Kennedy, Karen; Bentley, Christie; Thompson, Jack; Booij, Kees; Mueller, Jochen F
2013-06-01
A recently developed modified polar organic chemical integrative sampler (POCIS) provides a means for monitoring perfluorinated chemicals (PFCs) in water. However, changes in external flow rates may alter POCIS sampling behaviour and consequently affect estimated water concentrations of analytes. In this work, uptake kinetics of selected PFCs over 15 days were investigated. A flow-through channel system was employed with spiked river water at flow rates between 0.02 and 0.34 m s(-1). PFC sampling rates (Rs) (0.09-0.29 L d(-1), depending on analyte and flow rate) increased from the lowest to the highest flow rate employed for some PFCs (MW ≤ 464) but not for others (MW ≥ 500). Rs values for some of these smaller PFCs became increasingly less sensitive to flow rate as it increased within the range investigated. This device shows promise as a sampling tool to support monitoring efforts for PFCs under a range of flow rate conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.
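For context, the linear (kinetic-phase) uptake model underlying sampling-rate calibration for integrative samplers such as POCIS is N = Rs·Cw·t, so a field-measured accumulated mass converts to a time-weighted average water concentration. The numbers below are invented for illustration, with Rs chosen inside the 0.09-0.29 L/d range reported in the abstract.

```python
# The linear-uptake relation behind Rs calibration: during the kinetic
# phase, accumulated mass N = Rs * Cw * t, so Cw = N / (Rs * t).
# All input values here are illustrative assumptions.

def twa_concentration(mass_ng, rs_l_per_day, days):
    """Time-weighted average water concentration in ng/L."""
    return mass_ng / (rs_l_per_day * days)

# 45 ng of a PFC accumulated over a 15-day deployment at Rs = 0.20 L/d:
print(twa_concentration(45.0, 0.20, 15))  # 15.0 ng/L
```

This also makes the flow-rate finding concrete: if the true Rs under field flow differs from the laboratory Rs used in the division, the estimated Cw is biased by the same factor.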
Omer, Muhammad
2012-07-01
This paper presents a new method of time delay estimation (TDE) that uses low sampling rates with an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR), which makes it robust against room reverberation. The RIR is considered a sparse phenomenon, and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low-rate sampled received signal. The arrival time of the direct-path signal at a pair of microphones is identified from the estimated RIR, and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
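The final step described, reading the time delay off the two estimated RIRs, can be sketched generically. This is not the paper's orthogonal-clustering estimator, just the peak-picking step, under the assumption that the strongest tap in each sparse RIR is the direct path.

```python
# Generic sketch of the RIR-based TDE step: locate the direct-path tap in
# each microphone's (already estimated) impulse response and difference
# the arrival indices. The RIRs below are toy, hand-made examples.
import numpy as np

def direct_path_delay(rir_a, rir_b, fs):
    """Time delay (s) between the strongest taps of two sparse RIRs."""
    return (np.argmax(np.abs(rir_b)) - np.argmax(np.abs(rir_a))) / fs

fs = 8_000                               # a deliberately low sampling rate
rir_a = np.zeros(64); rir_a[10] = 1.0    # direct path at sample 10
rir_b = np.zeros(64); rir_b[16] = 0.9    # direct path at sample 16
rir_b[30] = 0.4                          # a weaker reverberant reflection
print(direct_path_delay(rir_a, rir_b, fs))  # 0.00075 s
```

Because the delay is read from the impulse response rather than the raw waveforms, later (weaker) reflections do not disturb the estimate, which is the robustness-to-reverberation point in the abstract.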
Zhang, Huaguang; Wang, Junyi; Wang, Zhanshan; Liang, Hongjing
2017-03-01
This paper investigates the problem of sampled-data synchronization for Markovian neural networks with generally incomplete transition rates. In contrast to traditional Markovian neural networks, each transition rate here can be completely unknown, or only an estimate of it may be known. Compared with most existing Markovian neural networks, this model is more practical because the transition rates of Markovian processes are difficult to acquire precisely, owing to equipment limitations and the influence of uncertain factors. In addition, a time-dependent Lyapunov-Krasovskii functional is proposed to synchronize the drive system and the response system. By applying an extended Jensen's integral inequality and Wirtinger's inequality, new delay-dependent synchronization criteria are obtained, which fully utilize the upper bound of the variable sampling interval and the sawtooth structure information of the varying input delay. Moreover, the desired sampled-data controllers are obtained. Finally, two examples are provided to illustrate the effectiveness of the proposed method.
Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda
2017-05-04
Passive sampling is one of the most efficient methods of monitoring pharmaceuticals in environmental water. The reliability of the process relies on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for target analytes. Therefore, in this review the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals: antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs (NSAIDs), along with the sampling rate variation, were presented. The advantages and difficulties in laboratory and field calibration were pointed out, according to the needs of control of the exact conditions. Sampling rate calculating equations and all the factors affecting the Rs value - temperature, flow, pH, salinity of the donor phase and biofouling - were discussed. Moreover, various calibration parameters gathered from the literature published in the last 16 years, including the device types, were tabled and compared. What is evident is that the sampling rate values for pharmaceuticals are impacted by several factors, whose influence is still unclear and unpredictable, while there is a big gap in experimental data. It appears that the calibration procedure needs to be improved, for example, there is a significant deficiency of PRCs (Performance Reference Compounds) for pharmaceuticals. One of the suggestions is to introduce correction factors for Rs values estimated in laboratory conditions.
Low-sampling-rate M-ary multiple access UWB communications in multipath channels
Alkhodary, Mohammad T.
2015-08-31
The desirable characteristics of ultra-wideband (UWB) technology are challenged by a formidable sampling frequency, performance degradation in the presence of multi-user interference, and receiver complexity due to the channel estimation process. In this paper, a low-rate sampling technique is used to implement M-ary multiple access UWB communications in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancellation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.
Huamin Li
2015-01-01
Full Text Available To study the effect of loading rate on the mechanical properties and acoustic emission characteristics of coal samples collected from Sanjiaohe Colliery, uniaxial compression tests were carried out at loading rates of 0.001 mm/s, 0.002 mm/s, and 0.005 mm/s, using an AE-win E1.86 acoustic emission instrument and an RMT-150C rock mechanics test system. The results indicate that the loading rate has a strong impact on the peak stress and peak strain of the coal samples, while its effect on the elasticity modulus is relatively small. When the loading rate increases from 0.001 mm/s to 0.002 mm/s, the peak stress increases from 22.67 MPa to 24.99 MPa (an increment of 10.23%), and the peak strain increases from 0.006191 to 0.007411 (an increment of 19.71%). Similarly, when the loading rate increases from 0.002 mm/s to 0.005 mm/s, the peak stress increases from 24.99 MPa to 28.01 MPa (an increment of 12.08%), and the peak strain increases from 0.007411 to 0.008203 (an increment of 10.69%). Acoustic emission activity is positively correlated with the loading rate, whereas the cumulative acoustic emission counts are negatively correlated with the loading rate during the rupture process of the coal samples.
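The quoted incremental percentages can be checked directly from the reported stress and strain values:

```python
# Quick arithmetic check of the incremental percentages quoted above.
def pct_increase(old, new):
    return round(100 * (new - old) / old, 2)

assert pct_increase(22.67, 24.99) == 10.23        # peak stress, 0.001 -> 0.002 mm/s
assert pct_increase(0.006191, 0.007411) == 19.71  # peak strain, 0.001 -> 0.002 mm/s
assert pct_increase(24.99, 28.01) == 12.08        # peak stress, 0.002 -> 0.005 mm/s
assert pct_increase(0.007411, 0.008203) == 10.69  # peak strain, 0.002 -> 0.005 mm/s
print("all quoted percentages check out")
```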
Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-24
Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated, and a TGA-MS analysis temperature that reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of total moisture discussed in this report can be made.
RADIUM AND RADON EXHALATION RATE IN SOIL SAMPLES OF HASSAN DISTRICT OF SOUTH KARNATAKA, INDIA.
Jagadeesha, B G; Narayana, Y
2016-10-01
The radon exhalation rate was measured, using the can technique, in 32 soil samples collected from the Hassan district of South Karnataka. The results show variation of the radon exhalation rate with the radium content of the soil samples, and a strong correlation was observed between effective radium content and radon exhalation rate. In the present work, an attempt was made to assess the levels of radon in the environment of Hassan. Radon activities were found to vary from 2.25±0.55 to 270.85±19.16 Bq m(-3), and effective radium contents vary from 12.06±2.98 to 1449.56±102.58 mBq kg(-1). Surface exhalation rates of radon vary from 1.55±0.47 to 186.43±18.57 mBq m(-2) h(-1), and mass exhalation rates of radon vary from 0.312±0.07 to 37.46±2.65 mBq kg(-1) h(-1). © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
AN EMPIRICAL ANALYSIS OF SAMPLING INTERVAL FOR EXCHANGE RATE FORECASTING WITH NEURAL NETWORKS
HUANG Wei; K. K. Lai; Y. Nakamori; WANG Shouyang
2003-01-01
Artificial neural networks (ANNs) have been widely used as a promising alternative approach for forecasting tasks because of several distinguishing features. In this paper, we investigate the effect of different sampling intervals on the predictive performance of ANNs in forecasting exchange rate time series. It is shown that selection of an appropriate sampling interval permits the neural network to model the financial time series adequately; too short or too long a sampling interval does not provide good forecasting accuracy. In addition, we discuss the effect of forecasting horizons and input nodes on the prediction performance of neural networks.
Capelli, Laura; Sironi, Selena; Del Rosso, Renato
2013-01-15
Sampling is one of the main issues pertaining to odor characterization and measurement. The aim of sampling is to obtain representative information on the typical characteristics of an odor source by means of the collection of a suitable volume fraction of the effluent. The most important information about an emission source for odor impact assessment is the so-called Odor Emission Rate (OER), which represents the quantity of odor emitted per unit of time, and is expressed in odor units per second (ou·s⁻¹). This paper reviews the different odor sampling strategies adopted depending on source type. The review includes an overview of odor sampling regulations and a detailed discussion of the equipment to be used as well as the mathematical considerations to be applied to obtain the OER in relation to the sampled source typology.
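The OER definition given here reduces to a one-line calculation: the odor concentration measured by olfactometry multiplied by the volumetric air flow of the source. The numbers below are invented for illustration.

```python
# OER = odor concentration (ou/m^3, from dynamic olfactometry)
#       * volumetric air flow of the source (m^3/s)  ->  ou/s
def odor_emission_rate(c_od_ou_m3, airflow_m3_s):
    """Odor emission rate in ou/s."""
    return c_od_ou_m3 * airflow_m3_s

# Illustrative: a stack emitting 1200 ou/m^3 at 2.5 m^3/s:
print(odor_emission_rate(1_200, 2.5))  # 3000.0 ou/s
```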
McCarthy, K.
2008-01-01
Semipermeable membrane devices (SPMDs) were deployed in the Columbia Slough, near Portland, Oregon, on three separate occasions to measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbons (PAHs) and organochlorine compounds (OCs) in the slough. Concentrations of PAHs and OCs in SPMDs showed spatial and seasonal differences among sites and indicated that unusually high flows in the spring of 2006 diluted the concentrations of many of the target contaminants. However, the same PAHs - pyrene, fluoranthene, and the alkylated homologues of phenanthrene, anthracene, and fluorene - and OCs - polychlorinated biphenyls, pentachloroanisole, chlorpyrifos, dieldrin, and the metabolites of dichlorodiphenyltrichloroethane (DDT) - predominated throughout the system during all three deployment periods. The data suggest that storm washoff may be a predominant source of PAHs in the slough but that OCs are ubiquitous, entering the slough by a variety of pathways. Comparison of SPMDs deployed on the stream bed with SPMDs deployed in the overlying water column suggests that even for the very hydrophobic compounds investigated, bed sediments may not be a predominant source in this system. Perdeuterated phenanthrene (phenanthrene-d10), spiked at a rate of 2 µg per SPMD, was shown to be a reliable performance reference compound (PRC) under the conditions of these deployments. Post-deployment concentrations of the PRC revealed differences in sampling conditions among sites and between seasons, but indicate that for SPMDs deployed throughout the main slough channel, differences in sampling rates were small enough to make site-to-site comparisons of SPMD concentrations straightforward. © Springer Science+Business Media B.V. 2007.
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for the discrimination of signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance, which leads to a search for the optimal pattern size that maximizes the similarity entropy. The second is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, an n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found to be possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
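The first idea, varying the pattern (template) size m rather than the tolerance r, can be illustrated with the textbook sample-entropy estimator. This is a generic sketch, not the authors' n-order fuzzy variant, and the signal and parameters are invented for the example.

```python
# Plain sample-entropy sketch: scan the template size m and observe how the
# entropy changes, instead of tuning the tolerance r. Textbook estimator.
import numpy as np

def sample_entropy(x, m, r):
    """Negative log of the conditional probability that sequences matching
    for m points (within tolerance r, Chebyshev distance) also match for
    m+1 points."""
    x = np.asarray(x, dtype=float)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        n = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            n += int(np.sum(d <= r))
        return n
    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)
for m in (1, 2, 3):
    print(m, sample_entropy(noise, m, r=0.2 * noise.std()))
```

Scanning m and keeping the value that maximizes the entropy is the descriptor-setting strategy the abstract proposes in place of tolerance tuning.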
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility covers approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed that there was a significant difference between the means of MAWL in terms of frequency of lift (p = 0.02). Tukey's post hoc test indicated that there was a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference between the mean heart rates in terms of frequency of lift (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference between the mean of MAWL and the mean heart rate in terms of lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean of MAWL and the mean heart rate in terms of the sizes of the two boxes (p = 0.000). Based on the results of
Rusina, T.P.; Smedes, F.; Koblizkova, M.; Klanova, J.
2010-01-01
Sampling rates (Rs) for silicone rubber (SR) passive samplers were measured under two different hydrodynamic conditions. Concentrations were maintained in the aqueous phase by continuous equilibration with SR sheets of a large total surface area which had been spiked with polycyclic aromatic hydroc
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
Studying ²²²Rn exhalation rate from soil and sand samples using CR-39 detector
Shafi-ur-Rehman [Pakistan Institute of Engineering and Applied Sciences (PIEAS), P.O. Nilore, Islamabad (Pakistan); Matiullah [Pakistan Institute of Engineering and Applied Sciences (PIEAS), P.O. Nilore, Islamabad (Pakistan)]. E-mail: matiullah@pieas.edu.pk; Shakeel-ur-Rehman [Pakistan Institute of Engineering and Applied Sciences (PIEAS), P.O. Nilore, Islamabad (Pakistan); Rahman, Said [Pakistan Institute of Engineering and Applied Sciences (PIEAS), P.O. Nilore, Islamabad (Pakistan)
2006-07-15
Accurate knowledge of the exhalation rate plays an important role in characterizing the radon source strength of building materials and soil, and it is a useful quantity for comparing the relative importance of different materials and soil types. The majority of houses in Pakistan are constructed mainly from soil and sand. Therefore, studies concerning the determination of the radon exhalation rate from these materials were carried out using CR-39 based NRPB radon dosimeters. In this context, samples were collected from different towns of the Bahawalpur Division, Punjab and major cities of NWFP. After treatment, the samples were placed in plastic containers and dosimeters were installed in them at a height of 25 cm above the surface of the samples. The containers were hermetically sealed and stored for three weeks to attain equilibrium between ²²²Rn and ²²⁶Ra. After exposure to radon, the CR-39 detectors were etched in 25% NaOH at 80 °C for 16 h. From the measured radon concentration values, ²²²Rn exhalation rates were determined. They ranged from 1.56 to 3.33 Bq m⁻² h⁻¹ for soil collected from the Bahawalpur Division and from 2.49 to 4.66 Bq m⁻² h⁻¹ for NWFP. The ²²²Rn exhalation rates from the sand samples ranged from 2.78 to 20.8 Bq m⁻² h⁻¹ for the Bahawalpur Division and from 0.99 to 4.2 Bq m⁻² h⁻¹ for NWFP. ²²⁶Ra contents were also determined in the above samples: they ranged from 28 to 36.5 Bq kg⁻¹ in the soil samples collected from the Bahawalpur Division and from 40.9 to 51.9 Bq kg⁻¹ in those collected from NWFP, while in the sand samples they ranged from 49.2 to 215 Bq kg⁻¹ and from 22.6 to 27 Bq kg⁻¹ for the Bahawalpur Division and NWFP, respectively. The ²²⁶Ra contents of these samples were also determined using an HPGe detector, and the results of the two techniques agree within experimental errors.
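Closed-can track-detector studies like this one convert the measured radon concentration into a surface exhalation rate. As a hedged illustration of that conversion (not the paper's actual data reduction), the sketch below implements one commonly used closed-can expression, E = CλV / [A·(T + (e^(−λT) − 1)/λ)], where C is the time-integrated radon exposure in the can; all numerical inputs are illustrative placeholders.

```python
import math

RN222_LAMBDA = math.log(2) / (3.82 * 24.0)   # 222Rn decay constant, h^-1

def exhalation_rate(c_integrated, volume, area, t_hours, lam=RN222_LAMBDA):
    """Closed-can surface exhalation rate, Bq m^-2 h^-1.

    c_integrated -- time-integrated radon exposure in the can (Bq m^-3 h)
    volume, area -- free can volume (m^3) and sample surface area (m^2)
    t_hours      -- sealed exposure time (h)
    """
    # Denominator accounts for in-growth toward Rn/Ra equilibrium during T.
    growth = t_hours + (math.exp(-lam * t_hours) - 1.0) / lam
    return c_integrated * lam * volume / (area * growth)

# Illustrative numbers only: a 1 L can, 100 cm^2 sample, 3-week (504 h) exposure.
e1 = exhalation_rate(1000.0, 0.001, 0.01, 504.0)
```

Note the rate is linear in the integrated exposure C, which is why track density (proportional to C) can be calibrated directly to exhalation rate.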
System Identification of a Non-Uniformly Sampled Multi-Rate System in Aluminium Electrolysis Cells
Håkon Viumdal
2014-07-01
Standard system identification algorithms are usually designed to generate mathematical models with equidistant sampling instants that are equal for both input and output variables. Unfortunately, real industrial data sets are often disrupted by missing samples, variations of sampling rates between variables (also known as multi-rate systems), and intermittent measurements. In industries with event-based maintenance or manual operational measures, intermittent measurements are performed, leading to uneven sampling rates. Such is the case in aluminium smelters, where in addition the materials fed into the cell create even more irregularity in sampling; both measurements and feeding are mostly manually controlled. A simplified simulation of the metal level in an aluminium electrolysis cell is performed based on mass balance considerations. System identification methods based on Prediction Error Methods (PEM), such as Ordinary Least Squares (OLS), and the sub-space method combined Deterministic and Stochastic system identification and Realization (DSR) and its variants, are applied to the model of a single electrolysis cell as found in aluminium smelters. Aliasing due to large sampling intervals can be crucial in avoiding unsuitable models, but with knowledge of the system dynamics it is easier to optimize the sampling performance and hence achieve successful models. The simulation studies of molten aluminium height in the cells show that the various algorithms give results which tally well with the synthetic data sets used. System identification on a smaller data set from a real plant is also implemented in this work. Finally, some concrete suggestions are made for using these models in the smelters.
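As a minimal sketch of the PEM/OLS step mentioned above (not the DSR algorithm, and with illustrative first-order dynamics rather than the cell's mass-balance model), a discrete-time system can be identified by ordinary least squares on a regressor matrix of lagged inputs and outputs:

```python
import numpy as np

# Simulate x[k+1] = a*x[k] + b*u[k] + noise, then recover (a, b) by OLS.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5          # illustrative "true" parameters
N = 500
u = rng.standard_normal(N)          # persistently exciting input
x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = a_true * x[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# One row per sample, columns [x[k], u[k]]; target is x[k+1].
Phi = np.column_stack([x[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
a_hat, b_hat = theta
```

With equidistant samples this is routine; the paper's difficulty is precisely that missing and multi-rate samples break the neat row-per-instant structure of `Phi`.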
Blok, Chris; Jackson, Brian E; Guo, Xianfeng; de Visser, Pieter H B; Marcelis, Leo F M
2017-01-01
Growing on rooting media other than soil in situ (i.e., substrate-based growing) allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields, as transport rates of water and nutrients in water surpass those in substrate, even though the transport of oxygen may be more complex. Transport rates can only limit growth when they are below a rate corresponding to maximum plant uptake. Our first objective was to compare Chrysanthemum growth performance for three water-based growing systems with different irrigation. We compared multi-point irrigation into a pond (DeepFlow), one-point irrigation resulting in a thin film of running water (NutrientFlow), and multi-point irrigation as droplets through air (Aeroponic). The second objective was to compare press pots as propagation medium with nutrient solution as propagation medium. The comparison included DeepFlow water-rooted cuttings with the stem either 1 cm into the nutrient solution or 1 cm above the nutrient solution. Measurements included fresh weight, dry weight, length, water supply, nutrient supply, and oxygen levels. To account for differences in radiation sum received, crop performance was evaluated with Radiation Use Efficiency (RUE), expressed as dry weight over the sum of Photosynthetically Active Radiation. The reference, DeepFlow with substrate-based propagation, showed the highest RUE, even while the oxygen supply provided by irrigation was potentially growth limiting. DeepFlow with water-based propagation showed 15-17% lower RUE than the reference. NutrientFlow showed 8% lower RUE than the reference, in combination with potentially limiting irrigation supply of nutrients and oxygen. Aeroponic showed RUE levels similar to the reference, and Aeroponic had non-limiting irrigation supply of water, nutrients, and oxygen. Water-based propagation affected the subsequent
Radon exhalation rate from the soil, sand and brick samples collected from NWFP and FATA, Pakistan.
Rahman, Said; Mati, N; Matiullah; Ghauri, Badar
2007-01-01
In order to characterise building materials as an indoor radon source, knowledge of the radon exhalation rate from these materials is very important. In this regard, soil, sand and brick samples were collected from different places of the North West Frontier Province (NWFP) and Federally Administered Tribal Areas (FATA), Pakistan. The samples were processed and placed in plastic containers. NRPB radon dosemeters were installed in them at a height of 25 cm above the surface of the samples, and the containers were then hermetically sealed. After 40-80 d of exposure to radon, the CR-39 detectors were removed from the dosemeter holders and etched in 25% NaOH at 80 °C for 16 h. From the measured radon concentration values, ²²²Rn exhalation rates were determined. The exhalation rate from the soil, sand and brick samples was found to vary from 114 ± 11 to 416 ± 9 mBq m⁻² h⁻¹, 205 ± 16 to 291 ± 13 mBq m⁻² h⁻¹ and 245 ± 12 to 365 ± 11 mBq m⁻² h⁻¹, respectively.
Tabor, A; Vestergaard, C H F; Lidegaard, Ø
2009-01-01
OBJECTIVE: To assess the fetal loss rate following amniocentesis and chorionic villus sampling (CVS). METHODS: This was a national registry-based cohort study, including all singleton pregnant women who had an amniocentesis (n = 32 852) or CVS (n = 31 355) in Denmark between 1996 and 2006. Personal registration numbers of women having had an amniocentesis or a CVS were retrieved from the Danish Central Cytogenetic Registry, and cross-linked with the National Registry of Patients to determine the outcome of each pregnancy. Postprocedural fetal loss rate was defined as miscarriage or intrauterine demise before 24 weeks of gestation. RESULTS: The miscarriage rates were 1.4% (95% CI, 1.3-1.5) after amniocentesis and 1.9% (95% CI, 1.7-2.0) after CVS. The postprocedural loss rate for both procedures did not change during the 11-year study period, and was not correlated with maternal age. The number...
The impact of different sampling rates and calculation time intervals on ROTI values
Jacobsen Knut Stanley
2014-01-01
The ROTI (rate of TEC index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented, and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters on the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that ROTI values calculated using different parameter choices are strongly positively correlated. However, the ROTI values themselves are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of the high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rates is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
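The sample-rate effect described above is easy to reproduce. The sketch below computes ROTI in the usual way, as the standard deviation of the rate of TEC (ROT, in TECU/min) over a calculation interval, on a synthetic TEC series; the 1 Hz vs 30 s comparison, the noise level, and the 5 min interval are illustrative assumptions, not the paper's receiver data.

```python
import numpy as np

def roti(tec, dt_s, window_s=300.0):
    """ROTI per window: std of ROT (TECU/min) over `window_s` seconds."""
    rot = np.diff(tec) / (dt_s / 60.0)           # rate of TEC, TECU per minute
    n = int(window_s / dt_s)                     # samples per calculation interval
    out = []
    for i in range(0, len(rot) - n + 1, n):
        w = rot[i:i + n]
        out.append(np.sqrt(np.mean(w**2) - np.mean(w)**2))
    return np.array(out)

rng = np.random.default_rng(1)
t = np.arange(0, 3600.0, 1.0)                    # one hour of 1 Hz TEC
tec = 20 + 0.001 * t + 0.05 * rng.standard_normal(t.size)  # trend + noise

roti_1s = roti(tec, 1.0).mean()
roti_30s = roti(tec[::30], 30.0).mean()          # same data decimated to 30 s
```

Decimating discards the high-frequency part of the ROT spectrum, so the 30 s ROTI comes out far smaller than the 1 Hz ROTI on the same data, which is exactly the paper's caution about cross-study comparisons.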
Walker, Anthony P.; Quaife, Tristan; Van Bodegom, Peter M.; De Kauwe, Martin G.; Keenan, Trevor F.; Joiner, Joanna; Lomas, Mark R.; MacBean, Natasha; Xu, Chongang; Yang, Xiaojuan;
2017-01-01
The maximum photosynthetic carboxylation rate (Vcmax) is an influential plant trait with multiple scaling hypotheses, which is a source of uncertainty in predictive understanding of global gross primary production (GPP). Four trait-scaling hypotheses (plant functional type, nutrient limitation, environmental filtering, and plant plasticity) with nine specific implementations were used to predict global Vcmax distributions and their impact on global GPP in the Sheffield Dynamic Global Vegetation Model (SDGVM). Global GPP varied from 108.1 to 128.2 petagrams of carbon (PgC) per year, 65 percent of the range of a recent model intercomparison of global GPP. The variation in GPP propagated through to a 27 percent coefficient of variation in net biome productivity (NBP). All hypotheses produced global GPP that was highly correlated (r = 0.85-0.91) with three proxies of global GPP. Plant functional type-based nutrient limitation, underpinned by a core SDGVM hypothesis that plant nitrogen (N) status is inversely related to increasing costs of N acquisition with increasing soil carbon, adequately reproduced global GPP distributions. Further improvement could be achieved with accurate representation of water sensitivity and agriculture in SDGVM. Mismatch between environmental filtering (the most data-driven hypothesis) and GPP suggests that greater effort is needed to understand Vcmax variation in the field, particularly in northern latitudes.
Modal dispersion, pulse broadening and maximum transmission rate in GRIN optical fibers encompass a central dip in the core index profile
El-Diasty, Fouad; El-Hennawi, H. A.; El-Ghandoor, H.; Soliman, Mona A.
2013-12-01
Intermodal and intramodal dispersion represent one of the problems in graded-index multi-mode optical fibers (GRIN) used for LAN communication systems and for sensing applications. A central index dip (depression) in the core refractive-index profile may occur due to the CVD fabrication processes. The index dip may also be intentionally designed to broaden the fundamental mode field profile toward a plateau-like distribution, which has advantages for fiber-source connections, fiber amplifiers and self-imaging applications. The effect of the central core index dip on the propagation parameters of a GRIN fiber, such as intermodal dispersion, intramodal dispersion and root-mean-square broadening, is investigated. Conventional methods usually study optical signal propagation in optical fiber in terms of mode characteristics and the number of modes, but in this work multiple-beam Fizeau interferometry is proposed as an alternative methodology affording a radial approach to determine dispersion, pulse broadening and maximum transmission rate in a GRIN optical fiber having a central index dip.
Su, Yu-min; Makinia, Jacek; Pagilla, Krishna R
2008-04-01
The autotrophic maximum specific growth rate constant, μA,max, is the critical parameter for design and performance of nitrifying activated sludge systems. In literature reviews (i.e., Henze et al., 1987; Metcalf and Eddy, 1991), a wide range of μA,max values has been reported (0.25 to 3.0 d⁻¹); however, recent data from several wastewater treatment plants across North America revealed that the estimated μA,max values remained in the narrow range of 0.85 to 1.05 d⁻¹. In this study, long-term operation of a laboratory-scale sequencing batch reactor system was investigated for estimating this coefficient according to the low food-to-microorganism ratio bioassay and simulation methods, as recommended in the Water Environment Research Foundation (Alexandria, Virginia) report (Melcer et al., 2003). The estimated μA,max values using steady-state model calculations for four operating periods ranged from 0.83 to 0.99 d⁻¹. The International Water Association (London, United Kingdom) Activated Sludge Model No. 1 (ASM1) dynamic model simulations revealed that a single value of μA,max (1.2 d⁻¹) could be used, despite variations in the measured specific nitrification rates. However, the average μA,max gradually decreased during the activated sludge chlorination tests, until it reached a value of 0.48 d⁻¹ at a dose of 5 mg chlorine/(g mixed liquor suspended solids · d). Significant discrepancies between the predicted XA/YA ratios were observed. In some cases, the ASM1 predictions were approximately two times higher than the steady-state model predictions. This implies that estimating this ratio from a complex activated sludge model and using it in simple steady-state model calculations should be accepted with great caution and requires further investigation.
Polyphase Filter Banks for Embedded Sample Rate Changes in Digital Radio Front-Ends
Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter
2011-01-01
This paper presents efficient processing engines for software-defined radio (SDR) front-ends. These engines, based on a polyphase channelizer, perform arbitrary sample-rate changes, frequency selection, and bandwidth control. This paper presents an M-path polyphase filter bank based on a modified N-path polyphase filter. Such a system allows resampling by arbitrary ratios while performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. This resampling technique is based on a sliding cyclic data load interacting with cyclic-shifted coefficients. A non-maximally-decimated polyphase filter bank (where the number of data loads is not equal to the number of M subfilters) processes M subfilters in a time period that is less than or greater than the M data loads. A polyphase filter bank with five different resampling modes is used as a case study...
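A simpler relative of the non-maximally-decimated resampler described above is the basic interpolate-by-L polyphase identity: filtering a zero-stuffed signal at the high rate gives the same output as running the L phase branches of the filter at the low rate and interleaving the results, with roughly L times fewer multiplies. A minimal sketch, with an illustrative prototype filter rather than the paper's design:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 4                                    # interpolation factor
x = rng.standard_normal(64)              # low-rate input signal
h = np.hanning(33)                       # illustrative lowpass prototype
h /= h.sum()

# Direct form: zero-stuff to the high rate, then filter at the high rate.
x_up = np.zeros(L * x.size)
x_up[::L] = x
y_direct = np.convolve(x_up, h)[: L * x.size]

# Polyphase form: each phase branch h[p::L] filters x at the LOW rate;
# interleaving the branch outputs reconstructs the high-rate result.
y_poly = np.zeros(L * x.size)
for p in range(L):
    y_poly[p::L] = np.convolve(x, h[p::L])[: x.size]
```

The two outputs agree sample for sample, which is the identity the channelizer architecture builds on before adding the cyclic data-load and coefficient-shift machinery.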
Anonymous
2001-01-01
The partly linear regression model is useful in practice, but little has been done in the literature to adapt it to real data which are dependent and conditionally heteroscedastic. In this paper, the estimators of the regression components are constructed via local polynomial fitting and their large sample properties are explored. Under certain mild regularity conditions, it is shown that the estimators of the nonparametric component and its derivatives are consistent up to convergence rates which are optimal in the i.i.d. case, and that the estimator of the parametric component is root-n consistent with the same rate as for a parametric model. The technique adopted in the proof differs from that used by Hamilton and Truong under i.i.d. samples, and corrects the errors in that reference.
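For readers unfamiliar with the model, a partly linear regression takes the form y = xᵀβ + g(t) + ε. Below is a minimal sketch of a profile (Speckman-type) estimator: smooth x and y against t, regress the residuals to get β, then recover g nonparametrically. It uses a local-constant smoother for brevity where the paper uses local polynomial fitting, and i.i.d. synthetic data rather than the dependent, conditionally heteroscedastic case the paper treats; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
t = np.sort(rng.uniform(0, 1, n))        # design points of the nonparametric part
x = rng.standard_normal(n)               # parametric covariate
beta_true = 2.0
y = beta_true * x + np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(n)

def nw_smooth(t_obs, v, bandwidth):
    """Nadaraya-Watson (local constant) smooth of v against t_obs."""
    d = (t_obs[:, None] - t_obs[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)              # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

h = 0.05
x_res = x - nw_smooth(t, x, h)           # partial out the t-dependence of x
y_res = y - nw_smooth(t, y, h)           # ... and of y
beta_hat = float(x_res @ y_res / (x_res @ x_res))
g_hat = nw_smooth(t, y - beta_hat * x, h)  # nonparametric component estimate
```

With x independent of t, the residual regression is nearly ordinary least squares and β is recovered at the parametric root-n rate, illustrating the paper's claim.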
False-Negative Rate and Recovery Efficiency Performance of a Validated Sponge Wipe Sampling Method
Krauter, Paula A.; Piepel, Greg F.; Boucher, Raymond; Tezak, Matt; Amidan, Brett G.; Einfeld, Wayne
2012-01-01
Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the...
Rehling, M; Rabøl, A
1989-01-01
After an intravenous injection of a tracer that is removed from the body solely by filtration in the kidneys, the glomerular filtration rate (GFR) can be determined from its plasma clearance. The method requires a great number of blood samples but collection of urine is not needed. In the present … -acetate) was determined simultaneously. Using these clearance values as reference, the accuracy of six simplified methods was studied: five single-sample methods and one five-sample method. The standard error of estimate (SEE) of the single-sample methods ranged from 4.2 to 7.5 ml min⁻¹ using EDTA, and from 3.8 to 6.3 ml min⁻¹ using DTPA. The SEE of the five-sample method was 3.0 ml min⁻¹ (EDTA) and 3.1 ml min⁻¹ (DTPA). The single-sample methods given by Christensen & Groth (1986) and by Tauxe (1986) are recommended for daily use, as SEE was small even at low GFR values. In patients with GFR less than 80 ml min⁻¹ …
Elevated time-dependent strengthening rates observed in San Andreas Fault drilling samples
Ikari, Matt J.; Carpenter, Brett M.; Vogt, Christoph; Kopf, Achim J.
2016-09-01
The central San Andreas Fault in California is known as a creeping fault; however, recent studies have shown that it may be accumulating a slip deficit, and thus its seismogenic potential should be seriously considered. We conducted laboratory friction experiments measuring time-dependent frictional strengthening (healing) on fault zone and wall rock samples recovered during drilling at the San Andreas Fault Observatory at Depth (SAFOD), located near the southern edge of the creeping section and in the direct vicinity of three repeating microearthquake clusters. We find that for hold times of up to 3000 s, frictional healing follows a log-linear dependence on hold time and that the healing rate is very low for a sample of the actively shearing fault core, consistent with previous results. However, for longer hold times up to ∼350,000 s, the healing rate accelerates, such that the data for all samples are better described by a power-law relation. In general, samples with a higher content of phyllosilicate minerals exhibit low log-linear healing rates, and the notably clay-rich fault zone sample also exhibits strong power-law healing when longer hold times are included. Our data suggest that weak faults, such as the creeping section of the San Andreas Fault, can accumulate interseismic shear stress more rapidly than expected from previous friction data. Using the power-law dependence of frictional healing on hold time, calculations of recurrence interval and stress drop based on our data accurately match observations of discrete creep events and repeating Mw = 2 earthquakes on the San Andreas Fault.
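The distinction drawn above, log-linear healing Δμ = β·log₁₀(t_hold) versus power-law healing Δμ = A·t_hold^b, is a one-line fitting exercise, since the power law is linear in log-log space. A sketch on synthetic data (illustrative values, not the SAFOD measurements):

```python
import numpy as np

# Synthetic healing data following an exact power law d_mu = A * t**b.
t_hold = np.logspace(1, 5.5, 12)         # hold times, 10 s .. ~316,000 s
A, b = 1e-3, 0.3                         # illustrative amplitude and exponent
d_mu = A * t_hold**b

# Log-linear model: d_mu linear in log10(t). Leaves systematic curvature here.
loglin = np.polyfit(np.log10(t_hold), d_mu, 1)
sse_loglin = float(np.sum((np.polyval(loglin, np.log10(t_hold)) - d_mu) ** 2))

# Power-law model: ln(d_mu) linear in ln(t); slope recovers the exponent b.
b_hat, logA_hat = np.polyfit(np.log(t_hold), np.log(d_mu), 1)
```

On power-law data, the log-log fit recovers the exponent exactly while the log-linear fit retains residual curvature, mirroring how the long-hold SAFOD data discriminate between the two healing laws.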
Measurement of radon exhalation rate in various building materials and soil samples
Bala, Pankaj; Kumar, Vinod; Mehra, Rohit
2017-03-01
Indoor radon is considered one of the potentially dangerous radioactive elements. Common building materials and soil are the major source of this radon gas in the indoor environment. In the present study, the measurement of the radon exhalation rate in soil and building material samples from the Una and Hamirpur districts of Himachal Pradesh has been done with solid state alpha track detectors, LR-115 type-II plastic track detectors. The radon exhalation rate for the soil samples varies from 39.1 to 91.2 mBq kg⁻¹ h⁻¹ with a mean value of 59.7 mBq kg⁻¹ h⁻¹. The radium concentration of the studied area was also determined; it varies from 30.6 to 51.9 Bq kg⁻¹ with a mean value of 41.6 Bq kg⁻¹. The exhalation rate for the building material samples varies from 40.72 (sandstone) to 81.40 mBq kg⁻¹ h⁻¹ (granite) with a mean value of 59.94 mBq kg⁻¹ h⁻¹.
Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.
Habershon, Scott
2016-04-12
In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enable determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.
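The "direct simulation of the kinetic network" step can be illustrated on a toy two-step catalytic cycle, S + C → I followed by I → P + C, integrated with explicit Euler under mass-action kinetics. The mechanism, rate constants and concentrations are illustrative placeholders, not the DFT-derived hydroformylation network from the paper:

```python
# Toy catalytic cycle: substrate S binds catalyst C to form intermediate I,
# which releases product P and regenerates C. Mass-action ODEs, explicit Euler.
k1, k2 = 2.0, 1.0                 # illustrative rate constants
S, C, I, P = 1.0, 0.1, 0.0, 0.0   # initial concentrations (catalytic in C)
dt, steps = 1e-3, 50_000          # integrate to t = 50 time units

for _ in range(steps):
    r1 = k1 * S * C               # S + C -> I
    r2 = k2 * I                   # I -> P + C
    S += dt * (-r1)
    C += dt * (-r1 + r2)
    I += dt * (r1 - r2)
    P += dt * (r2)
```

Two conservation laws (total substrate S + I + P and total catalyst C + I) are preserved exactly by the update, and rerunning the loop across a grid of initial concentrations is precisely how an empirical rate law is read off a kinetic network.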
Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M
2013-04-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data.
Chiba Shigeru
2007-09-01
Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, a virtual environment that includes unstable visual images presented on a wide-field screen or a head-mounted display tends to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and by the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repetitive presentation of the same image, in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
Abdelhamid, M.; Fortes, F. J.; Fernández-Bravo, A.; Harith, M. A.; Laserna, J. J.
2013-11-01
Optical catapulting (OC) is a sampling and manipulation method that has been extensively studied in applications ranging from single cells in heterogeneous tissue samples to the analysis of explosive residues in human fingerprints. Specifically, analysis of the catapulted material by means of laser-induced breakdown spectroscopy (LIBS) offers a promising approach for the inspection of solid particulate matter. In this work, we focus our attention on the experimental parameters to be optimized for proper aerosol generation while increasing the particle density in the focal region sampled by LIBS. For this purpose we use shadowgraphy visualization as a diagnostic tool. Shadowgraphic images were acquired to study the evolution and dynamics of solid aerosols produced by OC. Aluminum silicate particles (0.2-8 μm) were ejected from the substrate using a Q-switched Nd:YAG laser at 1064 nm, while time-resolved images recorded the propagation of the generated aerosol. For LIBS analysis and shadowgraphy visualization, Q-switched Nd:YAG lasers at 1064 nm and 532 nm were employed, respectively. Several parameters, such as the time delay between pulses and the effect of laser fluence on aerosol production, were also investigated. After optimization, the particle density in the sampling focal volume increases while the aerosol sampling rate improves to ca. 90%.
Exploring the Legionella pneumophila positivity rate in hotel water samples from Antalya, Turkey.
Sepin Özen, Nevgün; Tuğlu Ataman, Şenay; Emek, Mestan
2017-03-29
The genus Legionella comprises fastidious Gram-negative bacteria widely distributed in natural waters and man-made water supply systems. Legionella pneumophila is the aetiological agent of approximately 90% of reported Legionellosis cases, and serogroup 1 is the most frequent cause of infections. Legionnaires' disease is often associated with travel and continues to be a public health concern at present. Correct water quality management practices and rapid methods for analyzing Legionella species in environmental water are key to the prevention of Legionnaires' disease outbreaks. This study aimed to evaluate the positivity rates and serotyping of Legionella species in water samples from the region of Antalya, Turkey, which is an important tourism center. During January-December 2010, a total of 1403 water samples collected from various hotels (n = 56) located in Antalya were investigated for Legionella pneumophila. All samples were screened for L. pneumophila by the culture method according to "ISO 11731-2" criteria. The culture-positive Legionella strains were serologically identified by latex agglutination test. A total of 142 Legionella pneumophila isolates were recovered from 21 (37.5%) of the 56 hotels. The overall frequency of L. pneumophila isolation from water samples was 10.1%. Serological typing of the 142 Legionella isolates by latex agglutination test revealed that strains belonging to L. pneumophila serogroups 2-14 predominated in the examined samples (85%), while strains of L. pneumophila serogroup 1 were less numerous (15%). To our knowledge, our study, with the greatest number of water samples from Turkey, demonstrates that L. pneumophila serogroups 2-14 are the most common isolates. Rapid isolation of L. pneumophila from environmental water samples is essential for the investigation of travel-related outbreaks and their possible sources. Further studies are needed to gather epidemiological data and to determine the types of L
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Davide Brunelli
2015-07-01
Full Text Available Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
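The sub-Nyquist recovery the abstract compares against can be sketched with a toy orthogonal matching pursuit (OMP) reconstruction. The dimensions, random seed, and dense Gaussian encoding matrix below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# K-sparse signal of length N observed through M < N random projections
# (the sub-Nyquist regime discussed above; sizes are illustrative).
N, M, K = 256, 96, 4
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # dense encoding matrix
y = Phi @ x                                     # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat))  # near zero when the support is found
```

With Gaussian measurement matrices and M comfortably larger than K·log(N), OMP recovers the exact support with high probability, which is what makes the energy/accuracy trade-off in the abstract possible.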
Radon exhalation rate from soil samples of South Kumaun Lesser Himalayas, India
Prasad, Yogesh; Prasad, Ganesh; Gusain, G.S. [Department of Physics, H.N.B. Garhwal University, Badshahi Thaul Campus, Tehri Garhwal 249 199 (India); Choubey, V.M. [Wadia Institute of Himalayan Geology, Dehradun 248 001 (India); Ramola, R.C. [Department of Physics, H.N.B. Garhwal University, Badshahi Thaul Campus, Tehri Garhwal 249 199 (India)], E-mail: rcramola@gmail.com
2008-08-15
Ionizing radiation exposure experienced by the general population is mainly due to indoor radon. A major part of radon comes from the top layer of the earth. The radon emanation is associated with the radon and radium content in the soil. Both field and laboratory measurements were carried out for the instantaneous and integrated radon concentration in soil-gas. The radon exhalation rate from the collected soil samples was measured using the LR-115 Type II plastic track detector. The soil-gas radon concentration was measured with the help of the radon Emanometry method. The effective radium content of the soil samples was also calculated. The correlation coefficient between the radium content in the collected soil samples and soil-gas radon from the same locations was calculated as 0.1, while it was 0.2 between the radon exhalation rate and the soil-gas radon concentration. The results show a weak positive correlation due to geological disturbance of the equilibrium conditions and the high mobility of radon in the same geological medium.
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate.
Brunelli, Davide; Caione, Carlo
2015-07-10
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
O. M. Bouzid
2012-01-01
Full Text Available High sampling frequencies in acoustic wireless sensor networks (AWSNs) are required to achieve precise sound localisation. However, they also make analysis time- and memory-intensive (i.e., huge amounts of data to be processed and more memory space to be occupied), which places a burden on the nodes' limited resources. Decreasing sampling rates below the Nyquist criterion in acoustic source localisation (ASL) applications requires development of the existing time delay estimation techniques in order to overcome the challenge of low time resolution. This work proposes using the envelope and wavelet transform to enhance the resolution of the received signals through the combination of different time-frequency contents. Enhanced signals are processed using cross-correlation in conjunction with a parabolic fit interpolation to calculate the time delay accurately. Experimental results show that using this technique, estimation accuracy was improved by almost a factor of 5 when using a 4.8 kHz sampling rate. Such a conclusion is useful for developing precise ASL without the need for any excessive sensor resources, particularly for structural health monitoring applications.
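The core estimator described above (cross-correlation followed by a parabolic fit over the correlation peak) can be sketched as follows; the pulse shape, sampling rate, and delay are made-up illustrative values, not the paper's signals:

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate the delay of y relative to x (in seconds) via
    cross-correlation with parabolic peak interpolation."""
    corr = np.correlate(y, x, mode="full")
    k = int(np.argmax(corr))                 # integer-lag peak index
    # Parabola through the peak and its two neighbours gives a
    # sub-sample correction in (-0.5, 0.5).
    if 0 < k < len(corr) - 1:
        c_m, c_0, c_p = corr[k - 1], corr[k], corr[k + 1]
        denom = c_m - 2 * c_0 + c_p
        frac = 0.5 * (c_m - c_p) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    lag = k - (len(x) - 1) + frac            # lags run from -(N-1) to N-1
    return lag / fs

# Two copies of a Gaussian pulse, the second delayed by 2.3 samples.
fs = 4800.0
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 60.0) / 4.0) ** 2)
delayed = np.exp(-0.5 * ((t - 62.3) / 4.0) ** 2)
print(estimate_delay(pulse, delayed, fs) * fs)  # close to 2.3 samples
```

Without the parabolic step the resolution is limited to one sample (1/fs); the interpolation is what recovers the fractional part, which is the low-sampling-rate challenge the abstract addresses.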
Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates
Zhenyu Mei
2014-10-01
Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.
Fast sweep-rate plastic Faraday force magnetometer with simultaneous sample temperature measurement.
Slobinsky, D; Borzi, R A; Mackenzie, A P; Grigera, S A
2012-12-01
We present a design for a magnetometer capable of operating at temperatures down to 50 mK and magnetic fields up to 15 T with integrated sample temperature measurement. Our design is based on the concept of a Faraday force magnetometer with a load-sensing variable capacitor. A plastic body allows for fast sweep rates and sample temperature measurement, and the possibility of regulating the initial capacitance simplifies the initial bridge balancing. Under moderate gradient fields of ~1 T/m our prototype performed with a resolution better than 1 × 10^-5 emu. The magnetometer can be operated either in a dc mode, or in an oscillatory mode which allows the determination of the magnetic susceptibility. We present measurements on Dy2Ti2O7 and Sr3Ru2O7 as an example of its performance.
What is an Appropriate Temporal Sampling Rate to Record Floating Car Data with a GPS?
Peter Ranacher
2016-01-01
Full Text Available Floating car data (FCD) recorded with the Global Positioning System (GPS) are an important data source for traffic research. However, FCD are subject to error, which can relate either to the accuracy of the recordings (measurement error) or to the temporal rate at which the data are sampled (interpolation error). Both errors affect movement parameters derived from the FCD, such as speed or direction, and consequently influence conclusions drawn about the movement. In this paper we combined recent findings about the autocorrelation of GPS measurement error and well-established findings from random walk theory to analyse a set of real-world FCD. First, we showed that the measurement error in the FCD was affected by positive autocorrelation. We explained why this is a quality measure of the data. Second, we evaluated four metrics to assess the influence of interpolation error. We found that interpolation error strongly affects the correct interpretation of the car’s dynamics (speed, direction), whereas its impact on the path (travelled distance, spatial location) was moderate. Based on these results we gave recommendations for recording FCD using GPS. Our recommendations concern only time-based sampling; change-based, location-based or event-based sampling are not discussed. The recommended sampling approach minimizes the effects of error on movement parameters while avoiding the collection of redundant information. This is crucial for obtaining reliable results from FCD.
Christian Damgaard
2011-12-01
Full Text Available Increasingly, the survival rates in experimental ecology are presented using odds ratios or log response ratios, but the use of ratio metrics has a problem when all the individuals have either died or survived in only one replicate. In the empirical ecological literature, the problem often has been ignored or circumvented by different, more or less ad hoc approaches. Here, it is argued that the best summary statistic for communicating ecological results of frequency data in studies with small unbalanced samples may be the mean of the posterior distribution of the survival rate. The developed approach may be particularly useful when effect size indexes, such as odds ratios, are needed to compare frequency data between treatments, sites or studies.
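The summary statistic proposed above has a closed form under a conjugate Beta prior. This is a minimal sketch assuming a uniform Beta(1, 1) prior; the paper may use a different prior, but the key point carries over: the posterior mean stays finite even when every individual in a replicate died or survived, where the odds ratio breaks down.

```python
def posterior_mean_survival(survivors, n, a=1.0, b=1.0):
    """Mean of the Beta posterior for a binomial survival rate.

    With a Beta(a, b) prior (a = b = 1 is uniform), observing
    `survivors` = k out of `n` gives posterior Beta(a + k, b + n - k),
    whose mean is (a + k) / (a + b + n).  Unlike the odds ratio,
    this is well-defined when k = 0 or k = n.
    """
    k = survivors
    return (a + k) / (a + b + n)

# All 10 individuals survived: the odds k / (n - k) is undefined
# (division by zero), but the posterior mean is well-behaved.
print(posterior_mean_survival(10, 10))  # 11/12 ≈ 0.917
print(posterior_mean_survival(0, 10))   # 1/12 ≈ 0.083
```

With the uniform prior this is just Laplace's rule of succession, (k + 1) / (n + 2), which shrinks extreme small-sample frequencies toward 1/2.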
The rate of Supernovae from the combined sample of five searches
Cappellaro, E; Tsvetkov, D Y; Bartunov, O S; Pollas, C; Evans, R; Hamuy, M
1996-01-01
To obtain new estimates of the rate of supernovae, we joined the logs of five SN searches, namely the Asiago, Crimea, Calán-Tololo and OCA photographic surveys and the visual search by Evans (the sample counts 110 SNe). We found that the most prolific galaxies are late spirals, in which most SNe are of type II (0.88 SNu). SN Ib/c are rarer than SN Ia (0.16 and 0.24 SNu, respectively), ruling out previous claims of a very high rate of SN Ib/c. We also found that the rate of SN Ia in ellipticals (0.13 SNu) is smaller than in spirals, supporting the hypothesis of different ages of the progenitor systems in early- and late-type galaxies. Finally, we estimated that even assuming that separate classes of faint SN Ia and SN II do exist (SNe 1991bg and 1987A could be the respective prototypes), the overall SN rate is raised only by 20-30%, therefore excluding that faint SNe represent the majority of SN explosions. Also, the bright SN IIn are intrinsically very rare (2 to 5% of all SN II in spirals).
VLSI Implementation of Fixed-Point Lattice Wave Digital Filters for Increased Sampling Rate
M. Agarwal
2016-12-01
Full Text Available Low complexity and high speed are the key requirements of digital filters. These filters can be realized using allpass filters. In this paper, the design and minimum-multiplier implementation of a fixed-point lattice wave digital filter (WDF) based on a three-port parallel adaptor allpass structure is proposed. Here, the second-order allpass sections are implemented with three-port parallel adaptor allpass structures. A design-level area optimization is done by converting constant multipliers into shifts and adds using canonical signed digit (CSD) techniques. The proposed implementation reduces the latency of the critical loop by reducing the number of components (adders and multipliers). Three design examples are included to analyze the effectiveness of the proposed approach. These are implemented in the Verilog HDL language and mapped to a standard cell library in a 0.18 μm CMOS process. The functionality of the implementations has been verified by applying a number of different input vectors. Results and simulations demonstrate that the proposed design method leads to an efficient lattice WDF in terms of maximum sampling frequency. The cost to pay is a small area overhead. The post-layout simulations have been done in HSPICE with CMOS transistors.
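The CSD conversion of constant multipliers into shifts and adds mentioned above can be sketched in software. This is a generic CSD routine for illustration, not the authors' hardware implementation:

```python
def to_csd(x):
    """Canonical signed-digit (CSD) form of a positive integer:
    digits in {-1, 0, +1}, least-significant first, with no two
    adjacent non-zero digits (minimal number of non-zero digits)."""
    digits = []
    while x:
        if x & 1:
            d = 2 - (x & 3)   # +1 if x % 4 == 1, -1 if x % 4 == 3
            x -= d
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits

def csd_multiply(v, digits):
    """Multiply v by the constant encoded in `digits` using only
    shifts and adds/subtracts -- the multiplierless style of a
    constant-coefficient filter tap."""
    acc = 0
    for shift, d in enumerate(digits):
        if d:
            acc += d * (v << shift)
    return acc

digits = to_csd(93)             # 93 = 1011101b: 5 ones in binary
print(digits)                   # [1, 0, -1, 0, 0, -1, 0, 1]
print(csd_multiply(7, digits))  # 7 * 93 = 651
```

Here 93 needs five adder inputs in plain binary but only four CSD digits (128 - 32 - 4 + 1), so the hardware constant multiplier shrinks accordingly.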
False-Negative Rate and Recovery Efficiency Performance of a Validated Sponge Wipe Sampling Method
Krauter, Paula; Piepel, Gregory F.; Boucher, Raymond; Tezak, Matthew S.; Amidan, Brett G.; Einfeld, Wayne
2012-02-01
Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3%, 25.6%, and 25.5%, respectively, while plastic had the lowest mean RE (9.8%). Results show roughly linear dependences of RE and FNR on surface roughness, with smoother surfaces resulting in higher mean REs and lower FNRs. REs were not influenced by the low spore concentrations tested (3.10 × 10^-3 to 1.86 CFU/cm^2). Stainless steel had the lowest mean FNR (0.123), and plastic had the highest mean FNR (0.479). The LOD90 (≥1 CFU detected 90% of the time) varied with surface material, from 0.015 CFU/cm^2 on stainless steel up to 0.039 on plastic. It may be possible to improve sampling results by considering surface roughness in selecting sampling locations and interpreting spore recovery data. Further, FNR values (calculated as a function of concentration and surface material) can be used presampling to calculate the numbers of samples for statistical sampling plans with desired performance and postsampling to calculate the confidence in characterization and clearance decisions.
False-negative rate and recovery efficiency performance of a validated sponge wipe sampling method.
Krauter, Paula A; Piepel, Greg F; Boucher, Raymond; Tezak, Matt; Amidan, Brett G; Einfeld, Wayne
2012-02-01
Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3, 25.6, and 25.5, respectively, while plastic had the lowest mean RE (9.8%). Results show roughly linear dependences of RE and FNR on surface roughness, with smoother surfaces resulting in higher mean REs and lower FNRs. REs were not influenced by the low spore concentrations tested (3.10 × 10^-3 to 1.86 CFU/cm^2). Stainless steel had the lowest mean FNR (0.123), and plastic had the highest mean FNR (0.479). The LOD90 (≥1 CFU detected 90% of the time) varied with surface material, from 0.015 CFU/cm^2 on stainless steel up to 0.039 on plastic. It may be possible to improve sampling results by considering surface roughness in selecting sampling locations and interpreting spore recovery data. Further, FNR values (calculated as a function of concentration and surface material) can be used presampling to calculate the numbers of samples for statistical sampling plans with desired performance and postsampling to calculate the confidence in characterization and clearance decisions.
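One simple way to relate FNR and LOD to concentration is a Poisson capture model: a sample is falsely negative when zero of the expected RE·C·A colony-forming units are recovered. This is an assumed model for illustration only (the study's actual statistical model may differ), and the 645 cm^2 sampled area is a hypothetical value:

```python
import math

def false_negative_rate(conc, area_cm2, recovery_eff):
    """FNR under an assumed Poisson capture model: probability that
    zero of the recovery_eff * conc * area expected CFU are recovered."""
    return math.exp(-recovery_eff * conc * area_cm2)

def lod(p_detect, area_cm2, recovery_eff):
    """Concentration at which detection probability reaches p_detect
    under the same model, e.g. p_detect = 0.9 for an LOD90-style value."""
    return -math.log(1.0 - p_detect) / (recovery_eff * area_cm2)

# Stainless steel vs plastic, using the mean REs from the abstract
# (0.481 and 0.098) over a hypothetical 645 cm^2 sampled area.
for name, re_ in [("steel", 0.481), ("plastic", 0.098)]:
    print(name, round(lod(0.9, 645.0, re_), 4), "CFU/cm^2")
```

Under this toy model the lower RE of plastic directly pushes its LOD90 above that of stainless steel, the same ordering the study reports.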
Transition path sampling with quantum/classical mechanics for reaction rates.
Gräter, Frauke; Li, Wenjin
2015-01-01
Predicting rates of biochemical reactions through molecular simulations poses a particular challenge for two reasons. First, the process involves bond formation and/or cleavage and thus requires a quantum mechanical (QM) treatment of the reaction center, which can be combined with a more efficient molecular mechanical (MM) description for the remainder of the system, resulting in a QM/MM approach. Second, reaction time scales are typically many orders of magnitude larger than the (sub-)nanosecond scale accessible by QM/MM simulations. Transition path sampling (TPS) makes it possible to efficiently sample the space of dynamic trajectories from the reactant to the product state without an additional biasing potential. We outline here the application of TPS and QM/MM to calculate rates for biochemical reactions, by means of a simple toy system. In a step-by-step protocol, we specifically refer to our implementation within the MD suite Gromacs, which we have made available to the research community, and include practical advice on the choice of parameters.
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.
Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli
2017-09-12
Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
Accretion rate of extraterrestrial {sup 41}Ca in Antarctic snow samples
Gómez-Guzmán, J.M., E-mail: jose.gomez@ph.tum.de [Technische Universität München, Fakultät für Physik, James-Franck-Strasse 1, 85748 Garching (Germany); Bishop, S.; Faestermann, T.; Famulok, N.; Fimiani, L.; Hain, K.; Jahn, S.; Korschinek, G.; Ludwig, P. [Technische Universität München, Fakultät für Physik, James-Franck-Strasse 1, 85748 Garching (Germany); Rodrigues, D. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica (Argentina)
2015-10-15
Interplanetary Dust Particles (IDPs) are small grains, generally less than a few hundred micrometers in size. Their main source is the Asteroid Belt, located at 3 AU from the Sun, between Mars and Jupiter. During their flight from the Asteroid Belt to the Earth they are irradiated by galactic and solar cosmic rays (GCR and SCR), thus radionuclides are formed, like {sup 41}Ca and {sup 53}Mn. Therefore, {sup 41}Ca (T{sub 1/2} = 1.03 × 10{sup 5} yr) can be used as a key tracer to determine the accretion rate of IDPs onto the Earth because there are no significant terrestrial sources for this radionuclide. The first step of this study was to calculate the production rate of {sup 41}Ca in IDPs accreted by the Earth during their travel from the Asteroid Belt. This production rate, used together with the {sup 41}Ca/{sup 40}Ca ratios that will be measured in snow samples from Antarctica, will be used to calculate the amount of extraterrestrial material accreted by the Earth per year. The challenges for this project are, first, that the flight time of the IDPs from the Asteroid Belt to the Earth, much longer than the {sup 41}Ca half-life, yields an early saturation of the {sup 41}Ca/{sup 40}Ca ratio, and second, the importance of selecting the correct sampling site to avoid a high influx of natural {sup 40}Ca, which would dilute the {sup 41}Ca/{sup 40}Ca ratio, the quantity measured by AMS.
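The early saturation mentioned above follows from the standard production-decay balance N(t) = (P/λ)(1 − e^(−λt)): once the irradiation time is several half-lives, N approaches the constant P/λ regardless of flight time. The production rate used here is a placeholder, not the study's computed value:

```python
import math

T_HALF = 1.03e5             # 41Ca half-life in years
LAM = math.log(2) / T_HALF  # decay constant, 1/yr

def n41(production_rate, t_years):
    """Number of 41Ca atoms after irradiation time t under a constant
    production rate P (atoms/yr): N(t) = (P/lam) * (1 - exp(-lam * t)).
    The saturation value is P/lam."""
    return production_rate / LAM * (1.0 - math.exp(-LAM * t_years))

P = 1.0                      # illustrative production rate (atoms/yr)
for t in (1e5, 5e5, 2e6):    # plausible transit times, for illustration
    frac = n41(P, t) / (P / LAM)
    print(f"t = {t:.0e} yr: {frac:.3f} of saturation")
```

After roughly a million years, typical of the journey from the Asteroid Belt, the ratio is effectively saturated, which is exactly why long flight times carry no extra information in the measured {sup 41}Ca/{sup 40}Ca ratio.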
Resolved gas kinematics in a sample of low-redshift high star-formation rate galaxies
Varidel, Matthew; Croom, Scott; Owers, Matt; Sadler, Elaine
2016-01-01
We have used integral field spectroscopy of a sample of six nearby (z~0.01-0.04), high star-formation rate (SFR~10-40 solar masses per year) galaxies to investigate the relationship between local velocity dispersion and star-formation rate on sub-galactic scales. The low redshift mitigates, to some extent, the effect of beam smearing, which artificially inflates the measured dispersion as it combines regions with different line-of-sight velocities into a single spatial pixel. We compare the parametric maps of the velocity dispersion with the Halpha flux (a proxy for local star-formation rate) and the velocity gradient (a proxy for the local effect of beam smearing). We find that, even for these very nearby galaxies, the Halpha velocity dispersion correlates more strongly with velocity gradient than with Halpha flux, implying that beam smearing still has a significant effect on the velocity dispersion measurements. We obtain a first-order non-parametric correction for the unweighted and flux-weighted mean vel...
Design of a current Mode Sample and Hold Circuit at sampling rate of 150 MS/s
Prity Yadav
2014-10-01
Full Text Available A current-mode sample-and-hold circuit implemented in 180 nm technology is presented in this paper. The major concerns of VLSI are area, power, delay and speed. Hence, we have used a MOSFET in the triode region in the proposed architecture for voltage-to-current conversion instead of the resistor used in the previously proposed circuit. The proposed circuit achieves a higher sampling frequency and greater accuracy than the previous one. The performance of the proposed circuit is depicted in the form of simulation results.
Gudavalli, Maruti Ram; DeVocht, James; Tayh, Ali; Xia, Ting
2013-01-01
Objective Quantification of chiropractic high-velocity, low-amplitude spinal manipulation (HVLA-SM) may require biomechanical equipment capable of sampling data at high rates. However, there are few studies reported in the literature regarding the minimal sampling rate required to record the HVLA-SM force-time profile data accurately and precisely. The purpose of this study was to investigate the effect of different sampling rates on the quantification of forces, durations, and rates of loading of simulated side posture lumbar spine HVLA-SM delivered by doctors of chiropractic. Methods Five doctors of chiropractic (DCs) and 5 asymptomatic participants were recruited for this study. Force-time profiles were recorded during (i) 52 simulated HVLA-SM thrusts to a force transducer placed on a force plate by 2 DCs and (ii) 12 lumbar side posture HVLA-SM on 5 participants by 3 DCs. Data sampling rate of the force plate remained the same at 1000 Hz, whereas the sampling rate of the force transducer varied at 50, 100, 200, and 500 Hz. The data were reduced using custom-written MATLAB (Mathworks, Inc, Natick, MA) and MathCad (version 15; Parametric Technologies, Natick, MA) programs and analyzed descriptively. Results The average differences in the computed durations and rates of loading are smaller than 5% between 50 and 1000 Hz sampling rates. The differences in the computed preloads and peak loads are smaller than 3%. Conclusions The small differences observed in the characteristics of force-time profiles of simulated manual HVLA-SM thrusts measured using various sampling rates suggest that a sampling rate as low as 50 to 100 Hz may be sufficient. The results are applicable to the manipulation performed in this study: manual side posture lumbar spine HVLA-SM. PMID:23790603
Gudavalli, Maruti Ram; DeVocht, James; Tayh, Ali; Xia, Ting
2013-06-01
Quantification of chiropractic high-velocity, low-amplitude spinal manipulation (HVLA-SM) may require biomechanical equipment capable of sampling data at high rates. However, there are few studies reported in the literature regarding the minimal sampling rate required to record the HVLA-SM force-time profile data accurately and precisely. The purpose of this study was to investigate the effect of different sampling rates on the quantification of forces, durations, and rates of loading of simulated side posture lumbar spine HVLA-SM delivered by doctors of chiropractic. Five doctors of chiropractic (DCs) and 5 asymptomatic participants were recruited for this study. Force-time profiles were recorded during (i) 52 simulated HVLA-SM thrusts to a force transducer placed on a force plate by 2 DCs and (ii) 12 lumbar side posture HVLA-SM on 5 participants by 3 DCs. Data sampling rate of the force plate remained the same at 1000 Hz, whereas the sampling rate of the force transducer varied at 50, 100, 200, and 500 Hz. The data were reduced using custom-written MATLAB (Mathworks, Inc, Natick, MA) and MathCad (version 15; Parametric Technologies, Natick, MA) programs and analyzed descriptively. The average differences in the computed durations and rates of loading are smaller than 5% between 50 and 1000 Hz sampling rates. The differences in the computed preloads and peak loads are smaller than 3%. The small differences observed in the characteristics of force-time profiles of simulated manual HVLA-SM thrusts measured using various sampling rates suggest that a sampling rate as low as 50 to 100 Hz may be sufficient. The results are applicable to the manipulation performed in this study: manual side posture lumbar spine HVLA-SM. Copyright © 2013 The Authors. Published by Mosby, Inc. All rights reserved.
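The finding above, that 50 to 100 Hz can suffice for a thrust lasting on the order of 150 ms, can be illustrated with a synthetic force-time profile. The half-sine pulse and all force values below are invented for illustration and are not the measured HVLA-SM profiles:

```python
import numpy as np

def thrust_profile(t, preload=50.0, peak=400.0, t0=0.5, dur=0.15):
    """Toy HVLA-SM force-time profile (illustrative numbers only):
    a constant preload plus a half-sine thrust of `dur` seconds
    starting at time t0."""
    f = np.full_like(t, preload)
    in_thrust = (t >= t0) & (t <= t0 + dur)
    f[in_thrust] += (peak - preload) * np.sin(np.pi * (t[in_thrust] - t0) / dur)
    return f

def peak_at_rate(fs, T=1.5):
    """Peak force recorded when the profile is sampled at fs Hz."""
    t = np.arange(0.0, T, 1.0 / fs)
    return thrust_profile(t).max()

ref = peak_at_rate(1000.0)          # 1000 Hz reference, as in the study
for fs in (500.0, 200.0, 100.0, 50.0):
    err = 100.0 * (ref - peak_at_rate(fs)) / ref
    print(f"{fs:5.0f} Hz: peak underestimated by {err:.2f}%")
```

Because the thrust is smooth and long relative to even a 20 ms sampling period, the peak error stays well under the few-percent differences the study reports, which is the intuition behind its 50-100 Hz recommendation.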
Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C
2017-06-14
A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100Hz) and range-limiting (to ±6g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment
Nalini Gupta
2016-01-01
Full Text Available Background: Developed countries adopted liquid-based cytology (LBC) cervical cytology, partly because of its lower proportions of unsatisfactory (U/S)/inadequate samples. This study was carried out to evaluate the effect on the rate of U/S samples after introduction of LBC in our laboratory. Materials and Methods: An audit of U/S cervical samples was performed, which included split samples (n = 1000), only conventional Pap smear (CPS) smears (n = 1000), and only LBC samples (n = 1000). The smears were reviewed by two observers independently, and adequacy of the samples was assessed as per The Bethesda System 2001. The reasons for the U/S rate in split samples were categorized into various cytologic and/or technical reasons. Results: The U/S rate was far less in only-LBC samples (1.2%) as compared to only-CPS (10.5%) cases. Cases in the satisfactory-but-limited category were also fewer in only-LBC (0.4%) as compared to only-CPS (3.2%) samples. The main reason for U/S smears in split samples was low cell count (37.2% in CPS; 58.8% in LBC). The second main reason was low cellularity with excess blood and only excess blood in CPS samples. Conclusion: There was a significant reduction of the U/S rate in LBC samples as compared to CPS samples, and the difference was statistically significant. The main cause of U/S samples in LBC was low cellularity, indicating a technical fault in sample collection. The main cause of the U/S rate in CPS was low cellularity followed by low cellularity with excess blood. Adequate training of sample takers and cytologists in the precise cell count to determine adequacy in smears can be of great help in reducing the U/S rate.
Woodruff, S P; Johnson, T R; Waits, L P
2015-07-01
Knowledge of population demographics is important for species management but can be challenging in low-density, wide-ranging species. Population monitoring of the endangered Sonoran pronghorn (Antilocapra americana sonoriensis) is critical for assessing the success of recovery efforts, and noninvasive DNA sampling (NDS) could be more cost-effective and less intrusive than traditional methods. We evaluated faecal pellet deposition rates and faecal DNA degradation rates to maximize sampling efficiency for DNA-based mark-recapture analyses. Deposition data were collected at five watering holes using sampling intervals of 1-7 days and averaged one pellet pile per pronghorn per day. To evaluate nuclear DNA (nDNA) degradation, 20 faecal samples were exposed to local environmental conditions and sampled at eight time points from one to 124 days. Average amplification success rates for six nDNA microsatellite loci were 81% for samples on day one, 63% by day seven, 2% by day 14 and 0% by day 60. We evaluated the efficiency of different sampling intervals (1-10 days) by estimating the number of successful samples, the success rate of individual identification and the laboratory cost per successful sample. Cost per successful sample increased and success and efficiency declined as the sampling interval increased. Results indicate NDS of faecal pellets is a feasible method for individual identification, population estimation and demographic monitoring of Sonoran pronghorn. We recommend a sampling interval of four to seven days, which in summer conditions (i.e., extreme heat and exposure to UV light) will achieve desired sample sizes for mark-recapture analysis while also maximizing efficiency. © 2014 John Wiley & Sons Ltd.
The Study of Suicidal Behaviors Rates in the Community Sample of Karaj City in 2005
S.K. Malakouti
2008-04-01
Introduction & Objective: Although many studies have been conducted in Iran, the importance of suicide in mental health programs makes further epidemiologic study necessary. This study addresses suicidal behavior rates in a community sample of Karaj city. Materials & Methods: Karaj, with a population of 1,000,000, was selected as the study setting. Our subjects (n = 2300) were 15 years and older and were selected by randomized sampling. The SUPRE-MISS questionnaire was employed in this survey. Results: 65% of the subjects were female, 57.2% were married, and most had a high school level of education (48%). Housewives were the most common category among occupational groups (43.3%). According to the results, the lifetime prevalence of positive history of suicidal behaviors (idea, plan, attempt) was 12.7%, 6.2% and 3.3%, respectively, and for the current year 5.7%, 2.9% and 1%. Conclusion: Suicidal behaviors in Iran, apart from suicide leading to death, have a prevalence similar to that in Western countries.
[The ICD-10 Symptom Rating (ISR): validation of the depression scale in a clinical sample].
Brandt, Wolfram Alexis; Loew, Thomas; von Heymann, Friedrich; Stadtmüller, Godehard; Georgi, Alexander; Tischinger, Michael; Strom, Frederik; Mutschler, Friederike; Tritt, Karin
2015-06-01
The ICD-10 Symptom Rating (ISR) measures the severity of psychiatric disorders with 29 items on 5 subscales as comprehensively as possible. The following syndromes are measured: depressive syndrome, anxiety syndrome, obsessive-compulsive syndrome, somatoform syndrome, eating disorder syndrome, as well as additional items that cover various mental syndromes, and an overall score. The study reports findings on the validity and sensitivity to change of the depression subscale (ISR-D). In a clinical sample of N = 949 inpatients with depression spectrum disorders, convergent validity was determined by correlation with the Beck Depression Inventory (BDI) and the subscale "depression" of the Symptom-Checklist-90-R (SCL-90-R). The high correlation between the different instruments confirms the validity of the ISR depression scale. The sensitivity to change of the ISR seems higher than that of the BDI and the SCL-90. Because of its economy and its good psychometric properties, the ISR is recommended for use in clinical samples.
Braga, Márcio F.; Morais, Cecília F.; Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2014-11-01
This paper investigates the problems of uncertain sampling rate discretisation and the networked control of uncertain time-invariant continuous-time linear systems in polytopic domains. The sampling period is assumed to be unknown but belonging to a given interval. To avoid the difficulty of dealing with the exponential of uncertain matrices, a discrete-time model is obtained by applying a Taylor series expansion of degree ℓ to the original system. The resulting discrete-time model is composed of homogeneous polynomial matrices with parameters lying in the Cartesian product of simplexes, called a multi-simplex, plus an additive norm-bounded term representing the discretisation residual error. The original continuous-time system is controlled through a communication network that introduces a time delay in the process. Linear matrix inequality relaxations that include a scalar parameter search are proposed for the design of a digital robust state feedback controller that guarantees the closed-loop stability of the networked control system. Numerical experiments are presented to illustrate the versatility of the proposed method, which can be applied to a more general class of networked control problems than the existing approaches in the literature.
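The degree-ℓ Taylor expansion used above to sidestep the exponential of uncertain matrices can be sketched numerically. The function below is a minimal illustration (assumed names; nominal matrices only, with no uncertainty or residual-error bound handling) of the zero-order-hold discretisation of x' = Ax + Bu truncated at degree ℓ:

```python
import numpy as np

def discretize_taylor(A, B, T, ell):
    # A_d ≈ sum_{k=0}^{ell} (A T)^k / k!   (truncated matrix exponential)
    # B_d ≈ (sum_{j=1}^{ell} A^{j-1} T^j / j!) B  (truncated ZOH integral)
    n = A.shape[0]
    Ad = np.zeros((n, n))
    term = np.eye(n)
    for k in range(ell + 1):
        Ad += term
        term = term @ A * T / (k + 1)   # next Taylor term of e^{AT}
    S = np.zeros((n, n))                # S ≈ ∫_0^T e^{A s} ds
    term = np.eye(n) * T
    for k in range(ell):
        S += term
        term = term @ A * T / (k + 2)
    return Ad, S @ B
```

For a scalar system the truncation converges quickly to the exact discretisation Ad = e^{aT}, Bd = (e^{aT} − 1)/a · b, which is a quick way to check the sketch.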
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10−5) of all compared methods. PMID:28182717
The Ultraviolet and Infrared Star Formation Rates of Compact Group Galaxies: An Expanded Sample
Lenkić, Laura; Tzanavaris, Panayiotis; Gallagher, Sarah C.; Desjardins, Tyler D.; Walker, Lisa May; Johnson, Kelsey E.; Fedotov, Konstantin; Charlton, Jane; Hornschemeier, Ann E.; Durrell, Pat R.; Gronwall, Caryl
2016-01-01
Compact groups of galaxies provide insight into the role of low-mass, dense environments in galaxy evolution because the low velocity dispersions and close proximity of galaxy members result in frequent interactions that take place over extended time-scales. We expand the census of star formation in compact group galaxies by Tzanavaris et al. (2010) and collaborators with Swift UVOT, Spitzer IRAC and MIPS 24 μm photometry of a sample of 183 galaxies in 46 compact groups. After correcting luminosities for the contribution from old stellar populations, we estimate the dust-unobscured star formation rate (SFRUV) using the UVOT uvw2 photometry. Similarly, we use the MIPS 24 μm photometry to estimate the component of the SFR that is obscured by dust (SFRIR). We find that galaxies which are MIR-active (MIR-'red') also have bluer UV colours, higher specific SFRs, and tend to lie in H I-rich groups, while galaxies that are MIR-inactive (MIR-'blue') have redder UV colours, lower specific SFRs, and tend to lie in H I-poor groups. We find the SFRs to be continuously distributed with a peak at about 1 M⊙ yr⁻¹, indicating this might be the most common value in compact groups. In contrast, the specific SFR distribution is bimodal, and there is a clear distinction between star-forming and quiescent galaxies. Overall, our results suggest that the specific SFR is the best tracer of gas depletion and galaxy evolution in compact groups.
Receivers for Diffusion-Based Molecular Communication: Exploiting Memory and Sampling Rate
Mosayebi, Reza; Gohari, Amin; Kenari, Masoumeh Nasiri; Mitra, Urbashi
2014-01-01
In this paper, a diffusion-based molecular communication channel between two nano-machines is considered. The effect of the amount of memory on performance is characterized, and a simple memory-limited decoder is proposed; its performance is shown to be close to that of the best possible imaginable decoder (without any restriction on the computational complexity or its functional form), using Genie-aided upper bounds. This effect is specialized for the case of Molecular Concentration Shift Keying; it is shown that a four-bit memory achieves nearly the same performance as infinite memory. Then a general class of threshold decoders is considered and shown not to be optimal for the Poisson channel with memory, unless the SNR is higher than a value specified in the paper. Another contribution is to show that receiver sampling at a rate higher than the transmission rate, i.e., a multi-read system, can significantly improve the performance. The associated decision rule for this system is shown to be a weighted sum of t...
Resolved Gas Kinematics in a Sample of Low-Redshift High Star-Formation Rate Galaxies
Varidel, Mathew; Pracy, Michael; Croom, Scott; Owers, Matt S.; Sadler, Elaine
2016-03-01
We have used integral field spectroscopy of a sample of six nearby (z ≈ 0.01-0.04) high star-formation rate (SFR ~ 10-40 M⊙ yr⁻¹) galaxies to investigate the relationship between local velocity dispersion and star-formation rate on sub-galactic scales. The low redshift mitigates, to some extent, the effect of beam smearing, which artificially inflates the measured dispersion as it combines regions with different line-of-sight velocities into a single spatial pixel. We compare the parametric maps of the velocity dispersion with the Hα flux (a proxy for local star-formation rate) and the velocity gradient (a proxy for the local effect of beam smearing). We find, even for these very nearby galaxies, that the Hα velocity dispersion correlates more strongly with velocity gradient than with Hα flux, implying that beam smearing is still having a significant effect on the velocity dispersion measurements. We obtain a first-order non-parametric correction for the unweighted and flux-weighted mean velocity dispersion by fitting a 2D linear regression model to the spaxel-by-spaxel data, where the velocity gradient and the Hα flux are the independent variables and the velocity dispersion is the dependent variable, and then extrapolating to zero velocity gradient. The corrected velocity dispersions are a factor of 1.3-4.5 and 1.3-2.7 lower than the uncorrected flux-weighted and unweighted mean line-of-sight velocity dispersion values, respectively. These corrections are larger than has been previously cited using disc models of the velocity and velocity dispersion field to correct for beam smearing. The corrected flux-weighted velocity dispersion values are σ_m ≈ 20-50 km s⁻¹.
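The spaxel-by-spaxel correction described above can be sketched as an ordinary least-squares fit. The function below is a simplified, unweighted illustration (variable names are assumptions, not the authors' code): regress dispersion on velocity gradient and Hα flux, then evaluate the fit at zero velocity gradient while keeping the mean flux dependence:

```python
import numpy as np

def corrected_dispersion(sigma, vgrad, flux):
    # Fit sigma = c0 + c1 * vgrad + c2 * flux over all spaxels.
    X = np.column_stack([np.ones_like(sigma), vgrad, flux])
    coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
    # Extrapolate to zero velocity gradient (removes the beam-smearing
    # proxy term), evaluated at the mean flux of the map.
    return coef[0] + coef[2] * np.mean(flux)
```

On noiseless synthetic data of the form sigma = c0 + c1·vgrad + c2·flux, the function recovers c0 + c2·mean(flux) exactly, which is a convenient sanity check.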
Blok, Chris; Jackson, Brian E.; Guo, Xianfeng; Visser, De Pieter H.B.; Marcelis, Leo F.M.
2017-01-01
Growing on rooting media other than soils in situ -i.e., substrate-based growing- allows for higher yields than soil-based growing as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields as transport rates of
[Use of C-arm CT for improving the hit rate for selective blood sampling from adrenal veins].
Georgiades, C; Kharlip, J; Valdeig, S; Wacker, F K; Hong, K
2009-09-01
Primary hyperaldosteronism is the most common curable cause of hypertension with a prevalence of up to 12% among patients with hypertension. Selective blood sampling from adrenal veins is considered the diagnostic gold standard. However, it is underutilized due to the high technical failure rate. The use of C-arm CT during the sampling procedure can reduce or even eliminate this failure rate. If adrenal vein sampling is augmented by native C-arm CT to check for the correct catheter position, the technical success rate increases substantially. General use of this technique will result in correct diagnosis and treatment for patients with primary hyperaldosteronism.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
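As a rough illustration of the alternating-optimization idea (not the authors' exact algorithm; the names and the ridge-style update are assumptions), a half-quadratic-style solver for a regularized linear predictor under an MCC-like objective alternates between Gaussian-kernel sample weights and a weighted ridge update:

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.1, iters=100):
    # Alternate: p_i = exp(-r_i^2 / (2 sigma^2)) downweights samples with
    # large residuals (noisy/outlying labels), then solve a weighted
    # ridge problem for the predictor parameter w.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        r = y - X @ w
        p = np.exp(-r**2 / (2.0 * sigma**2))
        A = X.T @ (p[:, None] * X) + lam * np.eye(d)
        w = np.linalg.solve(A, X.T @ (p * y))
    return w
```

Because corrupted samples receive near-zero weight, the fit stays close to the clean-data solution even when a few labels are grossly wrong, which is the robustness property the abstract attributes to the correntropy criterion.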
Sander, Pia; Mouritsen, L; Andersen, J Thorup
2002-01-01
OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume as a part of a "minimal care" assessment programme for women with urinary incontinence in detecting clinically significant bladder emptying problems. MATERIAL AND METHODS: […] female urinary incontinence. Thus, primary health care providers can assess women based on simple guidelines without expensive equipment for assessment of urine flow rate and residual urine.
Radon exhalation rate for phosphate rocks samples using alpha track detectors
Hesham A. Yousef
2016-01-01
Solid state nuclear track detectors are used in very broad fields of technical applications and are successfully applied in different areas of environmental physics and geophysics. Radon concentration and surface exhalation rate for phosphate samples from El-Sebaeya and Abu-Tartur, Egypt, were measured using nuclear track detectors of types CR-39 and LR-115. The average values of radon concentration are 12711.03 and 10925.02 Bq m⁻³ in the El-Sebaeya area using CR-39 and LR-115 detectors, respectively. Also, the average values of radon concentration are 15824.16 and 13601.48 Bq m⁻³ in the Abu-Tartur area using CR-39 and LR-115 detectors, respectively. From the obtained results we can conclude that the average values of radon concentration in Abu-Tartur are higher than in El-Sebaeya. The present study is important for detecting any harmful radiation and can be used as reference information to assess any changes in the radioactive background level in the surrounding environment.
The binary fraction, separation distribution, and merger rate of white dwarfs from the SPY sample
Maoz, Dan
2016-01-01
From a sample of spectra of 439 white dwarfs (WDs) from the ESO-VLT Supernova-Ia Progenitor surveY (SPY), we measure the maximal changes in radial velocity (DRVmax) between epochs (generally two epochs, separated by up to 470 d), and model the observed DRVmax statistics via Monte-Carlo simulations, to constrain the population characteristics of double WDs (DWDs). The DWD fraction among WDs is fbin=0.103+/-0.017 (1-sigma, random) +/-0.015 (systematic), in the separation range ~<4 AU within which the data are sensitive to binarity. Assuming the distribution of binary separation, a, is a power law, dN/da ~ a^alpha, at the end of the last common-envelope phase and the start of solely gravitational-wave-driven binary evolution, the constraint by the data is alpha=-1.4+/-0.4 (1-sigma). If these parameters extend to small separations, the implied Galactic WD merger rate per unit stellar mass is R_merge=1.4e-13 to 1.3e-11 /yr/Msun (2-sigma), with a likelihood-weighted mean of R_merge=(7.3+/-2.7)e-13 /yr/Msun (1-sigm...
Schiefelbein, Sarah; Fröhlich, Alexander; John, Gernot T; Beutler, Falco; Wittmann, Christoph; Becker, Judith
2013-08-01
Dissolved oxygen plays an essential role in aerobic cultivation especially due to its low solubility. Under unfavorable conditions of mixing and vessel geometry it can become limiting. This, however, is difficult to predict and thus the right choice for an optimal experimental set-up is challenging. To overcome this, we developed a method which allows a robust prediction of the dissolved oxygen concentration during aerobic growth. This integrates newly established mathematical correlations for the determination of the volumetric gas-liquid mass transfer coefficient (kLa) in disposable shake-flasks from the filling volume, the vessel size and the agitation speed. Tested for the industrial production organism Corynebacterium glutamicum, this enabled a reliable design of culture conditions and allowed to predict the maximum possible cell concentration without oxygen limitation.
Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A
2016-04-01
Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase such as the dim light melatonin onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in the study; they were required to sleep on a fixed baseline schedule for a week, after which they visited the laboratory for saliva collection in dim light. DLMOs were derived in two ways: one from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling-rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.
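The Bland-Altman agreement analysis used above reduces to the mean of the paired differences (the bias) and its 95% limits of agreement. A minimal sketch, assuming paired DLMO times expressed in decimal hours:

```python
import numpy as np

def bland_altman(a, b):
    # a, b: paired measurements (e.g., DLMOs from 30-min and 60-min
    # sampling). Returns (bias, (lower limit, upper limit)).
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Agreement within ± 1 h, as defined in the abstract, would then correspond to the limits of agreement falling inside that interval.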
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to establish reliable guide values for heart rate (HF) and blood pressure (RR) with reference to defined submaximal exertion, considering age, gender and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. For interpretation, data from 28 women and 59 men were finally used. We found gender differences for HF and RR. Further, we noted significant correlations between HF and age as well as between RR and body mass at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HF and RR at submaximal exercise levels.
Rates and risks for prolonged grief disorder in a sample of orphaned and widowed genocide survivors
Jacob Nadja
2010-07-01
Background: The concept of Prolonged Grief Disorder (PGD) has been defined in recent years by Prigerson and co-workers, who have developed and empirically tested consensus and diagnostic criteria for PGD. Using these most recent criteria defining PGD, the aim of this study was to determine rates of and risks for PGD in survivors of the 1994 Rwandan genocide who had lost a parent and/or the husband before, during or after the 1994 events. Methods: The PG-13 was administered to 206 orphans or half-orphans and to 194 widows. A regression analysis was carried out to examine risk factors for PGD. Results: 8.0% (n = 32) of the sample met criteria for PGD, at an average of 12 years post-loss. All but one person had faced multiple losses, and the majority indicated that their grief-related loss was due to violent death (70%). Grief was predicted mainly by time since the loss, the violent nature of the loss, the severity of symptoms of posttraumatic stress disorder (PTSD) and the importance given to religious/spiritual beliefs. By contrast, gender, age at the time of bereavement, bereavement status (widow versus orphan), the number of different types of losses reported and participation in the funeral ceremony did not impact the severity of prolonged grief reactions. Conclusions: A significant portion of the interviewed sample continues to experience grief over interpersonal losses, and unresolved grief may endure over time if not addressed by clinical intervention. Severity of grief reactions may be associated with a set of distinct risk factors. Subjects who lose someone through violent death seem to be at special risk, as they have to deal with both the loss experience as such and the traumatic aspects of the loss. Symptoms of PTSD may hinder the completion of the mourning process. Religious beliefs may facilitate the mourning process and help to find meaning in the loss. These aspects need to be considered in the treatment of PGD.
Sarah Dee Geiger
2012-01-01
Reduced sleep has been found to be associated with increased risk of diabetes mellitus, hypertension, cardiovascular disease (CVD), and mortality. Self-rated health (SRH) has been shown to be a predictor of CVD and mortality. However, study of the association between insufficient sleep and SRH is limited. We examined participants >18 years of age (n = 377,160) from a representative, cross-sectional survey (2008 BRFSS). Self-reported insufficient sleep in the previous 30 days was categorized into six groups. The outcome was poor SRH. We calculated odds ratios (OR) with 95% confidence intervals (CI) for increasing categories of insufficient rest/sleep, taking zero days of insufficient sleep as the referent category. We found a positive association between increasing categories of insufficient sleep and poor SRH, independent of relevant covariates. In the multivariable-adjusted model, compared to 0 days of insufficient sleep, the OR (95% CI) of poor SRH was 1.03 (0.97-1.10) for 1-6 days, 1.45 (1.34-1.57) for 7-13 days, 2.12 (1.97-2.27) for 14-20 days, 2.32 (2.09-2.58) for 21-29 days, and 2.71 (2.53-2.90) for 30 days of insufficient sleep in the prior 30 days (P-trend < 0.0001). In a nationally representative sample, increasing categories of insufficient sleep were associated with poor SRH.
Henriksen, Ulrik Lütken; Kanstrup, Inge-Lis; Henriksen, Jens Henrik Sahl
2013-01-01
Abstract Background and aim. From a clinical point of view determination of glomerular filtration rate (clearance) is important. The aim of the present study was to compare the one-sample clearance to reference multiple-sample (51)Cr-EDTA clearance in consecutively referred children suspected of ...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic knee extension training was performed 3 × week⁻¹ for 10 weeks. Maximal isometric strength (+11.2%) and RFD (measured from 0-30/50/100/200 ms, respectively; +10.5%-20.5%) increased after 10 weeks (P […] training mode). Peak EMG amplitude and rate of EMG rise were not significantly altered with training or detraining. Subjects with below-median normalized RFD (RFD/MVC) at 0 weeks significantly increased RFD after 5 and 10 weeks of training, which was associated with increased neuromuscular activity. Subjects who maintained their higher RFD after detraining […]
Thornley, John H M; Parsons, Anthony J
2014-02-07
Treating resource allocation within plants, and between plants and associated organisms, is essential for plant, crop and ecosystem modelling. However, it is still an unresolved issue. It is also important to consider quantitatively when it is efficient and to what extent a plant can invest profitably in a mycorrhizal association. A teleonomic model is used to address these issues. A six state-variable model giving exponential growth is constructed. This represents carbon (C), nitrogen (N) and phosphorus (P) substrates with structure in shoot, root and mycorrhiza. The shoot is responsible for uptake of substrate C, the root for substrates N and P, and the mycorrhiza also for substrates N and P. A teleonomic goal, maximizing proportional growth rate, is solved analytically for the allocation fractions. Expressions allocating new dry matter to shoot, root and mycorrhiza are derived which maximize growth rate. These demonstrate several key intuitive phenomena concerning resource sharing between plant components and associated mycorrhizae. For instance, if root uptake rate for phosphorus is equal to that achievable by mycorrhiza and without detriment to root uptake rate for nitrogen, then this gives a faster growing mycorrhizal-free plant. However, if root phosphorus uptake is below that achievable by mycorrhiza, then a mycorrhizal association may be a preferred strategy. The approach offers a methodology for introducing resource sharing between species into ecosystem models. Applying teleonomy may provide a valuable short-term means of modelling allocation, avoiding the circularity of empirical models, and circumventing the complexities and uncertainties inherent in mechanistic approaches. However it is subjective and brings certain irreducible difficulties with it.
Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.;
2015-01-01
With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing … through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA, because the inferred rates in such data sets reflect a rate closer to the mutation rate than to the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences between short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully …
Yue Kan
2015-06-01
Accurate acoustic source localization at a low sampling rate (less than 10 kHz) is still a challenging problem for small portable systems, especially for a multitasking micro-embedded system. A modification of the generalized cross-correlation (GCC) method with up-sampling (US) theory is proposed and defined as the US-GCC method, which can improve the accuracy of the time delay of arrival (TDOA) and source location at a low sampling rate. In this work, through the US operation, an input signal with a certain sampling rate can be converted into another signal with a higher frequency. Furthermore, the optimal interpolation factor for the US operation is derived according to localization computation time and the standard deviation (SD) of target location estimations. On the one hand, simulation results show that absolute errors of the source locations based on the US-GCC method with an interpolation factor of 15 are approximately 1/15- to 1/12-times those based on the GCC method, when the initial sampling rate of both methods is 8 kHz. On the other hand, a simple and small portable passive acoustic source localization platform composed of a five-element cross microphone array has been designed and set up in this paper. Experiments on the established platform, which accurately locates a three-dimensional (3D) near-field target at a low sampling rate, demonstrate that the proposed method is workable.
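The core of the US-GCC idea — up-sample both channels, then take the cross-correlation peak to get a sub-sample TDOA estimate — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the interpolator here is plain linear interpolation, and the function names (`upsample`, `us_gcc_tdoa`) are our own.

```python
import numpy as np

def upsample(x, factor):
    """Up-sample by linear interpolation (a simple stand-in for the
    paper's US operation; the actual interpolator may differ)."""
    n = len(x)
    t_new = np.arange((n - 1) * factor + 1) / factor
    return np.interp(t_new, np.arange(n), x)

def us_gcc_tdoa(x, y, factor):
    """Estimate the delay of y relative to x, in original-rate samples,
    from the cross-correlation peak of the up-sampled signals."""
    xu = upsample(x, factor) - np.mean(x)
    yu = upsample(y, factor) - np.mean(y)
    corr = np.correlate(yu, xu, mode="full")
    lag = np.argmax(corr) - (len(xu) - 1)   # lag in up-sampled steps
    return lag / factor                     # back to original-rate samples
```

With an interpolation factor of 10, a fractional delay of 2.5 low-rate samples is recovered to sub-sample accuracy, whereas plain GCC at the original rate could only resolve it to the nearest integer.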
L. Ocola
2008-01-01
Post-disaster reconstruction management of urban areas requires timely information on the ground response microzonation to strong levels of ground shaking in order to minimize the rebuilt-environment vulnerability to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance) and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_{MS} as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_{MS} scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places where geotechnical data and predominant ground frequency information were available. The observed and computed peak acceleration values at nearby sites agree well.
A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines
Fischer, Asja; Igel, Christian
2015-01-01
Sampling from restricted Boltzmann machines (RBMs) is done by Markov chain Monte Carlo (MCMC) methods. The faster the convergence of the Markov chain, the more efficiently high-quality samples can be obtained. This is also important for robust training of RBMs, which usually relies …
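For intuition, a minimal parallel-tempering Gibbs sampler for a small binary RBM looks as follows. This is a textbook-style sketch under our own conventions (visible-unit free energy for swap decisions, `betas` in increasing order with the target chain at beta = 1), not code from the paper.

```python
import numpy as np

def free_energy(v, W, b, c):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + (v W)_j)); p(v) ~ exp(-F(v))."""
    return -(v @ b) - np.sum(np.log1p(np.exp(c + v @ W)))

def pt_sample_rbm(W, b, c, betas, n_sweeps, rng):
    """Parallel tempering for a binary RBM: one block-Gibbs sweep per
    tempered chain, then Metropolis swaps between neighbouring
    temperatures. Returns visible samples from the beta = betas[-1] chain."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    K, (nv, nh) = len(betas), W.shape
    v = rng.integers(0, 2, size=(K, nv)).astype(float)
    out = []
    for _ in range(n_sweeps):
        for k, beta in enumerate(betas):
            h = (rng.random(nh) < sigmoid(beta * (c + v[k] @ W))).astype(float)
            v[k] = (rng.random(nv) < sigmoid(beta * (b + W @ h))).astype(float)
        for k in range(K - 1):
            # swap acceptance: exp((beta_{k+1}-beta_k) * (F(v_{k+1})-F(v_k)))
            d = (betas[k + 1] - betas[k]) * (
                free_energy(v[k + 1], W, b, c) - free_energy(v[k], W, b, c))
            if rng.random() < min(1.0, np.exp(d)):
                v[[k, k + 1]] = v[[k + 1, k]]
        out.append(v[-1].copy())
    return np.array(out)
```

On a 2-visible, 2-hidden toy RBM the sampler's empirical visible distribution can be checked against the exact Boltzmann marginal by enumeration.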
It is generally accepted that monitoring wells must be purged to access formation water to obtain "representative" ground water quality samples. Historically, anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water …
Accounting for short samples and heterogeneous experience in rating crop insurance
Julia I. Borman; Barry K. Goodwin; Keith H. Cobel; Thomas O. Knight; Rod. Rejesus
2013-01-01
This paper is an academic inquiry into rating issues confronting the US Federal Crop Insurance program, stemming from changes in participation rates as well as from the weighting of data to reflect longer-run weather patterns.
Entropy rates of low-significance bits sampled from chaotic physical systems
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-10-01
We examine the entropy of low-significance bits in analog-to-digital measurements of chaotic dynamical systems. We find the partition of measurement space corresponding to low-significance bits has a corrugated structure. Using simulated measurements of a map and experimental data from a circuit, we identify two consequences of this corrugated partition. First, entropy rates for sequences of low-significance bits more closely approach the metric entropy of the chaotic system, because the corrugated partition better approximates a generating partition. Second, accurate estimation of the entropy rate using low-significance bits requires long block lengths as the corrugated partition introduces more long-term correlation, and using only short block lengths overestimates the entropy rate. This second phenomenon may explain recent reports of experimental systems producing binary sequences that pass statistical tests of randomness at rates that may be significantly beyond the metric entropy rate of the physical source.
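The block-entropy estimator underlying such measurements is simple: estimate H(L) from empirical frequencies of length-L words and take the conditional entropy h_L = H(L) - H(L-1) as the rate estimate. The sketch below is our own illustration of that estimator, not the paper's code; the abstract's point is that this estimate converges slowly (needs large L) when the partition introduces long-range correlation.

```python
import numpy as np
from collections import Counter

def block_entropy(bits, L):
    """Shannon entropy (in bits) of the empirical length-L block distribution."""
    blocks = Counter(tuple(bits[i:i + L]) for i in range(len(bits) - L + 1))
    p = np.array(list(blocks.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_rate(bits, L):
    """Conditional block-entropy estimate h_L = H(L) - H(L-1); short
    blocks overestimate the rate in the presence of long-range correlation."""
    return block_entropy(bits, L) - block_entropy(bits, L - 1)
```

For i.i.d. fair-coin bits the estimate approaches 1 bit/symbol; for a period-2 sequence it is essentially zero.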
Giarratana, Filippo; Muscolino, Daniele; Beninati, Chiara; Ziino, Graziella; Giuffrida, Alessandro; Panebianco, Antonio
2016-11-21
R(+)-limonene (LMN) is the major aromatic compound in essential oils obtained from oranges, grapefruits, and lemons. Improving preservation techniques to reduce the growth and activity of spoilage microorganisms in foods is crucial to increase their shelf life and to reduce losses due to spoilage. The aim of this work is to evaluate the effect of LMN on the shelf life of fish fillets. Its effectiveness was preliminarily investigated in vitro against 60 strains of Specific Spoilage Organisms (SSOs) and then on gilt-head sea bream fillets stored at 2±0.5°C for 15 days under vacuum. LMN showed a good inhibitory effect against the tested SSO strains. On gilt-head sea bream fillets, LMN effectively inhibited the growth of SSOs, and its use resulted in a shelf-life extension of ca. 6-9 days for treated fillets compared to the control samples. The addition of LMN to Sparus aurata fillets gave them a distinctive lemon-like smell and taste that panellists found pleasant. Its use contributed to a considerable reduction of fish spoilage, given that the fillets treated with LMN were still sensorially acceptable after 15 days of storage. LMN may be used as an effective antimicrobial to reduce microbial growth and improve the shelf life of fresh gilt-head sea bream fillets.
Kemmler, Wolfgang; Schliffka, Rebecca; Mayhew, Jerry L; von Stengel, Simon
2010-07-01
We evaluated the effect of whole-body electromyostimulation (WB-EMS) during dynamic exercises over 14 weeks on anthropometric, physiological, and muscular parameters in postmenopausal women. Thirty women (64.5 +/- 5.5 years) with experience in physical training (>3 years) were randomly assigned either to a control group (CON, n = 15) that maintained their general training program (2 x 60 min.wk of endurance and dynamic strength exercise) or to an electromyostimulation group (WB-EMS, n = 15) that additionally performed a 20-minute WB-EMS training (2 x 20 min.10 d). Resting metabolic rate (RMR) determined from spirometry was selected to indicate muscle mass. In addition, body circumferences, subcutaneous skinfolds, strength, power, and dropout and adherence values were assessed. Resting metabolic rate was maintained in WB-EMS (-0.1 +/- 4.8 kcal.h) and decreased in CON (-3.2 +/- 5.2 kcal.h, p = 0.038); although group differences were not significant (p = 0.095), there was a moderately strong effect size (ES = 0.62). Sum of skinfolds (28.6%) and waist circumference (22.3%) significantly decreased in WB-EMS (p = 0.001, ES = 1.37 and 1.64, respectively), whereas both parameters increased in CON (1.4 and 0.1%, respectively). Isometric strength changes of the trunk extensors and leg extensors differed significantly (p < or = 0.006) between WB-EMS and CON (9.9% vs. -6.4%, ES = 1.53; 9.6% vs. -4.5%, ES = 1.43, respectively). In summary, adjunct WB-EMS training significantly exceeds the effect of isolated endurance and resistance type exercise on fitness and fatness parameters. Further, we conclude that for elderly subjects unable or unwilling to perform dynamic strength exercises, electromyostimulation may be a smooth alternative to maintain lean body mass, strength, and power.
Eduardo Marcel Fernandes Nascimento
2011-08-01
The objective of this study was to analyze the heart rate (HR) profile plotted against incremental workloads (IWL) during a treadmill test using three mathematical models [linear, linear with 2 segments (Lin2), and sigmoidal], and to determine the best model for the identification of the HR threshold that could be used as a predictor of the ventilatory thresholds (VT1 and VT2). Twenty-two men underwent a treadmill incremental test (retest group: n=12) at an initial speed of 5.5 km.h-1, with increments of 0.5 km.h-1 at 1-min intervals until exhaustion. HR and gas exchange were continuously measured and subsequently converted to 5-s and 20-s averages, respectively. The best model was chosen based on the residual sum of squares and mean square error. The HR/IWL ratio was better fitted with the Lin2 model in the test and retest groups (p0.05). During a treadmill incremental test, the HR/IWL ratio seems to be better fitted with a Lin2 model, which makes it possible to determine the HR threshold that coincides with VT1.
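A two-segment ("broken-stick") linear model of HR against workload can be fitted by a least-squares grid search over the breakpoint, with the breakpoint read off as the HR threshold. This is a hedged sketch of that idea: the paper's actual fitting procedure and selection criteria may differ, and the function name is our own.

```python
import numpy as np

def fit_two_segment(x, y):
    """Continuous two-segment linear fit y = a + b*x + c*max(0, x - brk).
    Grid-searches the breakpoint over interior data points and returns
    (breakpoint, coefficients [a, b, c], SSE) with minimal SSE."""
    best = None
    for brk in x[2:-2]:                     # keep a few points per segment
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - brk)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[2]:
            best = (float(brk), coef, sse)
    return best
```

On synthetic noiseless data whose slope changes at a workload of 10, the recovered breakpoint is exact.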
Analysis of parallel optical sampling rate and ADC requirements in digital coherent receivers
Lorences Riesgo, Abel; Galili, Michael; Peucheret, Christophe
2012-01-01
We comprehensively assess analog-to-digital converter requirements in coherent digital receiver schemes with parallel optical sampling. We determine the electronic requirements in accordance with the properties of the free-running local oscillator …
Gomez-Paccard, Miriam; Osete, Maria Luisa; Chauvin, Annick; Pérez-Asensio, Manuel; Jimenez-Castillo, Pedro
2014-05-01
Available European data indicate that during the past 2500 years there have been periods of rapid geomagnetic intensity fluctuations interspersed with periods of little change. The challenge now is to describe these rapid changes precisely. Given the difficulty of obtaining precisely dated heated materials for a high-resolution description of past geomagnetic field intensity changes, new high-quality archeomagnetic data from archeological heated materials found in well-defined superposed stratigraphic units are particularly valuable. In this work we report the archeomagnetic study of several groups of ceramic fragments from southeastern Spain that belong to 14 superposed stratigraphic levels corresponding to a surface no bigger than 3 m by 7 m. Between four and eight ceramic fragments were selected per stratigraphic unit. The ages of the pottery fragments range from the second half of the 7th to the 11th centuries. The dates were established by three radiocarbon dates and by archeological/historical constraints, including typological comparisons and well-controlled stratigraphic constraints. Between two and four specimens per pottery fragment were studied. The classical Thellier and Thellier method, including pTRM checks and TRM anisotropy and cooling rate corrections, was used to estimate paleointensities at the specimen level. All accepted results correspond to well-defined single components of magnetization going toward the origin and to high-quality paleointensity determinations. From these experiments nine new high-quality mean intensities have been obtained. The new data provide an improved description of the sharp, abrupt intensity changes that took place in this region between the 7th and the 11th centuries. The results confirm that several rapid intensity changes (of about 15-20 µT/century) took place in Western Europe during the recent history of the Earth.
Jamil, K.; All, S.; Iqbal, M.; Qureshi, A.A.; Khan, H.A. [Pinstech, Islamabad (Pakistan). Radiation Physics Division
1998-11-01
This paper describes research conducted to quantify the radionuclides present in coal samples from various coal mines in the Punjab and Balochistan provinces of Pakistan. A high-purity Ge-detector-based gamma spectrometer was used. The maximum activity concentrations for Ra-226, Th-232 and K-40 were found to be 31.4 ± 3.0, 32.7 ± 3.2 and 21.4 ± 5.0 Bq kg^-1, respectively. A theoretical model to compute the external gamma-ray dose rate from a coal-mine surface was developed. Monte Carlo simulation was employed to compute the required mass attenuation coefficients corresponding to the various gamma-ray energies from Ra-226, Th-232, their progeny and K-40 present in the coal samples. In addition, the effective thickness of the coal slab for self-absorption was also computed using the Monte Carlo N-Particle (MCNP) transport code. The computed external gamma-ray dose rate was found to be well below the dose-rate limits for occupational workers as well as for the general population.
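As a rough cross-check on such activity concentrations, the widely used UNSCEAR (2000) dose-conversion coefficients give the outdoor absorbed gamma dose rate in air above a soil-like source. This is only an illustrative shortcut, not the paper's Monte Carlo slab model, which uses a different geometry.

```python
def absorbed_dose_rate(a_ra226, a_th232, a_k40):
    """Outdoor absorbed gamma dose rate in air (nGy/h) at 1 m above an
    infinite soil-like source, from activity concentrations in Bq/kg,
    using the UNSCEAR (2000) coefficients. Illustrative only: the paper
    models a coal-mine surface with MCNP, a different geometry."""
    return 0.462 * a_ra226 + 0.604 * a_th232 + 0.0417 * a_k40
```

Plugging in the paper's maximum concentrations (31.4, 32.7 and 21.4 Bq/kg) gives roughly 35 nGy/h, of the same order as typical background and consistent with the conclusion that the dose rate is far below occupational limits.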
Assoumani, Azziz; Margoum, Christelle; Chataing, Sophie; Guillemain, Céline; Coquery, Marina
2014-03-14
Passive sampling represents a cost-effective approach and is more representative than grab sampling for the determination of contaminant concentrations in freshwaters. In this study, we performed the calibration of a promising tool, passive stir bar sorptive extraction (SBSE), which has previously shown good performance for semi-quantitative monitoring of pesticides in a field study. We determined the sampling rates and lag-phases of 18 moderately hydrophobic to hydrophobic agricultural pesticides (log Kow from 2.18 …). Sampling rates were between 1.3 and 121 mL d(-1), with satisfactory RSD for most pesticides (9-47%) and poor repeatability for 3 hydrophobic pesticides (59-83%). Lag-phases for all target pesticides were shorter than 2 h, demonstrating the efficiency of passive SBSE for the integration of transient concentration peaks of these contaminants in surface waters. The role of flow velocity in pesticide uptake was investigated, and we assumed water boundary layer-controlled mass transfer for 5 pesticides with log Kow > 3.3. Among these pesticides, we selected fenitrothion to evaluate its elimination, along with its deuterated analogue. Results showed 82% elimination of both compounds over the 7-day experiment and isotropic exchange for fenitrothion, making fenitrothion-d6 a promising PRC candidate for in situ applications. Copyright © 2014 Elsevier B.V. All rights reserved.
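In the linear-uptake regime, a passive sampler's sampling rate Rs is simply the slope of accumulated analyte mass versus time divided by the (constant) water concentration. A minimal sketch under that assumption, ignoring the lag-phase the authors quantify separately; the function name and units are ours:

```python
import numpy as np

def sampling_rate(t_days, mass_ng, c_water_ng_per_mL):
    """Estimate Rs (mL/d) as slope(mass vs. time) / water concentration,
    valid only in the linear-uptake regime of the sampler."""
    slope = np.polyfit(t_days, mass_ng, 1)[0]   # ng per day
    return slope / c_water_ng_per_mL            # mL per day
```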
Montiel, Cecilia; Pena, Joaquin A.; Montiel-Barbero, Isabel; Polanczyk, Guilherme
2008-01-01
A total of 1,535 4-12 year-old children were screened with the Conners' rating scales, followed by diagnostic confirmation by the diagnostic interview schedule for children-IV-parent version. The prevalence of ADHD was estimated to be 10.03%, and only 3.9% of children had received medication for the treatment of ADHD symptoms. Prevalence rates and…
Brown, Samuel M; Tate, M Quinn; Jones, Jason P; Kuttler, Kathryn G; Lanspa, Michael J; Rondina, Matthew T; Grissom, Colin K; Mathews, V J
2015-10-01
To determine whether variability of coarsely sampled heart rate and blood pressure early in the course of severe sepsis and septic shock predicts successful resuscitation, defined as vasopressor independence at 24 hours after admission. In an observational study of patients admitted with severe sepsis or septic shock from 2009 to 2011 to either of 2 intensive care units (ICUs) at a tertiary-care hospital, in whom blood pressure was measured via an arterial catheter, we sampled heart rate and blood pressure every 30 seconds over the first 6 hours of ICU admission and calculated the coefficient of variability of those measurements. The primary outcome was vasopressor independence at 24 hours; the secondary outcome was 28-day mortality. We studied 165 patients, of whom 97 (59%) achieved vasopressor independence at 24 hours. Overall, 28-day mortality was 15%. Significant predictors of vasopressor independence at 24 hours included the coefficient of variation of heart rate, age, Acute Physiology and Chronic Health Evaluation II score, the number of increases in vasopressor dose, mean vasopressin dose, mean blood pressure, and the time-pressure integral of mean blood pressure less than 60 mm Hg. Lower sampling frequencies (up to once every 5 minutes) did not affect the findings. Increased variability of coarsely sampled heart rate was associated with vasopressor independence at 24 hours after controlling for possible confounders. Sampling frequencies of once in 5 minutes may be similar to once in 30 seconds. © The Author(s) 2014.
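The central statistic — the coefficient of variation of a coarsely sampled vital sign — and the down-sampling comparison are easy to reproduce. A sketch with our own helper names:

```python
import numpy as np

def coeff_variation(x):
    """Coefficient of variation: sample SD (ddof=1) divided by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def downsample(x, every):
    """Keep every k-th observation; e.g. every=10 turns a 30-second
    series into a 5-minute series, as in the sensitivity analysis."""
    return np.asarray(x)[::every]
```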
Kline, Gregory A; So, Benny; Dias, Valerian C; Harvey, Adrian; Pasieka, Janice L
2013-07-01
"Successful" adrenal vein catheterization in primary aldosteronism (PA) is often defined by a ratio of >3:1 of cortisol in the adrenal vein vs the inferior vena cava. Non-use of corticotropin (ACTH) during sampling may increase the apparent failure rate of adrenal vein catheterization due to lower cortisol levels. A retrospective study was performed on all patients with confirmed unilateral PA between June 2005 and August 2011. Adrenal vein sampling (AVS) included simultaneous bilateral baseline samples with repeat sampling 15 minutes after intravenous infusion of 250 μg of Cortrosyn (ACTH-S). Successful catheter placement was judged as adrenal cortisol:IVC cortisol of >3:1, applied to both baseline and ACTH-S samples and lateralization of aldosteronism was judged as normalized aldosterone/cortisol (A/C) ratio >3 times the contralateral A/C ratio. In ACTH-S samples, 94% of right-sided catheterizations were biochemically successful with 100% success on the left. Among baseline samples, only 47% of right- and 44% of left-sided samples met the 3:1 cortisol criteria. However, 95% of apparent "failed" baseline cortisol sets still showed lateralization of A/C ratios that matched the ultimate pathology. Non-ACTH-stimulated samples may be incorrectly judged as failed catheter placement when a 3:1 ratio is used. ACTH-stimulated sampling is the preferred means to confirm catheterization during AVS.
Mahur, A K; Kumar, Rajesh; Mishra, Meena; Sengupta, D; Prasad, Rajendra
2008-03-01
Coal is a technologically important material used for power generation. Its cinder (fly ash) is used in the manufacture of bricks, sheets and cement, and in landfilling. Coal and its by-products often contain significant amounts of radionuclides, including uranium, which is the ultimate source of the radioactive gas radon. The burning of coal and the subsequent atmospheric emission cause the redistribution of toxic radioactive trace elements in the environment. In the present study, radon exhalation rates in coal and fly ash samples from the thermal power plants at Kolaghat (W.B.) and Kasimpur (U.P.) have been measured using the sealed-can technique with LR-115 type II detectors. The activity concentrations of 238U, 232Th, and 40K in the samples from the Kolaghat power station were also measured. It is observed that the radon exhalation rate from Kolaghat fly ash samples is higher than from coal samples, and that the activity concentration of radionuclides in fly ash is enhanced after the combustion of coal. Fly ash samples from Kasimpur show no appreciable change in radon exhalation. Radiation doses from the fly ash samples have been estimated from the radon exhalation rates and radionuclide concentrations.
SUN Wenbin
2016-11-01
In order to improve the accuracy of matching algorithms for low-frequency trajectory data (sampling interval greater than 1 minute), this paper proposes a novel matching algorithm termed HMDP-Q (History Markov Decision Processes Q-learning), based on reinforcement learning over historic trajectories. First, we extract historic trajectory data with an incremental matching algorithm as a historical reference, and filter the trajectory dataset using the historical reference, the shortest trajectory, and reachability. Then we model the map-matching process as a Markov decision process and build a reward function from the deflected distance between trajectory points and historic trajectories. The largest reward value of the Markov decision process is calculated with the reinforcement-learning algorithm and yields the optimal matching between trajectory and road. Finally, we validate the algorithm in experiments with a city's floating-car data. The results show that the algorithm improves the accuracy of matching trajectory data to the road network: matching accuracy is 89.2% at a 1-minute low-frequency sampling interval, and 61.4% when the sampling interval is 16 minutes. Compared with IVMM (Interactive Voting-based Map Matching), HMDP-Q has higher matching accuracy and computing efficiency; in particular, at a 16-minute sampling interval, HMDP-Q improves the matching accuracy by 26%.
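The reinforcement-learning core of such an approach is the tabular Q-learning update. The sketch below is illustrative only: in HMDP-Q the "state" would be a trajectory point, the "action" a candidate road segment, and the reward something that decreases with the deflected distance to the historic trajectories; the names and the reward form here are our assumptions, not the paper's exact definitions.

```python
import numpy as np

def deflection_reward(d):
    """Illustrative reward that decreases with deflected distance d."""
    return 1.0 / (1.0 + d)

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q
```

Repeated over the filtered candidate set, the greedy action per state (the road segment maximizing Q) gives the matched path.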
A Meta-Analysis of Questionnaire Response Rates in Military Samples
2007-03-01
Associations between supportive leadership and employees self-rated health in an occupational sample
Schmidt, B.; Loerbroks, A.; Herr, R.M.; Wilson, M.G.; Jarczok, M.N.; Litaker, D.; Mauss, D.; Bosch, J.A.; Fischer, J.E.
2014-01-01
Background: Protecting the health of the work force has become an important issue in public health research. Purpose: This study aims to explore potential associations between supportive leadership style (SLS), an aspect of leadership behavior, and self-rated health (SRH) among employees. Method: We
New comparison of psychological meaning of colors in samples and objects with semantic ratings
Lee, Tien-Rein
2002-06-01
In color preference and color-meaning research, color chips are widely used as stimuli. Are the meanings of isolated color chips generalizable to contextualized colors? According to Taft (1996), few significant differences exist between chip and object ratings for the same color. A similar survey was performed on 192 college students. This article reports the results of a study comparing semantic ratings of color applied to a variety of familiar objects. The objects were a cup, T-shirt, sofa, car, notebook, and MP3 player, all images representing familiar objects of daily life. Subjects rated a set of 16 color chips against 6 bipolar, 7-step semantic differential scales. The scales consisted of beautiful-ugly, soft-hard, warm-cool, elegant-vulgar, loud-discreet, and masculine-feminine. Analyses performed on the data indicated that, unlike Taft's 1996 findings, significant differences existed between chip and object ratings for the same color on every scale. The results have implications for the use of color chips in color planning, suggesting that the results of earlier color-meaning research with chips do not generalize. In general, a color judged to be beautiful, elegant and warm when presented as a chip is not necessarily judged so when applied to the surface of an object such as a cup, T-shirt, sofa, or car.
A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip
Timoc, C.; Tran, T.; Wongso, J.
1992-01-01
This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
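The trick that makes such channel counts and clock rates feasible is 1-bit quantization: each sample is reduced to its sign, the multiplier collapses to an XNOR gate, and each lag channel is just a counter. A behavioural sketch of one such correlator (our own illustration, not the chip's actual logic):

```python
def one_bit_correlate(a, b, max_lag):
    """Hard-clipped cross-correlation: sign bits only, XNOR as the
    multiplier (+1 on agreement, -1 on disagreement), one counter per lag."""
    bits_a = [1 if s >= 0 else 0 for s in a]
    bits_b = [1 if s >= 0 else 0 for s in b]
    out = []
    for lag in range(max_lag + 1):
        n = len(bits_a) - lag
        agree = sum(1 for i in range(n) if bits_a[i + lag] == bits_b[i])
        out.append(2 * agree - n)          # counter value for this lag
    return out
```

In practice the clipped correlation is subsequently corrected (e.g. via the van Vleck relation) to recover the analog correlation coefficient.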
Cadamuro, Janne; von Meyer, Alexander; Wiedemann, Helmut; Klaus Felder, Thomas; Moser, Franziska; Kipman, Ulrike; Haschke-Becher, Elisabeth; Mrazek, Cornelia; Simundic, Ana-Maria
2016-12-01
Hemolyzed samples are one of the most challenging preanalytical issues in laboratory medicine. The causes of hemolyzed specimens are manifold and include phlebotomy practices. Educational interventions, as well as reducing the number of people involved in blood collection, are claimed to improve sample quality. In our hospital, 70 junior doctors were in charge of routine phlebotomy until 2012, when this task was shifted to 874 nurses after preceding training in phlebotomy and preanalytics. Our aim was to evaluate the impact of this training and of the increased number of people involved on sample quality. The hemolysis index (HI) of 43,875 samples was measured before (n=21,512) and after (n=22,363) the switch of blood collection responsibilities. Differences in overall hemolysis rates and in the proportion of plasma samples with a concentration of free hemoglobin (fHb) above 0.5 g/L and 1 g/L were calculated. The overall HI, as well as the percentage of samples with an fHb concentration >0.5 g/L, decreased after the responsibility for phlebotomy changed. The rate of samples with an fHb concentration >1 g/L remained unchanged. Hemolysis rates were reduced when phlebotomy tasks passed from untrained physicians to trained nursing staff. We therefore conclude that the number of people performing phlebotomy seems to play a minor role compared to the effect of standardized training. However, whether a reduction in the number of people involved in blood collection could further improve sample quality remains to be investigated in future studies.
Bulimia and anorexia nervosa in winter depression: lifetime rates in a clinical sample.
Gruber, N P; Dilsaver, S C
1996-01-01
Symptoms of an eating disorder (hyperphagia, carbohydrate craving, and weight gain) are characteristic of wintertime depression. Recent findings suggest that the severity of bulimia nervosa peaks during fall and winter months, and that persons with this disorder respond to treatment with bright artificial light. However, the rates of eating disorders among patients presenting for the treatment of winter depression are unknown. This study was undertaken to determine these rates among 47 patients meeting the DSM-III-R criteria for major depression with a seasonal pattern. All were evaluated using standard clinical interviews and the Structured Clinical Interview for DSM-III-R. Twelve (25.5%) patients met the DSM-III-R criteria for an eating disorder. Eleven patients had onset of mood disorder during childhood or adolescence. The eating disorder followed the onset of the mood disorder. Clinicians should inquire about current and past symptoms of eating disorders when evaluating patients with winter depression. PMID:8580121
Sawyer, Jean; Chon, HeeCheong; Ambrose, Nicoline G.
2008-01-01
The purpose of the present study was (1) to determine whether speech rate, utterance length, and grammatical complexity (number of clauses and clausal constituents per utterance) influenced stuttering-like disfluencies as children became more disfluent at the end of a 1200-syllable speech sample [Sawyer, J., & Yairi, E. (2006). "The effect of…
Smidt, Dorte; Torpet, Lis Andersen; Nauntofte, Birgitte;
2010-01-01
Smidt D, Torpet LA, Nauntofte B, Heegaard KM, Pedersen AML. Associations between labial and whole salivary flow rates, systemic diseases and medications in a sample of older people. Community Dent Oral Epidemiol 2010; 38: 422-435. © 2010 John Wiley & Sons A/S Abstract - Objective: To investigate ...
Ying-Ying Wang
2015-06-01
The identification difficulties for a dual-rate Hammerstein system lie in two aspects. First, the identification model of the system contains products of the parameters of the nonlinear block and the linear block, so a standard least squares method cannot be applied directly to the model; second, the traditional single-rate discrete-time Hammerstein model cannot be used as the identification model for the dual-rate sampled system. In order to solve these problems, by combining the polynomial transformation technique with the key variable separation technique, this paper converts the Hammerstein system into a dual-rate linear regression model in all parameters (a linear-in-parameter model) and proposes a recursive least squares algorithm to estimate the parameters of the dual-rate system. Simulation results verify the effectiveness of the proposed algorithm.
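Once the system is in linear-in-parameter form y = phi·theta + e, the recursive least squares update is standard. A generic sketch — the paper's specific regressor construction from the polynomial transformation and key-variable separation is omitted, and the names are ours:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least squares update for y = phi . theta + e,
    with forgetting factor lam (lam = 1 gives ordinary RLS)."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # parameter update
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta, P
```

On noiseless synthetic data the estimate converges to the true parameter vector within a few dozen samples.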
Mindell, Jennifer S; Giampaoli, Simona; Goesswald, Antje; Kamtsiuris, Panagiotis; Mann, Charlotte; Männistö, Satu; Morgan, Karen; Shelton, Nicola J; Verschuren, W M Monique; Tolonen, Hanna
2015-10-05
Health examination surveys (HESs), carried out in Europe since the 1950s, provide valuable information about the general population's health for health monitoring, policy making, and research. Survey participation rates, important for representativeness, have been falling. International comparisons are hampered by differing exclusion criteria and definitions of non-response. Information was collected about seven national HESs in Europe conducted in 2007-2012. These surveys can be classified into household-based and individual-based surveys, depending on the sampling frames used. Participation rates of randomly selected adult samples were calculated for four survey modules using standardised definitions and compared by sex, age group, geographical area within countries, and over time, where possible. All surveys covered residents, not just citizens; three countries excluded those in institutions. In two surveys, physical examinations and blood sample collection were conducted at the participants' homes; the others took place at examination clinics. Recruitment processes varied considerably between surveys. Monetary incentives were used in four surveys. Initial participation rates at ages 35-64 were 45% in the Netherlands (phase II), 54% in Germany (new and previous participants combined), 55% in Italy, and 65% in Finland. In Ireland, England and Scotland, household participation rates were 66%, 66% and 63%, respectively. Participation rates were generally higher in women and increased with age. Almost all participants attending an examination centre agreed to all modules, but surveys conducted in the participants' homes showed falling response at each successive stage. Participation rates in most primate cities were substantially lower than the national average. Age-standardized response rates to blood pressure measurement among those aged 35-64 in Finland, Germany and England fell by 0.7-1.5 percentage points p.a. between 1998-2002 and 2010-2012. Longer trends in some countries show a more …
Voisin, Thomas; Grapes, Michael D; Zhang, Yong; Lorenzo, Nicholas; Ligda, Jonathan; Schuster, Brian; Weihs, Timothy P
2016-12-05
To model mechanical properties of metals at high strain rates, it is important to visualize and understand their deformation at the nanoscale. Unlike post mortem Transmission Electron Microscopy (TEM), which allows one to analyze defects within samples before or after deformation, in situ TEM is a powerful tool that enables imaging and recording of deformation and the associated defect motion during mechanical loading. Unfortunately, all current in situ TEM mechanical testing techniques are limited to quasi-static strain rates. In this context, we are developing a new test technique that utilizes a rapid straining stage and the Dynamic TEM (DTEM) at the Lawrence Livermore National Laboratory (LLNL). The new straining stage can load samples in tension at strain rates as high as 4×10^3/s using two piezoelectric actuators operating in bending, while the DTEM at LLNL can image in movie mode with a time resolution as short as 70 ns. Given that the piezoelectric actuators are limited in force, speed, and displacement, we have developed a method for fabricating TEM samples with small cross-sectional areas, to increase the applied stresses, and short gage lengths, to raise the applied strain rates and to limit the areas of deformation. In this paper, we present our effort to fabricate such samples from bulk materials. The new sample preparation procedure combines femtosecond laser machining and ion milling to obtain 300 µm wide samples with control of both the size and location of the electron transparent area, as well as the gage cross-section and length.
[Base-rate estimates for negative response bias in a workers' compensation claim sample].
Merten, T; Krahi, G; Krahl, C; Freytag, H W
2010-09-01
Against the background of a growing interest in symptom validity assessment in European countries, new data on base rates of negative response bias are presented. A retrospective analysis of forensic psychological evaluations was performed on data from 398 patients with workers' compensation claims. Forty-eight percent of all patients scored below cut-off in at least one symptom validity test (SVT), indicating possible negative response bias. However, different SVTs appear to have differing potential to identify negative response bias. The data point to the need to use modern methods for checking data validity in civil forensic contexts.
Schmidt, Burkhard; Loerbroks, Adrian; Herr, Raphael M; Wilson, Mark G; Jarczok, Marc N; Litaker, David; Mauss, Daniel; Bosch, Jos A; Fischer, Joachim E
2014-01-01
Protecting the health of the work force has become an important issue in public health research. This study aims to explore potential associations between supportive leadership style (SLS), an aspect of leadership behavior, and self-rated health (SRH) among employees. We drew on cross-sectional data from a cohort of industrial workers (n = 3,331), collected in 2009. We assessed employees' ratings of supportive, employee-oriented leadership behavior at their job, their SRH, and work stress as measured by the effort-reward model and scales measuring demands, control, and social support. Logistic regression estimated odds ratios (ORs) and corresponding 95 % confidence intervals (CIs) for the association between the perception of poor SLS and poor SRH controlling for work-related stress and other confounders. Sensitivity analyses stratified models by sex, age, and managerial position to test the robustness of associations. Perception of poor SLS was associated with poor SRH [OR 2.39 (95 % CI 1.95-2.92)]. Although attenuated following adjustment for measures of work-related stress and other confounders [OR 1.60 (95 % CI 1.26-2.04)], the magnitude, direction, and significance of this association remained robust in stratified models in most subgroups. SLS appears to be relevant to health in the workplace. Leadership behavior may represent a promising area for future research with potential for promoting better health in a large segment of the adult population.
LU; Zudi
2001-01-01
[1] Engle, R. F., Granger, C. W. J., Rice, J. et al., Semiparametric estimates of the relation between weather and electricity sales, Journal of the American Statistical Association, 1986, 81: 310. [2] Heckman, N. E., Spline smoothing in partly linear models, Journal of the Royal Statistical Society, Ser. B, 1986, 48: 244. [3] Rice, J., Convergence rates for partially splined models, Statistics & Probability Letters, 1986, 4: 203. [4] Chen, H., Convergence rates for parametric components in a partly linear model, Annals of Statistics, 1988, 16: 136. [5] Robinson, P. M., Root-n-consistent semiparametric regression, Econometrica, 1988, 56: 931. [6] Speckman, P., Kernel smoothing in partial linear models, Journal of the Royal Statistical Society, Ser. B, 1988, 50: 413. [7] Cuzick, J., Semiparametric additive regression, Journal of the Royal Statistical Society, Ser. B, 1992, 54: 831. [8] Cuzick, J., Efficient estimates in semiparametric additive regression models with unknown error distribution, Annals of Statistics, 1992, 20: 1129. [9] Chen, H., Shiau, J. H., A two-stage spline smoothing method for partially linear models, Journal of Statistical Planning & Inference, 1991, 27: 187. [10] Chen, H., Shiau, J. H., Data-driven efficient estimators for a partially linear model, Annals of Statistics, 1994, 22: 211. [11] Schick, A., Root-n consistent estimation in partly linear regression models, Statistics & Probability Letters, 1996, 28: 353. [12] Hamilton, S. A., Truong, Y. K., Local linear estimation in partly linear models, Journal of Multivariate Analysis, 1997, 60: 1. [13] Mills, T. C., The Econometric Modeling of Financial Time Series, Cambridge: Cambridge University Press, 1993, 137. [14] Engle, R. F., Autoregressive conditional heteroscedasticity with estimates of United Kingdom inflation, Econometrica, 1982, 50: 987. [15] Bera, A. K., Higgins, M. L., A survey of ARCH models: properties of estimation and testing, Journal of Economic
Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.
Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A
2016-12-27
Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational tests we developed to quantify the accuracy of variant frequency estimation.
Convergence Rates of Wavelet Estimators in Semiparametric Regression Models Under NA Samples
Hongchang HU; Li WU
2012-01-01
Consider the following heteroscedastic semiparametric regression model: y_i = x_i^T β + g(t_i) + σ_i e_i, 1 ≤ i ≤ n, where {x_i, 1 ≤ i ≤ n} are random design points, the errors {e_i, 1 ≤ i ≤ n} are negatively associated (NA) random variables, σ_i^2 = h(u_i), and {u_i} and {t_i} are two nonrandom sequences on [0,1]. Some wavelet estimators of the parametric component β, the nonparametric component g(t), and the variance function h(u) are given. Under some general conditions, the strong convergence rate of these wavelet estimators is O(n^{-1/3} log n). Hence our results extend those obtained under independent random error settings.
Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear
2011-07-01
A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)
Fischer, Herbert Felix; Tritt, Karin; Klapp, Burghard F; Fliege, Herbert
2010-08-01
The ICD-10-Symptom-Rating (ISR) is a self-rating questionnaire for patients. According to its conceptualization, the instrument was developed to closely represent the syndrome structure of the ICD-10 while assessing the extent of psychological distress an individual suffers from. The results of different factor analyses testing the postulated syndrome structure as well as item and scale characteristics are reported here. Data was collected from a consecutive sample of 1 057 psychosomatic patients of the University Hospital Charité Berlin. Evaluation of the dimensional structure of the questionnaire included exploratory and confirmatory factor analyses each computed with a randomized half of the sample. Multi-Sample-Analyses with different subgroups of the sample were performed to test the stability of the factor structure. The individual factors were constituted by the postulated syndrome units of the ICD-10 involving a high and uniform distribution of accounted variance. They also proved themselves satisfactorily stable over the different subsamples. The scales showed a high degree of internal consistency with relatively small gender and age effects, while psychological disorders had a large effect on the means of the scales. Taking a perspective of test theory, the ICD-10-Symptom-Rating is in accordance with the syndrome structure of the ICD-10 and suitable for the assessment of psychological symptoms. Other aspects pertaining to the reliability and validity of the ISR remain to be proven in future research.
Isner-Horobeti, M E; Charton, A; Daussin, F; Geny, B; Dufour, S P; Richard, R
2014-05-01
Microbiopsies are increasingly used as an alternative to the standard Bergström technique for skeletal muscle sampling. The potential impact of these two different procedures on mitochondrial respiration rate is unknown. The objective of this work was to compare microbiopsies versus the Bergström procedure with respect to mitochondrial respiration in skeletal muscle. 52 vastus lateralis muscle samples were obtained from 13 anesthetized pigs, either with a Bergström [6 gauge (G)] needle or with microbiopsy needles (12, 14, 18G). Maximal mitochondrial respiration (V GM-ADP) was assessed using an oxygraphic method on permeabilized fibers. The weight of the muscle samples and V GM-ADP decreased with increasing needle gauge. A positive nonlinear relationship was observed between the weight of the muscle sample and the level of maximal mitochondrial respiration (r = 0.99), with microbiopsy needles yielding lower maximal respiration than the standard Bergström needle. Therefore, the higher the gauge (i.e. the smaller the size) of the microbiopsy needle, the lower the maximal rate of respiration. Microbiopsies of skeletal muscle underestimate the maximal mitochondrial respiration rate, and this finding needs to be highlighted for adequate interpretation and comparison with literature data.
Van Sistine, Angela; Salzer, John J.; Sugden, Arthur; Giovanelli, Riccardo; Haynes, Martha P.; Janowiecki, Steven; Jaskot, Anne E.; Wilcots, Eric M.
2016-06-01
The ALFALFA Hα survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA Hα contains 1555 galaxies with distances between ˜20 and ˜100 Mpc. We have obtained continuum-subtracted narrowband Hα images and broadband R images for each galaxy, creating one of the largest homogeneous sets of Hα images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD[M ⊙ yr-1 Mpc-3]) = -1.747 ± 0.018 (random) ±0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.
Evaluation of coral pathogen growth rates after exposure to atmospheric African dust samples
Lisle, John T.; Garrison, Virginia H.; Gray, Michael A.
2014-01-01
Laboratory experiments were conducted to assess if exposure to atmospheric African dust stimulates or inhibits the growth of four putative bacterial coral pathogens. Atmospheric dust was collected from a dust-source region (Mali, West Africa) and from Saharan Air Layer masses over downwind sites in the Caribbean [Trinidad and Tobago and St. Croix, U.S. Virgin Islands (USVI)]. Extracts of dust samples were used to dose laboratory-grown cultures of four putative coral pathogens: Aurantimonas coralicida (white plague type II), Serratia marcescens (white pox), Vibrio coralliilyticus, and V. shiloi (bacteria-induced bleaching). Growth of A. coralicida and V. shiloi was slightly stimulated by dust extracts from Mali and USVI, respectively, but unaffected by extracts from the other dust sources. Lag time to the start of log-growth phase was significantly shortened for A. coralicida when dosed with dust extracts from Mali and USVI. Growth of S. marcescens and V. coralliilyticus was neither stimulated nor inhibited by any of the dust extracts. This study demonstrates that constituents from atmospheric dust can alter growth of recognized coral disease pathogens under laboratory conditions.
Psychometric properties of the self-rating organization scale with adult samples
Takeda T
2016-10-01
Toshinobu Takeda,1 Yui Tsuji,2 Mizuho Ando3 1Department of Clinical Psychology, Ryukoku University, Kyoto, 2School of Psychological Science, Health Sciences University of Hokkaido, Sapporo, 3Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Japan Abstract: Organization skills are defined broadly to include both material and temporal features. Given its symptoms and neurobiological features, attention-deficit hyperactivity disorder (ADHD) should be susceptible to impairment in organization. A valid organization scale is imperative to assess and intervene with individuals with ADHD. However, there is no validated organization scale in Japan. Drawing on existing scales and clinical experience, the self-rating organization scale (SOS) was developed and tested for its psychometric properties with 1,017 adults and students, including 47 adults with ADHD. Additionally, cutoffs for disorganization were set for clinical utility. Three factors (materials disorganization, temporal disorganization, and mess) were extracted by factor analyses. The indices of reliability and validity of the SOS were acceptable. The factor "mess" could reflect a unique aspect of the Japanese environment. Further study is needed to enhance the clinical utility of the SOS. Keywords: adult, attention-deficit hyperactivity disorder, organization, scale
Piedmont, R L; McCrae, R R; Riemann, R; Angleitner, A
2000-03-01
Because of the potential for bias and error in questionnaire responding, many personality inventories include validity scales intended to correct biased scores or identify invalid protocols. The authors evaluated the utility of several types of validity scales in a volunteer sample of 72 men and 106 women who completed the Revised NEO Personality Inventory (NEO-PI-R; P. T. Costa & R. R. McCrae, 1992) and the Multidimensional Personality Questionnaire (MPQ; A. Tellegen, 1978/1982) and were rated by 2 acquaintances on the observer form of the NEO-PI-R. Analyses indicated that the validity indexes lacked utility in this sample. A partial replication (N = 1,728) also failed to find consistent support for the use of validity scales. The authors illustrate the use of informant ratings in assessing protocol validity and argue that psychological assessors should limit their use of validity scales and seek instead to improve the quality of personality assessments.
Pan, Chong; Wang, Hongping; Wang, Jinjun
2013-05-01
This work mainly deals with the proper orthogonal decomposition (POD) time coefficient method used for extracting phase information from quasi-periodic flow. The mathematical equivalence between this method and the traditional cross-correlation method is firstly proved. A two-dimensional circular cylinder wake flow measured by time-resolved particle image velocimetry within a range of Reynolds numbers is then used to evaluate the reliability of this method. The effect of both the sampling rate and Reynolds number on the identification accuracy is finally discussed. It is found that the POD time coefficient method provides a convenient alternative for phase identification, whose feasibility in low-sampling-rate measurement has additional advantages for experimentalists.
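The POD time coefficient idea above can be sketched on synthetic data; this is an illustrative reconstruction under my own assumptions (a travelling-wave "flow" and the standard result that, for periodic flow, the leading POD mode pair acts like a sine/cosine pair so the phase is atan2 of the first two time coefficients), not the authors' code.

```python
import numpy as np

# Synthetic quasi-periodic "flow": rows = time snapshots, columns = space.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)           # time samples (two periods)
x = np.linspace(0, 1, 64)                    # spatial points
snapshots = np.sin(x[None, :] * 2 * np.pi - t[:, None])
snapshots -= snapshots.mean(axis=0)          # subtract the temporal mean

# POD via SVD: columns of U (scaled by S) are the time coefficients.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
a1, a2 = U[:, 0] * S[0], U[:, 1] * S[1]      # first two POD time coefficients

# For a periodic flow the (a1, a2) pair traces a circle, so the
# instantaneous phase of each snapshot is simply:
phase = np.arctan2(a2, a1)
unwrapped = np.unwrap(phase)
drift = np.polyfit(t, unwrapped, 1)[0]       # phase speed; magnitude ~ 1 here
```

The sign of `drift` is arbitrary (POD modes are defined up to sign), but its magnitude matches the oscillation frequency of the synthetic wave, which is the sense in which the method is equivalent to cross-correlation phase identification.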
Eaton, Samuel H; Cashy, John; Pearl, Jeffrey A; Stein, Daniel M; Perry, Kent; Nadler, Robert B
2013-12-01
We sought to examine a large nationwide (United States) sample of emergency department (ED) visits to determine data related to utilization and costs of care for urolithiasis in this setting. Nationwide Emergency Department Sample was analyzed from 2006 to 2009. All patients presenting to the ED with a diagnosis of upper tract urolithiasis were analyzed. Admission rates and total cost were compared by region, hospital type, and payer type. Numbers are weighted estimates that are designed to approximate the total national rate. An average of 1.2 million patients per year were identified with the diagnosis of urolithiasis out of 120 million visits to the ED annually. Overall average rate of admission was 19.21%. Admission rates were highest in the Northeast (24.88%), among teaching hospitals (22.27%), and among Medicare patients (42.04%). The lowest admission rates were noted for self-pay patients (9.76%) and nonmetropolitan hospitals (13.49%). The smallest increases in costs over time were noted in the Northeast. Total costs were least in nonmetropolitan hospitals; however, more patients were transferred to other hospitals. When assessing hospital ownership status, private for-profit hospitals had similar admission rates compared with private not-for-profit hospitals (16.6% vs 15.9%); however, costs were 64% and 48% higher for ED and inpatient admission costs, respectively. Presentation of urolithiasis to the ED is common, and is associated with significant costs to the medical system, which are increasing over time. Costs and rates of admission differ by region, payer type, and hospital type, which may allow us to identify the causes for cost discrepancies and areas to improve efficiency of care delivery.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
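The "transmit constraint uncertainty through the MaxEnt map" idea can be sketched for the classic die example; the Gaussian width, sample count, and clipping bounds below are my own illustrative choices, not values from the paper.

```python
import numpy as np

def maxent_die(mean, lo=-5.0, hi=5.0, tol=1e-10):
    """Classic MaxEnt distribution on {1..6} with a fixed mean constraint.
    p_i is proportional to exp(lam * i); lam is found by bisection, since
    the constrained mean is monotonically increasing in lam."""
    i = np.arange(1, 7)
    def m(lam):
        w = np.exp(lam * i)
        return (i * w).sum() / w.sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if m(mid) < mean:
            lo = mid
        else:
            hi = mid
    w = np.exp(0.5 * (lo + hi) * i)
    return w / w.sum()

# Classic MaxEnt: treat the observed mean 4.5 as exact.
p_classic = maxent_die(4.5)

# Generalized view (sketch of the paper's idea): the constraint value is
# uncertain, here Gaussian around 4.5; pushing samples through the MaxEnt
# map yields a distribution over MaxEnt probability vectors.
rng = np.random.default_rng(1)
means = np.clip(rng.normal(4.5, 0.1, 200), 1.01, 5.99)  # keep means feasible
p_samples = np.array([maxent_die(m) for m in means])
p_post = p_samples.mean(axis=0)   # posterior-mean probabilities
```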
Mikkelsen, Kaare B; Lund, Torben E
2013-01-01
The aim of this study is to investigate the effects of sampling rate on Hurst exponents derived from Blood Oxygenation Level Dependent functional Magnetic Resonance Imaging (BOLD fMRI) resting state time series. fMRI measurements were performed on 2 human subjects and a selection of gel phantoms. From these, Hurst exponents were calculated. It was found that low sampling rates induced non-trivial exponents at sharp material transitions, and that Hurst exponents of human measurements had a strong TR-dependence. The findings are compared to theoretical considerations regarding the fractional Gaussian noise model and resampling, and it is found that the implications are problematic. This result should have a direct influence on the way future studies of low-frequency variation in BOLD fMRI data are conducted, especially if the fractional Gaussian noise model is considered. We recommend either using a different model (examples of such are referenced in the conclusion), or standardizing experimental procedures along an optimal sampling rate.
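A minimal sketch of the two ingredients above, Hurst estimation and resampling, using the aggregated-variance estimator on white noise (for which H should be near 0.5); the estimator choice and the 4x downsampling factor are my own illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def hurst_aggvar(x, scales=(4, 8, 16, 32, 64, 128)):
    """Aggregated-variance Hurst estimate: the variance of block means at
    aggregation level m scales as m^(2H - 2), so H comes from the slope of
    log(var) versus log(m)."""
    n = len(x)
    logm, logv = [], []
    for m in scales:
        k = n // m
        means = x[: k * m].reshape(k, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(2)
x = rng.standard_normal(2 ** 14)   # white noise, true H = 0.5

h_full = hurst_aggvar(x)
h_down = hurst_aggvar(x[::4])      # crude resampling at a 4x lower rate
```

For uncorrelated noise, downsampling changes little; the study's point is that for real BOLD series (and at sharp material transitions) the estimate becomes TR-dependent, which this estimator would expose by comparing `h_full` and `h_down` style pairs.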
Wiwanitkit, Viroj; Waenlor, Weerachit
2004-01-01
Toxocara species are the most common roundworms of Canidae and Felidae. Human toxocariasis develops through ingestion of embryonated eggs in contaminated soil. There is no previous report of Toxocara contamination in soil samples from public areas in Bangkok. For this reason, our study was carried out to examine the frequency of Toxocara eggs in public yards in Bangkok, Thailand. A total of 175 sand and clay samples were collected and examined for parasite eggs. Toxocara eggs were detected in 10 (5.71%) of the 175 soil samples. The high rate of contamination found in this study implies the importance of controlling this potential zoonotic disease: control of the abandonment of dogs and cats is still necessary.
Data reduction in the ITMS system through a data acquisition model with self-adaptive sampling rate
Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Crta. Valencia Km-7, Madrid 28031 (Spain)], E-mail: mariano.ruiz@upm.es; Lopez, JM.; Arcas, G. de [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Crta. Valencia Km-7, Madrid 28031 (Spain); Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid (UPM), Crta. Valencia Km-7, Madrid 28031 (Spain); Melendez, R. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Crta. Valencia Km-7, Madrid 28031 (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain)
2008-04-15
Long pulse or steady state operation of fusion experiments requires data acquisition and processing systems that reduce the volume of data involved. The availability of self-adaptive sampling rate systems and the use of real-time lossless data compression techniques can help solve these problems. The former is important for continuous adaptation of the sampling frequency to experimental requirements. The latter allows the maintenance of continuous digitization under limited memory conditions. This can be achieved by permanent transmission of compressed data to other systems. The compacted transfer ensures the use of minimum bandwidth. This paper presents an implementation based on the intelligent test and measurement system (ITMS), a data acquisition system architecture with multiprocessing capabilities that permits it to adapt the system's sampling frequency throughout the experiment. The sampling rate can be controlled depending on the experiment's specific requirements by using an external dc voltage signal or by defining user events through software. The system takes advantage of the high processing capabilities of the ITMS platform to implement a data reduction mechanism based on lossless data compression algorithms, which are themselves based on periodic deltas.
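The delta-based lossless compression mentioned at the end can be sketched as follows; this is an assumed, generic delta encoding for illustration (the signal, noise model, and use of zlib as the entropy coder are mine, not the ITMS implementation).

```python
import numpy as np
import zlib

# A slowly varying digitized signal: deltas between consecutive samples are
# small and low-entropy, so they compress much better than the raw values.
rng = np.random.default_rng(3)
signal = (1000 * np.sin(np.linspace(0, 2 * np.pi, 10000))).astype(np.int32)
signal += rng.integers(-2, 3, size=signal.size).astype(np.int32)  # sensor noise

# Encode: first-order deltas (prepend=0 keeps the first sample recoverable).
deltas = np.diff(signal, prepend=0).astype(np.int32)

raw = zlib.compress(signal.tobytes(), 9)   # compress raw samples
enc = zlib.compress(deltas.tobytes(), 9)   # compress deltas instead

# Decode: a cumulative sum of the deltas restores the signal exactly
# (lossless), so digitization can continue under limited memory while the
# compact stream is transmitted elsewhere.
restored = np.cumsum(deltas).astype(np.int32)
```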
D. M. McEligot; J. R. Wolf; K. P. Nolan; E. J. Walsh; R. J. Volino
2006-05-01
Conditionally-sampled boundary layer data for an accelerating transitional boundary layer have been analyzed to calculate the entropy generation rate in the transition region. By weighting the nondimensional dissipation coefficient for the laminar-conditioned data and turbulent-conditioned data with the intermittency factor, the average entropy generation rate in the transition region can be determined and hence compared to the time averaged data and correlations for steady laminar and turbulent flows. It is demonstrated that this method provides, for the first time, an accurate and detailed picture of the entropy generation rate during transition. The data used in this paper have been taken from detailed boundary layer measurements available in the literature. This paper provides, using an intermittency weighted approach, a methodology for predicting entropy generation in a transitional boundary layer.
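The intermittency-weighted average described above reduces to a simple convex combination; the function and the example values below are illustrative placeholders, not numbers from the paper.

```python
def cd_transitional(gamma, cd_lam, cd_turb):
    """Intermittency-weighted dissipation coefficient in the transition
    region: Cd = gamma * Cd_turbulent + (1 - gamma) * Cd_laminar, where
    gamma is the intermittency factor (fraction of time the flow is
    turbulent at a given station)."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("intermittency factor must lie in [0, 1]")
    return gamma * cd_turb + (1.0 - gamma) * cd_lam

# Halfway through transition, with illustrative (assumed) laminar and
# turbulent dissipation coefficients:
cd_mid = cd_transitional(0.5, cd_lam=0.002, cd_turb=0.004)
```

The same weighting applied pointwise across the transition region, with gamma taken from the conditionally-sampled data, yields the average entropy generation rate the papers compare against steady-flow correlations.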
Edmond J. Walsh; Kevin P. Nolan; Donald M. McEligot; Ralph J. Volino; Adrian Bejan
2007-05-01
Conditionally-sampled boundary layer data for an accelerating transitional boundary layer have been analyzed to calculate the entropy generation rate in the transition region. By weighting the nondimensional dissipation coefficient for the laminar-conditioned data and turbulent-conditioned data with the intermittency factor, the average entropy generation rate in the transition region can be determined and hence compared to the time averaged data and correlations for steady laminar and turbulent flows. It is demonstrated that this method provides, for the first time, an accurate and detailed picture of the entropy generation rate during transition. The data used in this paper have been taken from detailed boundary layer measurements available in the literature. This paper provides, using an intermittency weighted approach, a methodology for predicting entropy generation in a transitional boundary layer.
Pfeiffer, Steven L; Petscher, Yaacov; Jarosewich, Tania
This study reports on an analysis of the standardization sample of a rating scale designed to assist in the identification of gifted students. The Gifted Rating Scales-Preschool/Kindergarten Form (GRS-P) is based on a multidimensional model of giftedness designed for preschool and kindergarten students. Results provide support for the internal structure of the scale: no age differences across the 3-year age span 4:0-6:11; gender differences on only one of the five scales (artistic talent); and small but statistically significant race/ethnicity differences, with Asian Americans rated, on average, 1.5 scale-score points higher than whites and Native Americans and 7 points higher than African American and Hispanic students. The present findings support the GRS-P as a valid screening test for giftedness.
邓春亮; 胡南辉
2012-01-01
The solution β̂_n of the quasi-likelihood equations for generalized linear models (GLMs) is studied in the case of a non-natural link function. Under the condition λ_n → ∞ and some other mild regularity conditions, the weak consistency of the solution is proved, and its rate of convergence to the true value β_0 is shown to be O_p(λ_n^{-1/2}), where λ_n (λ̄_n) denotes the smallest (largest) eigenvalue of the matrix S_n = Σ_{i=1}^n X_i X_i^T.
Einfeld, Wayne; Krauter, Paula A.; Boucher, Raymond M.; Tezak, Mathew; Amidan, Brett G. (Pacific Northwest National Laboratory, Richland, WA); Piepel, Greg F. (Pacific Northwest National Laboratory, Richland, WA)
2011-05-01
Recovery of spores from environmental surfaces is known to vary due to sampling methodology, techniques, spore size and characteristics, surface materials, and environmental conditions. A series of tests were performed to evaluate a new, validated sponge-wipe method. Specific factors evaluated were the effects of contaminant concentrations and surface materials on recovery efficiency (RE), false negative rate (FNR), and limit of detection (LOD), as well as the uncertainties of these quantities. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3%, 25.6%, and 25.5%, respectively, while plastic had the lowest mean RE (9.8%). Results show a roughly linear dependence of RE on surface roughness, with the smoothest surfaces having the highest mean RE values. REs were not influenced by the low spore concentrations tested (3×10^-3 to 1.86 CFU/cm^2). The FNR data were consistent with the RE data, showing a trend of smoother surfaces resulting in higher REs and lower FNRs. Stainless steel generally had the lowest mean FNR (0.123) and plastic had the highest mean FNR (0.479). The LOD_90 varied with surface material, from 0.015 CFU/cm^2 on stainless steel up to 0.039 on plastic. Selecting sampling locations on the basis of surface roughness, and using roughness to interpret spore recovery data, can improve sampling. Further, FNR values, calculated as a function of concentration and surface material, can be used pre-sampling to calculate the numbers of samples needed for statistical sampling plans with desired performance, and post-sampling to calculate the confidence in characterization and clearance decisions.
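The three quantities the study reports can be computed from per-coupon counts as sketched below; the counts and detection outcomes are made-up illustrative values, not the study's data.

```python
# Recovery efficiency (RE): fraction of deposited spores recovered per sample.
spiked_cfu = 200.0                   # CFU deposited per coupon (assumed)
recovered = [95, 102, 88, 110]       # CFU recovered per sponge-wipe (assumed)
re_values = [r / spiked_cfu for r in recovered]
mean_re = sum(re_values) / len(re_values)

# False negative rate (FNR): fraction of contaminated samples that yield no
# detection; estimated per concentration and surface material in the study.
detections = [True, True, False, True]   # per-sample outcomes (assumed)
fnr = detections.count(False) / len(detections)
```

In the study's framework, FNR estimated this way as a function of concentration and surface feeds directly into pre-sampling sample-size calculations and post-sampling clearance confidence.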
房祥忠; 陈家鼎
2011-01-01
Nonhomogeneous Poisson processes with time-varying intensity functions are widely applied in many fields. For the exponential polynomial model, a broad and widely used class of nonhomogeneous Poisson processes, the optimal convergence rate of the maximum likelihood estimate (MLE) of the parameters is obtained as the observation time tends to infinity.
Matthias Weippert
2014-10-01
Nonlinear parameters of heart rate variability (HRV) have proven their prognostic value in clinical settings, but their physiological background is not well established. We assessed the effects of low-intensity isometric (ISO) and dynamic (DYN) exercise of the lower limbs, matched for heart rate, on traditional and entropy measures of HRV. Owing to the different afferent feedback under DYN and ISO, a distinct autonomic response, mirrored by HRV measures, was hypothesized. Five-minute inter-beat interval measurements of 43 healthy males (26.0 ± 3.1 years) were performed during rest, DYN and ISO in randomized order. Blood pressure and rate pressure product were higher during ISO vs. DYN (p < 0.001). The HRV indicators SDNN as well as low- and high-frequency power were significantly higher during ISO (p < 0.001 for all measures). Compared to DYN, sample entropy (SampEn) was lower during ISO (p < 0.001). In conclusion, contraction mode itself is a significant modulator of the autonomic cardiovascular response to exercise. Compared to DYN, ISO evokes a stronger blood pressure response and an enhanced interplay between both autonomic branches. Nonlinear HRV measures indicate more regular behavior under ISO. The results support the view that reciprocal antagonism is only one of many modes of autonomic heart rate control. Under different conditions, the identical "end product" heart rate might be achieved by other modes, such as sympathovagal co-activation.
Li, Hongxia; Helm, Paul A; Paterson, Gordon; Metcalfe, Chris D
2011-04-01
The effect of solution pH and levels of dissolved organic matter (DOM) on the sampling rates of model pharmaceuticals and personal care products (PPCPs) and an endocrine disrupting substance (EDS) by polar organic chemical integrative samplers (POCIS) was investigated in laboratory experiments. A commercially available POCIS configuration containing neutral Oasis HLB (hydrophilic-lipophilic balance) resin (i.e. pharmaceutical POCIS) and two POCIS configurations prepared in-house containing MAX and MCX anion and cation exchange resins, respectively, were tested for uptake of 21 model PPCPs and EDS, including acidic, phenolic, basic and neutral compounds. Laboratory experiments were conducted with dechlorinated tap water at pH 3, 7 and 9. The effects of DOM were studied using natural water from an oligotrophic lake in Ontario, Canada (i.e. Plastic Lake) spiked with different amounts of DOM (the concentration of dissolved organic carbon ranged from 3 to 5 mg L^-1 in uptake experiments). In experiments with the commercial (HLB) POCIS, the MCX-POCIS and the MAX-POCIS, the sampling rates generally increased with pH for basic compounds and declined with pH for acidic compounds. However, the sampling rates were relatively constant across the pH range for phenols with high pKa values (i.e. bisphenol A, estrone, estradiol, triclosan) and for the neutral pharmaceutical, carbamazepine. Thus, uptake was greatest when the amount of the neutral species in solution was maximized relative to the ionized species. Although the solution pH affected the uptake of some model ionic compounds, the effect was less than a factor of 3. There was no significant effect of DOM on sampling rates from Plastic Lake. However, uptake rates in different aqueous matrixes declined in the order of deionized water > Plastic Lake water > dechlorinated tap water, so other parameters must affect uptake into POCIS, although this influence will be minor. MAX-POCIS and MCX-POCIS showed little advantage
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
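The frequency-sampling baseline that the MaxEnt reconstruction is compared against can be sketched as follows (illustrative two-state chain with made-up transition probabilities; the MaxEnt approach itself constrains moments of the empirical series rather than counting transitions):

```python
import numpy as np

rng = np.random.default_rng(42)

# True 2-state transition matrix (illustrative values).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Simulate a short historical sample from the chain.
T = 200
states = [0]
for _ in range(T - 1):
    states.append(rng.choice(2, p=P[states[-1]]))

# Frequency-sampling estimate: normalized transition counts.
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Estimation error shrinks as the historical sample T grows; the paper's
# point is that MaxEnt can reach the same accuracy with shorter samples.
print(np.abs(P_hat - P).max())
```

For short samples the count-based estimate is noisy, which is exactly the regime where the abstract argues the MaxEnt reconstruction is more efficient.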
Nagy, Tamás; van Lien, René; Willemsen, Gonneke; Proctor, Gordon; Efting, Marieke; Fülöp, Márta; Bárdos, György; Veerman, Enno C I; Bosch, Jos A
2015-07-01
Salivary alpha-amylase (sAA) is used as a sympathetic (SNS) stress marker, though its release is likely co-determined by SNS and parasympathetic (PNS) activation. The SNS and PNS show asynchronous changes during acute stressors, and sAA responses may thus vary with sample timing. Thirty-four participants underwent an eight-minute memory task (MT) and cold pressor task (CPT). Cardiovascular SNS (pre-ejection period, blood pressure) and PNS (heart rate variability) activity were monitored continuously. Unstimulated saliva was collected repeatedly during and after each laboratory stressor, and sAA concentration (U/ml) and secretion (U/minute) determined. Both stressors increased anxiety. The MT caused an immediate and continued cardiac SNS activation, but sAA concentration increased at task cessation only (+54%); i.e., when there was SNS-PNS co-activation. During the MT sAA secretion even decreased (-35%) in conjunction with flow rate and vagal tone. The CPT robustly increased blood pressure but not sAA. In summary, sAA fluctuations did not parallel changes in cardiac SNS activity or anxiety. sAA responses seem contingent on sample timing and flow rate, likely involving both SNS and PNS influences. Verification using other stressors and contexts seems warranted.
Hasan M. Khan; M. Ismail; K. Khan; P. Akhter
2011-01-01
The analysis of naturally occurring radionuclides (226Ra, 232Th and 40K) and the anthropogenic radionuclide 137Cs is carried out in soil samples collected from the Kohistan district of N.W.F.P. (Pakistan), using gamma-ray spectrometry. The gamma spectrometer uses a high-purity germanium (HPGe) detector coupled to a computer-based high-resolution multichannel analyzer. The specific activity in soil ranges from 24.72 to 78.48 Bq·kg-1 for 226Ra, 21.73 to 75.28 Bq·kg-1 for 232Th, 7.06 to 14.9 Bq·kg-1 for 137Cs and 298.46 to 570.77 Bq·kg-1 for 40K, with mean values of 42.11, 43.27, 9.5 and 418.27 Bq·kg-1, respectively. The radium equivalent activity in all the soil samples is lower than the safe limit set in the OECD report (370 Bq·kg-1). The man-made radionuclide 137Cs is also present in detectable amounts in all soil samples. The presence of 137Cs indicates that this remote area also received some fallout from the 1986 Chernobyl nuclear accident. The internal and external hazard indices have mean values of 0.48 and 0.37, respectively. Absorbed dose rates and effective dose equivalents are also determined for the samples. The concentration of radionuclides found in the soil samples during the present study is nominal and does not pose any potential health hazard to the general public.
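The radium equivalent activity and external hazard index quoted above follow standard formulas from the environmental radioactivity literature (assumed here, as the abstract does not state them): Raeq = CRa + 1.43·CTh + 0.077·CK and Hex = CRa/370 + CTh/259 + CK/4810. Applying them to the reported mean activities reproduces the abstract's figures:

```python
# Standard indices from the environmental radioactivity literature
# (formulas assumed from the literature, not stated in the abstract):
#   Ra_eq = C_Ra + 1.43*C_Th + 0.077*C_K          [Bq/kg]
#   H_ex  = C_Ra/370 + C_Th/259 + C_K/4810
def radium_equivalent(c_ra, c_th, c_k):
    return c_ra + 1.43 * c_th + 0.077 * c_k

def external_hazard_index(c_ra, c_th, c_k):
    return c_ra / 370 + c_th / 259 + c_k / 4810

# Mean activities reported for the Kohistan soil samples (Bq/kg).
ra_eq = radium_equivalent(42.11, 43.27, 418.27)
h_ex = external_hazard_index(42.11, 43.27, 418.27)
print(round(ra_eq, 1), round(h_ex, 2))  # Ra_eq is below the 370 Bq/kg limit
```

The computed Hex of about 0.37 matches the mean external hazard index reported in the abstract.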
Jamil, K.; Ali, S. [Environmental Radiation Group, Radiation Physics Division, Pinstech, P. O. Nilore, Islamabad (Pakistan); Iqbal, M. [Nuclear Engineering Division, Pinstech, P. O. Nilore, Islamabad (Pakistan); Qureshi, A.A.; Khan, H.A. [Environmental Radiation Group, Radiation Physics Division, Pinstech, P. O. Nilore, Islamabad (Pakistan)
1998-11-01
The radionuclides present in coal may not only be a health hazard for coal miners but may also be a threat to the general population if these radionuclides disperse in the environment. This research was conducted to quantify the radionuclides present in coal samples from various coal mines of two provinces of Pakistan, Punjab and Balochistan. A high-purity Ge-detector-based gamma-spectrometer was used. The maximum activity concentrations for 226Ra, 232Th and 40K were found to be 31.4±3.0, 32.7±3.2 and 21.4±5.0 Bq kg^-1, respectively. A theoretical model to compute the external gamma-ray dose rate from a coal-mine surface was developed. Monte Carlo simulation was employed to compute the required mass attenuation coefficients corresponding to the various gamma-ray energies from 226Ra, 232Th, their progeny and 40K present in the coal samples. In addition, the effective thickness of the coal slab for self-absorption was computed using the Monte Carlo Neutron Photon (MCNP) transport code. The computed external gamma-ray dose rate was found to be well below the dose rate limits for occupational workers as well as for the general population. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required sample size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper bounds can each be decomposed into two terms. The first term relies on the mean number of events in each group, and the second depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
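The paper's exact bounds are not reproduced here, but a standard normal-approximation sample-size formula for comparing two negative binomial event rates (equal follow-up and allocation; the variance of the log rate ratio inflated by a between-subject dispersion parameter k) illustrates the structure the abstract describes, where sample size depends on the mean event counts and on between-subject variability:

```python
from math import log, ceil
from statistics import NormalDist

def nb_sample_size(rate0, rate1, follow_up, dispersion,
                   alpha=0.05, power=0.9):
    """Per-group size to detect rate1 vs rate0 under a negative binomial
    model (textbook normal approximation; NOT the paper's exact bounds)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    # Mean event counts over the follow-up period in each group.
    mu0, mu1 = rate0 * follow_up, rate1 * follow_up
    # Approximate per-subject variance of the log rate ratio: the Poisson
    # terms 1/mu plus a between-subject dispersion contribution.
    v = 1 / mu0 + 1 / mu1 + 2 * dispersion
    return ceil((za + zb) ** 2 * v / log(rate1 / rate0) ** 2)

print(nb_sample_size(rate0=1.0, rate1=0.7, follow_up=2.0, dispersion=0.5))
```

Note how, as in the paper's decomposition, the result has a mean-count component (the 1/mu terms) and a variability component (the dispersion term).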
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research modeled the unemployment rate in Indonesia, assumed to follow a Poisson distribution, by combining post-stratification with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the areas of interest. The areas of interest here were the seven education categories of the unemployed. The data were obtained from the National Labour Force Survey (Sakernas), conducted by BPS (Statistics Indonesia); this national survey yields samples that are too small at the district level, and SAE modeling is one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. Two main post-stratification models were considered: in model I the education category enters as a dummy variable, and in model II it enters as an area random effect. Neither model satisfied the Poisson assumption. Using a Poisson-Gamma model, the overdispersion in model I (chi-square/df of 1.23) was reduced to 0.91, and the underdispersion in model II (0.35) was raised to 0.94. Empirical Bayes estimation was applied to estimate the proportion of unemployment in each education category. Using the Bayesian Information Criterion (BIC), model I had a smaller mean squared error (MSE) than model II.
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the option to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool. Russell's formula was used for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This user-friendly software calculates GFR by various plasma sampling methods and blood parameters, and also provides a good system for storing raw and processed data for future analysis.
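Of the serum-creatinine methods listed, the Cockcroft-Gault formula is easily sketched (this is the standard published formula, not the authors' Visual Basic implementation):

```python
def cockcroft_gault(age, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) via Cockcroft-Gault:
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: 60-year-old, 72 kg, serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault(age=60, weight_kg=72,
                            serum_creatinine_mg_dl=1.0), 1))
```

Note that Cockcroft-Gault estimates creatinine clearance as a surrogate for GFR; the plasma sampling method the paper treats as the gold standard measures tracer clearance directly.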
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Suvaila, Rares; Osvath, Iolanda; Sima, Octavian
2013-11-01
In this work a method for the evaluation of the activity when a point source containing (60)Co is located in an unknown position within a sample is developed. The method can be applied if the count rate in the 2,505 keV sum peak has an acceptable uncertainty. It is based on the correlation between the apparent efficiency for the 1,173 keV peak and the ratio of the count rate in the sum peak of 2,505 keV and in the 1,332 keV peak. The correlation was observed in the measurements of a (60)Co point source located in various positions in a soil sample. The measurements were done with a 47% efficiency n-type HPGe detector. The correlation is also observed in the measurements and simulations done with a Compton-suppressed spectrometer having a 100% n-type HPGe detector. The results obtained with the proposed method are less affected by the uncertainty of the position of the point source than the results obtained using the standard methods of activity evaluation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sutor, Malinda M.; Dagg, Michael J.
2008-06-01
The effects of vertical sampling resolution on estimates of plankton biomass and grazing calculations were examined using data collected in two different areas with vertically stratified water columns. Data were collected from one site in the upwelling region off Oregon and from four sites in the Northern Gulf of Mexico, three within the Mississippi River plume and one in adjacent oceanic waters. Plankton were found to be concentrated in discrete layers with sharp vertical gradients at all the stations. Phytoplankton distributions were correlated with gradients in temperature and salinity, but microzooplankton and mesozooplankton distributions were not. Layers of zooplankton were sometimes collocated with layers of phytoplankton, but this was not always the case. Simulated calculations demonstrate that when averages are taken over the water column, or coarser scale vertical sampling resolution is used, biomass and mesozooplankton grazing and filtration rates can be greatly underestimated. This has important implications for understanding the ecological significance of discrete layers of plankton and for assessing rates of grazing and production in stratified water columns.
Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou
2017-01-01
Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty of functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole-genomes (n = 505), which top-ranked known drivers and identified new candidates. For individual candidates, presence of non-coding mutations associates with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5’UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational-signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
Khishvand, Mahdi; Akbarabadi, Morteza; Piri, Mohammad
2016-08-01
We present the results of a pore-scale experimental study of residual trapping in consolidated sandstone and carbonate rock samples under confining stress. We investigate how changes in wetting phase flow rate impact the pore-scale distribution of fluids during imbibition in natural, water-wet porous media. We systematically study pore-scale trapping of the nonwetting phase as well as the size and distribution of its disconnected globules. Seven sets of drainage-imbibition experiments were performed with brine and oil as the wetting and nonwetting phases, respectively. We utilized a two-phase miniature core-flooding apparatus integrated with an X-ray microtomography system to examine pore-scale fluid distributions in small Bentheimer sandstone (D = 4.9 mm and L = 13 mm) and Gambier limestone (D = 4.4 mm and L = 75 mm) core samples. The results show that with increase in capillary number, the residual oil saturation at the end of imbibition reduces from 0.46 to 0.20 in Bentheimer sandstone and from 0.46 to 0.28 in Gambier limestone. We use pore-scale displacement mechanisms, in-situ wettability characteristics, and pore size distribution information to explain the observed capillary desaturation trends. The reduction is believed to be caused by alteration of the order in which pore-scale displacements take place during imbibition. Furthermore, increasing the capillary number produced significantly different pore-scale fluid distributions during imbibition. We explored the pore fluid occupancies and studied the size and distribution of the trapped oil clusters during different imbibition experiments. The results clearly show that as the capillary number increases, imbibition produces smaller trapped oil globules. In other words, the volume of individual trapped oil globules decreased at higher brine flow rates. Finally, we observed that the pore space in the limestone sample was considerably altered through matrix dissolution at extremely high brine flow rates. This
Yadav, Manjulata; Prasad, Mukesh; Joshi, Veena; Gusain, G S; Ramola, R C
2016-10-01
Soil is the most important factor affecting the radon level in human living environments. It depends not only on uranium and thorium contents but also on the physical and chemical properties of the soil. In this paper, measurements of radium content and the mass exhalation rate of radon from soil samples collected from the Uttarkashi area of Garhwal Himalaya are presented. The correlation between radium content and the radon mass exhalation rate from soil has also been obtained. Radium was measured by gamma-ray spectrometry, while the mass exhalation rate of radon was determined by both active and passive methods. The radium activity in the soil of the study area was found to vary from 45±7 to 285±29 Bq kg^-1 with an average of 99 Bq kg^-1. The radon mass exhalation rate was found to vary from 0.59 × 10^-5 to 2.2 × 10^-5 Bq kg^-1 h^-1 with an average of 1.4 × 10^-5 Bq kg^-1 h^-1 by the passive technique, and from 0.8 × 10^-5 to 3.2 × 10^-5 Bq kg^-1 h^-1 with an average of 1.5 × 10^-5 Bq kg^-1 h^-1 by the active technique. The results suggest that the measured radium value is positively correlated with the radon mass exhalation rate measured with both the active and passive techniques. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Duke, Pauline; Godwin, Marshall; Ratnam, Samuel; Dawson, Lesa; Fontaine, Daniel; Lear, Adrian; Traverso-Yepez, Martha; Graham, Wendy; Ravalia, Mohamad; Mugford, Gerry; Pike, Andrea; Fortier, Jacqueline; Peach, Mandy
2015-06-10
Cervical cancer is highly preventable and treatable if detected early through regular screening. Women in the Canadian province of Newfoundland & Labrador have relatively low rates of cervical cancer screening, around 40 % between 2007 and 2009. Persistent infection with oncogenic human papillomavirus (HPV) is a necessary cause of the development of cervical cancer, and HPV testing, including self-sampling, has been suggested as an alternative method of cervical cancer screening that may alleviate some barriers to screening. Our objective was to determine whether offering self-collected HPV testing increased cervical cancer screening rates in rural communities. During the 2-year study, three community-based cohorts were assigned to receive either i) a cervical cancer education campaign with the option of HPV testing; ii) an educational campaign alone; or iii) no intervention. Self-collection kits were offered to eligible women at family medicine clinics and community centres, and participants were surveyed to determine their acceptance of the HPV self-collection kit. Paired proportions testing for before-after studies was used to determine differences in screening rates from baseline, and chi-square analysis of three-dimensional 2 × 2 × 2 tables compared the change between communities. Cervical cancer screening increased by 15.2 % (p cancer screening rate in a rural NL community. Women who completed self-collection had generally positive feelings about the experience. Offering HPV self-collection may increase screening compliance, particularly among women who do not present for routine Pap smears.
Bach, Bo; Anderson, Jaime; Simonsen, Erik
2017-01-01
The Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III offers an alternative model for the diagnosis of personality disorders (PDs), including 25 pathological personality trait facets organized into 5 trait domains. To maintain continuity with the categorical PD diagnoses found in DSM-5 Section II, specified sets of facets are configured into familiar PD types. The current study aimed to evaluate the continuity across the Section II and III models of PDs. A sample of 142 psychiatric outpatients were administered the Personality Inventory for DSM-5 and rated with the Structured Clinical Interview for the DSM-IV Axis II disorders. We investigated whether the DSM-5 Section III facet profiles would be associated with their respective Section II counterparts, as well as determining whether additional facets could augment the prediction of the Section II disorders. Results...
Fischer, Herbert Felix; Schirmer, Nicole; Tritt, Karin; Klapp, Burghard F; Fliege, Herbert
2011-03-01
An assessment of the retest reliability and sensitivity to change of the ICD-10 Symptom Rating (ISR) is provided. The ISR was filled out repeatedly by a non-clinical sample as well as by different samples of psychosomatic patients. Between the two measurements either no treatment or an integrated psychosomatic treatment took place. During the treatment-free phase a high degree of stability of the test scores was expected, whereas a significant improvement of test scores on the respective scales was expected over the treatment phase. The retest reliability for the individual scales ranges from 0.70 to 0.94. Between admission to psychosomatic treatment and discharge, significant differences were found for all scales. The retest reliability showed satisfactory results comparable to similar, symptom-oriented instruments. Furthermore, the instrument reproduces symptomatic changes consistently and is, from our point of view, suitable for the assessment of change.
Yang, Yongheng; Zhou, Keliang; Blaabjerg, Frede
2016-01-01
the instantaneous grid information (e.g., frequency and phase of the grid voltage) for the current control, which is commonly performed by a Phase-Locked-Loop (PLL) system. Hence, harmonics and deviations in the frequency estimated by the PLL could lead to current tracking performance degradation, especially for periodic signal controllers (e.g., PR and RC) with a fixed sampling rate. In this paper, the impacts of frequency deviations induced by the PLL and/or grid disturbances on the selected current controllers are investigated by analyzing the frequency adaptability of these controllers. Subsequently, strategies to enhance the frequency adaptability of the current controllers are proposed for power converters to produce high-quality feed-in currents even in the presence of grid frequency deviations. Specifically, by feeding back the PLL-estimated frequency to update the center frequencies
Kirkwood, Michael W; Kirk, John W
2010-01-01
Performance on the Medical Symptom Validity Test (MSVT) was examined in 193 consecutively referred patients aged 8 through 17 years who had sustained a mild traumatic brain injury. A total of 33 participants failed to meet actuarial criteria for valid effort on the MSVT. After accounting for possible false positives and false negatives, the base rate of suboptimal effort in this clinical sample was 17%. Only one MSVT failure was thought to be influenced by litigation. The present results suggest that a sizable minority of children is capable of putting forth suboptimal effort during neuropsychological exam, even when external incentives are not readily apparent. The MSVT appears to have good potential value as an objective measure for detecting symptom invalidity in school-age youth.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Shivaei, Irene; Steidel, Charles C; Shapley, Alice E
2015-01-01
We use a sample of 262 spectroscopically confirmed star-forming galaxies at redshifts $2.08 \leq z \leq 2.51$ to compare H$\alpha$, UV, and IR star-formation-rate diagnostics and to investigate the dust properties of the galaxies. At these redshifts, the H$\alpha$ line shifts to the $K_{s}$-band. By comparing $K_{s}$-band photometry to underlying stellar population model fits to other UV, optical, and near-infrared data, we infer the H$\alpha$ flux for each galaxy. We obtain the best agreement between H$\alpha$- and UV-based SFRs if we assume that the ionized gas and stellar continuum are reddened by the same value and that the Calzetti attenuation curve is applied to both. Aided with MIPS 24$\mu$m data, we find that an attenuation curve steeper than the Calzetti curve is needed to reproduce the observed IR/UV ratios of galaxies younger than 100 Myr. Furthermore, using the bolometric star-formation rate inferred from the UV and mid-IR data (SFR$_{IR}$+SFR$_{UV}$), we calculated the conversion between the H$\alp...
Weber, Stefan; Waller, Erik H; Kaiser, Christoph; von Freymann, Georg
2017-06-26
We present a real-time measurement technique, based on time-stretching for measuring the temporal dynamic of ultrafast absorption variations with a sampling-rate of up to 1.1 TS/s. The single-shot captured data are stretched in a resonator-based time-stretch system with a variable stretch-factor of up to 13.8. The time-window of the time-stretch system for capturing the signal of interest is about 800 ps with an update-rate of 10 MHz. An adapted optical backpropagation algorithm is introduced for reconstructing the original unstretched event. As proof-of-principle the temporal characteristic of a picosecond semiconductor saturable absorber mirror is measured: The real-time results agree well with the results of a conventional pump-probe experiment. The time-stretch technique potentially allows to gain access to a large field of ultrafast absorption variations like semiconductor charge carrier dynamics, irreversible polymerization processes, and saturable absorber materials.
Hines, Denise A; Douglas, Emily M; Mahmood, Sehar
2010-11-01
Research using Internet surveys is an emerging field, yet research on the legitimacy of using Internet studies, particularly those targeting sensitive topics, remains under-investigated. The current study builds on the existing literature by exploring the demographic differences between Internet panel and RDD telephone survey samples, as well as differences in responses with regard to experiences of intimate partner violence perpetration and victimization, alcohol and substance use/abuse, PTSD symptomatology, and social support. Analyses indicated that after controlling for demographic differences, there were few differences between the samples in their disclosure of sensitive information, and that the online sample was more socially isolated than the phone sample. Results are discussed in terms of their implications for using Internet samples in research on sensitive topics.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Valter Abrantes Pereira da Silva
2007-03-01
Full Text Available OBJECTIVE: This study sought to compare maximum heart rate (HRmax) values measured during a graded exercise test (GXT) with those calculated from prediction equations in Brazilian elderly women. METHODS: A treadmill maximal graded exercise test in accordance with the modified Bruce protocol was used to obtain reference values for maximum heart rate (HRmax) in 93 elderly women (mean age 67.1 ± 5.16 years). Measured values were compared with those estimated from the "220 - age" and Tanaka et al. formulas using repeated-measures ANOVA. Correlation and agreement between measured and estimated values were tested. Also evaluated was the correlation between measured HRmax and volunteers' age. RESULTS: Results were as follows: 1) mean HRmax reached during GXT was 145.5 ± 12.5 beats per minute (bpm); 2) both the "220 - age" and Tanaka et al. (2001) equations significantly overestimated (p < 0.001) HRmax, by a mean difference of 7.4 and 15.5 bpm, respectively; 3)
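The two prediction equations compared in this record can be checked numerically. The sketch below uses the classic "220 - age" rule and the Tanaka et al. (2001) regression, HRmax = 208 - 0.7 × age, together with the study's reported means (age 67.1 y, measured HRmax 145.5 bpm); the function names are ours, not the study's.

```python
# Sketch of the two HRmax prediction equations discussed above.

def hrmax_traditional(age: float) -> float:
    """Classic '220 - age' estimate of maximum heart rate (bpm)."""
    return 220.0 - age

def hrmax_tanaka(age: float) -> float:
    """Tanaka et al. (2001) regression estimate: 208 - 0.7 * age (bpm)."""
    return 208.0 - 0.7 * age

# Study sample: mean age 67.1 y, measured mean HRmax 145.5 bpm.
mean_age, measured_hrmax = 67.1, 145.5
print(round(hrmax_traditional(mean_age) - measured_hrmax, 1))  # 7.4 bpm overestimate
print(round(hrmax_tanaka(mean_age) - measured_hrmax, 1))       # 15.5 bpm overestimate
```

Evaluated at the sample mean age, the two formulas overestimate the measured mean by 7.4 and 15.5 bpm, matching the differences reported in the abstract.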
Yuan-Hong Jiang
Full Text Available OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding to storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). We found that the IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p<0.001). When IPSS-V/S >1 or >2 was factored into the equation instead of IPSS-T, PPV were 91.4% and 97.3%, respectively, and NPV were 54.8% and 49.8%, respectively. CONCLUSIONS: Combination of IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S >1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S >1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
Mélachio, Tanekou Tito Trésor; Njiokou, Flobert; Ravel, Sophie; Simo, Gustave; Solano, Philippe; De Meeûs, Thierry
2015-07-01
Human and animal trypanosomiases are two major constraints to development in Africa. These diseases are mainly transmitted by tsetse flies in particular by Glossina palpalis palpalis in Western and Central Africa. To set up an effective vector control campaign, prior population genetics studies have proved useful. Previous studies on population genetics of G. p. palpalis using microsatellite loci showed high heterozygote deficits, as compared to Hardy-Weinberg expectations, mainly explained by the presence of null alleles and/or the mixing of individuals belonging to several reproductive units (Wahlund effect). In this study we implemented a system of trapping, consisting of a central trap and two to four satellite traps around the central one to evaluate a possible role of the Wahlund effect in tsetse flies from three Cameroon human and animal African trypanosomiases foci (Campo, Bipindi and Fontem). We also estimated effective population sizes and dispersal. No difference was observed between the values of allelic richness, genetic diversity and Wright's FIS, in the samples from central and from satellite traps, suggesting an absence of Wahlund effect. Partitioning of the samples with Bayesian methods showed numerous clusters of 2-3 individuals as expected from a population at demographic equilibrium with two expected offspring per reproducing female. As previously shown, null alleles appeared as the most probable factor inducing these heterozygote deficits in these populations. Effective population sizes varied from 80 to 450 individuals while immigration rates were between 0.05 and 0.43, showing substantial genetic exchanges between different villages within a focus. These results suggest that the "suppression" with establishment of physical barriers may be the best strategy for a vector control campaign in this forest context.
Chao Han; Ben Yang; Wen-Shu Zuo; Yan-Song Liu; Gang Zheng; Li Yang; Mei-Zhu Zheng
2016-01-01
Background: Although sentinel lymph node biopsy (SLNB) can accurately predict the status of axillary lymph node (ALN) metastasis, the high false-negative rate (FNR) of SLNB is still the main obstacle for the treatment of patients who receive SLNB instead of ALN dissection (ALND). The purpose of this study was to evaluate the clinical significance of SLNB combined with peripheral lymph node (PLN) sampling for reducing the FNR for breast cancer and to discuss the effect of "skip metastasis" on the FNR of SLNB. Methods: At Shandong Cancer Hospital Affiliated to Shandong University between March 1, 2012 and June 30, 2015, the sentinel lymph nodes (SLNs) of 596 patients with breast cancer were examined using radiocolloids with blue dye tracer. First, the SLNs were removed; then, the area surrounding the original SLNs was selected, and the visible lymph nodes in a field of 3–5 cm in diameter around the center (i.e., PLNs) were removed, avoiding damage to the structure of the breast. Finally, ALND was performed. The SLNs, PLNs, and remaining ALNs underwent pathologic examination, and the relationship between them was analyzed. Results: The identification rate of SLNs in the 596 patients was 95.1% (567/596); the metastasis rate of ALNs was 33.7% (191/567); the FNR of pure SLNB was 9.9% (19/191); and after the SLNs and PLNs were eliminated, the FNR was 4.2% (8/191), which was significantly decreased compared with the FNR before removal of PLNs (P=0.028). According to the detected number (N) of SLNs, the patients were divided into four groups of N=1, 2, 3, and ≥4; the FNR in these groups was 19.6, 9.8, 7.3, and 2.3%, respectively. For the patients with ≤2 or ≤3 detected SLNs, the FNR after removal of PLNs was significantly decreased compared with that before removal of PLNs (N≤2: 14.0% vs. 4.7%, P=0.019; N≤3: 12.2% vs. 4.7%, P=0.021), whereas for patients with ≥4 detected SLNs, the decrease in FNR was not statistically significant (P=1.000). In the entire cohorts
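The false-negative-rate arithmetic reported above can be verified directly: FNR is the share of node-positive patients missed by the biopsy. A minimal check, using the counts given in the abstract:

```python
# FNR = false negatives / all node-positive patients, as a percentage.

def fnr(false_negatives: int, node_positive: int) -> float:
    return 100.0 * false_negatives / node_positive

print(round(fnr(19, 191), 1))  # SLNB alone: 9.9 (%)
print(round(fnr(8, 191), 1))   # SLNB + peripheral-node sampling: 4.2 (%)
```

Both values reproduce the 9.9% and 4.2% reported in the record.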
FAN Kuan-lu; ZHANG Hai-feng; ZHU Zhen-yan; YAO Wen-ming; SHEN Jie; LIANG Ning-xia; GONG Lei
2013-01-01
Background Reactive oxygen species are thought to contribute to the development of renal damage. The P22phox subunit of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase, encoded by the cytochrome b245α polypeptide gene, CYBA, plays a key role in superoxide anion production. We investigated the association of the CYBA rs7195830 polymorphism with estimated glomerular filtration rate (eGFR) and the role it plays in the pathogenesis of chronic kidney disease (CKD) in a Han Chinese sample. Methods The Gaoyou study enrolled 4473 participants. Serum levels of creatinine were measured and eGFR was estimated using the Chronic Kidney Disease Epidemiology Collaboration equations. The CYBA polymorphisms were genotyped. Then we investigated the association between eGFR and the rs7195830 polymorphism in the recessive model. Results The AA genotype of rs7195830 was associated with significantly lower values of eGFR compared with the GG and AG genotypes ((102.76±17.07) ml·min⁻¹·1.73 m⁻² vs. (105.08±16.30) ml·min⁻¹·1.73 m⁻²). The association remained significant in the recessive model after adjusting for age, gender, body mass index, smoking, hypertension, diabetes mellitus, uric acid, triglyceride, low density lipoprotein cholesterol and high density lipoprotein cholesterol (β=1.666, P=0.031). The rs7195830 AA genotype was an independent risk factor for CKD: eGFR <60 ml·min⁻¹·1.73 m⁻² (odds ratio=3.32; 95% CI=1.21-9.13). Conclusion The AA genotype of rs7195830 is independently associated with lower estimated glomerular filtration rate and is significantly associated with CKD.
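The eGFR values above come from the CKD-EPI collaboration equations. A minimal sketch of the published 2009 CKD-EPI creatinine equation follows; the coefficients are the standard published ones (including the original 2009 race coefficient), but the study's exact implementation is not shown in the abstract, so treat this as illustrative.

```python
def egfr_ckd_epi(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation, in ml/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha   # low-creatinine branch
            * max(ratio, 1.0) ** -1.209  # high-creatinine branch
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159                    # coefficient in the original 2009 form
    return egfr
```

For example, a 40-year-old man with serum creatinine 0.9 mg/dL falls near the study's cohort mean of roughly 105 ml/min/1.73 m².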
Nikulin, V.; Sofka, J.; Khandekar, R.
2005-08-01
Laser technology plays an ever-increasing role in aerospace and communication systems and is often viewed as having the potential to provide the material base for high-bandwidth applications. Lasers provide the most logical connectivity channel for mobile systems requiring high data rates, low power consumption, covert operation, and high resistance to jamming. While advancements in modern opto-electronics have resulted in small, reliable and power-efficient lasers and modulators, successful operation of any communication technology hinges upon the ability to develop an equally advanced beam steering/positioning system. In many aerospace applications, when the transmitting optical platform is placed on board an airplane, the ability to track the target is affected by the complex high-speed maneuvers performed by the aircraft and the resident vibration of the airframe. The tracking system must assure that, in spite of the relative motion of the transmitting and receiving stations and adverse environments such as vibration, mutual alignment of the two systems will be maintained to minimize communication errors. The work presented in this paper concentrates on the development of agile beam steering systems for laser communication terminals. Acousto-optic Bragg cells are used as deflectors, while feedback information is generated by a quadrant detector. The control system is synthesized using a relatively simple constant-gain controller augmented with an adaptive Kalman filter to mitigate the effects of measurement noise in the tracking system. Laboratory experiments are conducted to investigate communication performance as a function of the sampling rate in the beam position feedback.
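The control loop described (a constant-gain steering controller fed by a Kalman filter that smooths noisy quadrant-detector readings) can be sketched in scalar form. This is an illustrative toy model, not the authors' implementation; the process/measurement noise levels, gain, and random-walk state model are all hypothetical.

```python
import random

def track_beam(true_positions, meas_noise_std=0.1, q=1e-4, gain=0.5, seed=1):
    """Scalar Kalman filter + constant-gain controller steering a beam.

    Each step: the quadrant detector observes the residual pointing error
    (true position minus current steering command) plus Gaussian noise; a
    scalar Kalman filter estimates that error; a constant-gain controller
    nudges the deflector command toward it.
    """
    rng = random.Random(seed)
    x_hat, p = 0.0, 1.0            # error estimate and its variance
    r = meas_noise_std ** 2        # measurement-noise variance
    command, commands = 0.0, []
    for x_true in true_positions:
        z = (x_true - command) + rng.gauss(0.0, meas_noise_std)
        p += q                     # predict: random-walk process model
        k = p / (p + r)            # Kalman gain
        x_hat += k * (z - x_hat)   # update residual-error estimate
        p *= (1.0 - k)
        command += gain * x_hat    # constant-gain steering correction
        commands.append(command)
    return commands

cmds = track_beam([1.0] * 200)     # step target: command should settle near 1.0
```

With a fixed step target, the command converges to the target and then fluctuates only at the (filtered) noise level, which is the behavior the paper exploits to keep alignment despite platform vibration.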
2016-01-01
Modeling and prediction of polar organic chemical integrative sampler (POCIS) sampling rates (Rs) for 73 compounds using artificial neural networks (ANNs) is presented for the first time. Two models were constructed: the first was developed ab initio using a genetic algorithm (GSD-model) to shortlist 24 descriptors covering constitutional, topological, geometrical and physicochemical properties and the second model was adapted for Rs prediction from a previous chromatographic retention model (RTD-model). Mechanistic evaluation of descriptors showed that models did not require comprehensive a priori information to predict Rs. Average predicted errors for the verification and blind test sets were 0.03 ± 0.02 L d⁻¹ (RTD-model) and 0.03 ± 0.03 L d⁻¹ (GSD-model) relative to experimentally determined Rs. Prediction variability in replicated models was the same or less than for measured Rs. Networks were externally validated using a measured Rs data set of six benzodiazepines. The RTD-model performed best in comparison to the GSD-model for these compounds (average absolute errors of 0.0145 ± 0.008 L d⁻¹ and 0.0437 ± 0.02 L d⁻¹, respectively). Improvements to generalizability of modeling approaches will be reliant on the need for standardized guidelines for Rs measurement. The use of in silico tools for Rs determination represents a more economical approach than laboratory calibrations. PMID:27363449
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
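The regularizer's central quantity can be illustrated with a simple plug-in estimate. The sketch below only computes the mutual information between discrete classifier responses and true labels from empirical frequencies; it does not reproduce the paper's smoothed entropy model or its gradient optimization.

```python
import math
from collections import Counter

def entropy(labels) -> float:
    """Plug-in Shannon entropy (bits) from empirical frequencies."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def mutual_information(y, r) -> float:
    """I(Y; R) = H(Y) + H(R) - H(Y, R), estimated from samples."""
    joint = list(zip(y, r))
    return entropy(y) + entropy(r) - entropy(joint)

y = [0, 0, 1, 1]
print(mutual_information(y, [0, 0, 1, 1]))  # 1.0 bit: responses fully determine labels
print(mutual_information(y, [0, 1, 0, 1]))  # 0.0 bits: responses carry no label information
```

Maximizing this quantity during training pushes the classifier's responses toward being maximally informative about the true labels, which is exactly the intuition the abstract describes.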
Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L; Alvarez, David A; Patterson, Howard
2012-10-01
The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (R(s)) was evaluated in microcosms; NOM had no significant effect on the sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.05). The results support applying these sampling rates to POCIS deployed in stagnant and turbulent environmental systems without correction for NOM. Copyright © 2012 Elsevier Ltd. All rights reserved.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
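For the Mean Energy Model mentioned above, the entropy maximizer under a mean-"energy" constraint is the Gibbs distribution p_i ∝ exp(-β·E_i). A minimal numerical sketch on a finite alphabet, solving for the Lagrange multiplier β by bisection (the energies and target mean are arbitrary illustrative choices):

```python
import math

def gibbs(energies, beta):
    """Gibbs distribution p_i ∝ exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def maxent_distribution(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Max-entropy distribution with E[energy] = target_mean (bisection on beta)."""
    for _ in range(iters):
        beta = 0.5 * (lo + hi)
        mean = sum(p * e for p, e in zip(gibbs(energies, beta), energies))
        if mean > target_mean:   # mean energy decreases as beta increases
            lo = beta
        else:
            hi = beta
    return gibbs(energies, 0.5 * (lo + hi))

p = maxent_distribution([0.0, 1.0, 2.0], target_mean=1.0)
# With symmetric energies and the mean at the midpoint, beta = 0 and p is uniform.
```

Tightening or loosening the moment constraint moves β away from zero, tilting the distribution toward low- or high-energy symbols while keeping entropy maximal subject to the constraint.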
Kontogianni, Meropi D; Vidra, Nikoletta; Farmaki, Anastasia-Eleni; Koinaki, Stella; Belogianni, Katerina; Sofrona, Stavroula; Magkanari, Flora; Yannakoulia, Mary
2008-01-01
Data from studies in pediatric samples exploring adherence to the Mediterranean diet are scarce. The aim of the present work was to explore adherence to a Mediterranean diet pattern in a representative sample of Greek children and adolescents. The study sample (n = 1305, 3-18 y) was representative o
Kennedy, Karen, E-mail: k.kennedy@uq.edu.a [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia); Hawker, Darryl W. [Griffith University, School of Environment, Nathan QLD 4111 (Australia); Bartkow, Michael E. [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia); Carter, Steve [Queensland Health Forensic and Scientific Services, Coopers Plains QLD 4108 (Australia); Ishikawa, Yukari; Mueller, Jochen F. [University of Queensland, EnTox (National Research Centre for Environmental Toxicology), 39 Kessels Rd., Coopers Plains QLD 4108 (Australia)
2010-01-15
Performance reference compound (PRC) derived sampling rates were determined for polyurethane foam (PUF) passive air samplers in both sub-tropical and temperate locations across Australia. These estimates were on average a factor of 2.7 times higher in summer than winter. The known effects of wind speed and temperature on mass transfer coefficients could not account for this observation. Sampling rates are often derived using ambient temperatures, not the actual temperatures within deployment chambers. If deployment chamber temperatures are in fact higher than ambient temperatures, estimated sampler-air partition coefficients would be greater than actual partition coefficients resulting in an overestimation of PRC derived sampling rates. Sampling rates determined under measured ambient temperatures and estimated deployment chamber temperatures in summer ranged from 7.1 to 10 m³ day⁻¹ and 2.2-6.8 m³ day⁻¹ respectively. These results suggest that potential differences between ambient and deployment chamber temperatures should be considered when deriving PRC-based sampling rates. - Internal deployment chamber temperatures rather than ambient temperatures may be required to accurately estimate PRC-based sampling rates.
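The mechanism described above can be sketched with the commonly used first-order PRC relation, Rs = ln(m0/mt) · K_pa · V_puf / t, where K_pa is the (temperature-dependent) sampler-air partition coefficient. All numbers below are hypothetical; the point is only the direction of the bias: evaluating K_pa at a cooler ambient temperature (higher K_pa) than the true, warmer chamber temperature overestimates Rs.

```python
import math

def sampling_rate(fraction_remaining, k_pa, v_puf_m3, days):
    """Rs (m^3/day) from first-order PRC loss: ln(m0/mt) * K_pa * V_puf / t."""
    return math.log(1.0 / fraction_remaining) * k_pa * v_puf_m3 / days

# Hypothetical deployment: 60% of the PRC remains after 28 days.
v_puf, days, remaining = 2.1e-4, 28.0, 0.6
rs_ambient = sampling_rate(remaining, 1.2e6, v_puf, days)  # K_pa at cooler ambient T
rs_chamber = sampling_rate(remaining, 0.8e6, v_puf, days)  # K_pa at warmer chamber T
# rs_ambient > rs_chamber: using the ambient-temperature K_pa inflates Rs.
```

The hypothetical values land in the same few-m³/day range the study reports, illustrating how the choice of temperature alone shifts the estimate.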
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
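The equal-power MISO strategy described above gives the per-block rate C = log2(1 + (SNR/Nt)·Σ|h_i|²). An illustrative Monte Carlo sketch (our own, not from the paper; the SNR and trial count are arbitrary) shows the ergodic rate rising with antenna count as fading hardens:

```python
import math
import random

def ergodic_rate_miso(nt, snr_linear, trials=20000, seed=7):
    """Ergodic rate (bits/s/Hz) of equal-power MISO over Rayleigh block fading."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # |h_i|^2 for unit-variance complex Gaussian fading is Exp(1)-distributed.
        channel_gain = sum(rng.expovariate(1.0) for _ in range(nt))
        total += math.log2(1.0 + snr_linear / nt * channel_gain)
    return total / trials

r1 = ergodic_rate_miso(1, 10.0)  # single antenna at 10 dB-equivalent linear SNR
r4 = ergodic_rate_miso(4, 10.0)  # four antennas: deeper fades averaged out, rate rises
```

With no CSI at the transmitter, the mean received SNR is identical in both cases; the gain comes purely from the reduced variance of the aggregate channel gain.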
徐革锋; 尹家胜; 韩英; 刘洋; 牟振波
2014-01-01
This study examined the effects of water temperature on the metabolic characteristics and aerobic exercise capacity of juvenile Manchurian trout, Brachymystax lenok (Pallas). The resting metabolic rate (RMR), maximum metabolic rate (MMR), metabolic scope (MS) and critical swimming speed (UCrit) of juveniles were measured at different temperatures (4, 8, 12, 16, 20 °C). The results showed that both the RMR and the MMR increased significantly with increasing water temperature (P<0.05). Compared with the test group at 4 °C, the RMR at 8, 12, 16 and 20 °C increased by 62%, 165%, 390% and 411%, respectively, and the MMR increased by 3%, 34%, 111% and 115%, respectively. However, the MS decreased with increasing water temperature, with the highest MS occurring at 4 °C. UCrit was significantly affected by water temperature (P<0.05), but the variations of UCrit did not follow a clear pattern with temperature. In the aerobic exercise test, the MMR at each temperature level occurred at a swimming speed of 70% UCrit, probably due to the onset of anaerobic metabolism, which caused excess creatine in the body and consequently hindered aerobic metabolism.
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in...
Astri J. Lundervold
2017-08-01
Full Text Available Objective: To investigate parent reports of childhood symptoms of inattention as a predictor of adolescent academic achievement, taking into account the impact of the child's intellectual functioning, in two diagnostically and culturally diverse samples. Method: Samples: (a) an all-female sample in the U.S. predominated by youth with ADHD (Berkeley Girls with ADHD Longitudinal Study [BGALS], N = 202), and (b) a mixed-sex sample recruited from a Norwegian population-based sample (the Bergen Child Study [BCS], N = 93). Inattention and intellectual function were assessed via the same measures in the two samples; academic achievement scores during and beyond high school and demographic covariates were country-specific. Results: Childhood inattention predicted subsequent academic achievement in both samples, with a somewhat stronger effect in the BGALS sample, which included a large subgroup of children with ADHD. Intellectual function was another strong predictor, but the effect of early inattention remained statistically significant in both samples when intellectual function was covaried. Conclusion: The effect of early indicators of inattention on future academic success was robust across the two samples. These results support the use of remediation procedures broadly applied. Future longitudinal multicenter studies with pre-planned common inclusion criteria should be performed to increase our understanding of the importance of inattention in primary school children for concurrent and prospective functioning.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Maddalena, Randy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Parra, Amanda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Russell, Marion [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Wen-Yee [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2013-05-01
Diffusive or passive sampling methods using commercially filled axial-sampling thermal desorption tubes are widely used for measuring volatile organic compounds (VOCs) in air. The passive sampling method provides a robust, cost effective way to measure air quality with time-averaged concentrations spanning up to a week or more. Sampling rates for VOCs can be calculated using tube geometry and Fick’s Law for ideal diffusion behavior or measured experimentally. There is evidence that uptake rates deviate from ideal and may not be constant over time. Therefore, experimentally measured sampling rates are preferred. In this project, a calibration chamber with a continuous stirred tank reactor design and constant VOC source was combined with active sampling to generate a controlled dynamic calibration environment for passive samplers. The chamber air was augmented with a continuous source of 45 VOCs ranging from pentane to diethyl phthalate representing a variety of chemical classes and physiochemical properties. Both passive and active samples were collected on commercially filled Tenax TA thermal desorption tubes over an 11-day period and used to calculate passive sampling rates. A second experiment was designed to determine the impact of ozone on passive sampling by using the calibration chamber to passively load five terpenes on a set of Tenax tubes and then exposing the tubes to different ozone environments with and without ozone scrubbers attached to the tube inlet. During the sampling rate experiment, the measured diffusive uptake was constant for up to seven days for most of the VOCs tested but deviated from linearity for some of the more volatile compounds between seven and eleven days. In the ozone experiment, both exposed and unexposed tubes showed a similar decline in terpene mass over time indicating back diffusion when uncapped tubes were transferred to a clean environment but there was no indication of significant loss by ozone reaction.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, termed the equalized near-maximum-likelihood detector, combines a nonlinear equalizer with a near-maximum-likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.
Sung, Young Hee; Hwang, Moon Sook; Lee, Jee Hyang; Park, Hyung Doo; Ryu, Kwang Hyun; Cho, Myung Sook; Yi, Young Hee; Song, S
2012-06-01
This study was done to compare the rates of hemolysis and repeated sampling in blood samples obtained by a syringe needle versus a vacuum tube needle. A randomized, prospective study was used to evaluate the differences between the two blood sampling methods. The study group consisted of patients seen in the emergency department (ED) for blood sampling to determine electrolyte levels. ED patients were randomly assigned to either the syringe group or the vacuum tube group. All blood samples were collected by experienced ED nurses, and hemolysis was determined by experienced laboratory technologists. Data were analyzed using Fisher's exact test and binary logistic regression. One hundred forty-five valid samples were collected (74 in the syringe group versus 71 in the vacuum tube group). Five of 74 (6.8%) blood samples in the syringe group and 8 of 71 (11.3%) in the vacuum tube group hemolyzed. Repeated blood sampling occurred for 2 of 74 (2.7%) and 3 of 71 (4.2%) in the two groups, respectively. There were no significant differences in the rates of hemolysis or repeated sampling between the two groups (B=1.97, p=.204; B=2.36, p=.345). Venipuncture with syringe needles can be recommended for ED nurses obtaining blood samples.
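The Fisher's exact comparison of hemolysis rates can be reproduced from the reported 2×2 table (syringe: 5 of 74 hemolyzed; vacuum tube: 8 of 71). The sketch below implements the standard two-sided test via the hypergeometric distribution; it is our own minimal implementation, not the study's software.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables (with the observed margins fixed)
    that are no more likely than the observed one.
    """
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, col1)

    def pmf(x):  # hypergeometric P(first cell = x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Small tolerance guards against float ties at p_obs.
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(5, 69, 8, 63)  # hemolyzed / not hemolyzed, by group
# p > 0.05, consistent with the study's finding of no significant difference.
```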
R.C. Clipes
2005-02-01
Full Text Available The methods of esophageal extrusa and hand-plucked sampling of forage were compared to evaluate elephant grass and mombaça grass pastures under rotational grazing. The chemical composition, the fractions of nitrogenous compounds and carbohydrates, and the in vitro dry matter digestibility were estimated. For elephant grass and mombaça grass, 15 and 13 paddocks were used, respectively, with a three-day occupation period, and samples were taken on the third, second and first days of the occupation period. The sampling methodologies were compared within forage species by Student's t test, in a paired arrangement. The contents of total carbohydrates, neutral detergent fiber, acid detergent fiber, cellulose, lignin, and the slow-degradation and undegradable fractions of carbohydrates were higher (P<0.05) when esophageal extrusa was used, for both grasses. The non-fibrous carbohydrates were higher (P<0.05) in hand-plucked samples. Higher values (P<0.05) were found for
Alixon David Reyes Rodríguez
2011-06-01
theoretical points of reference that responded to scientific needs before, but which are insufficient now. It has been observed in national and international conferences, seminars, research encounters, in our universities and in different kinds of scientific meetings that some obsolete assumptions are still being taught, which slows down progress in the Education Sciences and Sports Science. We recognize that some predictive formulas used to calculate the estimated maximum heart rate (EMHR) represented progress for Exercise Science and Exercise Physiology at some point; however, there are important aspects that should be considered. It is not that we dismiss them, but we intend to demonstrate and demystify the use of the traditional formula as almost the only calculation and measurement pattern for EMHR and to offer, from the perspective of other researchers, better possibilities of exercise dosage for certain populations with particular characteristics.
Melymuk, Lisa; Robson, Matthew; Helm, Paul A.; Diamond, Miriam L.
2011-04-01
Polyurethane foam (PUF) passive air samplers (PAS) are a common and highly useful method of sampling persistent organic pollutant (POP) concentrations in air. PAS calibration is necessary to obtain reasonable and comparable semi-quantitative measures of air concentrations. Various calibration methods are found in the literature. Thirty-five studies on PAS use and calibration are examined here, in conjunction with a study involving 10 PAS deployed concurrently in outdoor air with a low-volume air sampler, in order to measure the sampling rates of PUF-PAS for polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), polycyclic musks (PCMs), and polycyclic aromatic hydrocarbons (PAHs). Based on this analysis it is recommended that (1) PAS should be assumed to represent bulk rather than gas-phase compound concentrations, due to the sampling of particle-bound compounds; (2) calibration of PAS sampling rates is more accurately achieved using an active low-volume air sampler rather than depuration compounds, since the former measures gas- and particle-phase compounds and does so continuously over the deployment period of the PAS; and (3) homolog-specific sampling rates based on KOA groupings be used in preference to compound/congener-specific or single sampling rates.
Almond, P. M. [Savannah River Site (SRS), Aiken, SC (United States); Kaplan, D. I. [Savannah River Site (SRS), Aiken, SC (United States); Langton, C. A. [Savannah River Site (SRS), Aiken, SC (United States); Stefanko, D. B. [Savannah River Site (SRS), Aiken, SC (United States); Spencer, W. A. [Savannah River Site (SRS), Aiken, SC (United States); Hatfield, A. [Clemson University, Clemson, SC (United States); Arai, Y. [Clemson University, Clemson, SC (United States)
2012-08-23
The objective of this work was to develop and evaluate a series of methods and validate their capability to measure differences in oxidized versus reduced saltstone. Validated methods were then applied to samples cured under field conditions to simulate Performance Assessment (PA) needs for the Saltstone Disposal Facility (SDF). Four analytical approaches were evaluated using laboratory-cured saltstone samples. These methods were X-ray absorption spectroscopy (XAS), diffuse reflectance spectroscopy (DRS), chemical redox indicators, and thin-section leaching methods. XAS and thin-section leaching methods were validated as viable methods for studying oxidation movement in saltstone. Each method used samples that were spiked with chromium (Cr) as a tracer for oxidation of the saltstone. The two methods were subsequently applied to field-cured samples containing chromium to characterize the oxidation state of chromium as a function of distance from the exposed air/cementitious material surface.
Wiwanitkit Viroj; Waenlor Weerachit
2004-01-01
Toxocara species are the most common roundworms of Canidae and Felidae. Human toxocariasis develops through ingestion of embryonated eggs in contaminated soil. There is no previous report of Toxocara contamination in soil samples from public areas in Bangkok. For this reason, our study was carried out to examine the frequency of Toxocara eggs in public yards in Bangkok, Thailand. A total of 175 sand and clay samples were collected and examined for parasite eggs. According to this study, T...
Hinkle, Joshua C; Weisburd, David; Famega, Christine; Ready, Justin
2013-01-01
Hot spots policing is one of the most influential police innovations, with a strong body of experimental research showing it to be effective in reducing crime and disorder. However, most studies have been conducted in major cities, and we thus know little about whether it is effective in smaller cities, which account for a majority of police agencies. The lack of experimental studies in smaller cities is likely in part due to challenges designing statistically powerful tests in such contexts. The current article explores the challenges of statistical power and "noise" resulting from low base rates of crime in smaller cities and provides suggestions for future evaluations to overcome these limitations. Data from a randomized experimental evaluation of broken windows policing in hot spots are used to illustrate the challenges that low base rates present for evaluating hot spots policing programs in smaller cities. Analyses show low base rates make it difficult to detect treatment effects. Very large effect sizes would be required to reach sufficient power, and random fluctuations around low base rates make detecting treatment effects difficult, irrespective of power, by masking differences between treatment and control groups. Low base rates present strong challenges to researchers attempting to evaluate hot spots policing in smaller cities. As such, base rates must be taken directly into account when designing experimental evaluations. The article offers suggestions for researchers attempting to expand the examination of hot spots policing and other microplace-based interventions to smaller jurisdictions.
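The power problem described above can be illustrated numerically. The sketch below is hypothetical and not from the study: it estimates, by Monte Carlo simulation, the power of a two-sample z-test on Poisson crime counts, showing how a low base rate suppresses power for the same 25% relative treatment effect. The function names and parameter values are illustrative assumptions.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's multiplicative method; adequate for small lambda
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulated_power(base_rate, effect, n_per_arm, n_sims=2000, seed=1):
    """Monte Carlo power of a two-sided z-test (alpha = 0.05) comparing
    mean Poisson crime counts between control and treatment hot spots,
    where treatment reduces the rate by a fraction `effect`."""
    rng = random.Random(seed)
    z_crit = 1.96
    rejections = 0
    for _ in range(n_sims):
        ctrl = [poisson_draw(rng, base_rate) for _ in range(n_per_arm)]
        trt = [poisson_draw(rng, base_rate * (1 - effect))
               for _ in range(n_per_arm)]
        m_c, m_t = sum(ctrl) / n_per_arm, sum(trt) / n_per_arm
        # Poisson mean estimates have variance lambda/n; plug in sample means
        se = math.sqrt((m_c + m_t) / n_per_arm) or 1e-9
        if abs(m_c - m_t) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# same 25% relative reduction, low vs high base rate per hot spot
power_low = simulated_power(base_rate=2.0, effect=0.25, n_per_arm=25)
power_high = simulated_power(base_rate=20.0, effect=0.25, n_per_arm=25)
```

With 25 hot spots per arm, the high-base-rate scenario detects the effect almost every time, while the low-base-rate scenario, typical of a smaller city, usually fails to, which is exactly the design challenge the article describes.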
Mayes, Susan D; Calhoun, Susan L; Baweja, Raman; Mahr, Fauzia
2015-06-01
Little is known about psychiatric diagnoses that place children at risk for bullying and victimization. Mothers of 1,707 children 6-18 yr. rated their child as a bully and a victim (not at all, to very often a problem) on the Pediatric Behavior Scale. Children with psychiatric diagnoses were evaluated in an outpatient psychiatry clinic (M age = 9.2 yr., 68.4% male). Control children were community children not on psychotropic medication and with no neurodevelopmental disorder (M age = 8.7 yr., 43.5% male). Children with autism, intellectual disability, and ADHD-Combined type had higher victim and bully maternal ratings than children in the ADHD-Inattentive, depression, anxiety, eating disorder, and control groups. Eating disorder and controls were the only groups in which most children were not rated a victim or a bully. Comorbid oppositional defiant disorder accounted for the higher bully ratings for ADHD-Combined, autism, and intellectual disability. Victimization ratings did not differ between psychiatric groups. Except for eating disorders, victimization ratings were greater in all groups than in control children, suggesting that most psychiatric disorders place children at risk for victimization, as perceived by their mothers.
Ai Ting; Zhang Ru; Liu Jianfeng; Ren Li
2012-01-01
Using an MTS815 rock mechanics test system, a series of acoustic emission (AE) location experiments were performed under unloading confining pressure while increasing the axial stress. The AE space-time evolution regularities and energy-release characteristics during the deformation and failure process of coal at different loading rates are compared, the influence mechanism of loading rate on microscopic crack evolution was studied by combining the AE characteristics with the macroscopic failure modes of the specimens, and the precursory characteristics of coal failure were analyzed quantitatively. The results indicate that the higher the loading rate, the earlier the AE activity and the main fracture begin. The destruction of the coal body is mainly a function of shear strain at lower loading rates and of tension strain at higher rates, and transforms from brittleness to ductility at critical velocities. When the deformation of the coal is mainly plastic, the amplitude of the AE ringing counting rate increases greatly and the AE energy curves show an obvious "step", which can be defined as the first failure precursor point. Statistics of the AE information show that the strongest AE activity begins when the axial stress level is 92-98%, which can be defined as the second failure precursor point. At smaller loading rates the coal more easily reaches the latter precursor point after the first one, so attention should be paid to preventing dynamic disasters in coal mining when the AE activity reaches the first precursor point.
Stammel, Nadine; Abbing, Eva M; Heeke, Carina; Knaevelsrud, Christine
2015-01-01
The World Health Organization recently proposed significant changes to the posttraumatic stress disorder (PTSD) diagnostic criteria in the 11th edition of the International Classification of Diseases (ICD-11). The present study investigated the impact of these changes in two different post-conflict samples. Prevalence and rates of concurrent depression and anxiety, socio-demographic characteristics, and indicators of clinical severity according to ICD-11 in 1,075 Cambodian and 453 Colombian civilians exposed to civil war and genocide were compared to those according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). Results indicated significantly lower prevalence rates under the ICD-11 proposal (8.1% Cambodian sample and 44.4% Colombian sample) compared to the DSM-IV (11.2% Cambodian sample and 55.0% Colombian sample). Participants meeting a PTSD diagnosis only under the ICD-11 proposal had significantly lower rates of concurrent depression and a lower concurrent total score (depression and anxiety) compared to participants meeting only DSM-IV diagnostic criteria. There were no significant differences in socio-demographic characteristics and indicators of clinical severity between these two groups. The lower prevalence of PTSD according to the ICD-11 proposal in our samples of persons exposed to a high number of traumatic events may counter criticism of previous PTSD classifications to overuse the PTSD diagnosis in populations exposed to extreme stressors. Also another goal, to better distinguish PTSD from comorbid disorders could be supported with our data.
Nagy, T.; van Lien, R.; Willemsen, G.; Proctor, G.; Effting, M.; Fülöp, M.; Bárdos, G.; Veerman, E.C.I.; Bosch, J.A.
2015-01-01
Salivary alpha-amylase (sAA) is used as a sympathetic (SNS) stress marker, though its release is likely co-determined by SNS and parasympathetic (PNS) activation. The SNS and PNS show asynchronous changes during acute stressors, and sAA responses may thus vary with sample timing. Thirty-four
Xu, Yemin; Grobelny, Pawel; von Allmen, Alexander; Knudson, Korben; Pikal, Michael; Carpenter, John F.; Randolph, Theodore W.
2014-01-01
rhGH was lyophilized with various glass-forming stabilizers, employing cycles that incorporated various freezing and annealing procedures to manipulate glass formation kinetics, associated relaxation processes and glass specific surface areas (SSA’s). The secondary structure in the cake was monitored by IR and in reconstituted samples by CD. The rhGH concentrations on the surface of lyophilized powders were determined from ESCA. Tg, SSA’s and water contents were determined immediately after lyophilization. Lyophilized samples were incubated at 323 K for 16 weeks, and the resulting extents of rhGH aggregation, oxidation and deamidation were determined after rehydration. Water contents and Tg were independent of lyophilization process parameters. Compared to samples lyophilized after rapid freezing, rhGH in samples that had been annealed in frozen solids prior to drying, or annealed in glassy solids after secondary drying retained more native-like protein secondary structure, had a smaller fraction of the protein on the surface of the cake and exhibited lower levels of degradation during incubation. A simple kinetic model suggested that the differences in the extent of rhGH degradation during storage in the dried state between different formulations and processing methods could largely be ascribed to the associated levels of rhGH at the solid-air interface after lyophilization. PMID:24623139
Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.
2015-01-01
With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing...
Agreement Rates between Parent and Self-Report on Past ADHD Symptoms in an Adult Clinical Sample
Dias, Gabriela; Mattos, Paulo; Coutinho, Gabriel; Segenreich, Daniel; Saboya, Eloisa; Ayrao, Vanessa
2008-01-01
Objective: To investigate agreement rates between parent and self-report on childhood symptoms of ADHD. Method: Sixty-eight self-referred treatment-naive adults (33 men, 35 women) were interviewed with a modified version of the Kiddie Schedule for Affective Disorders and Schizophrenia-Epidemiological Version (K-SADS-E) and asked about past ADHD…
Ali, Huda Juma'a; Zangana, Jwan M. Sabir
2016-01-01
Background and Objectives: Episiotomy is a surgical incision done during the last stages of labor and delivery to expand the opening of the vagina to prevent tearing of the perineum during the delivery of the baby. The objectives of this study are to estimate episiotomy and perineal injury rate, indication for episiotomy and their association with…
Ana Andreea CIOCA
2017-05-01
Full Text Available The replacement of conventional sample preparation techniques with newer techniques that are automated, faster and more eco-friendly is nowadays desired in every analytical laboratory. One technique with these attributes is Accelerated Solvent Extraction (ASE). To evaluate how successful this method is for the extraction of fat and organochlorine pesticides (OCPs) from dried fish meat samples, we tested two series of diverse fish using a Dionex™ 350 ASE provided by Thermo Scientific™ (Germany). For a broader approach, we added to our investigation 7 polychlorinated biphenyls (PCBs), 3 trichlorobenzenes, 2 tetrachlorobenzenes, 1 pentachlorobenzene and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). The study focused on comparing the recoveries of these analytes from different fish samples after replacing the laboratory's conventional reference method with ASE. The ASE parameters tested were previously used for the extraction of fat and polybrominated diphenyl ethers (PBDEs) from fish samples: temperature 120 °C; static time 5 min; number of cycles 3; flushing volume 25%; nitrogen rinse 90 s; solvent cyclohexane/ethyl acetate (ratio 1:1). The ASE method provided similar, and in some cases better, results when compared to the standard reference method, more rapidly, eco-friendly and safely. Any high or low recoveries of the analytes under study are attributed to random or systematic errors during the clean-up step of the extracts and the quantification by gas chromatography coupled with tandem mass spectrometry (GC-MS/MS).
1993-02-01
typically penetrated from 1 to 5 mm into the sediment column. The sediment penetration of oxygen agrees well with data reported by Westerlund (1989...solution and minimize oxidation of sample during the measurement. 3. Rinse sulfide and reference electrodes into waste container and blot dry with absorbent...standardization techniques. 1. Preparation a. Wash crystals of Na2S·9H2O with deionized water and blot dry. b. Weigh approximately 12 g of Na2S·9H2O and
Gosálvez, M. A.; Pal, Prem; Ferrando, N.; Hida, H.; Sato, K.
2011-12-01
This is part I of a series of two papers dedicated to the presentation of a novel, large throughput, experimental procedure to determine the three-dimensional distribution of the etch rate of silicon in a wide range of anisotropic etchants, including a total of 30 different etching conditions in KOH, KOH+IPA, TMAH and TMAH+Triton solutions at various concentrations and temperatures. The method is based on the use of previously reported, vertically micromachined wagon wheels (WWs) (Wind and Hines 2000 Surf. Sci. 460 21-38 Nguyen and Elwenspoek 2007 J. Electrochem. Soc. 154 D684-91), focusing on speeding up the etch rate extraction process for each WW by combining macrophotography and image processing procedures. The proposed procedure positions the WWs as a realistic alternative to the traditional hemispherical specimen. The obtained, extensive etch rate database is used to perform wet etching simulations of advanced systems, showing good agreement with the experimental counterparts. In part II of this series (Gosálvez et al J. Micromech. Microeng. 21 125008), we provide a theoretical analysis of the etched spoke shapes, a detailed comparison to the etch rates from previous studies and a self-consistency study of the measured etch rates against maximum theoretical values derived from the spoke shape analysis.
Supernovae in the Subaru Deep Field: An Initial Sample, and Type Ia Rate, out to Redshift 1.6
Poznanski, Dovi; Yasuda, Naoki; Foley, Ryan J; Doi, Mamoru; Filippenko, Alexei V; Fukugita, Masataka; Gal-Yam, Avishay; Jannuzi, Buell T; Morokuma, Tomoki; Oda, Takeshi; Schweiker, Heidi; Sharon, Keren; Silverman, Jeffrey M; Totani, Tomonori
2007-01-01
Large samples of high-redshift supernovae (SNe) are potentially powerful probes of cosmic star formation, metal enrichment, and SN physics. We present initial results from a new deep SN survey, based on re-imaging in the R, i', z' bands, of the 0.25 deg2 Subaru Deep Field (SDF), with the 8.2-m Subaru telescope and Suprime-Cam. In a single new epoch consisting of two nights of observations, we have discovered 33 SNe, down to a z'-band magnitude of 26.3 (AB). We have measured the photometric redshifts of the SN host galaxies, obtained Keck spectroscopic redshifts for 17 of the host galaxies, and classified the SNe using the Bayesian photometric algorithm of Poznanski et al. (2007) that relies on template matching. After correcting for biases in the classification, 55% of our sample consists of Type Ia supernovae and 45% of core-collapse SNe. The redshift distribution of the SNe Ia reaches z ~ 1.6, with a median of z ~ 1.2. The core-collapse SNe reach z ~ 1.0, with a median of z ~ 0.5. Our SN sample is comparabl...
Guesmi, Latifa; Menif, Mourad
2016-08-01
In the context of carrying a wide variety of modulation formats and data rates for home networks, the study covers the radio-over-fiber (RoF) technology, where the need for an alternative way of management, automated fault diagnosis, and formats identification is expressed. Also, RoF signals in an optical link are impaired by various linear and nonlinear effects including chromatic dispersion, polarization mode dispersion, amplified spontaneous emission noise, and so on. Hence, for this purpose, we investigated the sampling method based on asynchronous delay-tap sampling in conjunction with a cross-correlation function for the joint bit rate/modulation format identification and optical performance monitoring. Three modulation formats with different data rates are used to demonstrate the validity of this technique, where the identification accuracy and the monitoring ranges reached high values.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
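The decoding idea above, taking a Boltzmann distribution over the ground and excited states rather than only the ground state, can be shown on a toy Ising model small enough for exact enumeration. This is an illustrative sketch, not the paper's annealer experiment; the field and coupling values are arbitrary assumptions.

```python
import itertools
import math

def boltzmann_marginals(h, J, beta):
    """Exact Boltzmann marginals <s_i> for a small Ising model with
    energy E(s) = -sum_i h[i]*s[i] - sum_{i<j} J[i][j]*s[i]*s[j],
    computed by brute-force enumeration of all 2^n spin states."""
    n = len(h)
    Z = 0.0
    mag = [0.0] * n
    for s in itertools.product((-1, 1), repeat=n):
        e = -sum(h[i] * s[i] for i in range(n))
        e -= sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        w = math.exp(-beta * e)
        Z += w
        for i in range(n):
            mag[i] += w * s[i]
    return [m / Z for m in mag]

# a 3-spin ferromagnetic chain with one weakly negative field (arbitrary)
h = [0.4, -0.1, 0.2]
J = [[0.0, 0.3, 0.0],
     [0.0, 0.0, 0.3],
     [0.0, 0.0, 0.0]]
marg = boltzmann_marginals(h, J, beta=1.0)
# maximum-entropy decoding: each bit takes the sign of its marginal
bits = [1 if m >= 0 else -1 for m in marg]
```

Here the middle spin's local field opposes its neighbours, but the finite-temperature marginal, which averages over excited states, still decodes it as +1 because the ferromagnetic couplings dominate.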
Sharmila Vaz
Full Text Available The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just student reports), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective minimum clinically important difference (MCID).
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
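For contrast with the randomized MCMC algorithm described above, a minimal deterministic baseline is the classical augmenting-path (Kuhn) algorithm for the bipartite case, which runs in O(VE) time. This sketch is illustrative only and is not the paper's Glauber-dynamics method; the function name is ours.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum-cardinality matching in a bipartite graph via augmenting
    paths (Kuhn's algorithm, O(V*E)). adj[u] lists the right-side
    neighbours of left vertex u; returns the matching size."""
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rematched
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# three left vertices, three right vertices; a perfect matching exists
size = max_bipartite_matching([[0, 1], [0], [1, 2]], 3)
```

Algorithms such as Hopcroft-Karp and Micali-Vazirani improve on this bound; the paper's contribution is a faster randomized alternative for general graphs.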
孙艳春; 徐聪; 何香涛
2003-01-01
We present a new sample of 37 close major-merger galaxy pairs, selected from the 2-degree field redshift survey of the two-micron all-sky survey (2MASS) galaxies. The selection criteria for our near-infrared pairs are more closely related to galaxy mass (a very important parameter in galaxy evolution models) than those for optical selected samples. Our sample benefits enormously from the high homogeneity and accuracy of the 2MASS database, and false matchings are minimized by the essentially three-dimensional selection procedure. Taking into account the biases, we find that 1.96 (±0.4)% of galaxies are in close major-merger pairs. This indicates a local merging rate of 1.0%, in good agreement with the results in recent studies of optical selected pairs in the local universe. The results derived with our sample have high confidence.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
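The demarking-point rule reported above can be expressed as a small classifier. This is an illustrative sketch: the function name is ours, and the unit (millimetres) is an assumption consistent with typical adult femoral lengths, as the abstract does not state it.

```python
# Demarking points from the study as (definitely-female, definitely-male)
# cutoffs per side; lengths assumed to be in millimetres.
DEMARKING_POINTS = {
    "right": (379.99, 476.70),
    "left": (385.73, 484.49),
}

def sex_from_femoral_length(length, side):
    """Classify sex from maximum femoral length using the study's
    demarking points; lengths between the cutoffs are indeterminate."""
    female_dp, male_dp = DEMARKING_POINTS[side]
    if length > male_dp:
        return "male"
    if length < female_dp:
        return "female"
    return "indeterminate"
```

As the study's low identification percentages suggest, most bones fall between the cutoffs, so the rule is definite only for the extremes of the length distribution.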
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Hawkins, K A; Tulsky, D S
2001-11-01
Since memory performance expectations may be IQ-based, unidirectional base rate data for IQ-Memory Score discrepancies are provided in the WAIS-III/WMS-III Technical Manual. The utility of these data partially rests on the assumption that discrepancy base rates do not vary across ability levels. FSIQ stratified base rate data generated from the standardization sample, however, demonstrate substantial variability across the IQ spectrum. A superiority of memory score over FSIQ is typical at lower IQ levels, whereas the converse is true at higher IQ levels. These data indicate that the use of IQ-memory score unstratified "simple difference" tables could lead to erroneous conclusions for clients with low or high IQ. IQ stratified standardization base rate data are provided as a complement to the "predicted difference" method detailed in the Technical Manual.
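The regression-to-the-mean effect behind these stratified base rates can be reproduced in simulation: when IQ and memory scores are imperfectly correlated, the low-IQ stratum shows memory-above-IQ discrepancies on average and the high-IQ stratum the reverse. The correlation value and cutoffs below are illustrative assumptions, not the WAIS-III/WMS-III parameters.

```python
import math
import random

def discrepancy_by_stratum(n=20000, rho=0.6, seed=7):
    """Simulate IQ and memory index scores as correlated normals
    (mean 100, SD 15) and return the mean memory-minus-IQ discrepancy
    in the low-IQ (<90) and high-IQ (>110) strata."""
    rng = random.Random(seed)
    low, high = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        iq = 100 + 15 * z1
        # memory correlated with IQ at rho, same marginal distribution
        memory = 100 + 15 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        if iq < 90:
            low.append(memory - iq)
        elif iq > 110:
            high.append(memory - iq)
    return sum(low) / len(low), sum(high) / len(high)

low_mean, high_mean = discrepancy_by_stratum()
```

The simulation shows why unstratified "simple difference" tables mislead: the typical discrepancy has opposite signs at the two ends of the IQ spectrum, exactly the pattern the abstract reports.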
Mapes, A A; Kloosterman, A D; Poot, C J de; van Marion, V
2016-07-01
Mobile Rapid-DNA devices have recently become available on the market. These devices can perform DNA analyses within 90 min with an easy 'sample in, answer out' system, with the option of performing comparisons with a DNA database or reference profile. However, these fast mobile systems cannot yet compete with the sensitivity of standard laboratory analysis. For the future this implies that Scene of Crime Officers (SoCOs) need to decide whether to analyse a crime sample with a Rapid-DNA device and get results within 2 h, or to secure and analyse the sample at the laboratory, with a much longer throughput time but higher sensitivity. This study provides SoCOs with evidence-based information on DNA success rates, which can improve their decisions at the crime scene on whether or not to use a Rapid-DNA device. Crime samples with a high success rate in the laboratory will also have the highest potential for Rapid-DNA analysis. These include samples from, e.g., headwear, cigarette ends, articles of clothing, bloodstains, and drinking items.
Altman, S.J.; Tidwell, V.C. [Sandia National Laboratories, Albuquerque, NM (United States); Uchida, M. [Japan Nuclear Cycle Development Inst., Ibaraki (Japan)
2001-08-01
Matrix diffusion is one of the most important contaminant migration retardation processes in crystalline rocks. Performance assessment calculations in various countries assume that only the area of the fracture surface where advection is active provides access to the rock matrix. However, accessibility to the matrix could be significantly enhanced with diffusion into stagnant zones, fracture fillings, and through an alteration rim in the matrix. Laboratory visualization experiments were conducted on granodiorite samples to investigate and quantify diffusion rates within different zones of a Cretaceous granodiorite. Samples were collected from the Kamaishi experimental site in the northern part of the main island of Japan. Diffusion of iodine out of the sample is visualized and rates are measured using x-ray absorption imaging. X-ray images allow for measurements of relative iodine concentration and relative iodine mass as a function of time and two-dimensional space at a sub-millimeter spatial resolution. In addition, two-dimensional heterogeneous porosity fields (at the same resolution as the relative concentration fields) are measured. This imaging technique allows for a greater understanding of the spatial variability of diffusion rates than can be accomplished with standard bulk measurements. It was found that diffusion rates were fastest in partially gouge-filled fractures. Diffusion rates in the recrystallized calcite-based fracture-filling material were up to an order of magnitude lower than in gouge-filled fractures. Diffusion in altered matrix around the fractures was over an order of magnitude lower than that in the gouge-filled fractures. Healed fractures did not appear to have different diffusion rates than the unaltered matrix.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
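As a concrete toy case of the construction above (a sketch under stated assumptions, not the paper's general algorithm): for the feature T(x) = ||x||, the maximum-entropy distribution consistent with a given p(z) is uniform over each sphere ||x|| = z, so sampling reduces to drawing z from p(z) and a uniform direction. The choice p(z) = Exp(1) below is purely illustrative.

```python
import math, random

random.seed(1)

def sample_maxent_x(dim=3):
    """Draw x whose feature z = ||x|| follows a chosen p(z) (here Exp(1)),
    with x maximum-entropy given that constraint: uniform over each
    sphere ||x|| = z (a simple special case of MaxEnt PDF projection)."""
    z = random.expovariate(1.0)          # sample the feature from p(z)
    g = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(v * v for v in g))
    return [z * v / norm for v in g]     # uniform direction, radius z

# Mapping the projected samples back through the feature recovers p(z):
# T(x) = ||x|| ~ Exp(1), so the mean norm should be close to 1.
samples = [sample_maxent_x() for _ in range(50_000)]
norms = [math.sqrt(sum(v * v for v in x)) for x in samples]
print(f"mean ||x|| = {sum(norms) / len(norms):.3f}")
```

This also illustrates why the MaxEnt p(x) is convenient in Monte Carlo work: sampling it costs no more than sampling p(z) plus a direction draw.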
Susanne Schaal
2011-06-01
During the Rwandan genocide of 1994, nearly one million people were killed within a period of 3 months. The objectives of this study were to investigate the levels of trauma exposure and the rates of mental health disorders and to describe risk factors of posttraumatic stress reactions in Rwandan widows and orphans who had been exposed to the genocide. Trained local psychologists interviewed orphans (n=206) and widows (n=194). We used the PSS-I to assess posttraumatic stress disorder (PTSD), the Hopkins Symptom Checklist to assess depression and anxiety symptoms, and the M.I.N.I. to assess risk of suicidality. Subjects reported having been exposed to a high number of different types of traumatic events, with a mean of 11 for both groups. Widows displayed more severe mental health problems than orphans: 41% of the widows (compared to 29% of the orphans) met symptom criteria for PTSD, and a substantial proportion of widows suffered from clinically significant depression (48% versus 34%) and anxiety symptoms (59% versus 42%) even 13 years after the genocide. Over one-third of respondents of both groups were classified as suicidal (38% versus 39%). Regression analysis indicated that PTSD severity was predicted mainly by cumulative exposure to traumatic stressors and by poor physical health status. In contrast, the importance given to religious/spiritual beliefs and economic variables did not correlate with symptoms of PTSD. While a significant portion of widows and orphans continues to display severe posttraumatic stress reactions, widows seem to constitute a particularly vulnerable survivor group. Our results point to the chronicity of mental health problems in this population and show that PTSD may endure over time if not addressed by clinical intervention. Possible implications of poor mental health and the need for psychological intervention are discussed.
张戈
2015-01-01
We study the issue raised in Reference [3]. Under appropriate assumptions and other smoothness conditions, we prove, by a simpler method, the asymptotic existence of solutions to the quasi-likelihood equations in quasi-likelihood nonlinear models, and establish the rate at which the solutions converge to the true value.
Stammel, Nadine; Abbing, Eva M.; Heeke, Carina; Knaevelsrud, Christine
2015-01-01
Background: The World Health Organization recently proposed significant changes to the posttraumatic stress disorder (PTSD) diagnostic criteria in the 11th edition of the International Classification of Diseases (ICD-11). Objective: The present study investigated the impact of these changes in two different post-conflict samples. Method: Prevalence and rates of concurrent depression and anxiety, socio-demographic characteristics, and indicators of clinical severity according to ICD-11 in 1,075 ...
Glassmire, David M; Jhawar, Amandeep; Burchett, Danielle; Tarescavage, Anthony M
2017-05-01
The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) F(p) (Infrequency-Psychopathology) scale was developed to measure overreporting in a manner that was minimally confounded by genuine psychopathology, which was a problem with using the MMPI-2 F (Infrequency) scale among patients with severe mental illness. Although revised versions of both of these scales are included on the MMPI-2-Restructured Form and used in a forensic context, no item-level research has been conducted on their sensitivity to genuine psychopathology among forensic psychiatric inpatients. Therefore, we examined the psychometric properties of the scales in a sample of 438 criminally committed forensic psychiatric inpatients who were adjudicated as not guilty by reason of insanity and had no known incentive to overreport. We found that 20 of the 21 Fp-r items (95.2%) demonstrated endorsement rates ≤ 20%, with 14 of the items (66.7%) endorsed by less than 10% of the sample. Similar findings were observed across genders and across patients with mood and psychotic disorders. The one item endorsed by more than 20% of the sample had a 23.7% overall endorsement rate and significantly different endorsement rates across ethnic groups, with the highest endorsements occurring among Hispanic/Latino (43.3% endorsement rate) patients. Endorsement rates of F-r items were generally higher than for Fp-r items. At the scale level, we also examined correlations with the Restructured Clinical Scales and found that Fp-r demonstrated lower correlations than F-r, indicating that Fp-r is less associated with a broad range of psychopathology. Finally, we found that Fp-r demonstrated slightly higher specificity values than F-r at all T score cutoffs. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
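The tail behaviour stated in the first sentence can be checked numerically. This sketch is illustrative only (it is not the authors' model of incumbent and entrant firms): it draws sizes from a Pareto distribution with tail exponent 1, for which the number of firms larger than S is inversely proportional to S, so S·P(size > S) should be roughly constant.

```python
import random

random.seed(0)

# Draw firm "sizes" from a Pareto distribution with tail exponent 1:
# s = 1/u with u ~ Uniform(0, 1), so P(size > S) = 1/S exactly.
n = 200_000
sizes = [1.0 / random.random() for _ in range(n)]

def count_exceeding(sizes, s):
    """Number of firms with size greater than s."""
    return sum(1 for x in sizes if x > s)

# Under Zipf's law, S * P(size > S) is roughly constant (here ~1).
for s in (10.0, 20.0, 40.0):
    tail = count_exceeding(sizes, s) / n
    print(f"S = {s:5.1f}  S * P(size > S) = {s * tail:.3f}")
```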
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
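The background-only versus background-plus-source comparison can be sketched as a toy Poisson likelihood-ratio test. This is a heavily simplified illustration with assumed counts, not the MLE tool itself, which fits full two-dimensional models with per-observation PSFs in Sherpa.

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood of observed counts given model means."""
    return sum(c * math.log(m) - m - math.lgamma(c + 1)
               for c, m in zip(counts, model))

counts = [3, 4, 12, 5, 2]          # observed counts; excess in bin 2
background = [4.0] * 5             # background level from a joint fit

# Background-plus-source hypothesis: add a source component in the
# central bin, with amplitude set to the excess over background.
src = [0.0, 0.0, counts[2] - 4.0, 0.0, 0.0]
with_source = [b + s for b, s in zip(background, src)]

# The likelihood of a valid source is a function of the two statistics.
delta = 2 * (poisson_loglike(counts, with_source)
             - poisson_loglike(counts, background))
print(f"2 * delta log-likelihood = {delta:.2f}")
```

A large positive value indicates that the background-plus-source hypothesis describes the counts much better than background alone.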
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μi and variances σ²i > 0, i=1,2, assume that there is a semi-order restriction between the ratios of means and standard deviations and that the sample sizes of the two normal populations are different. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restrictions is proposed. For the i=3 case, some connected results and simulations are given.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCLs) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
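The final sentence implies a simple sizing rule: the conductor must be long enough that the voltage per unit length stays at or below the maximum permissible value. A minimal sketch, using the per-centimetre values reported above; the design voltage of 5000 V is an assumed example, not a figure from the study.

```python
# Maximum permissible voltages (V/cm) at 100 ms quench duration,
# as reported in the abstract.
max_permissible = {
    "SJTU CC": 0.72,
    "12 mm AMSC CC": 0.52,
    "4 mm AMSC CC": 1.20,
}

def min_length_m(design_voltage_v, v_per_cm):
    """Minimum CC length (metres) so the per-length voltage stays at
    or below the maximum permissible value."""
    return design_voltage_v / v_per_cm / 100.0

design_voltage = 5000.0      # assumed SFCL voltage drop, volts
for name, v_cm in max_permissible.items():
    print(f"{name}: >= {min_length_m(design_voltage, v_cm):.1f} m")
```

Note that the conductor with the lowest permissible voltage (the 12 mm AMSC CC) requires the greatest length for the same design voltage.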
Michaud, Jean-Philippe; Moreau, Gaétan
2013-07-01
Experimental protocols in forensic entomology successional field studies generally involve daily sampling of insects to document temporal changes in species composition on animal carcasses. One challenge with that method has been to adjust the sampling intensity to obtain the best representation of the community present without affecting the said community. To date, little is known about how such investigator perturbations affect decomposition-related processes. Here, we investigated how different levels of daily sampling of fly eggs and fly larvae affected, over time, carcass decomposition rate and the carrion insect community. Results indicated that daily sampling of carrion insects caused an increase in the volume of eggs laid by dipterans. This study suggests not only that the carrion insect community has a limited resilience to recurrent perturbations, but also that a daily sampling intensity equal to or <5% of the egg and larvae volumes appears adequate to ensure that the system is representative of unsampled conditions. Hence, we propose that this threshold be accepted as best practice in future forensic entomology successional field studies.
Rowe, Rachel E; Townend, John; Brocklehurst, Peter; Knight, Marian; Macfarlane, Alison; McCourt, Christine; Newburn, Mary; Redshaw, Maggie; Sandall, Jane; Silverton, Louise; Hollowell, Jennifer
2014-05-29
To explore whether service configuration and obstetric unit (OU) characteristics explain variation in OU intervention rates in 'low-risk' women. Ecological study using funnel plots to explore unit-level variations in adjusted intervention rates and simple linear regression, stratified by parity, to investigate possible associations between unit characteristics/configuration and adjusted intervention rates in planned OU births. Characteristics considered: OU size, presence of an alongside midwifery unit (AMU), proportion of births in the National Health Service (NHS) trust planned in midwifery units or at home, and midwifery 'under' staffing. 36 OUs in England. 'Low-risk' women with a 'term' pregnancy planning vaginal birth in a stratified, random sample of 36 OUs. Adjusted rates of intrapartum caesarean section, instrumental delivery and two composite measures capturing birth without intervention ('straightforward' and 'normal' birth). Funnel plots showed unexplained variation in adjusted intervention rates. In NHS trusts where proportionately more non-OU births were planned, adjusted intrapartum caesarean section rates in the planned OU births were significantly higher (nulliparous: R²=31.8%, coefficient=0.31, p=0.02; multiparous: R²=43.2%, coefficient=0.23, p=0.01), and for multiparous women, rates of 'straightforward' (R²=26.3%, coefficient=-0.22, p=0.01) and 'normal' birth (R²=17.5%, coefficient=0.24, p=0.01) were lower. The size of the OU (number of births), midwifery 'under' staffing levels (the proportion of shifts where there were more women than midwives) and the presence of an AMU were associated with significant variation in some interventions. Trusts with greater provision of non-OU intrapartum care may have higher intervention rates in planned 'low-risk' OU births, but at a trust level this is likely to be more than offset by lower intervention rates in planned non-OU births. Further research using high quality data on unit characteristics and
Gervais, V.
2004-11-01
The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilaterality constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm adapted to the unilateral constraint used to solve the schemes is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy to solve the linear system by an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which a variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Ellis, Robert J; Zhu, Bilei; Koenig, Julian; Thayer, Julian F; Wang, Ye
2015-09-01
As the literature on heart rate variability (HRV) continues to burgeon, so too do the challenges faced with comparing results across studies conducted under different recording conditions and analysis options. Two important methodological considerations are (1) what sampling frequency (SF) to use when digitizing the electrocardiogram (ECG), and (2) whether to interpolate an ECG to enhance the accuracy of R-peak detection. Although specific recommendations have been offered on both points, the evidence used to support them can be seen to possess a number of methodological limitations. The present study takes a new and careful look at how SF influences 24 widely used time- and frequency-domain measures of HRV through the use of a Monte Carlo-based analysis of false positive rates (FPRs) associated with two-sample tests on independent sets of healthy subjects. HRV values from the first sample were calculated at 1000 Hz, and HRV values from the second sample were calculated at progressively lower SFs (and either with or without R-peak interpolation). When R-peak interpolation was applied prior to HRV calculation, FPRs for all HRV measures remained very close to 0.05 (i.e. the theoretically expected value), even when the second sample had an SF well below 100 Hz. Without R-peak interpolation, all HRV measures held their expected FPR down to 125 Hz (and far lower, in the case of some measures). These results provide concrete insights into the statistical validity of comparing datasets obtained at (potentially) very different SFs; comparisons which are particularly relevant for the domains of meta-analysis and mobile health.
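The mechanism behind the sampling-frequency effect can be sketched directly: digitizing the ECG at a lower SF snaps each R-peak time onto a coarser grid, which perturbs the RR intervals and hence time-domain HRV measures such as SDNN. This toy simulation (synthetic RR intervals, no real ECG, and not the study's Monte Carlo FPR procedure) shows the quantization step without R-peak interpolation.

```python
import random, statistics

random.seed(2)

def sdnn_at_sf(rr_ms, sf_hz):
    """SDNN (ms) after quantizing R-peak times to the sampling grid of
    an ECG digitized at sf_hz (no R-peak interpolation)."""
    step_ms = 1000.0 / sf_hz
    t, peaks = 0.0, [0.0]
    for rr in rr_ms:
        t += rr
        peaks.append(round(t / step_ms) * step_ms)   # snap to grid
    quantized_rr = [b - a for a, b in zip(peaks, peaks[1:])]
    return statistics.stdev(quantized_rr)

# Synthetic normal-to-normal intervals: mean 800 ms, SD 50 ms.
rr = [random.gauss(800.0, 50.0) for _ in range(3000)]
for sf in (1000, 250, 125, 50):
    print(f"SF = {sf:4d} Hz  SDNN = {sdnn_at_sf(rr, sf):.2f} ms")
```

At 1000 Hz the grid is 1 ms and the quantization error is negligible; at lower SFs the grid coarsens, which is the distortion that R-peak interpolation is meant to correct.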
Vrieze, Scott I; Grove, William M
2008-06-01
The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.) Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in
Haack, Lauren M; Kapke, Theresa L; Gerdes, Alyson C
2016-07-01
The Latino youth population is rapidly growing and expected to comprise nearly 40% of the total youth population by 2060. Unfortunate disparities exist in the United States (U.S.), such that young Latinos are less likely than non-Hispanic Whites to receive and benefit from mental health services. In order to identify and prioritize specific areas of mental health outreach, the current study examined preliminary rates, associations, and predictors of child psychopathology in a convenience sample of Latino youth. 123 Spanish and English speaking Latino parents of school-aged children completed a series of questionnaires regarding child and family functioning. Latino youth in the current sample demonstrated comparable rates of psychopathology to non-referred, normative samples. Parental acculturation (particularly Separated parental acculturation status: high orientation to Latino culture and low orientation to U.S. mainstream culture) was associated with an increased prevalence of clinically significant psychopathology across several domains, and socioeconomic status was associated with an increased prevalence of thought problems. Additionally, Separated parental acculturation status significantly predicted the prevalence of clinically significant anxious/depressed problems, such that youth of parents displaying Separated acculturation status were significantly more represented in the clinically-elevated groups than the functional groups. These preliminary results suggest that prioritizing outreach to Latino youth of parents maintaining orientation to Latino culture but not U.S. mainstream culture may be necessary in order to begin addressing existing mental health disparities in the U.S.
Li, Weidong; Chornock, Ryan; Filippenko, Alexei V; Poznanski, Dovi; Ganeshalingam, Mohan; Wang, Xiaofeng; Modjaz, Maryam; Jha, Saurabh; Foley, Ryan J; Smith, Nathan
2010-01-01
This is the second paper of a series in which we present new measurements of the observed rates of supernovae (SNe) in the local Universe, determined from the Lick Observatory Supernova Search (LOSS). In this paper, a complete SN sample is constructed, and the observed (uncorrected for host-galaxy extinction) luminosity functions (LFs) of SNe are derived. These LFs solve two issues that have plagued previous rate calculations for nearby SNe: the luminosity distribution of SNe and the host-galaxy extinction. We select a volume-limited sample of 175 SNe, collect photometry for every object, and fit a family of light curves to constrain the peak magnitudes and light-curve shapes. The volume-limited LFs show that they are not well represented by a Gaussian distribution. There are notable differences in the LFs for galaxies of different Hubble types (especially for SNe Ia). We derive the observed fractions for the different subclasses in a complete SN sample, and find significant fractions of SNe II-L (10%), IIb (...
Calculation of Maximum Waste Heat and Recovery Rate of Liquid and Gas Fuels
丛永杰
2016-01-01
The consumption of various liquid oil and gas fuels is growing rapidly within China's energy structure. The discharged flue gas temperature is generally 160℃-180℃ after these fuels are combusted. This part of the energy can be used as secondary energy, though its grade is low. Liquid and gas fuels contain a large amount of hydrogen, so water vapor is a main ingredient of the flue gas. In this paper, the waste heat quantity and recovery rate of the flue gas of 0# light diesel oil and natural gas are calculated as the gas is cooled from 180℃ to 25℃ at 1 atm. In the 0# light diesel flue gas, the vapor's share of the residual heat is about 55.08%; in the natural gas flue gas it is about 79.41%. Moreover, the vapor's latent heat accounts for about 3/4 of its heat. Therefore, recovering the latent heat of the vapor is of great significance for the recovery of low-temperature waste heat. Effective recovery amounts to a secondary use of primary energy and accords with the national energy-saving and emission-reduction policies of the 13th Five-Year Plan period.
Morrison, Shane A; Belden, Jason B
2016-09-30
Performance reference compounds (PRCs) can be spiked into passive samplers prior to deployment. If the dissipation kinetics of PRCs from the sampler correspond to analyte accumulation kinetics, then PRCs can be used to estimate in-situ sampling rates, which may vary depending on environmental conditions. Under controlled laboratory conditions, the effectiveness of PRC corrections on the prediction accuracy of water concentrations was evaluated using nylon organic chemical integrative samplers (NOCIS). Results from PRC calibrations suggest that PRC elimination occurs faster under higher flow conditions; however, minimal differences were observed for PRC elimination between fast flow (9.3 cm/s) and slow flow (5.0 cm/s) conditions. Moreover, minimal differences were observed for PRC elimination from Dowex Optipore L-493; therefore, PRC corrections did not improve results for NOCIS configurations containing Dowex Optipore L-493. Regardless, results suggest that PRC corrections were beneficial for NOCIS configurations containing Oasis HLB; however, due to differences in the flow dependencies of analyte sampling rates and PRC elimination rates across the investigated flow regimes, the use of multiple PRC corrections was necessary. As such, a "Best-Fit PRC" approach was utilized for Oasis HLB corrections using caffeine-(13)C3, DIA-d5, or no correction, based on the relative flow dependencies of analytes and these PRCs. Although PRC corrections reduced the variability when in-situ conditions differed from laboratory calibrations (e.g. static versus moderate flow), applying PRC corrections under similar flow conditions increases variability in estimated values.
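The PRC logic described above can be sketched with the generic first-order approach: the in-situ sampling rate is the laboratory-calibrated rate scaled by the ratio of PRC elimination rates observed in the field versus in the calibration. All numbers below are assumed examples, not the NOCIS calibration data.

```python
import math

def elimination_rate(fraction_retained, days):
    """First-order PRC elimination rate ke (1/day) from the fraction
    of the spiked PRC still in the sampler after deployment."""
    return -math.log(fraction_retained) / days

def in_situ_sampling_rate(rs_cal, ke_field, ke_cal):
    """PRC-corrected in-situ sampling rate (L/day)."""
    return rs_cal * ke_field / ke_cal

def water_concentration(n_ng, rs, days):
    """Time-weighted average water concentration (ng/L), assuming the
    sampler is still in the linear (integrative) uptake phase."""
    return n_ng / (rs * days)

ke_cal = elimination_rate(0.40, 14)     # lab: 60% of PRC lost in 14 d
ke_field = elimination_rate(0.30, 14)   # field: faster loss (more flow)
rs = in_situ_sampling_rate(0.05, ke_field, ke_cal)  # assumed 0.05 L/d
print(f"corrected Rs = {rs:.4f} L/day, "
      f"Cw = {water_concentration(25.0, rs, 14):.1f} ng/L")
```

Faster PRC loss in the field than in the calibration scales the sampling rate upward, which lowers the estimated water concentration relative to an uncorrected calculation.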
Nunes, J M; Buhler, S; Roessli, D; Sanchez-Mazas, A
2014-05-01
In this review, we present for the first time an integrated version of the Gene[rate] computer tools which have been developed during the last 5 years to analyse human leukocyte antigen (HLA) data in human populations, as well as the results of their application to a large dataset of 145 HLA-typed population samples from Europe and its two neighbouring areas, North Africa and West Asia, now forming part of the Gene[va] database. All these computer tools and genetic data are, from now, publicly available through a newly designed bioinformatics platform, HLA-net, here presented as a main achievement of the HLA-NET scientific programme. The Gene[rate] pipeline offers user-friendly computer tools to estimate allele and haplotype frequencies, to test Hardy-Weinberg equilibrium (HWE), selective neutrality and linkage disequilibrium, to recode HLA data, to convert file formats, to display population frequencies of chosen alleles and haplotypes in selected geographic regions, and to perform genetic comparisons among chosen sets of population samples, including new data provided by the user. Both numerical and graphical outputs are generated, the latter being highly explicit and of publication quality. All these analyses can be performed on the pipeline after scrupulous validation of the population sample's characterisation and HLA typing reporting according to HLA-NET recommendations. The Gene[va] database offers direct access to the HLA-A, -B, -C, -DQA1, -DQB1, -DRB1 and -DPB1 frequencies and summary statistics of 145 population samples having successfully passed these HLA-NET 'filters', and representing three European subregions (South-East, North-East and Central-West Europe) and two neighbouring areas (North Africa, as far as Sudan, and West Asia, as far as South India). The analysis of these data, summarized in this review, shows a substantial genetic variation at the regional level in this continental area. These results have main implications for population genetics
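The Hardy-Weinberg equilibrium testing mentioned above can be illustrated in its simplest form. This is a basic chi-square test for a single biallelic locus; the Gene[rate] tools implement far more general estimators (multi-allelic HLA data, typing ambiguities), so this sketch only shows the underlying HWE idea, with made-up genotype counts.

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 df) comparing observed genotype counts
    with Hardy-Weinberg expectations from the allele frequencies."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)      # frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A sample in perfect HWE gives a zero statistic ...
print(hwe_chi_square(25, 50, 25))        # p = q = 0.5
# ... while heterozygote deficiency inflates it well past the
# 5% critical value (about 3.84 for 1 df).
print(hwe_chi_square(40, 20, 40))
```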
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Haggerty, Greg; Siefert, Caleb; Stoycheva, Valentina; Sinclair, Samuel Justin; Baity, Matthew; Zodan, Jennifer; Mehra, Ashwin; Chand, Vijay; Blais, Mark A
2014-01-01
Growing economic pressure on inpatient services for adolescents has resulted in fewer clinicians available to provide individual psychotherapy. As a result, inpatient treatment trends have favored group psychotherapy modalities and psychopharmacological interventions. Currently, no clinician-rated measures exist to assist clinicians in determining who would be better able to utilize individual psychotherapy on inpatient units. The current study sought to demonstrate the utility of the Readiness for Inpatient Psychotherapy Scale (RIPS) with an adolescent inpatient sample, using the RIPS as it is intended to be used in everyday practice. Results from the authors' analyses reveal that the RIPS demonstrates good psychometrics and interrater reliability, as well as construct validity.
Huckins, J.N.; Petty, J.D.; Orazio, C.E.; Lebo, J.A.; Clark, R.C.; Gibson, V.L.; Gala, W.R.; Echols, K.R.
1999-01-01
The use of lipid-containing semipermeable membrane devices (SPMDs) is becoming commonplace, but very little sampling rate data are available for the estimation of ambient contaminant concentrations from analyte levels in exposed SPMDs. We determined the aqueous sampling rates (Rs values; expressed as effective volumes of water extracted daily) of the standard (commercially available) 1-g triolein SPMD for 15 of the priority pollutant (PP) polycyclic aromatic hydrocarbons (PAHs) at multiple temperatures and concentrations. Under the experimental conditions of this study, recovery-corrected Rs values for PP PAHs ranged from ~1.0 to 8.0 L/d. These values would be expected to be influenced by significant changes (relative to this study) in water temperature, degree of biofouling, and current velocity/turbulence. Included in this paper is a discussion of the effects of temperature and the octanol-water partition coefficient (Kow); the impacts of biofouling and hydrodynamics are reported separately. Overall, SPMDs responded proportionally to aqueous PAH concentrations; i.e., SPMD Rs values and SPMD-water concentration factors were independent of aqueous concentrations. Temperature effects (10, 18, and 26 °C) on Rs values appeared to be complex but were relatively small.
Movahed, Mohammad Reza; Ramaraj, Radhakrishnan; Hashemzadeh, Mehrnoosh; Jamal, M Mazen; Hashemzadeh, Mehrtash
2009-07-01
Advances in the management of atherosclerosis risk factors have been dramatic in the previous 10 years. The goal of this study was to evaluate any decrease in the age-adjusted incidence of acute ST-elevation myocardial infarction (STEMI) in a very large database of inpatient admissions from 1988 to 2004. The Nationwide Inpatient Sample database was used to calculate the age-adjusted rate for STEMI from 1988 to 2004 retrospectively. Specific International Classification of Diseases, Ninth Revision, codes for MIs consistent with STEMI were used. Patient demographic data were also analyzed and adjusted for age. The Nationwide Inpatient Sample database contained 1,352,574 patients >40 years of age who had a diagnosis of STEMI from 1988 to 2004. Mean age for these patients was 66.06 +/- 13.69 years. Men had almost 2 times the age-adjusted STEMI rate of women (men 62.4%, women 37.6%). From 1988 the age-adjusted rate for all acute STEMIs remained steady for 8 years (108.3 per 100,000, 95% confidence interval [CI] 99.0 to 117.5, in 1988 and 102.5 per 100,000, 95% CI 94.7 to 110.4, in 1996). However, from 1996 onward, the age-adjusted incidence of STEMI steadily decreased, reaching half the incidence of the previous 8 years by 2004 (50.0 per 100,000, 95% CI 46.5 to 53.5). In conclusion, the age-adjusted STEMI rate remained steady from 1988 to 1996, with a steady linear decrease to half by 2004. The cause of the steady decrease in STEMI rate most likely reflects advances in the management of patients with atherosclerosis.
Hardiman, Nigel; Dietz, Kristina Charlotte; Bride, Ian; Passfield, Louis
2017-01-01
Land managers of natural areas are under pressure to balance demands for increased recreation access with protection of the natural resource. Unintended dispersal of seeds by visitors to natural areas has high potential for weedy plant invasions, with initial seed attachment an important step in the dispersal process. Although walking and mountain biking are popular nature-based recreation activities, there are few studies quantifying the propensity for seed attachment and transport rate on boot soles, and none for bike tires. Attachment and transport rates can potentially be affected by a wide range of factors for which field testing can be time-consuming and expensive. We pilot tested a sampling methodology for measuring seed attachment and transport rate in a soil matrix carried on boot soles and bike tires traversing a known quantity and density of a seed analog (beads) over different distances and soil conditions. We found the percentage attachment rate on boot soles was much lower overall than previously reported, but that boot soles had a higher propensity for seed attachment than bike tires in almost all conditions. We believe our methodology offers a cost-effective option for researchers seeking to manipulate and test the effects of different influencing factors on these two dispersal vectors.
赵庆乐; 吴怀春; 李海燕; 张世红
2011-01-01
equal to half of a precession cycle is the optimal sampling interval for cyclostratigraphic analysis. All Milankovitch signals can be identified, and at the same time the workload is minimized, by using this optimal sampling interval. This interval should be determined according to the mean accumulation rate of the target successions during field sampling.
Marulli, Federico; Veropalumbo, Alfonso; Moscardini, Lauro; Cimatti, Andrea; Dolag, Klaus
2017-03-01
Aims: Redshift-space clustering anisotropies caused by cosmic peculiar velocities provide a powerful probe to test the gravity theory on large scales. However, to extract unbiased physical constraints, the clustering pattern has to be modelled accurately, taking into account the effects of non-linear dynamics at small scales, and properly describing the link between the selected cosmic tracers and the underlying dark matter field. Methods: We used a large hydrodynamic simulation to investigate how the systematic error on the linear growth rate, f, caused by model uncertainties, depends on sample selections and co-moving scales. Specifically, we measured the redshift-space two-point correlation function of mock samples of galaxies, galaxy clusters and active galactic nuclei, extracted from the Magneticum simulation, in the redshift range 0.2 ≤ z ≤ 2, and adopting different sample selections. We estimated fσ8 by modelling both the monopole and the full two-dimensional anisotropic clustering, using the dispersion model. Results: We find that the systematic error on fσ8 depends significantly on the range of scales considered for the fit. If the latter is kept fixed, the error depends on both redshift and sample selection due to the scale-dependent impact of non-linearities if not properly modelled. Concurrently, we show that it is possible to achieve almost unbiased constraints on fσ8 provided that the analysis is restricted to a proper range of scales that depends non-trivially on the properties of the sample. This can have a strong impact on multiple tracer analyses, and when combining catalogues selected at different redshifts.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
44 CFR 208.12 - Maximum Pay Rate Table.
2010-10-01
... reimbursement and Backfill, for the System Member's actual compensation or the actual compensation of the ... (44 CFR Part 208, Department of Homeland Security, Disaster Assistance, National Urban Search and Rescue Response System, General ...)
Entrainment and maximum vapour flow rate of trays
Van Sinderen, AH; Wijn, EF; Zanting, RWJ
This is a report on free entrainment measurements in a small (0.20 m × 0.20 m) air-water column. An adjustable weir controlled the liquid height on a test tray. Several sieve and valve trays were studied. The results were interpreted with a two- or three-layer model of the two-phase mixture on the
Nitric-glycolic flowsheet testing for maximum hydrogen generation rate
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is developing for implementation a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.
MAXIMUM PRODUCTION OF TRANSMISSION MESSAGES RATE FOR SERVICE DISCOVERY PROTOCOLS
Intisar Al-Mejibli
2011-12-01
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue represents a serious problem for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages and suggests the minimum queue sizes for the routers, to manage the traffic and minimise the number of dropped messages caused by congestion, queue overflow, or both. The algorithm has been applied to the Universal Plug and Play (UPnP) protocol using the ns2 simulator. It was tested with the routers connected in two configurations, centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show an improvement in the number of dropped messages among the routers.
veteran athletes exercise at higher maximum heart rates than are ...
Participants completed a questionnaire, a full medical examination and a routine sECG. The veteran athletes exercised at higher maximum heart rates during sporting activities than during stress testing in the laboratory (P < 0.01).
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Labate, L.; Andreassi, M. G.; Baffigi, F.; Bizzarri, R.; Borghini, A.; Bussolino, G. C.; Fulgentini, L.; Ghetti, F.; Giulietti, A.; Köster, P.; Lamia, D.; Levato, T.; Oishi, Y.; Pulignani, S.; Russo, G.; Sgarbossa, A.; Gizzi, L. A.
2016-07-01
We present a laser-driven source of electron bunches with average energy 260 keV and picosecond duration, which has been set up for radiobiological tests covering the previously untested sub-MeV energy range. Each bunch combines high charge with short duration and sub-millimeter range into a record instantaneous dose rate, as high as 10^9 Gy/s. The source can be operated at 10 Hz and its average dose rate is 35 mGy/s. Both the high instantaneous dose rate and the high level of relative biological effectiveness attached to sub-MeV electrons make this source very attractive for studies of ultrafast radiobiology on thin cell samples. The source reliability, in terms of shot-to-shot stability of features such as mean energy, bunch charge and transverse beam profile, is discussed, along with a dosimetric characterization. Finally, a few preliminary biological tests performed with this source are presented.
Nisar Ahmad
2014-01-01
Radon concentration, exhalation rate, radium activity and annual effective dose have been measured from baked bricks, unbaked bricks and cement samples commonly used as construction material in the dwellings of Dera Ismail Khan City, Pakistan. CR-39 based NRPB radon dosimeters and RAD7 have been used as passive and active devices. The values of radon concentration for baked bricks, unbaked bricks and cement obtained from the passive and active techniques were found to be in good agreement. Average radon exhalation rates in baked bricks, unbaked bricks and cement were (1.202±0.212), (1.419±0.230) and (0.386±0.117) Bq m^-2 h^-1, with corresponding average radium activities of (0.956±0.169), (1.13±0.184) and (0.323±0.098) Bq/kg and annual effective doses of (33.96±5.99), (40.3±6.51) and (10.94±3.28) µSv y^-1, respectively. Radon concentration, exhalation rate and the corresponding radium activity and annual effective dose were higher in unbaked bricks than in baked bricks and cement, but all values were well below the world average values of 57.600 Bq m^-2 h^-1, 1100 µSv y^-1 and 370 Bq/kg, respectively.
Christina Darviri
2012-03-01
Self-rated health (SRH) is a health measure related to future health, mortality, healthcare services utilization and quality of life. Various sociodemographic, health and lifestyle determinants of SRH have been identified in different populations. The aim of this study is to extend the SRH literature in the Greek population. This is a cross-sectional study conducted in rural communities between 2001 and 2003. Interviews eliciting basic demographic, health-related and lifestyle information (smoking, physical activity, diet, quality of sleep and religiosity) were conducted. The sample consisted of 1,519 participants, representative of the rural population of Tripoli. Multinomial regression analysis was conducted to identify putative SRH determinants. Among the 1,519 participants, 489 (32.2%), 790 (52%) and 237 (15.6%) rated their health as "very good", "good" and "poor", respectively. Female gender, older age, lower level of education and impaired health were all associated with worse SRH, accounting for 16.6% of SRH variance. Regular exercise, a healthier diet, better sleep quality and better adherence to religious habits were related to better health ratings, after adjusting for sociodemographic and health-related factors. BMI and smoking did not reach significance, while exercise and physical activity exhibited significant correlations, though not consistently across SRH categories. Our results support previous findings indicating that people following a more proactive lifestyle pattern tend to rate their health better. The role of stress-related neuroendocrinologic mechanisms in SRH and health in general is also discussed.
Holm, Malin; Pettersen, Frank Olav; Kvale, Dag
2008-01-01
The immunopathogenic factor programmed cell death 1 (PD-1) was compared to CD38 and HIV RNA in predicting actual CD4+ T cell loss rate indicative for clinical progression. This cross sectional exploratory study included 50 consecutive, healthy HIV-infected patients off antiretroviral therapy (ART); 43 had the required observation times > 12 months. PD-1 and CD38 were determined on various T cell subsets by FACS analyses in fresh and later in parallel cryopreserved samples. Here more rapid progressors were relatively defined by having CD4 loss rates < median at -45.7/microl/year. PD-1 and CD38 densities in fresh blood were lower (p<0.001) in patients on ART (n=14) and seronegative controls (n=8). CD4 loss rates correlated significantly to current HIV RNA (R=-0.30), CD38 (R=-0.33) and PD-1 densities (R=-0.38) on CD8+ T cells, and best to DeltaCD38, i.e. the difference in CD38 between the PD-1+CD8+ and CD8+ subsets (R=-0.51). PD-1 was highest on the CD27+CD28-CD8+ subset with best correlation to progression (R=-0.54) in rapid progressors. Logistic regression models from HIV RNA, CD38 and PD-1 predicting rapid progression included PD-1 as best independent variable in combination with DeltaCD38 or CD38, supported by similar results from multiple regression analyses. PD-1 did not correlate with any of the other candidate variables. Cryopreservation reduced the CD38+ and PD-1+ fractions but corresponding densities became more suppressed through a non-linear loss most pronounced in CD38hi/PD-1hi cells with loss of predictive power. In conclusion, PD-1 was the best independent predictor for CD4 loss rates in fresh blood compared with CD38 and HIV RNA.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
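In symbols, with $C$ the compact metric space and $\Phi$ the maximum functional on $C(C)$, the characterization described above can be stated as follows (a standard formulation; the notation here is ours, not the note's):

```latex
\Phi(f) = \max_{x \in C} f(x), \qquad
\partial\Phi(f) = \Bigl\{\, \mu \in \mathcal{P}(C) \;:\;
  \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in C} f(x) \,\Bigr\},
```

where $\mathcal{P}(C)$ denotes the Borel probability measures on $C$, identified with elements of the dual space $C(C)^*$ via the Riesz representation theorem.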
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first solves the optimization problem without considering the equal margin posteriors from the two views, and the second then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension ... (20 CFR Part 226, Employees' Benefits, Computing Employee, Spouse, and Divorced Spouse Annuities; Railroad Retirement Family Maximum, § 226.52 ...)
Sun, Jindong; Feng, Zhaozhong; Leakey, Andrew D B; Zhu, Xinguang; Bernacchi, Carl J; Ort, Donald R
2014-09-01
The responses of CO2 assimilation to [CO2] (A/Ci curves) were investigated at two developmental stages (R5 and R6) and in several soybean cultivars grown under two levels of CO2: the ambient level of 370 μbar versus the elevated level of 550 μbar. The A/Ci data were analyzed and compared by either combined or separate iterations of the Rubisco-limited photosynthesis (Ac) and/or the RuBP-limited photosynthesis (Aj) using various curve-fitting methods: the linear 2-segment model; the non-rectangular hyperbola model; the rectangular hyperbola model; the constant rate of electron transport (J) method; and the variable J method. Inconsistency was found among the various methods in the estimation of the maximum rate of carboxylation (Vcmax), the mitochondrial respiration rate in the light (Rd) and mesophyll conductance (gm). The analysis showed that the inconsistency was due to inconsistent estimates of gm values, which decreased with an instantaneous increase in [CO2] and varied with the transition Ci cut-off between Rubisco-limited and RuBP-regeneration-limited photosynthesis, and to over-parameterization of the non-linear curve fit when gm is included. We propose an alternate approach to A/Ci curve-fitting for estimating Vcmax, Rd, Jmax and gm with the various A/Ci curve-fitting methods. The study indicated that down-regulation of photosynthetic capacity by elevated [CO2] and leaf aging was due partly to the decrease in the maximum rate of carboxylation and partly to the decrease in gm. Mesophyll conductance lowered photosynthetic capacity by 18% on average for the soybean plants in this study.
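A hedged sketch (not the authors' code) of the Rubisco-limited branch of the Farquhar-von Caemmerer-Berry model that underlies these A/Ci fits, with a closed-form least-squares estimate of Vcmax under the simplifying assumption that Rd and the kinetic constants are known; the Kc, Ko, O and Γ* values are illustrative textbook-style numbers, not values from this study:

```python
# Rubisco-limited (Ac) branch of the FvCB photosynthesis model and a
# closed-form Vcmax estimate, assuming Rd and kinetics are fixed/known.

def rubisco_limited_A(Ci, Vcmax, Rd, Kc=404.9, Ko=278.4, O=210.0, gamma_star=42.75):
    """Rubisco-limited net assimilation (umol m-2 s-1); Ci in ubar."""
    Km = Kc * (1.0 + O / Ko)  # effective Michaelis constant for CO2
    return Vcmax * (Ci - gamma_star) / (Ci + Km) - Rd

def fit_vcmax(ci_values, a_values, Rd, Kc=404.9, Ko=278.4, O=210.0, gamma_star=42.75):
    """Least-squares Vcmax for the linear-in-Vcmax model (A + Rd) = Vcmax * x(Ci)."""
    Km = Kc * (1.0 + O / Ko)
    xs = [(ci - gamma_star) / (ci + Km) for ci in ci_values]
    ys = [a + Rd for a in a_values]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic self-check: data generated with Vcmax = 80 should be recovered.
ci = [50.0, 100.0, 150.0, 200.0, 250.0]
a = [rubisco_limited_A(c, Vcmax=80.0, Rd=1.5) for c in ci]
vcmax_hat = fit_vcmax(ci, a, Rd=1.5)  # ≈ 80.0
```

In practice Rd and gm are co-estimated rather than fixed, which is exactly where the over-parameterization issues discussed in the abstract arise.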
Ignacio Jesús Chirosa Ríos
2011-05-01
This study proposes a logarithmic equation for the indirect estimation of maximum heart rate (HRmax) in team-sport players in integrated game situations. The experimental sample comprised thirteen players (24 ± 3 years) from a División de Honor B handball team. HRmax was initially measured by means of the Course Navette test. Twenty-one training sessions were then carried out in which heart rate was recorded continuously and the rating of perceived exertion (RPE) was collected for each task. A linear regression analysis yielded an equation predicting HRmax from the maximum heart rates of the three highest-intensity sessions. The values predicted by this equation correlate significantly with those obtained in the Course Navette test and have a smaller typical error of measurement than other calculation methods. The main conclusion is that this equation provides a useful and convenient way of estimating HRmax in real game situations, avoiding non-specific analytical tests and thereby reducing the lack of ecological validity in functional assessment.
Keywords: training monitoring, functional assessment, predictive equation
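A minimal sketch of the kind of model described, not the authors' published equation (whose coefficients are not given in the record): predicting a player's HRmax, as measured by a Course Navette test, from the mean of the peak heart rates of that player's three highest-intensity training sessions, via ordinary least squares. All data values below are invented for illustration:

```python
# Simple linear regression y = a + b*x by closed-form least squares,
# used here to relate session peak-HR means (x) to measured HRmax (y).

def simple_linear_fit(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented data: mean of top-3 session peak HRs vs. measured HRmax (bpm).
session_peaks = [182.0, 187.0, 190.0, 178.0, 193.0]
measured_hrmax = [188.0, 192.0, 196.0, 184.0, 199.0]

a, b = simple_linear_fit(session_peaks, measured_hrmax)
predicted = a + b * 185.0  # predicted HRmax for a new player's peak mean
```

The same fitting machinery extends to the logarithmic form mentioned in the abstract by regressing on log-transformed predictors.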
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
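A hedged sketch, not the patented estimator-correlator itself, of the generic maximum a posteriori decision rule the abstract builds on: among hypothesized transmitted signals, pick the one maximizing prior times likelihood for the received samples under additive white Gaussian noise. Signal values and priors are invented:

```python
# Generic MAP decision rule: argmax_k  log p(s_k) + log p(r | s_k),
# with a Gaussian log-likelihood for the received sample vector r.
import math

def map_decide(received, hypotheses, priors, sigma=1.0):
    """Return the index of the MAP hypothesis for the received samples."""
    def log_posterior(k):
        s = hypotheses[k]
        # Gaussian log-likelihood up to a constant shared by all hypotheses.
        ll = -sum((r - x) ** 2 for r, x in zip(received, s)) / (2 * sigma ** 2)
        return math.log(priors[k]) + ll
    return max(range(len(hypotheses)), key=log_posterior)

# Two binary phase-coded candidates; r is a noisy copy of hypothesis 1.
h = [[1.0, -1.0, 1.0], [-1.0, 1.0, 1.0]]
r = [-0.9, 1.1, 0.8]
decision = map_decide(r, h, priors=[0.5, 0.5])  # selects index 1
```

The patent's contribution layers MAP *phase estimation* on top of this rule to cope with spurious random phase perturbations; the sketch above shows only the final hypothesis-selection step.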
Martin, William Lee Berdel; Machado, Angélica Homobono
2005-07-01
Although footedness is closely associated with handedness, accurate prevalence rates of contralateral footedness in right- and left-handed populations were previously unavailable to researchers studying the relationship between phenotypic and hemispheric asymmetries. We collected preference data from 2081 Brazilian children and adolescents, and relate the prevalence of crossed hand/foot preferences to values reported elsewhere in the literature. In our samples, about 4% of the dextrals and 33% of the sinistrals exhibited a contralateral kicking preference. This is in close agreement with the weighted means from our analysis of 19 papers in the literature, which yields 4.0% left-footed kicking in dextrals and 33.5% right-footed kicking in sinistrals. These values are in marked contrast to the 50% figure for right-footed kicking in sinistrals as given by MacNeilage and colleagues (1988, 1991). Among Brazilians with mixed handedness, there was a substantial increase in incongruent footedness. Male consistent right- and left-handers showed a higher prevalence of cross-footed preferences in their kicking preference than females. The sex difference in dextrals was attributed to a training effect in soccer-related activities, and to a sampling bias in sinistrals.
Tahir Mahmood, Uzma; O'Gorman, Catherine; Marchocki, Zibi; O'Brien, Yvonne; Murphy, Deirdre J
2017-05-19
To evaluate the performance of fetal scalp stimulation (FSS) compared to fetal blood sampling (FBS) as a second line test of fetal wellbeing in labor. A prospective cohort study was conducted including 298 fetal blood sampling procedures performed due to abnormal fetal cardiotocography (CTG). Two independent observers interpreted the CTG following stimulation. The FSS test was classified as normal when an elicited acceleration and/or provoked fetal heart rate variability was recorded. The FBS was classified as normal (pH ≥7.25), borderline (pH 7.21-7.24), and abnormal (pH ≤7.20). Of the 298 procedures, 249 (84%) had a normal scalp pH result, 199 (67%) had an acceleration in response to FSS and 255 (86%) had an acceleration or normal variability in response to FSS. All 11 of the neonates classified as normal by FSS, but abnormal by FBS were born with normal Apgar scores and cord pH results. The consistency between FSS and FBS was "fair" (kappa 0.28) while the consistency between either test and cord arterial pH was "poor". This study suggests that FSS has the potential to be a reliable alternative to FBS. The findings require evaluation in a well-designed randomized controlled trial.
Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kaiser, Brooke L.D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-06-01
Surface sampling for Bacillus anthracis spores has traditionally relied on detection via bacterial cultivation methods. Although effective, this approach does not provide the level of organism specificity that can be gained through molecular techniques. False negative rates (FNR) and limits of detection (LOD) were determined for two B. anthracis surrogates with modified rapid viability-polymerase chain reaction (mRV-PCR) following macrofoam-swab sampling. This study was conducted in parallel with a previously reported study that analyzed spores using a plate-culture method. B. anthracis Sterne (BAS) or B. atrophaeus Nakamura (BG) spores were deposited onto four surface materials (glass, stainless steel, vinyl tile, and plastic) at nine target concentrations (2 to 500 spores/coupon; 0.078 to 19.375 colony-forming units [CFU] per cm²). Mean FNR values for mRV-PCR analysis ranged from 0 to 0.917 for BAS and 0 to 0.875 for BG and increased as spore concentration decreased (over the concentrations investigated) for each surface material. FNRs based on mRV-PCR data were not statistically different for BAS and BG, but were significantly lower for glass than for vinyl tile. FNRs also tended to be lower for the mRV-PCR method compared to the culture method. The mRV-PCR LOD₉₅ was lowest for glass (0.429 CFU/cm² with BAS and 0.341 CFU/cm² with BG) and highest for vinyl tile (0.919 CFU/cm² with BAS and 0.917 CFU/cm² with BG). These mRV-PCR LOD₉₅ values were lower than the culture values (BAS: 0.678 to 1.023 CFU/cm² and BG: 0.820 to 1.489 CFU/cm²). The FNR and LOD₉₅ values reported in this work provide guidance for environmental sampling of Bacillus spores at low concentrations.
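The relationship between FNR and LOD₉₅ can be illustrated with a simple detection model: if spores are captured independently, the probability of missing all of them at surface concentration c is exp(-c·A·e) for sampled area A and capture efficiency e, and the LOD₉₅ is the concentration at which detection probability reaches 95%. This is only a sketch under that assumed Poisson-type model; the area and efficiency values below are hypothetical and the study may have fit a different FNR curve:

```python
import math

def fnr(conc, area, eff):
    """Miss probability at surface concentration conc (CFU/cm^2),
    assuming independent capture over sampled area (cm^2) with
    capture efficiency eff (a Poisson-type detection model)."""
    return math.exp(-conc * area * eff)

def lod95(area, eff):
    """Concentration at which detection probability reaches 95%,
    i.e. where fnr() drops to 0.05."""
    return -math.log(0.05) / (area * eff)

# Hypothetical values: a 2 in. x 2 in. coupon (~25.8 cm^2), 30% efficiency
area, eff = 25.8, 0.30
c95 = lod95(area, eff)
```

Under this model the FNR decays exponentially with concentration, matching the qualitative trend reported above (FNR increasing as concentration decreases).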
Esins, Svenja; Müller, Jörg Michael; Romer, Georg; Wagner, Katharina; Achtergarde, Sandra
2017-03-01
Clinical Validation of the Caregiver-Child Socioemotional and Relationship Rating Scale (SIRS) for Child Behavior in a Preschool-Age Sample. The description of child behavior in mother-child interaction is important in the early detection and treatment of psychiatric disorders in preschool children. The Caregiver-Child Socioemotional and Relationship Rating Scale (SIRS) may serve this diagnostic purpose. We aimed to examine the interrater reliability of the SIRS and its concurrent, convergent, and discriminant validity against maternal behavior as assessed by the Play-PAB, and against a measure of the mother-child relationship, the Parent-Infant Global Assessment Scale (PIRGAS). Five raters assessed 47 ten-minute video sequences of parent-child interaction, recorded at the Family Day Hospital for Preschool Children, with the SIRS, Play-PAB, and PIRGAS. We report psychometric properties of the SIRS and present its associations with the Play-PAB and PIRGAS. The SIRS shows satisfactory interrater reliability for all items. Positive child behavior, e.g. the SIRS "child responsiveness" item, correlates negatively with the Play-PAB scales of parental "hostility" and "intrusiveness", but is independent of parental "involvement", "positive emotionality", and "discipline". Child and parental behavior show the expected associations with the global relationship measure PIRGAS. The assessment of child behavior in parent-child interaction with the SIRS can be quickly learned and reliably applied without extensive training. The SIRS shows meaningful relations to parental behavior and to a clinical global measure of the caregiver-child relationship. We recommend the SIRS for clinical diagnostics to describe child behavior in mother-child interaction.
Ybarra, Michele L; Espelage, Dorothy L; Langhinrichsen-Rohling, Jennifer; Korchmaros, Josephine D; Boyd, Danah
2016-07-01
National epidemiological data that provide lifetime rates of psychological, physical, and sexual adolescent dating abuse (ADA) perpetration and victimization within the same sample of youth are lacking. To address this gap, data from 1058 randomly selected U.S. youth, 14-21 years old, surveyed online in 2011 and/or 2012, were weighted to be nationally representative and analyzed. In addition to reporting prevalence rates, we also examined the overlap of the six types of ADA queried. Results suggested that ADA was commonly reported by both male and female youth. Half (51%) of female youth and 43% of male youth reported victimization by at least one of the three types of ADA. Half (50%) of female youth and 35% of male youth reported at least one type of ADA perpetration. More male youth reported sexual ADA perpetration than female youth. More female youth reported perpetration of psychological and physical ADA, and more reported psychological victimization, than male youth. Rates were similar across race and ethnicity, but increased with age. This increase may have been because older youth had spent longer in relationships than younger youth, or perhaps because older youth were developmentally more likely than younger youth to be in abusive relationships. Many youth reported being both perpetrators and victims and/or involved in multiple forms of ADA across their dating history. Together, these findings suggest that interventions should acknowledge that youth may play multiple roles in abusive dyads. Understanding the overlap among ADA types within the same as well as across multiple relationships will be invaluable to future interventions aiming to disrupt and prevent ADA.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
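Under a superposition model, the mean depth of shower maximum is, to first order, linear in ⟨ln A⟩, so a measured mean constrains only one moment of the composition. The maximum entropy principle then selects the least-committal distribution over primary species consistent with that constraint, which takes a Gibbs (exponential) form with a single multiplier fixed by the constraint. A minimal sketch with a hypothetical four-species beam (the species list and target ⟨ln A⟩ are illustrative, not the paper's inputs):

```python
import math

A = {"p": 1.0, "He": 4.0, "N": 14.0, "Fe": 56.0}  # hypothetical beam species
lnA = [math.log(a) for a in A.values()]

def gibbs(lam):
    """Maximum entropy fractions subject to a <lnA> constraint."""
    w = [math.exp(lam * x) for x in lnA]
    z = sum(w)
    return [wi / z for wi in w]

def mean_lnA(lam):
    return sum(p * x for p, x in zip(gibbs(lam), lnA))

def maxent_fractions(target, lo=-50.0, hi=50.0, iters=200):
    """Bisect on the Lagrange multiplier so <lnA> matches the
    measured value; mean_lnA is monotonically increasing in lam."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_lnA(mid) < target:
            lo = mid
        else:
            hi = mid
    return gibbs(0.5 * (lo + hi))

fractions = maxent_fractions(2.0)   # hypothetical measured <lnA>
```

Any composition satisfying the same moment constraint has lower entropy; the Gibbs solution is therefore the "no extra assumptions" baseline against which trends can be judged.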
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
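The Kirchhoff index can be computed directly from its definition: each effective resistance r(i,j) is obtained by grounding one vertex and solving the reduced Laplacian system. A small self-contained sketch, intended only as a brute-force check on small cacti (it is not the paper's extremal argument):

```python
def kirchhoff_index(n, edges):
    """Kf(G) = sum of effective resistances over all vertex pairs.
    Grounds vertex n-1 and solves the reduced Laplacian per pair."""
    L = [[0.0] * n for _ in range(n)]          # graph Laplacian
    for u, v in edges:
        L[u][u] += 1.0; L[v][v] += 1.0
        L[u][v] -= 1.0; L[v][u] -= 1.0
    m = n - 1                                  # reduced system size

    def solve(b):
        # Gaussian elimination with partial pivoting on the reduced L
        a = [[L[i][j] for j in range(m)] + [b[i]] for i in range(m)]
        for col in range(m):
            piv = max(range(col, m), key=lambda r: abs(a[r][col]))
            a[col], a[piv] = a[piv], a[col]
            for r in range(col + 1, m):
                f = a[r][col] / a[col][col]
                for c in range(col, m + 1):
                    a[r][c] -= f * a[col][c]
        x = [0.0] * m
        for r in range(m - 1, -1, -1):
            s = a[r][m] - sum(a[r][c] * x[c] for c in range(r + 1, m))
            x[r] = s / a[r][r]
        return x

    kf = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            b = [0.0] * m
            if i < m: b[i] += 1.0
            if j < m: b[j] -= 1.0
            x = solve(b)
            xi = x[i] if i < m else 0.0
            xj = x[j] if j < m else 0.0
            kf += xi - xj   # r(i,j) = (e_i - e_j)^T L'^{-1} (e_i - e_j)
    return kf
```

For the triangle C₃ (the smallest cactus with one cycle), each of the three pair resistances is 2/3, so Kf = 2; for the path P₃ the resistances are 1, 1, and 2, so Kf = 4.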
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Continuous maximum flow segmentation method for nanoparticle interaction analysis.
Marak, L; Tankyevych, O; Talbot, H
2011-10-01
In recent years, tomographic three-dimensional reconstruction approaches using electrons rather than X-rays have become popular. Such images produced with a transmission electron microscope make it possible to image nanometre-scale materials in three-dimensional. However, they are also noisy, limited in contrast and most often have a very poor resolution along the axis of the electron beam. The analysis of images stemming from such modalities, whether fully or semiautomated, is therefore more complicated. In particular, segmentation of objects is difficult. In this paper, we propose to use the continuous maximum flow segmentation method based on a globally optimal minimal surface model. The use of this fully automated segmentation and filtering procedure is illustrated on two different nanoparticle samples and provide comparisons with other classical segmentation methods. The main objectives are the measurement of the attraction rate of polystyrene beads to silica nanoparticle (for the first sample) and interaction of silica nanoparticles with large unilamellar liposomes (for the second sample). We also illustrate how precise measurements such as contact angles can be performed.
Verriele, Marie; Allam, Nadine; Depelchin, Laurence; Le Coq, Laurence; Locoge, Nadine
2015-11-01
Passive sampling technology has been extensively used for long-term monitoring of atmospheric VOC concentrations. Its performance for short-term measurements of biogas-related VOCs was evaluated in this work: laboratory-scale experiments were conducted to check the suitability of Radiello® diffusive samplers for the assessment of 8-h VOC levels under highly changeable meteorological conditions; in a second step, a short pilot field campaign was implemented in the vicinity of a landfill in western France. First of all, it was found that among a diversified list of 16 characteristic compounds from biogas, mercaptans, some halogenated and oxygenated compounds, and terpenes could not be measured accurately by this passive technique, either because they are not captured by the sorbent or because they are not quantitatively desorbed under the chosen analytical conditions. Moreover, it was confirmed that the sampling rates (SR) of isopentane, THF, cyclohexane, toluene, p-xylene and n-decane are influenced by environmental factors, chiefly wind speed: above 2 m s(-1), each additional 1 m s(-1) of wind speed increases the SR by 12 to 32% depending on the VOC (assuming a linear dependence between 2 and 7 m s(-1)). Humidity has no effect on SR, and the temperature influence is limited to less than 3% per degree. A comprehensive uncertainty estimation, including uncertainties linked to meteorological changes, led to global relative uncertainties ranging between 18% and 54% from one VOC to another: a rather high value compared to those obtained without considering the influence of meteorological conditions. To illustrate our results, the targeted VOCs were quantified in the field on a single day: concentrations ranged between the detection limit and 3 µg m(-3), relatively low compared to those usually reported in the literature.
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Deatherage Kaiser, Brooke L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barrett, Christopher A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-03-31
The performance of a macrofoam-swab sampling method was evaluated using Bacillus anthracis Sterne (BAS) and Bacillus atrophaeus Nakamura (BG) spores applied at nine low target amounts (2-500 spores) to positive-control plates and test coupons (2 in. × 2 in.) of four surface materials (glass, stainless steel, vinyl tile, and plastic). Test results from cultured samples were used to evaluate the effects of surrogate, surface concentration, and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection. For RE, surrogate and surface material had statistically significant effects, but concentration did not. Mean REs were the lowest for vinyl tile (50.8% with BAS, 40.2% with BG) and the highest for glass (92.8% with BAS, 71.4% with BG). FNR values ranged from 0 to 0.833 for BAS and 0 to 0.806 for BG, with values increasing as concentration decreased in the range tested (0.078 to 19.375 CFU/cm², where CFU denotes ‘colony forming units’). Surface material also had a statistically significant effect. A FNR-concentration curve was fit for each combination of surrogate and surface material. For both surrogates, the FNR curves tended to be the lowest for glass and highest for vinyl tile. The FNR curves for BG tended to be higher than for BAS at lower concentrations, especially for glass. Results using a modified Rapid Viability-Polymerase Chain Reaction (mRV-PCR) analysis method were also obtained. The mRV-PCR results and comparisons to the culture results will be discussed in a subsequent report.
Carns, Bhavini; Fadare, Oluwole
2008-01-01
Studies evaluating the routine Papanicolaou (Pap) test have traditionally used as the reference gold standard the diagnoses on follow-up histologic samples. Since the latter are typically obtained days to weeks after the Pap test, the accuracy of the resultant comparison may be affected by interim factors, such as regression of human papillomavirus, new lesion acquisition or colposcopy-associated variability. A subset of our clinicians has routinely obtained cervical cytology samples immediately prior to their colposcopic procedures, which presented a unique opportunity to re-evaluate the test performance of liquid-based cervical cytology in detecting the most clinically significant lesions (i.e. cervical intraepithelial neoplasia 2 or worse: CIN2+), using as the gold standard diagnoses on cervical biopsies that were essentially obtained simultaneously. For each patient, cytohistologic non-correlation between the Pap test and biopsy was considered to be present when either modality displayed a high-grade squamous intraepithelial lesion (HGSIL)/CIN2+ while the other displayed a less severe lesion. Therefore, HGSIL/CIN2+ was present in both the Pap test and biopsy in true positives, and absent in both modalities in true negatives. In false positives, the Pap test showed HGSIL while the biopsy showed less than CIN2+. In false negatives, Pap tests displaying less than HGSIL were associated with biopsies displaying CIN2+. Combinations associated with "atypical" interpretations were excluded. A cytohistologic non-correlation was present in 17 (4.8%) of the 356 combinations reviewed. The non-correlation was attributed, by virtue of having the less severe interpretation, to the Pap test in all 17 cases. There were 17, 322, 0, and 17 true positives, true negatives, false positives and false negatives respectively. The sensitivity, specificity, positive predictive value and negative predictive value of the Pap test, at a diagnostic threshold of HGSIL, in identifying
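From the counts reported above (17 true positives, 322 true negatives, 0 false positives, 17 false negatives), the standard test metrics follow directly:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard 2x2 diagnostic test metrics."""
    return {
        "sensitivity": tp / (tp + fn),   # HGSIL/CIN2+ cases detected
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=17, tn=322, fp=0, fn=17)
# sensitivity = 0.50, specificity = 1.00, PPV = 1.00, NPV ~= 0.95
```

With zero false positives the specificity and PPV are both 1.0, while the 17 false negatives halve the sensitivity, consistent with all non-correlations being attributed to the Pap test having the less severe interpretation.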
Sipeky, Csilla; Matyas, Petra; Melegh, Marton; Janicsek, Ingrid; Szalai, Renata; Szabo, Istvan; Varnai, Reka; Tarlos, Greta; Ganczer, Alma; Melegh, Bela
2014-09-01
The purpose of this work was to characterise the W24X mutation of the GJB2 gene in order to provide more representative and geographically relevant carrier rates for healthy Roma subisolates and the Hungarian population. 493 Roma and 498 Hungarian healthy subjects were genotyped for the GJB2 c.71G>A (rs104894396, W24X) mutation by PCR-RFLP assay and direct sequencing. This is the first report on the GJB2 W24X mutation in a geographically subisolated Roma population of Hungary compared to local Hungarians. Comparing the genotype and allele frequencies of the GJB2 rs104894396 mutation, significant differences were found in the GG (98.4 vs. 99.8 %) and GA (1.62 vs. 0.20 %) genotypes and the A allele (0.8 vs. 0.1 %) between the Roma and Hungarian populations, respectively. None of the Roma or Hungarian samples carried the GJB2 W24X AA genotype. A notable result of our study is that the proportion of GJB2 W24X GA heterozygotes and the A allele frequency were eight times higher in Roma than in Hungarians. Considering the results, the mutant allele frequency in both the Roma (0.8 %) and Hungarian (0.1 %) populations is lower than expected from previous results, likely reflecting locally differentiated subisolates of these populations and a suspected lower risk of GJB2 mutation-related deafness. However, the significant difference in GJB2 W24X carrier rates between the Roma and Hungarians may motivate individual diagnostic investigations and effective public health interventions.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Deatherage Kaiser, Brooke L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
Surface sampling for Bacillus anthracis spores has traditionally relied on detection via bacterial cultivation methods. Although effective, this approach does not provide the level of organism specificity that can be gained through molecular techniques. False negative rates (FNR) and limits of detection (LOD) were determined for two B. anthracis surrogates with modified rapid viability-polymerase chain reaction (mRV-PCR) following macrofoam-swab sampling. This study was conducted in parallel with a previously reported study that analyzed spores using a plate-culture method. B. anthracis Sterne (BAS) or B. atrophaeus Nakamura (BG) spores were deposited onto four surface materials (glass, stainless steel, vinyl tile, and plastic) at nine target concentrations (2 to 500 spores/coupon; 0.078 to 19.375 colony-forming units [CFU] per cm²). Mean FNR values for mRV-PCR analysis ranged from 0 to 0.917 for BAS and 0 to 0.875 for BG and increased as spore concentration decreased (over the concentrations investigated) for each surface material. FNRs based on mRV-PCR data were not statistically different for BAS and BG, but were significantly lower for glass than for vinyl tile. FNRs also tended to be lower for the mRV-PCR method compared to the culture method. The mRV-PCR LOD₉₅ was lowest for glass (0.429 CFU/cm² with BAS and 0.341 CFU/cm² with BG) and highest for vinyl tile (0.919 CFU/cm² with BAS and 0.917 CFU/cm² with BG). These mRV-PCR LOD₉₅ values were lower than the culture values (BAS: 0.678 to 1.023 CFU/cm² and BG: 0.820 to 1.489 CFU/cm²). The FNR and LOD₉₅ values reported in this work provide guidance for environmental sampling of Bacillus spores at low concentrations.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Carl G Meyer
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
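Growth estimation from tag-recapture increments can be illustrated with the Fabens form of the von Bertalanffy growth function, a standard choice for mark-recapture data. Whether the study used exactly this model is not stated in the abstract, and the records below are simulated, not the study's data:

```python
import math

def fabens_increment(l1, dt, linf, k):
    """Predicted growth increment for an animal of length l1 (cm)
    at liberty dt (years), under von Bertalanffy parameters
    L_inf (asymptotic length) and k (growth coefficient)."""
    return (linf - l1) * (1.0 - math.exp(-k * dt))

def fit_fabens(records, linf_grid, k_grid):
    """Least-squares grid search over (L_inf, k).
    records: (length at tagging, years at liberty, observed increment)."""
    best = None
    for linf in linf_grid:
        for k in k_grid:
            sse = sum((dl - fabens_increment(l1, dt, linf, k)) ** 2
                      for l1, dt, dl in records)
            if best is None or sse < best[0]:
                best = (sse, linf, k)
    return best[1], best[2]

# Simulate recapture records from hypothetical "true" parameters
true_linf, true_k = 400.0, 0.25
records = [(l1, dt, fabens_increment(l1, dt, true_linf, true_k))
           for l1, dt in [(150, 1.0), (200, 0.5), (250, 2.0), (300, 1.0)]]

linf, k = fit_fabens(records,
                     [380 + 5 * i for i in range(20)],   # 380..475 cm
                     [i / 20 for i in range(1, 13)])     # 0.05..0.60 /yr
```

Because the simulated increments lie exactly on the curve, the grid search recovers the generating parameters; with real, noisy increments a proper nonlinear least-squares fit with uncertainty estimates would be used instead.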
Investigation of measures for improving the qualified rate of clinical test specimens
刘会珍
2014-01-01
Objective: To explore the types and causes of unqualified clinical test specimens in order to improve the qualified rate of test specimens. Methods: 5038 clinical specimens received from November 2010 to November 2011 were reviewed, and the classification and causes of unqualified specimens were analyzed retrospectively. Results: Of the 5038 specimens, 10.26% were unqualified. The unqualified rate was highest for sputum specimens (18.95%) and lowest for cerebrospinal fluid specimens (0%). Causes of failure included delayed submission, contamination, drug interference, insufficient specimen volume, collection on the side of treatment, wrong container, and incomplete anticoagulation. Conclusion: To improve the accuracy of results when testing clinical specimens, comprehensive control and management of the whole process, from specimen collection through testing, is needed.
Slick, D J; Hinkin, C H; van Gorp, W G; Satz, P
2001-01-01
HIV-1 infected persons who are pursuing disability benefits are increasingly seeking neuropsychological assessment for purposes of corroborating functional impairment. Thus, research on the utility of measures of symptom validity among these patients is needed. Recently, Mittenberg, Azrin, Millsaps, and Heilbronner (1993) proposed a malingering index score for the Wechsler Memory Scale-Revised that is derived by subtracting the Attention/Concentration Index (ACI) score from the General Memory Index (GMI) score. This study is a cross-validation of the specificity of the GMI-ACI Malingering Index in a sample of 55 non-compensation-seeking HIV-positive (HIV+) patients. An overall false-positive rate of 7% was observed for the GMI-ACI Malingering Index. However, further analyses showed that GMI-ACI Malingering Index scores were correlated with GMI scores such that false-positive errors were substantially higher (18%) among patients who obtained above-average GMI scores. These findings suggest a cautious approach to application of the GMI-ACI Malingering Index, particularly among patients who obtain above-average GMI scores.
P. Stefanelli
2008-06-01
We analyse a microgravity data set acquired from two spring LaCoste & Romberg gravity meters operated in parallel at the same site on Etna volcano (Italy) for about two months (August-September 2005). The high sampling rate acquisition (2 Hz) allowed the correlation of short-lasting gravity fluctuations with seismic events. After characterizing the oscillation behavior of the meters, through the study of the spectral content and background noise level of both sequences, we recognized fluctuations in the gravity data, spanning a range of periods from 1 second to about 30 seconds and dominated by components with periods of about 15-25 seconds, during time intervals encompassing both local seismic events and large worldwide earthquakes. The data analyses demonstrate that the observed earthquake-induced gravity fluctuations differ according to the spectral content of the earthquakes. When local seismic events with high frequency content excite the meters, the correlation between the two gravity signals is poor (factor < 0.3). Conversely, when large worldwide earthquakes occur and low frequency seismic waves dominate the ensuing seismic wavefield, the resonance frequencies of the meters are excited and they respond in a more consistent way. In the latter case, the signals from the two instruments are strongly correlated with each other (up to 0.9). In this paper the behaviors of spring gravimeters in the frequency range of the disturbances produced by local and large worldwide earthquakes are presented and discussed.
Development of signal processing at multiple sampling rates
万伟程; 李艳华; 周三文
2014-01-01
In digital communication systems, the sampling rate of a signal often needs to be changed to suit transmission, reduce resource consumption, and simplify processing. Multirate signal processing theory arose from speech signal processing and has been continuously enriched through application. With the adoption of software radio, multirate conversion plays an increasingly important role in digital signal processing. Combining multirate signal processing with other signal processing techniques, such as wavelet analysis and the fractional Fourier transform, is a direction for future development.
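The basic sample-rate conversion step described above, decimation, can be sketched as an anti-alias lowpass filter followed by discarding samples. This toy version uses a moving average in place of a properly designed FIR filter; a real software-radio chain would use a designed FIR in a polyphase structure:

```python
def moving_average(x, taps):
    """Simple FIR lowpass: length-`taps` moving average, used here as a
    crude anti-alias filter (shorter windows at the start of the signal)."""
    out = []
    for n in range(len(x)):
        window = x[max(0, n - taps + 1): n + 1]
        out.append(sum(window) / len(window))
    return out

def decimate(x, m, taps=4):
    """Reduce the sampling rate by an integer factor m:
    lowpass-filter, then keep every m-th sample."""
    return moving_average(x, taps)[::m]

signal = [float(n % 8) for n in range(32)]   # toy periodic input
y = decimate(signal, 2)                      # half the original rate
```

Interpolation (raising the rate) is the dual operation: insert zeros between samples, then lowpass-filter; a rational rate change by L/M combines the two.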
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near the typical masses found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower-mass neutron stars are believed to form as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars, while higher-mass neutron stars are likely formed by fallback or accretion of additional matter after an initial collapse event involving an iron core with a mass no greater than 2.69 × 10^30 kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
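A two-component normal mixture is usually fitted by maximum likelihood with the EM algorithm. The sketch below uses synthetic data as a stand-in for the price series analyzed in the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic two-component data (hypothetical stand-in for the economic series)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.7, 200)])

# EM for a two-component normal mixture (maximum likelihood fit)
w = np.array([0.5, 0.5])            # mixing weights
mu = np.array([-1.0, 1.0])          # component means (rough initial guesses)
sd = np.array([1.0, 1.0])           # component standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted ML updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

With 500 points the fitted means, weights, and standard deviations land close to the generating values.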
赵拥军; 赵勇胜; 赵闯
2016-01-01
This paper investigates the joint estimation of Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA) in a passive location system, where the true value of the reference signal is unknown. A novel Maximum Likelihood (ML) estimator of TDOA and FDOA is constructed, and a Markov Chain Monte Carlo (MCMC) method is applied to find the global maximum of the likelihood function by generating realizations of TDOA and FDOA. Unlike the Cross Ambiguity Function (CAF) algorithm or the Expectation Maximization (EM) algorithm, the proposed algorithm can also estimate TDOA and FDOA values that are not integer multiples of the sampling interval, and it does not depend on an initial estimate. The Cramér-Rao Lower Bound (CRLB) is also derived. Simulation results show that the proposed algorithm outperforms the CAF and EM algorithms under different SNR conditions, with higher accuracy and lower computational complexity.
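The paper's MCMC estimator is not reproduced here; the sketch below shows the conventional grid-limited estimate (the CAF collapsed onto its delay axis) whose integer-sample restriction the paper addresses. The signal and delay are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4096, 37                                  # record length, true TDOA in samples
s = rng.standard_normal(n)                       # unknown reference waveform
x1 = s + 0.1 * rng.standard_normal(n)            # sensor 1
x2 = np.concatenate([np.zeros(d), s[:-d]]) + 0.1 * rng.standard_normal(n)  # sensor 2: delayed copy

# grid TDOA estimate: argmax of the cross-correlation over integer-sample lags
xc = np.correlate(x2, x1, mode="full")
lags = np.arange(-(n - 1), n)
tdoa_hat = lags[np.argmax(xc)]
```

A sub-sample (non-integer) delay would fall between these grid points, which is exactly the case the ML/MCMC approach handles.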
Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation
Koppanyi, Z.; Toth, C. K.
2014-11-01
This study is part of our ongoing research at The Ohio State University on using ultra-wide band (UWB) technology for navigation. Our tests have indicated that UWB two-way time-of-flight ranges under indoor conditions follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, to adjust the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each pair of nodes from samples collected in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a multipath-dense environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
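Why mixture-distributed ranges favor MLE over least squares can be shown in a deliberately simplified 1-D sketch. Everything below is an illustrative assumption (anchor positions, mixture parameters, sample count), not the paper's network adjustment, which estimates the densities empirically.

```python
import numpy as np

rng = np.random.default_rng(2)
anchors = np.array([0.0, 2.0])                   # hypothetical anchor positions (1-D)
x_true = 4.0                                     # true node position
m = 200                                          # static samples per anchor (e.g. 30 Hz logging)

# range errors: two-component Gaussian mixture (10% NLOS-like +1 m bias)
def noise(size):
    nlos = rng.random(size) < 0.1
    return rng.normal(0.0, 0.1, size) + np.where(nlos, 1.0, 0.0)

ranges = np.abs(x_true - anchors)[:, None] + noise((2, m))

# least squares: closed form here (node lies right of both anchors); biased by NLOS
ls_x = np.mean(ranges + anchors[:, None])

# maximum likelihood under the (assumed known) mixture density, via grid search
def mix_pdf(e):
    g = lambda e, mu, sd: np.exp(-0.5 * ((e - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return 0.9 * g(e, 0.0, 0.1) + 0.1 * g(e, 1.0, 0.1)

grid = np.linspace(3.0, 5.0, 2001)
ll = [np.sum(np.log(mix_pdf(ranges - np.abs(x - anchors)[:, None]))) for x in grid]
ml_x = grid[int(np.argmax(ll))]
```

The least squares solution inherits the mean of the NLOS bias, while the mixture-aware MLE essentially ignores the biased component.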
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches discriminate only moving objects via background subtraction, regardless of whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also show how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06 cm^-1. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06 cm^-1. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45 cm^-1 region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
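The zero-filling step mentioned above interpolates the spectrum without adding resolution, which is why record length (or a parametric method such as MEM) governs what can be resolved. A minimal numeric illustration with an arbitrary single line, not the propane data:

```python
import numpy as np

fs = 1000.0
n = 128
t = np.arange(n) / fs                # 0.128 s record
x = np.sin(2 * np.pi * 103.7 * t)    # one spectral line at 103.7 Hz (illustrative)

# raw FFT: bin spacing fs/n ~ 7.8 Hz, so the peak lands on a coarse grid
f_raw = np.fft.rfftfreq(n, 1 / fs)
peak_raw = f_raw[np.argmax(np.abs(np.fft.rfft(x)))]

# zero filling: interpolates the same underlying spectrum, refining the peak
# location -- but the resolution (ability to split two close lines) is still
# fixed by the record length
f_pad = np.fft.rfftfreq(8192, 1 / fs)
peak_pad = f_pad[np.argmax(np.abs(np.fft.rfft(x, 8192)))]
```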
郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞
2014-01-01
In high-dynamic environments the received signal contains a large Doppler frequency and Doppler frequency rate-of-change, and it is hard for the carrier tracking loop to reach a good compromise between dynamic performance and tracking accuracy. To address this problem, a fast maximum likelihood estimation method for the Doppler frequency rate-of-change is proposed in this paper, and the estimate is used to aid the carrier tracking loop. First, it is pointed out that the maximum likelihood estimation of Doppler frequency and its rate-of-change is equivalent to the Fractional Fourier Transform (FrFT). Second, to reduce the large computational cost of the two-dimensional search over Doppler frequency and its rate-of-change, an estimation method combining instantaneous autocorrelation and segmented Discrete Fourier Transform (DFT) is proposed, and the resulting coarse estimate is used to narrow the search range. Finally, the estimate is used in the carrier tracking loop to reduce the dynamic stress and improve the tracking accuracy. Theoretical analysis and computer simulation show that the search cost falls to 5.25 percent of the original amount at a Signal to Noise Ratio (SNR) of -30 dB, the Root Mean Square Error (RMSE) of the tracked frequency rate is only 8.46 Hz/s, and the tracking sensitivity is improved by more than 3 dB compared with the traditional carrier tracking method.
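The instantaneous-autocorrelation step exploits the fact that, for a linear FM signal, the lag product s(t+τ)·s*(t−τ) is a pure tone whose frequency is proportional to the chirp rate, so a single FFT yields a coarse rate estimate. A sketch with illustrative parameters (not the paper's simulation setup):

```python
import numpy as np

fs, n = 10000.0, 8192
t = np.arange(n) / fs
f0, k_true = 500.0, 4000.0            # start frequency [Hz], Doppler rate [Hz/s]
s = np.exp(2j * np.pi * (f0 * t + 0.5 * k_true * t ** 2))   # noise-free LFM signal

# instantaneous autocorrelation at fixed lag: z[m] = s[m + 2*lag] * conj(s[m])
# has constant phase slope 2*pi*k*(2*lag/fs), i.e. a tone at k * (2*lag/fs)
lag = 200
z = s[2 * lag:] * np.conj(s[:-2 * lag])
freqs = np.fft.fftfreq(len(z), 1 / fs)
f_peak = freqs[np.argmax(np.abs(np.fft.fft(z)))]
k_hat = f_peak * fs / (2 * lag)       # coarse chirp-rate estimate [Hz/s]
```

In the paper this coarse value narrows the FrFT search range; here it already lands within the FFT bin resolution of the true rate.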
Bakker, M.; Wicherts, J.M.
2014-01-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, both important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
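MALCOM's continuity maps are not reproduced here; the sketch below shows the simpler N-gram-style likelihood scoring the abstract names as the alternative, applied to toy procedure sequences. Low per-step log-likelihood flags a sequence as anomalous.

```python
import math
from collections import defaultdict

# hypothetical "typical" procedure sequences (toy training data)
train = [["exam", "xray", "cast"], ["exam", "labs", "rx"], ["exam", "xray", "cast"],
         ["exam", "labs", "rx"], ["exam", "rx"]]

# first-order transition counts, with "<s>" as the start symbol
counts = defaultdict(dict)
for seq in train:
    for a, b in zip(["<s>"] + seq, seq):
        counts[a][b] = counts[a].get(b, 0) + 1

def avg_loglik(seq, alpha=0.1, vocab=6):
    """Per-step log-likelihood under an additively smoothed Markov model."""
    total = 0.0
    for a, b in zip(["<s>"] + seq, seq):
        row = counts.get(a, {})
        total += math.log((row.get(b, 0) + alpha) / (sum(row.values()) + alpha * vocab))
    return total / len(seq)

typical = avg_loglik(["exam", "xray", "cast"])
anomalous = avg_loglik(["cast", "cast", "cast"])   # never seen -> low likelihood
```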
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error, based on the actual statistics of the TRMM observations and on some modeling work, has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported; it therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM Microwave Imager (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). We analyze changes in the sampling error estimates caused by changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) to differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I).
Suenimeire Vieira
2012-12-01
INTRODUCTION: One of the benefits promoted by physical exercise appears to be improved autonomic nervous system modulation of the heart. However, the role of physical activity as a determinant of heart rate variability (HRV) is not well established. Therefore, the aim of this study was to verify whether resting heart rate and the maximum workload reached in an exercise test correlate with HRV indices in elderly men. METHODS: Eighteen elderly men aged 60 to 70 years were studied. The following assessments were performed: a) a maximal exercise test on a cycle ergometer using the Balke protocol to evaluate aerobic capacity; b) recording of heart rate (HR) and R-R intervals for 15 minutes at rest in the supine position. The recorded data were analyzed in the time domain, computing the RMSSD index, and in the frequency domain, computing the low-frequency (LF) and high-frequency (HF) indices and the LF/HF ratio. Pearson's correlation test was applied to verify the association between the maximum workload reached in the exercise test and the HRV indices (p 0.05). CONCLUSION: The temporal and spectral heart rate variability indices studied are not indicators of the aerobic capacity of elderly men evaluated on a cycle ergometer.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a means of estimating the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to calculate the iterative formula of the prediction-error filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
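The Levinson recursion for the prediction-error filter mentioned above can be sketched as follows. This is the generic Levinson-Durbin solver for Toeplitz normal equations, not the authors' seismological code; the AR(1) check at the end uses a textbook autocorrelation sequence.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r : autocorrelation sequence r[0..order].
    Returns (a, e): filter coefficients with a[0] = 1, and the final
    prediction-error power e."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = float(r[0])
    for m in range(1, order + 1):
        # reflection coefficient from the current prediction error
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / e
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]   # order-update of the filter
        a[m] = k
        e *= 1.0 - k * k                       # |k| < 1 keeps the recursion stable
    return a, e

# AR(1) check: autocorrelation 0.8**|k| should give filter [1, -0.8, 0]
a, e = levinson_durbin(np.array([1.0, 0.8, 0.64]), order=2)
```

Note the stability remark in the abstract corresponds to the `|k| < 1` property of the reflection coefficients.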
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Ziheng YANG
2004-01-01
Estimation of species divergence times is well known to be sensitive to violation of the molecular clock assumption (rate constancy over time). However, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. It is thus important to take into account rate variation among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci as well as fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of Madagascar's mouse lemurs, and the results are compared with those of previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50(4): 645-656, 2004].
Maximum key-profile correlation (MKC) as a measure of tonal structure in music.
Takeuchi, A H
1994-09-01
Tonal structure is musical organization on the basis of pitch, in which pitches vary in importance and rate of occurrence according to their relationship to a tonal center. Experiment 1 evaluated the maximum key-profile correlation (MKC), a product of Krumhansl and Schmuckler's key-finding algorithm (Krumhansl, 1990), as a measure of tonal structure. The MKC is the maximum correlation coefficient between the pitch-class distribution in a musical sample and the key profiles, which indicate the stability of pitches with respect to particular tonal centers. The MKC values of melodies correlated strongly with listeners' ratings of tonal structure. To measure the influence of the temporal order of pitches on perceived tonal structure, three measures (fifth span, semitone span, and pitch contour) taken from previous studies of melody perception were also correlated with tonal structure ratings. None of the temporal measures correlated as strongly or as consistently with tonal structure ratings as did the MKC, nor did combining them with the MKC improve prediction of tonal structure ratings. In Experiment 2, the MKC did not correlate with recognition memory for melodies. However, melodies with very low MKC values were recognized less accurately than melodies with very high MKC values. Although it does not incorporate temporal, rhythmic, or harmonic factors that may influence perceived tonal structure, the MKC can be interpreted as a measure of tonal structure, at least for brief melodies.
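The MKC itself is straightforward to compute: correlate the sample's pitch-class distribution with all 24 rotated major and minor key profiles and take the maximum. The profile values below are the standard published Krumhansl-Kessler ratings; the input melodies are toy examples, not stimuli from the study.

```python
import numpy as np

# Krumhansl-Kessler probe-tone key profiles (C major / C minor)
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def mkc(pitch_classes, durations=None):
    """Maximum key-profile correlation over all 24 keys (12 rotations x 2 modes)."""
    hist = np.bincount(np.asarray(pitch_classes) % 12, weights=durations, minlength=12)
    best = -1.0
    for shift in range(12):
        for profile in (MAJOR, MINOR):
            r = np.corrcoef(hist, np.roll(profile, shift))[0, 1]
            best = max(best, r)
    return best

tonal = mkc([0, 2, 4, 5, 7, 9, 11, 0])   # C-major scale: strongly tonal
atonal = mkc(list(range(11)))            # near-chromatic sample: weakly tonal
```

Duration weighting (passing note durations as `durations`) matches the algorithm's usual formulation; with `None` each note counts once.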
Scott, Andrew B; Frost, Paul C
2017-08-15
From 2013 to 2015, citizen scientist volunteers in Toronto, Canada were trained to collect and analyze water quality samples from urban stormwater ponds. This volunteer sampling was part of the research program FreshWater Watch (FWW), which aimed to standardize urban water sampling efforts around the globe. We held training sessions for new volunteers twice yearly and trained a total of 111 volunteers. Over the course of the project, ~30% of volunteers participated by collecting water quality data after the training session, with 124 individual sampling events at 29 unique locations in Toronto, Canada. A few highly engaged volunteers were most active, with 50% of the samples collected by 5% of trainees. Stormwater ponds generally have poor water quality, demonstrated by elevated phosphate concentrations (~30 μg/L), nitrate (~427 μg/L), and turbidity relative to Canadian water quality standards. Compared to other urban waterbodies in the global program, nutrient concentrations in Toronto's urban stormwater ponds were lower, while turbidity was not markedly different. Toronto FWW (FWW-TO) data were comparable to those from standard lab analyses and matched results from previous studies of stormwater ponds in Toronto. Combining observational and chemical data acquired by citizen scientists, macrophyte-dominated ponds had lower phosphate concentrations while phytoplankton-dominated ponds had lower nitrate concentrations, which indicates a potentially important and unstudied role of internal biogeochemical processes in pond nutrient dynamics. This experience in the FWW demonstrates the capabilities and constraints of citizen science when applied to water quality sampling. While analytical limits on in-field analyses produce higher uncertainty in water quality measurements of individual sites, rapid data collection is possible but depends on the motivation and engagement of the group of volunteers. Ongoing efforts in citizen science will thus need to address sampling effort
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
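The maximization described above can be reproduced for a simple single-diode panel model. The parameters below are illustrative assumptions, not values from the article, and a dense grid stands in for the analytic differentiation (the argmax of P = V·I(V) satisfies dP/dV = 0).

```python
import numpy as np

# single-diode panel model (illustrative parameters)
i_sc = 5.0       # short-circuit current [A]
i_0 = 1e-9       # diode saturation current [A]
v_t = 1.0        # lumped thermal voltage for the series string [V]

def current(v):
    """Panel current as a function of terminal voltage."""
    return i_sc - i_0 * (np.exp(v / v_t) - 1.0)

v = np.linspace(0.0, 25.0, 100001)
p = v * current(v)
v_mp = v[np.argmax(p)]    # voltage of maximum power
p_max = p.max()           # maximum power [W]
```

For these parameters the open-circuit voltage is ln(i_sc/i_0) ≈ 22.3 V and the maximum-power point sits a little below it, as expected for an exponential diode characteristic.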
Maximum-likelihood estimation of the entropy of an attractor
Schouten, J. C.; Takens, F.; van den Bleek, C. M.
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est
Ground movement at Somma-Vesuvius from Last Glacial Maximum
Marturano, Aldo; Aiello, Giuseppe; Barra, Diana; Fedele, Lorenzo; Morra, Vincenzo
2012-01-01
Detailed micropalaeontological and petrochemical analyses of rock samples from two boreholes drilled at the archaeological excavations of Herculaneum, ~ 7 km west of the Somma-Vesuvius crater, allowed reconstruction of the Late Quaternary palaeoenvironmental evolution of the site. The data provide clear evidence for ground uplift movements involving the studied area. The Holocene sedimentary sequence on which the archaeological remains of Herculaneum rest has risen several meters at an average rate of ~ 4 mm/yr. The uplift has involved the western apron of the volcano and the Sebeto-Volla Plain, a populous area including the eastern suburbs of Naples. This is consistent with earlier evidence for similar uplift in the areas of Pompeii and the Sarno valley (SE of the volcano) and on the eastern apron of Somma-Vesuvius. An axisymmetric deep source of strain is considered responsible for the long-term uplift affecting the whole Somma-Vesuvius edifice. The deformation pattern can be modeled as a single pressure source, sited in the lower crust and surrounded by a shell of Maxwell viscoelastic medium, which experienced a pressure pulse that began at the Last Glacial Maximum.