Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
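As a quick numerical illustration of the one-dimensional result, the BPSK symbol error rate in AWGN, Pe(SNR) = Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)), can be checked for convexity with a finite-difference second derivative (a sketch, not code from the paper):

```python
import math

def ser_bpsk(snr):
    # BPSK symbol error rate in AWGN: Q(sqrt(2*snr)) = 0.5 * erfc(sqrt(snr))
    return 0.5 * math.erfc(math.sqrt(snr))

def second_derivative(f, x, h=1e-4):
    # Central finite difference estimate of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# The 1-D result implies convexity at every SNR, not just at high SNR:
convex = all(second_derivative(ser_bpsk, s) > 0
             for s in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0))
```

The check passes at every tested SNR, consistent with the claim that one- and two-dimensional constellations give an SER convex in SNR everywhere.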
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Minimum values for whales were about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
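A back-of-envelope check of the figures above (illustrative arithmetic only): spreading a 5,000-fold mass increase over 10 million generations implies a tiny fractional change per generation.

```python
# 5,000-fold mass increase spread over 10 million generations:
per_generation = 5000 ** (1 / 10_000_000) - 1
# The reported asymmetry means a comparable decrease could proceed at more
# than 10 times this per-generation rate.
```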
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results concern guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
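A minimal sketch of a maximum quasi-likelihood estimator in the spirit of this model class, assuming only the first two moments: mean exp(theta*x) and variance equal to the mean (a log-link, GLM-like shape). The data-generating setup and one-parameter design are invented for illustration, not taken from the paper.

```python
import math
import random

random.seed(0)

# Toy model (assumed): E[y|x] = exp(theta * x), Var(y|x) = E[y|x].
# Only these two moments matter for the MQLE.
true_theta = 0.7
xs = [random.uniform(0.0, 2.0) for _ in range(2000)]

def poisson(lam):
    # Knuth's method; adequate for the small rates used here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

ys = [poisson(math.exp(true_theta * x)) for x in xs]

# Quasi-score equation: sum_i x_i * (y_i - exp(theta * x_i)) = 0,
# solved by Newton's method (identical to Fisher scoring in this setup).
theta = 0.0
for _ in range(50):
    mu = [math.exp(theta * x) for x in xs]
    score = sum(x * (y - m) for x, y, m in zip(xs, ys, mu))
    info = sum(x * x * m for x, m in zip(xs, mu))
    theta += score / info
```

With 2,000 observations the estimate lands close to the true parameter, illustrating the consistency the note makes precise.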
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for discriminating signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This leads to a search for the optimal pattern size, the one that maximizes the similarity entropy. The second paradigm is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, an n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
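A hedged sketch of the first concept, using the standard sample-entropy estimator and scanning the pattern size m rather than the tolerance r; the white-noise test signal and parameter values are invented for illustration.

```python
import math
import random

def sample_entropy(series, m, r):
    """Similarity (sample) entropy: -log of the conditional probability that
    runs matching within tolerance r for m points also match for m + 1."""
    n = len(series)

    def count(length):
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Vary the pattern size m (not the tolerance r) and keep the m that
# maximizes the entropy; white noise stands in for a test signal.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(200)]
entropies = {m: sample_entropy(noise, m, r=0.2) for m in (1, 2, 3)}
best_m = max(entropies, key=entropies.get)
```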
L. Ocola
2008-01-01
Post-disaster reconstruction management of urban areas requires timely information on the microzonation of ground response to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance), and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_MS as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_MS scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong-motion station at IGP's former headquarters since 1954. The procedure was applied to macroseismic intensity data of the 3 October 1974 Lima earthquake at places with geotechnical data and predominant ground frequency information. The observed and computed peak acceleration values at nearby sites agree well.
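A hedged sketch of the underlying idea (the paper's exact constants differ): treat the motion as a harmonic plane wave, so the energy flux through a unit area is F = Z*v**2 with Z the acoustic impedance, I_MS = log10(F), and peak acceleration follows from a = 2*pi*f*v. The site values below are made up.

```python
import math

def peak_acceleration(i_ms, impedance, freq):
    """Peak ground acceleration implied by a macroseismic intensity I_MS,
    treating ground motion as a harmonic plane wave (assumed constants)."""
    flux = 10.0 ** i_ms                # energy flux per unit area and time
    v = math.sqrt(flux / impedance)    # flux = Z * v**2  ->  particle velocity
    return 2.0 * math.pi * freq * v    # a = 2*pi*f*v for a harmonic wave

# Illustrative (made-up) site values: Z = 5e6 kg m^-2 s^-1, f = 2 Hz, I_MS = 6.
a = peak_acceleration(i_ms=6.0, impedance=5e6, freq=2.0)
```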
The tropical lapse rate steepened during the Last Glacial Maximum.
Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted.
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft²-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T, we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T, is independent of genome size, a relationship which is observed across broad groups of real organisms.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activity from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found to lie within the estimate at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
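The prediction scheme amounts to a linear regression of Rmax on the rising rate; a sketch on synthetic data (all numbers below are invented, not the paper's cycle measurements):

```python
import random

random.seed(2)

# Synthetic "cycles": rising rate beta_a and the subsequent maximum Rmax.
beta = [5.0 + 10.0 * random.random() for _ in range(20)]
rmax = [20.0 + 8.0 * b + random.gauss(0.0, 10.0) for b in beta]

# Ordinary least squares fit of Rmax = intercept + slope * beta_a.
n = len(beta)
mb = sum(beta) / n
mr = sum(rmax) / n
slope = sum((b - mb) * (r - mr) for b, r in zip(beta, rmax)) / sum(
    (b - mb) ** 2 for b in beta)
intercept = mr - slope * mb

# Predicted solar maximum for a new cycle with an observed rising rate of 7.5:
r_pred = intercept + slope * 7.5
```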
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (ṀO2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10-fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent within species. Because MMR sets the upper bound of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling finite-thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The explosion tests showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m³, and the fastest pressure rise, 68 bar/s, was also observed at 450 g/m³.
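Extracting pmax and (dp/dt)max from a concentration series reduces to taking maxima over the test results; a sketch with hypothetical values loosely echoing those reported above:

```python
# Hypothetical results: concentration (g/m^3) -> (pmax in bar, (dp/dt)max in bar/s).
tests = {
    250: (6.10, 31.0),
    350: (7.20, 52.0),
    450: (7.95, 68.0),
    550: (7.40, 60.0),
}

conc_pmax = max(tests, key=lambda c: tests[c][0])  # concentration giving pmax
conc_rate = max(tests, key=lambda c: tests[c][1])  # concentration giving (dp/dt)max
pmax = tests[conc_pmax][0]
dpdt_max = tests[conc_rate][1]
```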
A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation
Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.
2011-01-01
Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank degenerate, is reduced
78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans
2013-03-04
... have removed the term. Comment: Don't remove the ``average agricultural loan customer'' definition. The... the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow the.... Comment: FSA should let the market dictate what interest rate lenders charge guaranteed borrowers, rather...
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-07-30
In Rechargeable Wireless Sensor Networks (R-WSNs), achieving the maximum data collection rate requires sensors to operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a higher, ideally maximum, data collection rate is the ultimate objective of sensor deployment, surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper bound on the data generation rate of a network by formulating rate maximization as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology-control scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
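A minimal sketch of the dual/subgradient idea on a toy version of the rate-maximization LP; the node caps, energy costs, and budget are invented, and the authors' actual formulation with data aggregation is richer.

```python
# Toy LP: maximize sum(r_i) s.t. r_i <= R_i and sum(e_i * r_i) <= E,
# solved through the Lagrangian dual with a constant-step subgradient method;
# averaging the primal iterates recovers an approximately optimal allocation.
R = [4.0, 3.0, 5.0]   # per-node rate caps (assumed)
e = [1.0, 2.0, 0.5]   # energy cost per unit rate (assumed)
E = 6.0               # shared energy budget (assumed)

lam, step = 0.0, 0.05
avg = [0.0] * len(R)
for t in range(1, 5001):
    # The Lagrangian maximizer for fixed lam is bang-bang in each r_i:
    r = [Ri if 1.0 - lam * ei > 0 else 0.0 for Ri, ei in zip(R, e)]
    avg = [(a * (t - 1) + ri) / t for a, ri in zip(avg, r)]
    g = E - sum(ei * ri for ei, ri in zip(e, r))  # subgradient of the dual
    lam = max(0.0, lam - step * g)

total_rate = sum(avg)  # approaches the LP optimum (8.5 for this data)
```

The multiplier oscillates around the optimal dual price while the running average of the primal iterates converges, which is the standard way to recover a primal solution from a dual subgradient method.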
9 CFR 381.68 - Maximum inspection rates-New turkey inspection system.
2010-01-01
... 9 Animals and Animal Products 2, Maximum inspection rates-New turkey inspection system. 381.68 Section 381.68 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... Procedures § 381.68 Maximum inspection rates—New turkey inspection system. (a) The maximum inspection...
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
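Fitting a regression with the slope fixed at 0.75 reduces to estimating only the group intercept; a sketch with invented log-log data:

```python
import math

# Invented (mass, rate) pairs roughly following rate = 10**(-0.75) * mass**0.75.
masses = [1e2, 1e3, 1e4, 1e5]       # body mass at maximum growth
rates = [5.6, 31.6, 178.0, 1000.0]  # maximum growth rate (made up)

# With the slope pinned to 0.75 (Metabolic Theory of Ecology), the
# least-squares intercept is the mean residual of
# log10(rate) - 0.75 * log10(mass):
intercept = sum(math.log10(r) - 0.75 * math.log10(m)
                for r, m in zip(rates, masses)) / len(masses)
# Back-transformed model: rate ~ 10**intercept * mass**0.75.
```

Comparing such group-specific intercepts is what separates, on average, the endotherm and ectotherm regression lines described above.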
Maximum Bipartite Matching Size And Application to Cuckoo Hashing
Kanizo, Yossi; Keslassy, Isaac
2010-01-01
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
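The correspondence used above can be sketched directly: each element picks d = 2 random buckets, the number of elements that fit in the table equals the maximum matching size of the induced bipartite graph, and unmatched elements go to the stash. A simple augmenting-path matcher with illustrative parameters (not the paper's asymptotic analysis):

```python
import random

def maximum_matching(choices, n_buckets):
    """Kuhn's augmenting-path algorithm for a left-degree-2 bipartite graph."""
    match = [-1] * n_buckets  # bucket -> element index, or -1 if free

    def augment(elem, seen):
        for b in choices[elem]:
            if b not in seen:
                seen.add(b)
                if match[b] == -1 or augment(match[b], seen):
                    match[b] = elem
                    return True
        return False

    return sum(augment(e, set()) for e in range(len(choices)))

random.seed(3)
n_elements, n_buckets = 80, 100
# Each element hashes to d = 2 distinct buckets (illustrative sizes):
choices = [random.sample(range(n_buckets), 2) for _ in range(n_elements)]
matched = maximum_matching(choices, n_buckets)
stash_size = n_elements - matched  # elements that overflow into the stash
```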
Application of Maximum Entropy Deconvolution to ${\\gamma}$-ray Skymaps
Raab, Susanne
2015-01-01
Skymaps measured with imaging atmospheric Cherenkov telescopes (IACTs) represent the real source distribution convolved with the point spread function of the observing instrument. Current IACTs have an angular resolution in the order of 0.1$^\\circ$ which is rather large for the study of morphological structures and for comparing the morphology in $\\gamma$-rays to measurements in other wavelengths where the instruments have better angular resolutions. Serendipitously it is possible to approximate the underlying true source distribution by applying a deconvolution algorithm to the observed skymap, thus effectively improving the instruments angular resolution. From the multitude of existing deconvolution algorithms several are already used in astronomy, but in the special case of $\\gamma$-ray astronomy most of these algorithms are challenged due to the high noise level within the measured data. One promising algorithm for the application to $\\gamma$-ray data is the Maximum Entropy Algorithm. The advantages of th...
Daniel L. Rabosky
2006-01-01
Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time with alternative models where rates have remained constant. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
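The constant-rate building block of such birth-death tests can be sketched outside R: under a Yule (pure-birth) model the waiting time with i lineages is exponential with rate i*lambda, so the ML estimate of lambda is the event count divided by the lineage-weighted total waiting time. The waiting times below are invented and this is not part of LASER itself.

```python
# Invented waiting times (Myr) while the reconstructed tree had i lineages,
# growing from 2 to 6 tips; one speciation event ends each waiting period.
waits = {2: 1.2, 3: 0.9, 4: 0.5, 5: 0.4}

events = len(waits)
weighted_time = sum(i * w for i, w in waits.items())
lam_hat = events / weighted_time  # ML speciation rate under the Yule model
```

Rate-variable tests of the kind LASER performs compare the maximized likelihood of this constant-rate model against models whose rate shifts over time.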
13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.
2010-01-01
... 13 Business Credit and Assistance: Maximum rate of amortization on... ADMINISTRATION, SMALL BUSINESS INVESTMENT COMPANIES, Financing of Small Businesses by Licensees, Structuring... rate of amortization on Loans and Debt Securities. The principal of any Loan (or the loan portion...
Application of the maximum entropy method to profile analysis
Armstrong, N.; Kalceff, W. [University of Technology, Department of Applied Physics, Sydney, NSW (Australia); Cline, J.P. [National Institute of Standards and Technology, Gaithersburg (United States)]
1999-12-01
A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc.
The Scaling of Maximum and Basal Metabolic Rates of Mammals and Birds
Barbosa, Lauro A.; Garcia, Guilherme J. M.; Silva, Jafferson K. L. da
2004-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the competing hypotheses is the correct theory. Here we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as $M^{6/7}$, maximum heart rate as $M^{-1/7}$, and muscular capillary density as $M^{-1/7}$, in agreement with data.
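A quick, hedged sketch of how such allometric exponents are recovered in practice: ordinary least squares on log-transformed mass and rate data. The data below are synthetic, generated with the predicted 6/7 exponent; they are not measurements from the study.

```python
import numpy as np

# Illustration only: recovering an allometric exponent b from (mass, rate)
# pairs by least squares in log-log space. The synthetic exponent 6/7
# matches the model's prediction for maximum metabolic rate.
rng = np.random.default_rng(0)
mass = np.logspace(0, 6, 50)                          # body mass, arbitrary units
rate = 2.0 * mass ** (6 / 7) * np.exp(rng.normal(0, 0.05, mass.size))

b, log_a = np.polyfit(np.log(mass), np.log(rate), 1)  # slope = exponent
print(f"fitted exponent b = {b:.3f} (model predicts 6/7 = {6 / 7:.3f})")
```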
Optimum poultry litter rates for maximum profit vs. yield in cotton production
Cotton lint yield responds well to increasing rates of poultry litter fertilization, but little is known of how optimum rates for yield compare with optimum rates for profit. The objectives of this study were to analyze cotton lint yield response to poultry litter application rates, determine and co...
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was performed to provide guidance to regulatory agencies on setting maximum ozone emission rates for consumer appliances. Measured distributions of air exchange rates, ozone decay rates and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h⁻¹) as a function of the percentage of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from -1.1 to 2.3 mg h⁻¹ for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th-percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, on 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission rate standard is prudent for Los Angeles, Elizabeth, NJ and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, the higher allowable emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission rate standard.
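The steady-state balance behind this kind of analysis can be sketched in a few lines. The generic model C_in = (a*C_out + E/V)/(a + k) and all parameter values below are illustrative assumptions, not the paper's Monte Carlo inputs; the sketch only shows why high air exchange can force the allowable emission rate negative.

```python
# a: air exchange rate (1/h), k: first-order ozone decay (1/h),
# V: building volume (m^3), concentrations in ug/m^3.
def allowable_emission(a, k, c_out, c_target, volume):
    """Emission rate that holds steady-state C_in at c_target."""
    return volume * ((a + k) * c_target - a * c_out)

V = 300.0                          # m^3, illustrative house volume
k = 2.8                            # 1/h, typical-order indoor ozone decay
c_out, c_target = 120.0, 40.0      # ug/m^3 (roughly 60 and 20 ppb)

for a in (0.3, 1.5):               # low vs. high air exchange, 1/h
    e_mg_h = allowable_emission(a, k, c_out, c_target, V) / 1000.0
    print(f"a = {a}/h -> allowable E = {e_mg_h:.1f} mg/h")
```

With the low exchange rate the allowable emission is positive; with the high one, infiltration alone exceeds the target and the allowable emission goes negative, mirroring the paper's qualitative finding.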
17 CFR 148.7 - Rulemaking on maximum rates for attorney fees.
2010-04-01
... 17 Commodity and Securities Exchanges: Rulemaking on maximum rates for attorney fees. 148.7 Section 148.7, Commodity and Securities Exchanges, COMMODITY FUTURES TRADING... increase in the cost of living or by special circumstances (such as limited availability of...
The 220-age equation does not predict maximum heart rate in children and adolescents
Verschuren, Olaf; Maltais, Desiree B.; Takken, Tim
2011-01-01
Our primary purpose was to provide maximum heart rate (HR(max)) values for ambulatory children with cerebral palsy (CP). The secondary purpose was to determine the effects of age, sex, ambulatory ability, height, and weight on HR(max). In 362 ambulatory children and adolescents with CP (213 males...
Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability
Dell, Z. R.; Pandian, A.; Bhowmick, A. K.; Swisher, N. C.; Stanic, M.; Stellingwerf, R. F.; Abarzhi, S. I.
2017-09-01
We focus on the classical problem of the dependence of the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics simulations to describe the simulation data with statistical confidence in a broad parameter regime. For given values of the shock strength, fluid density ratio, and wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio and is a characteristic quantity of RMI dynamics. We discover an exponential decay of the ratio of the initial and linear growth-rates of RMI with the initial perturbation amplitude, in excellent agreement with available data.
Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application
Riza Muhida
2013-07-01
Photovoltaic traffic light systems are a significant application of a renewable energy source. The development of such systems is an alternative effort by local authorities to reduce the fees paid to power suppliers for electricity generated from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, maximum power point tracking (MPPT) control is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, so that the stored power can be used at night or on cloudy days. The MPPT is implemented with a DC-DC converter that can step the voltage up or down to reach the maximum power point using Pulse Width Modulation (PWM) control. In experiments, the operating voltage under MPPT was 16.454 V, an error of 2.6% relative to the maximum power point voltage of the PV module, 16.9 V. Based on this result, the MPPT control successfully delivers near-maximal power from the PV module to the battery.
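A minimal sketch of one common MPPT strategy, perturb-and-observe, on a toy PV curve. The curve, its parameters and the controller step size are illustrative assumptions, not the module or PWM converter used in the study.

```python
def pv_power(v, v_oc=21.0, i_sc=3.8):
    """Toy PV curve: current collapses near the open-circuit voltage."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 8)   # crude fill-factor shape

def perturb_and_observe(v0=10.0, step=0.05, iters=2000):
    """Climb the power curve; reverse the perturbation when power drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:               # power dropped: reverse direction
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"settled near {v_mpp:.2f} V delivering {p_mpp:.1f} W")
```

The controller ends up oscillating in a small band around the true maximum of this toy curve (about 15.96 V here); in hardware the perturbation step trades tracking speed against that steady-state ripple.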
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with standard backpropagation (SBP) on the XOR problem.
Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.
Park, Bo Kyung; Um, In Chul
2017-02-01
Owing to the excellent cyto-compatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs depends strongly on the electro-spinning rate, the electro-spinning rate has become increasingly important. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning of SF, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field was increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12-fold by increasing the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of the fiber diameter on the electric field was not significant for less-concentrated SF solutions (7-9% SF). On the other hand, at higher SF concentrations the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs.
On the rate of convergence of the maximum likelihood estimator of a k-monotone density
Gao, FuChang; Wellner, Jon A.
2009-01-01
Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.
MB Distribution and its application using maximum entropy approach
Bhadra Suman
2016-01-01
The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness by intense publicity but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases, with political temperature remaining constant, when parties do not change their attractiveness but voters shift their loyalties (to more attractive parties).
A maximum feasible subset algorithm with application to radiation therapy
Sadegh, Payman
1999-01-01
Consider a set of linear one-sided or two-sided inequality constraints on a real vector X. The problem of interest is the selection of X so as to maximize the number of constraints that are simultaneously satisfied, or, equivalently, combinatorial selection of a maximum cardinality subset of feasible inequalities. Special classes of this problem are of interest in a variety of areas such as pattern recognition, machine learning, operations research, and medical treatment planning. This problem is generally solvable in exponential time. A heuristic polynomial time algorithm is presented in this paper...
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-09-01
In this paper, using the maximum principle to analyze dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for a high-tech perishable products supply chain, and obtain the optimal conditions and results. On this basis, we further investigate, through data simulation, the effect of the location of the customer order decoupling point (CODP) on the total cost, and the relations among the CODP, inventory policy and demand type. The simulations show that the CODP lies in the downstream portion of the product life cycle and is a linear function of it. The results indicate that the demand forecast is the main factor influencing the total cost, and that production driven by the demand forecast is the deciding factor of the total cost. The model also captures the relation between the total cost of the two-stage supply chain and its inventory and demand.
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. To date, however, no exact closed-form solution for the MPP has been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-after MPP. It changes with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive, with fast convergence and minimal misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. Although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking for a commercially available MSX-60 solar panel. The power electronics circuit is simulated with PSIM software.
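As a hedged illustration of the recursive-algorithm idea, the sketch below solves dP/dV = 0 by bisection for an idealized single-diode model I(V) = Iph - I0*(exp(V/(n*Vt)) - 1). The parameter values are illustrative, not those of the MSX-60 panel.

```python
import math

# Illustrative single-diode parameters: photocurrent, saturation current,
# ideality factor, thermal voltage.
Iph, I0, n, Vt = 3.8, 1e-9, 1.3, 0.02585

def current(v):
    return Iph - I0 * (math.exp(v / (n * Vt)) - 1)

def dP_dV(v):
    """Derivative of P = V*I(V); its root is the maximum power point."""
    e = math.exp(v / (n * Vt))
    return (Iph - I0 * (e - 1)) + v * (-I0 * e / (n * Vt))

v_oc = n * Vt * math.log(Iph / I0 + 1)   # open-circuit voltage
lo, hi = 0.0, v_oc                       # dP/dV > 0 at 0, < 0 at v_oc
for _ in range(60):                      # bisection on the monotone derivative
    mid = 0.5 * (lo + hi)
    if dP_dV(mid) > 0:
        lo = mid
    else:
        hi = mid
v_mpp = 0.5 * (lo + hi)
print(f"V_mpp = {v_mpp:.4f} V, P_mpp = {v_mpp * current(v_mpp):.3f} W")
```

Bisection is used here instead of Newton's method because the diode exponential makes naive Newton steps from a low starting voltage overshoot badly.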
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, so the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval of the R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm based on the numerical power method to realize the subspace filter, and apply the fast Fourier transform (FFT) for the correlation computation, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
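The correlation step of such an estimator can be sketched with an FFT-based autocorrelation (Wiener-Khinchin theorem). The ECG below is a synthetic R-peak train and the sampling rate is an assumption; the paper's subspace baseline filter is omitted.

```python
import numpy as np

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rr = 0.8                                     # true R-R interval, s (75 bpm)
ecg = np.exp(-((t % rr) / 0.02) ** 2)        # crude synthetic R-peak train
ecg += 0.1 * np.random.default_rng(1).normal(size=t.size)

x = np.abs(ecg - ecg.mean())                 # simple absolute operation
x -= x.mean()                                # remove the DC pedestal
n = x.size
# Zero-pad to 2n so the circular correlation equals the linear one.
acf = np.fft.irfft(np.abs(np.fft.rfft(x, 2 * n)) ** 2)[:n]

lo, hi = int(0.3 * fs), int(2.0 * fs)        # physiologic R-R search window
lag = lo + int(np.argmax(acf[lo:hi]))
print(f"estimated heart rate = {60 * fs / lag:.1f} bpm")
```

The autocorrelation peaks at the R-R lag (200 samples here), from which the heart rate follows directly.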
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers and cones on the maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (in micromol s⁻¹) scales with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices of high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size in thermogenic flowers.
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested maximum likelihood (ML) method which takes the autocorrelation of the observations into account. For each bacteria strain, six wells of optical density (OD) measurements... The model that best describes the data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model including the covariance structure. The ML method is shown to be a strong tool as it enables...
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Determination of zero-coupon and spot rates from treasury data by maximum entropy methods
Gzyl, Henryk; Mayoral, Silvia
2016-08-01
An interesting and important inverse problem in finance consists of the determination of spot rates or prices of zero coupon bonds when the only information available consists of the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices, nor imposes a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the prices of the zero coupon bonds into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Michael D. Hare
2014-09-01
A field trial in northeast Thailand during 2011–2013 compared the establishment and growth of 2 Panicum maximum cultivars, Mombasa and Tanzania, sown at seeding rates of 2, 4, 6, 8, 10 and 12 kg/ha. In the first 3 months of establishment, higher sowing rates produced significantly more DM than sowing at 2 kg/ha, but thereafter there were no significant differences in total DM production between sowing rates of 2–12 kg/ha. Lower sowing rates produced fewer tillers/m² than higher sowing rates, but these fewer tillers were significantly heavier than the more numerous smaller tillers produced by higher sowing rates. Mombasa produced 23% more DM than Tanzania in successive wet seasons (7,060 vs. 5,712 kg DM/ha from 16 June to 1 November 2011, and 16,433 vs. 13,350 kg DM/ha from 25 April to 24 October 2012). Both cultivars produced similar DM yields in the dry seasons (November–April), averaging 2,000 kg DM/ha in the first dry season and 1,750 kg DM/ha in the second dry season. Mombasa produced taller tillers (104 vs. 82 cm), longer leaves (60 vs. 47 cm), wider leaves (2.0 vs. 1.8 cm) and heavier tillers (1.0 vs. 0.7 g) than Tanzania, but fewer tillers/m² (260 vs. 304). If farmers improve soil preparation and place more emphasis on sowing techniques, there is potential to dramatically reduce seed costs. Keywords: Guinea grass, tillering, forage production, seeding rates, Thailand. DOI: 10.17138/TGFT(2)246-253
Evangelia Karagianni
2016-04-01
By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelago over six recent years (2007–2012), the effect of the weather on electromagnetic wave propagation is studied. The EM wave propagation characteristics depend on atmospheric refractivity, and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5-min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelago, with a view to improved accuracy as compared with the more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out and the data compared with a prediction based on the standard ITU-R method.
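The standard power-law form behind ITU-R-style rain attenuation estimates, specific attenuation gamma = k * R^alpha in dB/km, can be sketched directly. The k and alpha values below are placeholders, not the frequency- and polarization-dependent coefficients tabulated in ITU-R P.838.

```python
# Placeholder coefficients: in practice k and alpha come from the ITU-R
# P.838 tables for the link's frequency and polarization.
def rain_attenuation_db(rate_mm_h, path_km, k=0.01, alpha=1.2):
    """Total attenuation (dB) over an unreduced path length."""
    gamma = k * rate_mm_h ** alpha      # specific attenuation, dB/km
    return gamma * path_km

# e.g. a 30 mm/h shower over a 10 km link with the placeholder coefficients:
print(f"{rain_attenuation_db(30, 10):.2f} dB")
```

Real predictions also apply a path-length reduction factor, since heavy rain cells rarely cover the whole link.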
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
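For the simplest kinetic scheme, a single-exponential dwell-time distribution, the maximum-likelihood rate constant has a closed form: maximizing sum(log k - k*t_i) gives k_hat = n / sum(t_i). A hedged sketch on synthetic dwells (the rate value is illustrative):

```python
import random

# Synthetic dwell times drawn from p(t) = k*exp(-k*t) with an
# illustrative rate constant, then the closed-form ML estimate.
rng = random.Random(42)
k_true = 2.5                                      # 1/s, illustrative
dwells = [rng.expovariate(k_true) for _ in range(20000)]

k_hat = len(dwells) / sum(dwells)                 # ML estimator n / sum(t_i)
print(f"k_hat = {k_hat:.3f} (true {k_true})")
```

Multi-state pathways like those in the abstract yield mixtures or convolutions of exponentials instead, and the likelihood must then be maximized numerically.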
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As the main result, a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of the maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Phylogenetic prediction of the maximum per capita rate of population growth.
Fagan, William F; Pearson, Yanthe E; Larsen, Elise A; Lynch, Heather J; Turner, Jessica B; Staver, Hilary; Noble, Andrew E; Bewick, Sharon; Goldberg, Emma E
2013-07-22
The maximum per capita rate of population growth, r, is a central measure of population biology. However, researchers can only directly calculate r when adequate time series, life tables and similar datasets are available. We instead view r as an evolvable, synthetic life-history trait and use comparative phylogenetic approaches to predict r for poorly known species. Combining molecular phylogenies, life-history trait data and stochastic macroevolutionary models, we predicted r for mammals of the Caniformia and Cervidae. Cross-validation analyses demonstrated that, even with sparse life-history data, comparative methods estimated r well and outperformed models based on body mass. Values of r predicted via comparative methods were in strong rank agreement with observed values and reduced mean prediction errors by approximately 68 per cent compared with two null models. We demonstrate the utility of our method by estimating r for 102 extant species in these mammal groups with unknown life-history traits.
Statistical properties of the maximum Lyapunov exponent calculated via the divergence rate method.
Franchi, Matteo; Ricci, Leonardo
2014-12-01
The embedding of a time series provides a basic tool to analyze dynamical properties of the underlying chaotic system. To this purpose, the choice of the embedding dimension and lag is crucial. Although several methods have been devised to tackle the issue of the optimal setting of these parameters, a conclusive criterion to make the most appropriate choice is still lacking. An accepted procedure to rank different embedding methods relies on the evaluation of the maximum Lyapunov exponent (MLE) out of embedded time series that are generated by chaotic systems with explicit analytic representation. The MLE is evaluated as the local divergence rate of nearby trajectories. Given a system, embedding methods are ranked according to how close such MLE values are to the true MLE. This is provided by the so-called standard method in a way that exploits the mathematical description of the system and does not require embedding. In this paper we study the dependence of the finite-time MLE evaluated via the divergence rate method on the embedding dimension and lag in the case of time series generated by four systems that are widely used as references in the scientific literature. We develop a completely automatic algorithm that provides the divergence rate and its statistical uncertainty. We show that the uncertainty can provide useful information about the optimal choice of the embedding parameters. In addition, our approach allows us to find which systems provide suitable benchmarks for the comparison and ranking of different embedding methods.
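A minimal sketch of the divergence-rate idea on a system with a known MLE: the logistic map x -> 4x(1-x), whose true maximum Lyapunov exponent is ln 2. Direct simulation of two nearby trajectories with per-step renormalization stands in for the paper's embedding and nearest-neighbour searches over a measured time series.

```python
import math

def mle_divergence(x0=0.3, d0=1e-9, steps=50000):
    """Average log-growth of a small separation, renormalized each step."""
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(steps):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        d = abs(y - x)
        if d == 0.0:                        # guard against exact coincidence
            d = 1e-300
        total += math.log(d / d0)
        y = x + (d0 if y >= x else -d0)     # renormalize the separation
    return total / steps

est = mle_divergence()
print(f"estimated MLE = {est:.3f} (ln 2 = {math.log(2):.3f})")
```

The finite-time average converges to ln 2 as the orbit samples the invariant measure; the spread across different initial conditions gives exactly the kind of statistical uncertainty the paper exploits.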
Shi Jingtao; Wu Zhen
2011-01-01
A stochastic maximum principle for the risk-sensitive optimal control problem of jump diffusion processes with an exponential-of-integral cost functional is derived, assuming that the value function is smooth, where the diffusion and jump terms may both depend on the control. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equations and the maximum condition depend heavily on the risk-sensitive parameter. As an application, a linear-quadratic risk-sensitive control problem is solved using the derived maximum principle, and an explicit optimal control is obtained.
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
[No author listed]
2007-01-01
New distributions for the statistics of wave groups, based on the maximum entropy principle, are presented. Maximum entropy distributions appear to be superior to conventional distributions when only a limited amount of information is available. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
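The FFT filtering step for the wave envelope can be sketched as a standard analytic-signal (Hilbert) filter; this generic construction is offered as an assumption about the procedure, not the authors' exact code:

```python
import numpy as np

def envelope_fft(x):
    """Wave envelope via the analytic signal, built in the frequency domain
    (zero out negative frequencies, double the positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# amplitude-modulated test wave: 5 Hz carrier with 0.2 Hz group structure
t = np.linspace(0.0, 10.0, 2000, endpoint=False)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 5.0 * t)
env = envelope_fft(x)   # recovers the slowly varying group envelope
```

On this test wave the recovered envelope matches the known modulation 1 + 0.5·sin(2π·0.2t) closely, since the modulation bandwidth lies well below the carrier.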
Alvah C. Stahlnecker IV
2008-12-01
A percentage of either measured or predicted maximum heart rate is commonly used to prescribe and monitor exercise intensity. However, maximum heart rate in athletes may be greater during competition or training than during laboratory exercise testing. Thus, the aim of the present investigation was to determine whether endurance-trained runners train and compete at or above laboratory measures of 'maximum' heart rate. Maximum heart rates were measured with a treadmill graded exercise test (GXT) in a laboratory setting in 10 female and 10 male National Collegiate Athletic Association (NCAA) Division 2 cross-country and distance-event track athletes. Maximum training and competition heart rates were measured during a high-intensity interval training day (TR HR) and during competition (COMP HR) at an NCAA meet. TR HR (207 ± 5.0 b·min-1; means ± SEM) and COMP HR (206 ± 4 b·min-1) were significantly (p < 0.05) higher than maximum heart rates obtained during the GXT (194 ± 2 b·min-1). The heart rate at the ventilatory threshold measured in the laboratory occurred at 83.3 ± 2.5% of the heart rate at VO2max, with no differences between the men and women. However, the heart rate at the ventilatory threshold was only 77% of the maximal COMP HR or TR HR. To optimize training-induced adaptation, training intensity for NCAA Division 2 distance-event runners should not be based on laboratory assessment of maximum heart rate, but instead on maximum heart rate obtained either during training or during competition.
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
Why does steady-state magnetic reconnection have a maximum local rate of order 0.1?
Liu, Yi-Hsin; Guo, F; Daughton, W; Li, H; Cassak, P A; Shay, M A
2016-01-01
Simulations suggest collisionless steady-state magnetic reconnection of Harris-type current sheets proceeds with a rate of order 0.1, independent of dissipation mechanism. We argue this long-standing puzzle is a result of constraints at the magnetohydrodynamic (MHD) scale. We perform a scaling analysis of the reconnection rate as a function of the opening angle made by the upstream magnetic fields, finding a maximum reconnection rate close to 0.2. The predictions compare favorably to particle-in-cell simulations of relativistic electron-positron and non-relativistic electron-proton reconnection. The fact that simulated reconnection rates are close to the predicted maximum suggests reconnection proceeds near the most efficient state allowed at the MHD-scale. The rate near the maximum is relatively insensitive to the opening angle, potentially explaining why reconnection has a similar fast rate in differing models.
Benício, Kadja; Dias, Fernando A. L.; Gualdi, Lucien P.; Aliverti, Andrea; Resqueti, Vanessa R.; Fregonezi, Guilherme A. F.
2015-01-01
OBJECTIVE: To assess the influence of diaphragmatic activation control (diaphC) on Sniff Nasal-Inspiratory Pressure (SNIP) and Maximum Relaxation Rate of inspiratory muscles (MRR) in healthy subjects. METHOD: Twenty subjects [9 male; age: 23 (SD=2.9) years; BMI: 23.8 (SD=3) kg/m2; FEV1/FVC: 0.9 (SD=0.1)] performed 5 sniff maneuvers under two conditions: with or without instruction on diaphC. Before the first maneuver, a brief explanation was given to the subjects on how to perform the sniff test. For the sniff test with diaphC, subjects were instructed to perform intense diaphragm activation. The best SNIP and MRR values were used for analysis. MRR was calculated as the maximum first derivative of pressure over time (dP/dtmax) and was normalized by dividing it by the peak pressure (SNIP) from the same maneuver. RESULTS: SNIP values were significantly different between maneuvers with and without diaphC [without diaphC: -100 (SD=27.1) cmH2O; with diaphC: -72.8 (SD=22.3) cmH2O; p<0.0001], whereas normalized MRR values were not statistically different [without diaphC: -9.7 (SD=2.6); with diaphC: -8.9 (SD=1.5); p=0.19]. Without diaphC, 40% of the sample did not reach the appropriate sniff criteria found in the literature. CONCLUSION: Diaphragmatic control performed during the SNIP test influences the obtained inspiratory pressure, which is lower when diaphC is performed. However, there was no influence on normalized MRR. PMID:26578254
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven to be an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
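The envelope-autocorrelation period estimate that replaces the user-supplied prior period can be illustrated with a minimal sketch (a simplified stand-in, not the authors' implementation):

```python
import numpy as np

def estimate_fault_period(x, fs, min_lag_s=0.001):
    """Estimate the dominant impulse period from the autocorrelation of the
    rectified envelope -- the idea IMCKD uses in place of a user-supplied
    prior period (a crude stand-in for a proper envelope detector)."""
    env = np.abs(x) - np.mean(np.abs(x))       # zero-mean rectified envelope
    f = np.fft.rfft(env, 2 * env.size)         # FFT-based autocorrelation
    ac = np.fft.irfft(f * np.conj(f))[:env.size]
    min_lag = int(min_lag_s * fs)              # skip the zero-lag peak
    return (min_lag + np.argmax(ac[min_lag:])) / fs

# synthetic bearing-like signal: decaying 3 kHz bursts every 0.01 s in noise
fs, period = 20000, 0.01
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.random.default_rng(0).normal(0.0, 0.2, t.size)
k = np.arange(200)
burst = np.exp(-k / 30.0) * np.sin(2 * np.pi * 3000.0 * k / fs)
for i in range(0, t.size, int(period * fs)):
    x[i:i + 200] += burst
print(estimate_fault_period(x, fs))            # ≈ 0.01, the true burst period
```

The autocorrelation of the envelope peaks at the lag equal to the impulse repetition period, which is then refined iteratively in the full IMCKD scheme.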
[No author listed]
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimate (MQLE) in QLNM is obtained. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the rate given by the law of the iterated logarithm (LIL) for partial sums of i.i.d. variables, and thus cannot be improved.
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 ± 30 or 185 ± 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
Maximum principles of nonhomogeneous subelliptic p-Laplace equations and applications
Liu Haifeng; Niu Pengcheng
2006-01-01
Maximum principles for weak solutions of nonhomogeneous subelliptic p-Laplace equations related to smooth vector fields {Xj} satisfying the Hörmander condition are proved by the choice of suitable test functions and an adaptation of the classical Moser iteration method. Some applications are given.
Uniform estimate for maximum of randomly weighted sums with applications to insurance risk theory
WANG Dingcheng; SU Chun; ZENG Yong
2005-01-01
This paper obtains a uniform estimate for the maximum of sums of independent heavy-tailed random variables with nonnegative random weights, which can be arbitrarily dependent on each other. Applications to ruin probabilities in a discrete-time risk model with dependent stochastic returns are then considered.
Ambarita, Himsar; Kishinami, Koki; Daimaruya, Mashashi; Tokura, Ikuo; Kawai, Hideki; Suzuki, Jun; Kobiyama, Mashayosi; Ginting, Armansyah
The present paper studies the optimum plate-to-plate spacing for maximum heat transfer rate in a flat-plate heat exchanger. The heat exchanger consists of a number of parallel flat plates. The working fluids flow under the same operating conditions, either fixed pressure head or fixed fan power input. Parallel and counter flow directions of the working fluids were considered. While the volume of the heat exchanger is kept constant, the plate number is varied; hence the spacing between plates, and with it the heat transfer rate, varies, and there exists a maximum heat transfer rate. The objective of this paper is to find the optimum plate-to-plate spacing for maximum heat transfer rate. To solve the problem, analytical and numerical solutions have been carried out. In the analytical solution, correlations for the optimum plate-to-plate spacing as a function of non-dimensional parameters were developed. A numerical simulation was then carried out to evaluate the correlations. The results show that the optimum plate-to-plate spacing for a counter flow heat exchanger is smaller than that for a parallel flow one, while the maximum heat transfer rate for a counter flow heat exchanger is greater than that for a parallel flow one.
Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.
2014-01-01
and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate...
7 CFR 1.187 - Rulemaking on maximum rates for attorney fees.
2010-01-01
... the types of proceedings in which the rate should be used. It also should explain fully the reasons... certain types of proceedings), the Department may adopt regulations providing that attorney fees may be awarded at a rate higher than $125 per hour in some or all of the types of proceedings covered by...
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
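The reward-band logic can be reduced to a one-line estimator: if reward rings are assumed to be reported essentially 100% of the time, the reporting rate of ordinary rings is the ratio of the two recovery rates. The counts below are hypothetical, not from the Black Duck study:

```python
def reporting_rate(std_recovered, std_ringed, rew_recovered, rew_ringed):
    """Reporting-rate point estimate: standard-ring recovery rate divided by
    reward-ring recovery rate (reward rings assumed reported ~100%)."""
    return (std_recovered / std_ringed) / (rew_recovered / rew_ringed)

# hypothetical: 120 of 3000 standard rings vs 90 of 1000 reward rings recovered
print(round(reporting_rate(120, 3000, 90, 1000), 3))  # 0.444
```

The full models in the paper stratify these rates by geographic and temporal strata and compare constrained fits by likelihood ratio; this sketch is only the unstratified core.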
McCarthy, C M; Taylor, M A; Dennis, M W
1987-01-01
Mycobacterium avium is a human pathogen which may cause either chronic or disseminated disease, and the organism exhibits a slow rate of growth. This study provides information on the growth rate of the organism in chronically infected mice and its maximal growth rate in vitro. M. avium was grown in continuous culture, limited for nitrogen with 0.5 mM ammonium chloride, at dilution rates ranging from 0.054 to 0.153 h-1. The steady-state concentrations of ammonia nitrogen and M. avium cells for each dilution rate were determined. The bacterial saturation constant for growth-limiting ammonia was 0.29 mM (4 micrograms nitrogen/ml) and, from this, the maximal growth rate for M. avium was estimated to be 0.206 h-1, corresponding to a doubling time of 3.4 h. BALB/c mice were infected intravenously with 3 × 10^6 colony-forming units and a chronic infection resulted, typical of virulent M. avium strains. Over a period of 3 months, the number of mycobacteria remained constant in the lungs, but increased 30-fold and 8,900-fold, respectively, in the spleen and mesenteric lymph nodes. The latter increase appeared to be due to proliferation in situ. The generation time of M. avium in the mesenteric lymph nodes was estimated to be 7 days.
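The chemostat estimates above follow the Monod relation D = μmax·S/(Ks + S) at steady state. The sketch below generates synthetic steady-state points from the reported Ks and μmax (it is not the raw data) and shows how a Lineweaver-Burk style fit recovers μmax:

```python
import numpy as np

Ks, mu_max = 0.29, 0.206                  # reported values (mM, 1/h)
D = np.array([0.054, 0.08, 0.10, 0.12, 0.153])   # dilution rates (1/h)
S = Ks * D / (mu_max - D)                 # steady-state substrate from Monod

# Lineweaver-Burk form: 1/D = (Ks/mu_max)*(1/S) + 1/mu_max,
# so a straight-line fit of 1/D against 1/S yields mu_max and Ks.
slope, intercept = np.polyfit(1.0 / S, 1.0 / D, 1)
mu_est = 1.0 / intercept
print(round(mu_est, 3))                   # 0.206
print(round(slope * mu_est, 2))           # 0.29 (= Ks)
print(round(np.log(2) / mu_est, 1))       # 3.4 h doubling time
```

Because the points are generated exactly on the Monod curve, the fit returns the generating parameters; with real chemostat data the same regression gives the estimates.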
Hyland, D. C.
1985-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modelling and reduced order control design method for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed and the application of the methodology to several large space structure (LSS) problems of representative complexity is illustrated.
Quinn, T Alexander; Kohl, Peter
2016-12-01
Mechanical stimulation (MS) represents a readily available, non-invasive means of pacing the asystolic or bradycardic heart in patients, but benefits of MS at higher heart rates are unclear. Our aim was to assess the maximum rate and sustainability of excitation by MS vs. electrical stimulation (ES) in the isolated heart under normal physiological conditions. Trains of local MS or ES at rates exceeding intrinsic sinus rhythm (overdrive pacing; lowest pacing rates 2.5±0.5 Hz) were applied to the same mid-left ventricular free-wall site on the epicardium of Langendorff-perfused rabbit hearts. Stimulation rates were progressively increased, with a recovery period of normal sinus rhythm between each stimulation period. Trains of MS caused repeated focal ventricular excitation from the site of stimulation. The maximum rate at which MS achieved 1:1 capture was lower than during ES (4.2±0.2 vs. 5.9±0.2 Hz, respectively). At all overdrive pacing rates for which repetitive MS was possible, 1:1 capture was reversibly lost after a finite number of cycles, even though same-site capture by ES remained possible. The number of MS cycles until loss of capture decreased with rising stimulation rate. If interspersed with ES, the number of MS to failure of capture was lower than for MS only. In this study, we demonstrate that the maximum pacing rate at which MS can be sustained is lower than that for same-site ES in isolated heart, and that, in contrast to ES, the sustainability of successful 1:1 capture by MS is limited. The mechanism(s) of differences in MS vs. ES pacing ability, potentially important for emergency heart rhythm management, are currently unknown, thus warranting further investigation. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
Yun, Dongfang
2016-01-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by probing the sharpness of estimates on the growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical, critical and supercritical regime. First, we obtain estimates on these rates of growth and then show that these estimates are sharp up to numerical prefactors. In particular, we conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. In addition, nontrivial be...
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as that specified in the LIL for i.i.d. partial sums and thus cannot be improved.
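The rate in question, written with the estimator notation \hat{\beta}_n for the quasi-likelihood solution (our notation, for reference), is

```latex
\hat{\beta}_n - \beta_0 = O\!\left(n^{-1/2}(\log\log n)^{1/2}\right) \quad \text{a.s.},
```

which matches the classical law of the iterated logarithm for partial sums $S_n$ of i.i.d. variables with mean zero and variance $\sigma^2$:

```latex
\limsup_{n\to\infty} \frac{|S_n|}{\sqrt{2\sigma^2 n \log\log n}} = 1 \quad \text{a.s.},
\qquad\text{so}\qquad
\frac{S_n}{n} = O\!\left(n^{-1/2}(\log\log n)^{1/2}\right) \quad \text{a.s.}
```

Since even the i.i.d. sample mean attains no better almost-sure rate, the GLM rate cannot be improved.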
Riisgård, Hans Ulrik; Larsen, Poul Scheel; Pleissner, Daniel
2014-01-01
rate (F, l h-1), W (g), and L (mm) as described by the equations: FW = aWb and FL = cLd, respectively. This is done by using available and new experimental laboratory data on M. edulis obtained by members of the same research team using different methods and controlled diets of cultivated algal cells...
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high OLR, from 3.7 to 12.9 kg-VS m^-3 day^-1, for 225 days. Periods without organic loading were arranged between the loading periods. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m^-3 day^-1 was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS^-1). The cell density increased in the periods without organic loading and reached 10.9 × 10^10 cells mL^-1 on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r^2 = 0.996, P < 0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m^-3 day^-1 was obtained for mesophilic single-stage wet anaerobic digestion able to maintain stable operation with high methane yield and VS reduction.
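Inverting the exponential TSS-OLR fit shows how a solids ceiling translates into a maximum loading rate; the ceiling of 100 (in the units of the reported fit) used below is an illustrative assumption, not a value stated in the abstract:

```python
import math

# Inverting the reported fit TSS = 17.3 * exp(0.1679 * OLR) gives the OLR at
# which sludge solids would reach a given ceiling. The ceiling of 100 below
# is an illustrative assumption, not a value stated in the abstract.
def olr_at_tss(tss_limit, a=17.3, k=0.1679):
    return math.log(tss_limit / a) / k

print(round(olr_at_tss(100), 1))  # 10.4, near the reported maximum of 10.5
```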
Baofeng Shi
2016-01-01
This paper introduces a novel decision assessment method suitable for customers' credit risk evaluation and credit decisions. First, the paper creates an optimal credit rating model consisting of an objective function and two constraint conditions. The first constraint, strictly increasing LGDs, eliminates the unreasonable phenomenon of a higher credit rating carrying a higher LGD (loss given default). Second, on the basis of the credit rating results, a credit decision-making assessment model is established based on measuring the acceptable maximum LGD of commercial banks. Third, empirical results using data on 2817 farmers' microfinance loans from a Chinese commercial bank suggest that the proposed approach can accurately identify the good customers among all loan applications. Moreover, the approach provides a reference for the decision assessment of customers in other commercial banks worldwide.
Validity of heart rate based nomograms for estimation of maximum oxygen uptake in the Indian population.
Kumar, S Krishna; Khare, P; Jaryal, A K; Talwar, A
2012-01-01
Maximal oxygen uptake (VO2max) during a graded maximal exercise test is the objective method to assess cardiorespiratory fitness. Maximal oxygen uptake testing is limited to only a few laboratories as it requires trained personnel and strenuous effort by the subject. At the population level, submaximal tests have been developed to derive VO2max indirectly, based on heart rate nomograms, or to calculate it from anthropometric measures. These heart rate based prediction standards were developed for western populations and are used routinely to predict VO2max in the Indian population. In the present study, VO2max was directly measured by a maximal exercise test on a bicycle ergometer and was compared with VO2max derived from recovery heart rate in the Queen's College step test (QCST) (PVO2max I) and with VO2max derived from the Wasserman equation based on anthropometric parameters and age (PVO2max II), in a well-defined age group of healthy male adults from New Delhi. The directly measured VO2max showed no significant correlation with either the VO2max estimated by QCST or the VO2max predicted by the Wasserman equation. The Bland-Altman approach to limits of agreement between VO2max and PVO2max I or PVO2max II revealed that the limits of agreement were large, indicating the inapplicability of prediction equations developed for western populations in the population under study. There is thus an urgent need to develop nomograms for the Indian population, possibly even for different ethnic sub-populations in the country.
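The Bland-Altman computation used above can be sketched as follows; the VO2max pairs are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical VO2max pairs (mL/kg/min): directly measured vs nomogram-predicted
direct    = [42.1, 38.5, 45.0, 40.2, 36.8, 44.3]
predicted = [46.0, 35.2, 49.5, 37.9, 41.0, 47.2]
bias, (lo, hi) = bland_altman(direct, predicted)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # -1.65 -8.52 5.22
```

When the interval (lo, hi) spans a clinically meaningful range, as the paper found, the two methods cannot be used interchangeably even if their means agree.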
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose To estimate the association of age with maximal heart rate (MHR). Methods Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusion, 9622 eligible tests remained. Results In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years, eMHR = 179 + 0.29*age - 0.011*age^2. The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both year 0 and 7 tests (age range 18 to 37), eMHR = 189 - 0.35*age; and in 1574 people who completed both year 7 and 20 tests (age range 25 to 50), eMHR = 199 - 0.63*age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20; while in the highest baseline BMI quartile there was a linear rate of decline of approximately 0.7 beats/minute/year over the full age range of 18 to 50 years. Conclusion Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower in young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with the lowest BMI below age 40. PMID:20639723
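The pooled quadratic fit is easy to evaluate directly; decline_rate below is simply its derivative (our illustration of the age-dependent loss, not a separate analysis from the paper):

```python
def emhr(age):
    """CARDIA pooled quadratic fit, ages 18-50:
    eMHR = 179 + 0.29*age - 0.011*age^2 (beats/minute)."""
    return 179 + 0.29 * age - 0.011 * age ** 2

def decline_rate(age):
    """MHR loss in beats/minute per year of age (negated derivative of fit)."""
    return -(0.29 - 0.022 * age)

print(round(emhr(20), 1), round(emhr(50), 1))                  # 180.4 166.0
print(round(decline_rate(20), 2), round(decline_rate(50), 2))  # 0.15 0.81
```

The derivative makes the paper's conclusion concrete: the fitted loss is about 0.15 beats/minute/year at age 20 but about 0.81 beats/minute/year at age 50.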
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
[…] are used for parameter estimation. The data is log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded […]
Three dimensional winds: A maximum cross-correlation application to elastic lidar data
Buttler, William Tillman [Univ. of Texas, Austin, TX (United States)
1996-05-01
Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser doppler velocimeter. The wind fields showed persistent large scale eddies as well as general terrain-following winds in the Rio Grande valley.
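The core of the maximum cross-correlation technique, finding the displacement that maximizes the correlation between two successive scans, can be sketched as follows (a toy version, not the LANL processing chain, and without the vector median filter):

```python
import numpy as np

def mcc_shift(a, b):
    """Integer displacement (dy, dx) that best aligns patch b with patch a,
    taken at the maximum of their FFT-based cross-correlation."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    cc = np.real(np.fft.ifft2(np.conj(A) * B))
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    if dy > a.shape[0] // 2:          # map wrapped indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# two "scans" of the same synthetic aerosol field, the second displaced
# as a frozen pattern advected by the wind between scans
rng = np.random.default_rng(1)
scan1 = rng.normal(size=(64, 64))
scan2 = np.roll(scan1, (3, -4), axis=(0, 1))
print(mcc_shift(scan1, scan2))   # (3, -4); divide by the scan interval for speed
```

Dividing the recovered pixel displacement by the time between scans, and scaling by the pixel size, gives the transverse wind estimate for that patch.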
Optimizing nitrogen rates in the midwestern United States for maximum ecosystem value
Patrick M. Ewing
2015-03-01
The importance of corn production to the midwestern United States cannot be overestimated. However, high production requires high nitrogen fertilization, which carries costs to environmental services such as water quality. Therefore, a trade-off exists between the production of corn yield and water quality. We used the Groundwater Vulnerability Assessment for Shallow depths and Crop Environment Resource Synthesis-Maize models to investigate the nature of this trade-off while testing the Simple Analytic Framework trade-offs featured in this Special Feature. First, we estimated the current levels of yield and water quality production in northeastern Iowa and southern Minnesota at the 1-square-kilometer, county, and regional scales. We then constructed an efficiency frontier from optimized nitrogen application patterns to maximize the production of both yield and water quality. Results highlight the context dependency of this trade-off, but show room for increasing the production of both services to the benefit of all stakeholders. We discuss these results in the context of spatial scale, biophysical limitations to the production of services, and stakeholder outcomes given disparate power balances and biophysical contexts.
Snelling, Edward P; Seymour, Roger S; Matthews, Philip G D; Runciman, Sue; White, Craig R
2011-10-01
The hemimetabolous migratory locust Locusta migratoria progresses through five instars to the adult, increasing in size from 0.02 to 0.95 g, a 45-fold change. Hopping locomotion occurs at all life stages and is supported by aerobic metabolism and provision of oxygen through the tracheal system. This allometric study investigates the effect of body mass (Mb) on oxygen consumption rate (MO2, μmol h^-1) to establish resting metabolic rate (MRO2), maximum metabolic rate during hopping (MMO2) and maximum metabolic rate of the hopping muscles (MMO2,hop) in first instar, third instar, fifth instar and adult locusts. Oxygen consumption rates increased throughout development according to the allometric equations MRO2 = 30.1Mb^(0.83±0.02), MMO2 = 155Mb^(1.01±0.02), MMO2,hop = 120Mb^(1.07±0.02) and, if adults are excluded, MMO2,juv = 136Mb^(0.97±0.02) and MMO2,juv,hop = 103Mb^(1.02±0.02). Increasing body mass by 20-45% with attached weights did not increase mass-specific MMO2 significantly at any life stage, although mean mass-specific hopping MO2 was slightly higher (ca. 8%) when juvenile data were pooled. The allometric exponents for all measures of metabolic rate are much greater than 0.75, and therefore do not support West, Brown and Enquist's optimised fractal network model, which predicts that metabolism scales with a 3/4-power exponent owing to limitations in the rate at which resources can be transported within the body.
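Allometric fits of the form MO2 = a·Mb^b are straight lines on log-log axes, so the exponent b is the slope of a log-log least-squares fit; the points below are synthetic, generated from the reported resting-rate fit (they are not the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
Mb = np.array([0.02, 0.08, 0.25, 0.95])    # body masses (g), instars to adult
# synthetic rates from the reported MRO2 = 30.1*Mb^0.83, with small
# multiplicative noise standing in for measurement scatter
MO2 = 30.1 * Mb ** 0.83 * np.exp(rng.normal(0.0, 0.02, Mb.size))
b, log_a = np.polyfit(np.log(Mb), np.log(MO2), 1)
print(round(b, 2))    # close to the generating exponent 0.83
```

An exponent recovered this way that sits well above 0.75, as all the reported exponents do, is what contradicts the 3/4-power prediction.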
The Stampacchia maximum principle for stochastic partial differential equations and applications
Chekroun, Mickaël D.; Park, Eunhee; Temam, Roger
2016-02-01
Stochastic partial differential equations (SPDEs) are considered, linear and nonlinear, for which we establish comparison theorems for the solutions, or positivity results a.e., and a.s., for suitable data. Comparison theorems for SPDEs are available in the literature. The originality of our approach is that it is based on the use of truncations, following the Stampacchia approach to the maximum principle. We believe that our method, which does not rely heavily on probabilistic considerations, is simpler than the existing approaches and, to a certain extent, more directly applicable to concrete situations. Among the applications, boundedness results and positivity results are respectively proved for the solutions of a stochastic Boussinesq temperature equation, and of reaction-diffusion equations perturbed by a non-Lipschitz nonlinear noise. Stabilization results for a Chafee-Infante equation perturbed by a nonlinear noise are also derived.
Facility Location Using Maximum Covering Model: An Application In Retail Sector
Çiğdem Alabaş Uslu
2012-06-01
In this study, a store location problem has been addressed for the service sector, and a real application problem for a leading firm in the modern retail sector in Turkey has been solved by modeling it with mathematical programming. Since imitating a store location in the retail sector is hard and provides an important competitive edge, determining accurate store locations becomes a critical issue for management. The application problem solved in this study is to choose appropriate areas for opening stores with different capacities and quantities in Umraniye, Istanbul. The problem has been converted to a maximum set covering model by considering after sales forecasts and also by taking into account several decision criteria foreseen by the management. Optimum solutions of the developed model for different scenarios have been obtained using a package program and presented to the firm management.
Movahednejad, E.; Ommi, F.; Hosseinalipour, S. M.; Chen, C. P.; Mahdavi, S. A.
2011-12-01
This paper describes the implementation of an instability analysis of wave growth on a liquid jet surface, and the maximum entropy principle (MEP) for prediction of the droplet diameter distribution in the primary breakup region. The early stage of the primary breakup, which contains the growth of waves on the liquid-gas interface, is deterministic; whereas the droplet formation stage at the end of primary breakup is random and stochastic. The stage of droplet formation after the liquid bulk breakup can be modeled by statistical means based on the maximum entropy principle. The MEP provides a formulation that predicts the atomization process while satisfying constraint equations based on conservation of mass, momentum and energy. The deterministic aspect considers the instability of wave motion on the jet surface before the liquid bulk breakup using linear instability analysis, which provides information on the maximum growth rate and corresponding wavelength of instabilities in the breakup zone. The two sub-models are coupled together using a momentum source term and the mean diameter of droplets. This model is also capable of considering drag force on droplets through gas-liquid interaction. The predicted results compared favorably with the experimentally measured droplet size distributions for hollow-cone sprays.
Applications of the principle of maximum entropy: from physics to ecology.
Banavar, Jayanth R; Maritan, Amos; Volkov, Igor
2010-02-17
There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. http://www.atgc-montpellier.fr/ReplacementMatrix/ olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Kruse, Marcelo Lapa; Kruse, José Cláudio Lupi; Leiria, Tiago Luiz Luz; Pires, Leonardo Martins; Gensas, Caroline Saltz; Gomes, Daniel Garcia; Boris, Douglas; Mantovani, Augusto; Lima, Gustavo Glotz de
2014-12-01
Occurrences of asymptomatic atrial fibrillation (AF) are common. It is important to identify AF because it increases morbidity and mortality. 24-hour Holter has been used to detect paroxysmal AF (PAF). The objective of this study was to investigate the relationship between occurrence of PAF in 24-hour Holter and the symptoms of the population studied. Cross-sectional study conducted at a cardiology hospital. 11,321 consecutive 24-hour Holter tests performed at a referral service were analyzed. Patients with pacemakers or with AF throughout the recording were excluded. There were 75 tests (0.67%) with PAF. The mean age was 67 ± 13 years and 45% were female. The heart rate (HR) over the 24 hours was a minimum of 45 ± 8 bpm, mean of 74 ± 17 bpm and maximum of 151 ± 32 bpm. Among the tests showing PAF, only 26% had symptoms. The only factor tested that showed a correlation with symptomatic AF was maximum HR (165 ± 34 versus 147 ± 30 bpm) (P = 0.03). Use of beta blockers had a protective effect against occurrence of PAF symptoms (odds ratio: 0.24, P = 0.031). PAF is a rare event in 24-hour Holter. The maximum HR during the 24 hours was the only factor correlated with symptomatic AF, and use of beta blockers had a protective effect against AF symptom occurrence.
Rate-independent systems theory and application
Mielke, Alexander
2015-01-01
This monograph provides both an introduction to and a thorough exposition of the theory of rate-independent systems, which the authors have worked on with a number of collaborators over many years. The focus is mostly on fully rate-independent systems, first on an abstract level with or without a linear structure, discussing various concepts of solutions with full mathematical rigor. The usefulness of the abstract concepts is then demonstrated on the level of various applications primarily in continuum mechanics of solids, including suitable approximation strategies with guaranteed numerical stability and convergence. Particular applications concern inelastic processes such as plasticity, damage, phase transformations, or adhesive-type contacts both at small strains and at finite strains. Other physical systems such as magnetic or ferroelectric materials, and couplings to rate-dependent thermodynamic models are also considered. Selected applications are accompanied by numerical simulations illustrating both t...
Karia Ritesh M
2012-04-01
Objective: The objectives of this study are to study the effect of smoking on Peak Expiratory Flow Rate and Maximum Voluntary Ventilation in apparently healthy tobacco smokers and non-smokers, and to compare the results of both groups to assess the effects of smoking. Method: The present study was carried out with computerized Pulmonary Function Test software named 'Spiro Excel' on 50 non-smokers and 50 smokers. Smokers were divided into three groups. A full series of tests takes 4 to 5 minutes. Tests were compared between the smokers and non-smokers groups by the unpaired t test. Statistical significance was indicated by a 'p' value < 0.05. Results: From the results it is found that the actual values of Peak Expiratory Flow Rate and Maximum Voluntary Ventilation are significantly lower in all smoker groups than in non-smokers. The difference in actual mean values increases as the degree of smoking increases. [National J of Med Res 2012; 2(2): 191-193]
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female (n = 10) subjects completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (F max) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration while maximal RFD (RFD max) was calculated as the greatest 10-ms average slope throughout that same contraction. F max declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
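The two outcome measures defined above (Fmax as the greatest force over a 25 ms window, RFDmax as the greatest 10 ms slope) can be made concrete on a synthetic force trace. A hedged sketch assuming a 1 kHz sampling rate (the study's acquisition settings are not given in the abstract); the data are invented:

```python
FS = 1000  # samples per second (assumed; 1 sample per ms)

def f_max(force, window_ms=25):
    """Greatest mean force (N) over any window_ms-long window."""
    w = window_ms  # at 1 kHz, 1 sample per ms
    return max(sum(force[i:i + w]) / w for i in range(len(force) - w + 1))

def rfd_max(force, window_ms=10):
    """Greatest average slope (N/s) over any window_ms-long window."""
    w = window_ms
    dt = w / FS
    return max((force[i + w] - force[i]) / dt for i in range(len(force) - w))

# Synthetic ramp-and-hold contraction: ramp to 1200 N over 0.3 s, then hold.
force = [min(4000 * (i / FS), 1200.0) for i in range(1000)]
print(f_max(force), rfd_max(force))
```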
Larson, Eric D.; St. Clair, Joshua R.; Sumner, Whitney A.; Bannister, Roger A.; Proenza, Cathy
2013-01-01
An inexorable decline in maximum heart rate (mHR) progressively limits human aerobic capacity with advancing age. This decrease in mHR results from an age-dependent reduction in “intrinsic heart rate” (iHR), which is measured during autonomic blockade. The reduced iHR indicates, by definition, that pacemaker function of the sinoatrial node is compromised during aging. However, little is known about the properties of pacemaker myocytes in the aged sinoatrial node. Here, we show that depressed excitability of individual sinoatrial node myocytes (SAMs) contributes to reductions in heart rate with advancing age. We found that age-dependent declines in mHR and iHR in ECG recordings from mice were paralleled by declines in spontaneous action potential (AP) firing rates (FRs) in patch-clamp recordings from acutely isolated SAMs. The slower FR of aged SAMs resulted from changes in the AP waveform that were limited to hyperpolarization of the maximum diastolic potential and slowing of the early part of the diastolic depolarization. These AP waveform changes were associated with cellular hypertrophy, reduced current densities for L- and T-type Ca2+ currents and the “funny current” (If), and a hyperpolarizing shift in the voltage dependence of If. The age-dependent reduction in sinoatrial node function was not associated with changes in β-adrenergic responsiveness, which was preserved during aging for heart rate, SAM FR, L- and T-type Ca2+ currents, and If. Our results indicate that depressed excitability of individual SAMs due to altered ion channel activity contributes to the decline in mHR, and thus aerobic capacity, during normal aging. PMID:24128759
The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis
Chen Yidong
2004-01-01
An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experimental data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organizing map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
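The clustering idea described above can be sketched compactly: link each point to its highest-magnitude neighbour within a radius, and let chains of links terminate at local maxima, which label the clusters. This is our simplified reading of the method, not the authors' exact algorithm:

```python
def lmc(points, magnitude, radius):
    """Assign each point the index of the local magnitude maximum it climbs to."""
    n = len(points)

    def near(i):  # indices within `radius` of point i (brute force)
        return [j for j in range(n) if j != i and
                sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= radius ** 2]

    parent = list(range(n))
    for i in range(n):
        higher = [j for j in near(i) if magnitude[j] > magnitude[i]]
        if higher:  # climb toward the strongest higher neighbour
            parent[i] = max(higher, key=lambda j: magnitude[j])

    def root(i):  # follow links to a local maximum (magnitude strictly rises)
        while parent[i] != i:
            i = parent[i]
        return i

    return [root(i) for i in range(n)]

pts = [(0.0,), (0.5,), (1.0,), (5.0,), (5.5,)]
mag = [1.0, 2.0, 1.5, 3.0, 2.5]
print(lmc(pts, mag, radius=1.0))  # → [1, 1, 1, 3, 3]: two clusters
```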
2000-07-01
The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
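For context, the effective dose that such guides regulate is the tissue-weighted sum of equivalent doses, E = Σ_T w_T H_T. A small illustration; the tissue weights and doses below are example values chosen for the sketch, not the guide's tabulated ones:

```python
def effective_dose(equiv_doses_mSv, weights):
    """Effective dose E = sum over tissues T of w_T * H_T (mSv)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # tissue weights sum to 1
    return sum(weights[t] * equiv_doses_mSv[t] for t in equiv_doses_mSv)

# Example tissue weighting factors and equivalent doses (illustrative only).
weights = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "remainder": 0.72}
h = {"lung": 2.0, "stomach": 1.0, "liver": 0.5, "remainder": 0.1}
print(effective_dose(h, weights))  # mSv
```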
Applications of non-standard maximum likelihood techniques in energy and resource economics
Moeltner, Klaus
Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment
Rezaeian Mahdi
2015-01-01
Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, the releasable activity and maximum permissible volumetric leakage rate within a cask containing fuel samples of the Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatiles, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask, and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N
2014-06-01
Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (index of adipose tissue distribution) on the exercise intensity (Lipoxmax) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg m^-2) were separated into two groups of either a low or high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipoxmax and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipoxmax and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO2max vs 54 ± 4% VO2max, and 4.8 ± 0.6 mg min^-1 kg FFM^-1 vs 8.4 ± 0.9 mg min^-1 kg FFM^-1, respectively; P normal-weight women, a predominantly abdominal fat mass distribution compared with a predominantly peripheral fat mass distribution is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by their lower Lipoxmax and MLOR.
Jorge Cuadrado Reyes
2011-05-01
Abstract This research developed an algorithm for calculating the maximum heart rate (max. HR) for players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3) from a Division Two Handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to help find a max. HR prediction equation from the max. HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure the max. HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
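The regression step described above can be sketched as an ordinary least-squares line through session peak HR versus session intensity, extrapolated to maximal intensity. All numbers below are invented, and the use of RPE as the intensity axis is our assumption:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

rpe  = [12, 14, 15, 16, 17, 18]        # session intensity (RPE), invented
peak = [158, 166, 170, 174, 178, 182]  # session peak HR (bpm), invented

a, b = fit_line(rpe, peak)
print(round(a + b * 20))  # predicted max HR at maximal RPE of 20 → 190
```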
Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation
Zhicheng Zhang; Jun Lin; Yaowu Shi
2013-01-01
The Maximum Likelihood (ML) method has excellent performance for Direction-Of-Arrival (DOA) estimation, but a multidimensional nonlinear solution search is required, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment. It offers an excellent alternative to the conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular metaheuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computation loads of ABC-based ML and the conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
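The core ABC loop (employed, onlooker and scout phases) is easy to sketch on a toy 1-D objective; in the paper the objective would be the multidimensional ML function of the DOAs. This is our own minimal version, not the authors' implementation:

```python
import random

def abc_maximize(f, lo, hi, n_src=20, limit=20, iters=200, seed=1):
    """Artificial Bee Colony search maximizing f on [lo, hi] (toy, 1-D)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_src)]  # food sources
    trials = [0] * n_src  # stagnation counters

    def try_move(i):  # perturb source i toward/away from a random source
        k = rng.randrange(n_src)
        v = min(max(xs[i] + rng.uniform(-1, 1) * (xs[i] - xs[k]), lo), hi)
        if f(v) > f(xs[i]):
            xs[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_src):                  # employed bees
            try_move(i)
        fits = [f(x) for x in xs]
        base = min(fits)
        w = [fv - base + 1e-12 for fv in fits]  # fitness-proportional weights
        for _ in range(n_src):                  # onlooker bees
            try_move(rng.choices(range(n_src), weights=w)[0])
        for i in range(n_src):                  # scout bees
            if trials[i] > limit:
                xs[i], trials[i] = rng.uniform(lo, hi), 0
    return max(xs, key=f)

best = abc_maximize(lambda x: -(x - 3.0) ** 2, -10, 10)
print(best)  # close to the optimum at x = 3
```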
Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.
2015-12-01
From early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and provide early warning information to the general public. The EEWS in the KMA is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. The MAXEL has been demonstrated to be successful in determining the event location, even when an outlier is included in the small number of P arrivals. This presentation details the application of the MAXEL to the EEWS of the KMA, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of statistics of earthquake locations based on the ElarmS-2 and the MAXEL.
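The ElarmS-style grid search summarized above reduces to a few lines: back-project each P arrival to an origin time and keep the grid point whose implied origin times agree best. A toy 2-D sketch with a constant P velocity and a single grid (our simplification; real systems use travel-time tables and successively finer grids):

```python
import math

VP = 6.0  # assumed constant crustal P velocity, km/s

def locate(stations, arrivals, grid):
    """Grid point minimizing the variance of back-projected origin times."""
    best, best_var = None, float("inf")
    for gx, gy in grid:
        # origin time implied by each station: arrival minus travel time
        t0 = [t - math.hypot(gx - sx, gy - sy) / VP
              for (sx, sy), t in zip(stations, arrivals)]
        m = sum(t0) / len(t0)
        var = sum((t - m) ** 2 for t in t0) / len(t0)
        if var < best_var:
            best, best_var = (gx, gy), var
    return best

# Synthetic test: four stations (km), event at (30, 40), origin time 10 s.
stations = [(0, 0), (100, 0), (0, 100), (100, 100)]
true = (30, 40)
arrivals = [10 + math.hypot(true[0] - sx, true[1] - sy) / VP
            for sx, sy in stations]
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
print(locate(stations, arrivals, grid))  # → (30, 40)
```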
Yong, Zhengdong; Zhang, Senlin; Gong, Chensheng; He, Sailing
2016-04-01
Plasmonics offers an exciting way to mediate the interaction between light and matter, allowing strong field enhancement and confinement, and large absorption and scattering at resonance. However, simultaneous realization of ultra-narrowband perfect absorption and electromagnetic field enhancement is challenging due to the intrinsic high optical losses and radiative damping in metals. Here, we propose an all-metal plasmonic absorber with an absorption bandwidth less than 8 nm and polarization-insensitive absorptivity exceeding 99%. Unlike traditional Metal-Dielectric-Metal configurations, we demonstrate that the narrowband perfect absorption and field enhancement are ascribed to the vertical gap plasmonic mode in the deep subwavelength scale, which has a high quality factor of 120 and a mode volume of about 10^-4 × (λres/n)^3. Based on coupled mode theory, we verify that the diluted field enhancement is proportional to the absorption, and thus perfect absorption is critical to maximum field enhancement. In addition, the proposed perfect absorber can be operated as a refractive index sensor with a sensitivity of 885 nm/RIU and a figure of merit as high as 110. It provides a new design strategy for narrowband perfect absorption and local field enhancement, and has potential applications in biosensors, filters and nonlinear optics.
2010-07-01
... PREPARING TOMORROW'S TEACHERS TO USE TECHNOLOGY § 614.6 What is the maximum indirect cost rate for all... requirements; or (3) Charged by the grantee to another Federal award. (Authority: 20 U.S.C. 6832)...
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology.
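The quantities in the exercise above reduce to simple arithmetic once the oxygen declines are measured. A worked sketch with invented numbers (the chamber volume, trial durations and O2 units are our assumptions, not the authors' protocol):

```python
def mo2(o2_start, o2_end, minutes, respirometer_l, body_kg):
    """Oxygen uptake (mg O2 per kg per hour) from a closed-phase decline.

    O2 concentrations are in mg/L; the fish's own volume is neglected.
    """
    return (o2_start - o2_end) * respirometer_l / body_kg / (minutes / 60)

# Invented example: a 250 g trout in a 5 L respirometer, 10-min closed phases.
smr = mo2(8.0, 7.8, 10, respirometer_l=5.0, body_kg=0.25)  # resting trial
mmr = mo2(8.0, 7.0, 10, respirometer_l=5.0, body_kg=0.25)  # post-exercise trial
ams = mmr - smr  # aerobic metabolic scope

print(smr, mmr, ams)  # → 24.0 120.0 96.0 (mg O2 kg^-1 h^-1)
```

Under hypoxia the post-exercise decline shrinks while the resting one changes little, so AMS falls, which is the pattern students typically observe.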
Sada, H
1978-10-01
Effects of phentolamine (13.3, 26.5 and 53.0 μM), alprenolol (3.5, 7.0 and 17.5 μM) and prenylamine (2.4, 4.8 and 11.9 μM) on the transmembrane potential were studied in isolated guinea-pig papillary muscles superfused with Tyrode's solution. 1. Phentolamine, alprenolol and prenylamine reduced the maximum rate of rise of the action potential (V̇max) dose-dependently. Higher concentrations of phentolamine and prenylamine caused a loss of plateau in a majority of the preparations. Resting potential was not altered by any of the drugs. Readmission of drug-free Tyrode's solution reversed the changes induced by 13.3 μM of phentolamine and all concentrations of alprenolol almost completely, but those induced by higher concentrations of phentolamine and all concentrations of prenylamine only slightly. 2. V̇max at steady state was increased with decreasing driving frequencies (0.5 and 0.25 Hz) and was decreased with increasing ones (2-5 Hz) in comparison with that at 1 Hz. Such changes were all exaggerated by the above drugs, particularly by prenylamine. 3. Prenylamine and, to a lesser degree, phentolamine and alprenolol delayed dose-dependently the recovery process of V̇max in premature responses. 4. V̇max in the first response after interruption of stimulation recovered toward the predrug value in the presence of the above three drugs. The time constants of the recovery process ranged between 10.5 and 15.0 s for phentolamine and between 4.5 and 15.5 s for alprenolol. The time constant of the main component was estimated to be approximately 2 s for the recovery process with prenylamine. 5. On the basis of the model recently proposed by Hondeghem and Katzung (1977), it is suggested that the drug molecules associate with open sodium channels and dissociate slowly from closed channels, and that the inactivation parameter in the drug-associated channels is shifted in the hyperpolarizing direction.
Mazhar A. Memon
2016-04-01
ABSTRACT Objective: To evaluate the correlation between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients > 50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded from the uroflowmetry graph and at the same time VPSS was assessed. The education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/sec (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative Pearson correlation between VPSS and Qmax (r = -0.848, p < 0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis done for the independent variables showed that there was no significant correlation between the VPSS and independent factors including age (p = 0.27), LOE (p = 0.941) and V.V (p = 0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with a limited educational background can complete the VPSS without assistance.
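The statistic reported above is an ordinary Pearson coefficient; a self-contained sketch on invented data (not the study's dataset), showing the expected negative relation between symptom score and flow rate:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vpss = [8, 9, 10, 11, 12, 13, 14]   # invented symptom scores
qmax = [18, 16, 15, 12, 10, 8, 6]   # invented Qmax (mL/s), falling with VPSS

r = pearson_r(vpss, qmax)
print(round(r, 3))  # strongly negative
```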
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation of the existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975), both in the full data setting and in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for the existence of the maximum partial likelihood estimate (MPLE) in the completely observed data setting (i.e., no missing data), as well as sufficient conditions for the existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
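When the MPLE does exist, it is typically computed by Newton-Raphson on the partial likelihood. As a concrete illustration, here is a minimal single-covariate sketch, assuming continuous (untied) event times and no censoring; `cox_mple_1d` is a hypothetical helper name, and this is not the paper's profile-likelihood machinery:

```python
import math
import random

def cox_mple_1d(times, events, x, iters=30):
    """Newton-Raphson for the maximum partial likelihood estimate (MPLE)
    of a single-covariate Cox model; assumes no tied event times."""
    order = sorted(range(len(times)), key=lambda i: -times[i])  # descending time
    d = [events[i] for i in order]
    z = [x[i] for i in order]
    beta = 0.0
    for _ in range(iters):
        s0 = s1 = s2 = score = info = 0.0
        for i in range(len(z)):      # walking down in time, the risk set grows
            w = math.exp(beta * z[i])
            s0 += w
            s1 += w * z[i]
            s2 += w * z[i] ** 2
            if d[i]:                 # event: accumulate score and information
                m = s1 / s0
                score += z[i] - m
                info += s2 / s0 - m * m
        beta += score / info         # Newton step
    return beta

# synthetic data: exponential survival with hazard exp(beta*x), true beta = 1
random.seed(0)
n = 400
covariate = [random.gauss(0.0, 1.0) for _ in range(n)]
times = [-math.log(random.random()) / math.exp(1.0 * c) for c in covariate]
events = [1] * n                     # no censoring in this sketch
beta_hat = cox_mple_1d(times, events, covariate)
```

With all subjects sharing the same covariate value, the score would vanish identically, a toy instance of the degenerate configurations the existence conditions rule out.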
Meng-Hui Wang
2015-08-01
Sliding mode strategy (SMS) for maximum power point tracking (MPPT) is applied in this study to a human power generation system. This approach ensures maximum power at different rotation speeds to increase efficiency and corrects for the lack of robustness of traditional methods. Intelligent extension theory is used to reduce input saturation and high-frequency switching in the sliding mode strategy, as well as to increase the efficiency and response speed. The experimental results show that the efficiency of the extension SMS (ESMS) is 5% higher than that of the traditional SMS, and the response is 0.5 s faster.
The Distribution of Maximum Flow with Application to Multi-State Reliability Systems.
1985-11-01
in O(|V|·|E|) time, using the max-flow algorithm of Itai and Shiloach (1979). Frank and Frisch (1971) provide a comprehensive discussion of the...maximum flow; for example, taking O(|V| log |V|) time per replication for a planar network (Itai and Shiloach 1979). With regard to computing the cell...Edition 8, Houston, Texas. 11. Itai, A. and Y. Shiloach (1979). Maximum flow in planar networks, SIAM J. Comput., 8, 135-150. 12. Kulkarni, V.G. and V. G
Wang, Tianxiao
2010-01-01
This paper formulates and studies a stochastic maximum principle for forward-backward stochastic Volterra integral equations (FBSVIEs in short), where the control domain is assumed to be convex. A linear quadratic (LQ in short) problem for backward stochastic Volterra integral equations (BSVIEs in short) is then presented to illustrate the aforementioned optimal control problem. Motivated by the techniques used in solving the above problem, a more convenient and concise method for the unique solvability of the M-solution of BSVIEs is proposed. Finally, we investigate a risk-minimization problem by means of the maximum principle for FBSVIEs. A closed-form optimal portfolio is obtained in some special cases.
El-Diasty, Fouad; El-Hennawi, H. A.; El-Ghandoor, H.; Soliman, Mona A.
2013-12-01
Intermodal and intramodal dispersion are among the problems in graded-index multimode optical fibers (GRIN) used for LAN communication systems and for sensing applications. A central index dip (depression) in the core refractive-index profile may occur due to the CVD fabrication process. The index dip may also be intentionally designed to broaden the fundamental mode field profile toward a plateau-like distribution, which has advantages for fiber-source connections, fiber amplifiers and self-imaging applications. The effect of the central index dip on the propagation parameters of a GRIN fiber, such as intermodal dispersion, intramodal dispersion and root-mean-square pulse broadening, is investigated. Conventional methods usually study optical signal propagation in optical fibers in terms of mode characteristics and the number of modes; in this work, multiple-beam Fizeau interferometry is instead proposed as an alternative methodology, affording a radial approach to determining dispersion, pulse broadening and the maximum transmission rate in a GRIN optical fiber having a central index dip.
Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array
Lihua Wang
2014-01-01
In order to deliver the maximum available power to the load under varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been widely used in PV systems. Among MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the first stage of the proposed chaos search, an improved logistic map with better ergodicity is used as the carrier process. After the current optimal solution is located to within a certain guarantee, a power-function carrier is used as the second-stage carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method tracks changes quickly and accurately and also yields better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
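The two-stage idea above can be sketched as follows. This is a simplified version under stated assumptions: the logistic map serves as the carrier in both stages (the paper's second stage uses a power-function carrier), and a toy single-peak P-V curve stands in for a real PV array characteristic:

```python
def two_stage_chaos_search(power, lo, hi, n1=2000, n2=500, shrink=0.1):
    """Two-stage chaotic search sketch: a logistic-map carrier explores
    [lo, hi] globally, then a second carrier pass refines a shrunken
    interval around the best point found so far."""
    def next_z(z):
        z = 4.0 * z * (1.0 - z)
        return z if 0.0 < z < 1.0 else 0.31   # reseed if the float map degenerates

    z, best_v, best_p = 0.31, lo, float("-inf")
    for _ in range(n1):                        # stage 1: global ergodic search
        z = next_z(z)
        v = lo + (hi - lo) * z
        p = power(v)
        if p > best_p:
            best_v, best_p = v, p
    half = shrink * (hi - lo) / 2.0            # stage 2: refine near the best point
    a, b = max(lo, best_v - half), min(hi, best_v + half)
    for _ in range(n2):
        z = next_z(z)
        v = a + (b - a) * z
        p = power(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# toy single-peak P-V curve standing in for a PV array characteristic
pv_power = lambda v: v * (10.0 - v ** 2)       # maximum at v = sqrt(10/3)
v_mpp, p_mpp = two_stage_chaos_search(pv_power, 0.0, 3.0)
```

Shrinking the interval in stage 2 is what buys the extra accuracy: the same number of chaotic samples covers a tenth of the original range.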
Jingtao Shi
2013-01-01
This paper is concerned with the relationship between the maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in financial engineering is discussed as an explicitly illustrated example of the main result.
Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua
2015-01-01
-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...
Izsak, F.
2006-01-01
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. For this the coefficients are
Modelling predation as a capped rate stochastic process, with applications to fish recruitment
James, Alex; Paul D Baxter; Pitchford, Jonathan W
2005-01-01
Many mathematical models use functions whose value cannot exceed some physically or biologically imposed maximum. A model can be described as 'capped-rate' when the rate of change of a variable cannot exceed a maximum value. This presents no problem when the models are deterministic but, in many applications, results from deterministic models are at best misleading. The need to account for stochasticity, both demographic and environmental, in models is therefore important but, as...
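One way to see why deterministic results can mislead for capped rates: a capped rate min(a·x, cap) is concave in x, so by Jensen's inequality environmental noise in x can only pull the mean rate below the deterministic prediction made at the mean. A quick numerical check, with all parameter values assumed for illustration:

```python
import random

random.seed(1)
cap, slope = 5.0, 2.0
rate = lambda x: min(slope * x, cap)       # capped predation rate (concave in x)

# prey density fluctuating around 3.0 (assumed toy parameters)
densities = [max(0.0, random.gauss(3.0, 1.5)) for _ in range(100000)]
mean_density = sum(densities) / len(densities)
mean_rate = sum(rate(x) for x in densities) / len(densities)
deterministic_rate = rate(mean_density)    # what a deterministic model predicts
```

Here the deterministic model sits on the cap, while the stochastic mean rate is strictly lower, because fluctuations below the cap are penalized and fluctuations above it bring no gain.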
Woonki Na
2017-03-01
This paper presents an improved maximum power point tracking (MPPT) algorithm using a fuzzy logic controller (FLC) in order to extract the maximum potential power from photovoltaic cells. The objectives of the proposed algorithm are to improve the tracking speed and to simultaneously overcome inherent drawbacks such as the slow tracking of the conventional perturb and observe (P&O) algorithm. The performance of the conventional P&O algorithm and the proposed algorithm is compared using MATLAB/Simulink in terms of tracking speed and steady-state oscillations. Additionally, both algorithms were experimentally validated through a digital signal processor (DSP)-based controlled boost DC-DC converter. The experimental results show that the proposed algorithm achieves a shorter tracking time, smaller output power oscillation, and higher efficiency compared with the conventional P&O algorithm.
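For reference, the conventional P&O loop that the proposed FLC is compared against can be sketched as follows; a toy static P-V curve is assumed, whereas a real implementation would perturb a converter duty cycle rather than the voltage directly:

```python
def perturb_and_observe(power, v0, step, n_iter):
    """Classic P&O loop: perturb the operating point, keep the perturbation
    direction if power rose, reverse it otherwise; the operating point ends
    up oscillating around the maximum power point."""
    v, direction = v0, +1
    p_prev = power(v)
    for _ in range(n_iter):
        v += direction * step
        p = power(v)
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return v

pv_power = lambda v: v * (10.0 - v ** 2)   # toy P-V curve, MPP at v = sqrt(10/3)
v_final = perturb_and_observe(pv_power, 0.5, 0.02, 200)
```

The fixed step size is exactly the trade-off the abstract alludes to: a large step tracks fast but gives large steady-state oscillation, a small step the reverse.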
J. G. Dyke; Kleidon, A.
2010-01-01
The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the ...
A maximum entropy theorem with applications to the measurement of biodiversity
Leinster, Tom
2009-01-01
This is a preliminary article stating and proving a new maximum entropy theorem. The entropies that we consider can be used as measures of biodiversity. In that context, the question is: for a given collection of species, which frequency distribution(s) maximize the diversity? The theorem provides the answer. The chief surprise is that although we are dealing not just with a single entropy, but a one-parameter family of entropies, there is a single distribution maximizing all of them simultaneously.
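In the special case where species are completely distinct, one-parameter families of diversity entropies reduce to the Rényi family, for which the uniform distribution maximizes every order simultaneously, consistent with the theorem's conclusion. A small numerical illustration (the similarity-sensitive entropies of the paper are not implemented here):

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q; q = 1 is handled as the Shannon limit."""
    p = [pi for pi in p if pi > 0]
    if abs(q - 1.0) < 1e-9:
        return -sum(pi * math.log(pi) for pi in p)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

uniform = [0.25] * 4                 # four equally frequent species
skewed = [0.7, 0.1, 0.1, 0.1]        # one dominant species
orders = [0.5, 1.0, 2.0, 3.0]
uniform_vals = [renyi_entropy(uniform, q) for q in orders]
skewed_vals = [renyi_entropy(skewed, q) for q in orders]
```

For the uniform distribution every order gives the same value, log 4, while the skewed distribution falls below it at every order.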
Application of the maximum relative entropy method to the physics of ferromagnetic materials
Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan
2016-08-01
It is known that the Maximum relative Entropy (MrE) method can be used both to update and to approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.
Quality Evaluation and Its Application to Surface Water Ecosystem Based on Maximum Flux Principle
刘年磊; 毛国柱; 赵林
2010-01-01
Based on the maximum flux principle (MFP), a water quality evaluation model for surface water ecosystems is presented, using a self-organization map (SOM) neural network simulation algorithm, from the aspect of systematic structural evolution. This evaluation model is applied to the case of the surface water ecosystem in Xindu District of Chengdu City, China. The values reflecting the water quality of five cross-sections of the system at different developing stages are obtained, with stable values of 1.438, 2.952, 1.86...
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
..., and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high-resolution vertebra and cartilage models are reconstructed from incomplete and lower-dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization.
Alfafara, C G; Miura, K; Shimizu, H; Shioya, S; Suga, K; Suzuki, K
1993-02-20
A fuzzy logic controller (FLC) for the control of ethanol concentration was developed and utilized to realize maximum production of glutathione (GSH) in yeast fed-batch culture. A conventional fuzzy controller, which uses the control error and its rate of change in the premise part of the linguistic rules, worked well when the initial error of the ethanol concentration was small. However, when the initial error was large, controller overreaction resulted in an overshoot. An improved fuzzy controller was obtained that avoids overreaction by diagnostic determination of "glucose emergency states" (i.e., glucose accumulation or deficiency); appropriate emergency control action was then obtained by the use of weight coefficients and modification of the linguistic rules to decrease the overreaction of the controller when the fermentation was in an emergency state. The improved fuzzy controller was able to maintain a constant ethanol concentration under conditions of large initial error. The improved fuzzy control system was used in the GSH production phase of the optimal operation to indirectly control the specific growth rate μ at its critical value μc. In the GSH production phase of the fed-batch culture, the optimal solution was to control μ at μc in order to maintain the maximum specific GSH production rate. The value of μc also coincided with the critical specific growth rate at which no ethanol formation occurs. Therefore, μ could be controlled at μc indirectly by maintaining a constant ethanol concentration, that is, zero net ethanol formation, through proper manipulation of the glucose feed rate. Maximum production of GSH was realized using the developed FLC; maximum production was a consequence of the substrate feeding strategy and cysteine addition, and the FLC was a simple way to realize this strategy.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R2 between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (using ozone observations only) and contrasted with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach extracts more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the percentage increase in R2 is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.
Bajkova, Anisa T
2011-01-01
We propose a multi-frequency synthesis (MFS) algorithm with spectral correction of the frequency-dependent source brightness distribution based on the maximum entropy method. In order to take into account spectral terms of n-th order in the Taylor expansion of the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method suitable for reconstruction not only of positive-definite functions but also of sign-variable ones. The proposed algorithm is aimed at producing both an improved total intensity image and a two-dimensional spectral index distribution over the source. We also consider the problem of frequency-dependent variation of the radio core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct multi-frequency synthesis. The proposed MFS algorithm was first tested on simulated data and then applied to four-frequency synthesis imaging of the radio source 0954+658 from VLBA observational data obtained quasi-simultaneously ...
Yong, Zhengdong; Gong, Chengsheng; He, Sailing
2016-01-01
Plasmonics offers an exciting way to mediate the interaction between light and matter, allowing strong field enhancement and confinement, and large absorption and scattering at resonance. However, simultaneous realization of ultra-narrowband perfect absorption and electromagnetic field enhancement is challenging due to the intrinsically high optical losses and radiative damping in metals. Here, we propose an all-metal plasmonic absorber with an absorption bandwidth of less than 8 nm and polarization-insensitive absorptivity exceeding 99%. Unlike traditional Metal-Dielectric-Metal configurations, we demonstrate that the narrowband perfect absorption and field enhancement are ascribed to a vertical gap plasmonic mode at the deep-subwavelength scale, which has a high quality factor of 120 and a mode volume of about 10^-4 (λ/n)^3. Based on coupled mode theory, we verify that the diluted field enhancement is proportional to the absorption, and thus perfect absorption is critical to maximum field enhancement. In a...
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.
Maximum-entropy weak lens reconstruction improved methods and application to data
Marshall, P J; Gull, S F; Bridle, S L
2002-01-01
We develop the maximum-entropy weak shear mass reconstruction method presented in earlier papers by taking each background galaxy image shape as an independent estimator of the reduced shear field and incorporating an intrinsic smoothness into the reconstruction. The characteristic length scale of this smoothing is determined by Bayesian methods. Within this algorithm the uncertainties due to the intrinsic distribution of galaxy shapes are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures can be calculated with corresponding uncertainties. We apply this method to two clusters taken from N-body simulations using mock observations corresponding to Keck LRIS and mosaiced HST WFPC2 fields. We demonstrate that the Bayesian choice of smoothing length is sensible and that masses within apertures (including one on a filamentary structure) are reliable. We apply the method to data taken on the cluster MS1054-03 using the Keck LRIS (Clowe et al. 2000) and HST (Hoekstra e...
Howell, L W
2002-01-01
The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power-law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum-variance, and normally distributed spectral information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors, where physical parameters such as dimension and weight impose rigorous practical limits on the design envelope. This ML methodology is then generalized to estimate bro...
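For the simplest single power-law case (not the broken power law treated in the paper, and without a detector response function), the ML index estimate even has a closed form, the Hill estimator; a sketch with synthetic Pareto data:

```python
import math
import random

def ml_power_law_index(samples, xmin):
    """Closed-form ML estimate (Hill estimator) of the index alpha of a
    single power law p(x) ~ x**(-alpha) for x >= xmin."""
    n = len(samples)
    return 1.0 + n / sum(math.log(s / xmin) for s in samples)

# draw Pareto samples with true alpha = 2.7 by inverse-CDF sampling
random.seed(42)
alpha_true, xmin = 2.7, 1.0
data = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(20000)]
alpha_hat = ml_power_law_index(data, xmin)
```

For a broken power law and a realistic detector response, no such closed form exists, which is why the paper resorts to numerical ML over simulated detector responses.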
Jumper, John M; Sosnick, Tobin R
2016-01-01
To address the large gap between the time scales that can easily be reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation, in which the nuclear dynamics are governed by the energy of the instantaneously equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as proceeding according to the dictates of the free energy of an instantaneously equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel maximum-likelihood-type method to parameterize the side chain model using...
DONG Sheng; CHI Kun; ZHANG Qiyi; ZHANG Xiangdong
2012-01-01
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict trend values and uses Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov model based on the relative error series; 4) modify the relative errors obtained in step 2, and then obtain the second-order estimates of the relative errors; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model are thus demonstrated.
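Steps 1 and 2 of the procedure, fitting GM(1,1) and estimating the trend values, can be sketched as follows; the Markov residual-correction stages are omitted, and `gm11` is a hypothetical helper name:

```python
import math

def gm11(x0, horizon=1):
    """GM(1,1) grey model sketch: fit dx1/dt + a*x1 = b on the accumulated
    series x1 = cumsum(x0), predict, then restore by first differences."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b via 2x2 normal equations
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    slope = (m * szy - sz * sy) / det        # slope with respect to z
    a, b = -slope, (szz * sy - sz * szy) / det
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a
              for k in range(n + horizon)]
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + horizon)]

# trend estimation on a near-exponential toy series (stand-in for water levels)
series = [2.0 * 1.1 ** k for k in range(8)]
forecast = gm11(series, horizon=1)
```

GM(1,1) captures only the smooth exponential-like trend; the Markov step in the paper exists precisely to model the fluctuations this fit leaves in the residuals.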
XU Fu-min; XUE Hong-chao
2004-01-01
The Maximum Entropy Principle (MEP) method is elaborated, and the corresponding probability density evaluation method for random fluctuation systems is introduced; the goal of the article is to find the best-fitting method for wave climate statistical distributions. For the first time, a new maximum entropy probability distribution (MEP distribution) expression is deduced from the second-order moment of a random process. Unlike previous fitting methods, the MEP distribution can describe the probability distribution of any random fluctuation system conveniently and reasonably. If the moments of the random signal are limited to second order, that is, if the ratio of the root-mean-square value to the mean value of the random variable is obtained from the random sample, the corresponding MEP distribution can be computed from the deduced expression. The concept of wave climate is introduced, and the MEP distribution is applied to fit the probability density distributions of significant wave height and spectral peak period. Taking the Gulf of Mexico as an example, three stations at different locations, depths and wind-wave strengths are chosen in the half-closed gulf; the significant wave height and spectral peak period distributions at each station are fitted with the MEP distribution, the Weibull distribution and the Log-normal distribution, and the fitted results are compared with field observations. The results show that the MEP distribution gives the best fit and the Weibull distribution the worst when applied to significant wave height and spectral peak period distributions at different locations, water depths and wind-wave strengths in the Gulf. The conclusion shows the feasibility and reasonability of fitting wave climate statistical distributions with the deduced MEP distribution, and furthermore demonstrates the great potential of the MEP method to
34 CFR 694.9 - What is the maximum indirect cost rate for an agency of a State or local government?
2010-07-01
Notwithstanding 34 CFR 75.560-75.562 and 34 CFR 80.22, the maximum indirect cost rate that an agency of a State or local government receiving funds under...
Maximum Parsimony and the Skewness Test: A Simulation Study of the Limits of Applicability
Määttä, Jussi; Roos, Teemu
2016-01-01
The maximum parsimony (MP) method for inferring phylogenies is widely used, but little is known about its limitations in non-asymptotic situations. This study employs large-scale computations with simulated phylogenetic data to estimate the probability that MP succeeds in finding the true phylogeny for up to twelve taxa and 256 characters. The set of candidate phylogenies is taken to be the unrooted binary trees; for each simulated data set, the tree lengths of all (2n − 5)!! candidates are computed to evaluate quantities related to the performance of MP, such as the probability of finding the true phylogeny, the probability that the tree with the shortest length is unique, the probability that the true phylogeny has the shortest tree length, and the expected inverse of the number of trees sharing the shortest length. The tree length distributions are also used to evaluate and extend the skewness test of Hillis for distinguishing between random and phylogenetic data. The results indicate, for example, that the critical point after which MP achieves a success probability of at least 0.9 is roughly 128 characters. The skewness test is found to perform well on simulated data, and the study extends its scope to up to twelve taxa. PMID:27035667
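The exhaustive evaluation described above is feasible because the number of unrooted binary topologies, (2n − 5)!!, is still enumerable at twelve taxa (about 6.5 × 10^8):

```python
def num_unrooted_binary_trees(n_taxa):
    """Number of unrooted binary tree topologies on n labelled taxa:
    (2n - 5)!! = 1 * 3 * 5 * ... * (2n - 5)."""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):
        count *= k
    return count

counts = {n: num_unrooted_binary_trees(n) for n in (4, 8, 12)}
```

The double factorial grows super-exponentially, which is why twelve taxa is a practical ceiling for the study's brute-force approach.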
Maximum Likelihood Fitting of Tidal Streams With Application to the Sagittarius Dwarf Tidal Tails
Cole, Nathan; Magdon-Ismail, Malik; Desell, Travis; Dawsey, Kristopher; Hayashi, Warren; Liu, Xinyang; Purnell, Jonathan; Szymanski, Boleslaw; Varela, Carlos; Willett, Benjamin; Wisniewski, James
2008-01-01
We present a maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid. With this method we characterize Sagittarius debris using stars with the colors of blue F turnoff stars in SDSS stripe 82. The debris is located at (alpha, delta, R) = (31.37 deg +/- 0.26 deg, 0.0 deg, 29.22 +/- 0.20 kpc), with a (spatial) direction given by the unit vector , in Galactocentric Cartesian coordinates, and with FWHM = 6.74 +/- 0.06 kpc. This 2.5-degree-wide stripe contains 0.892% as many F turnoff stars as the current Sagittarius dwarf galaxy. Over small spatial extent, the debris is modeled as a cylinder with a density that falls off as a Gaussian with distance from the axis, while the smooth component of the spheroid is modeled with a Hernquist profile. We assume that the absolute magnitude of F turnoff stars is distributed as a Gaussian, which is an improvement over previous methods which fixed the absolute magnitude at Mg0 = 4.2. The effectiveness and correctness of the ...
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread function (PSF) and various sources of noise, on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves a 0.93 μm axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Application of a maximum likelihood algorithm to ultrasound modulated optical tomography.
Huynh, Nam T; He, Diwei; Hayes-Gill, Barrie R; Crowe, John A; Walker, John G; Mather, Melissa L; Rose, Felicity R A J; Parker, Nicholas G; Povey, Malcolm J W; Morgan, Stephen P
2012-02-01
In pulsed ultrasound modulated optical tomography (USMOT), an ultrasound (US) pulse performs as a scanning probe within the sample as it propagates, modulating the scattered light spatially distributed along its propagation axis. Detecting and processing the modulated signal can provide a 1-dimensional image along the US axis. A simple model is developed wherein the detected signal is modelled as a convolution of the US pulse and the properties (ultrasonic/optical) of the medium along the US axis. Based upon this model, a maximum likelihood (ML) method for image reconstruction is established. For the first time to our knowledge, the ML technique for an USMOT signal is investigated both theoretically and experimentally. The ML method inverts the data to retrieve the spatially varying properties of the sample along the US axis, and a signal proportional to the optical properties can be acquired. Simulated results show that the ML method can serve as a useful reconstruction tool for a pulsed USMOT signal even when the signal-to-noise ratio (SNR) is close to unity. Experimental data using 5 cm thick tissue phantoms (scattering coefficient μ(s) = 6.5 cm(-1), anisotropy factor g=0.93) demonstrate that the axial resolution is 160 μm and the lateral resolution is 600 μm using a 10 MHz transducer.
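An ML-type reconstruction for a convolution model like the one described can be sketched with the classical Richardson-Lucy iteration. This is shown under a Poisson noise assumption with a toy symmetric pulse; it is a generic ML deconvolution sketch, not necessarily the authors' exact estimator:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy iteration: a generic ML-type deconvolution for a
    signal blurred by a known pulse (Poisson noise model)."""
    est = np.full_like(observed, observed.mean())   # flat positive initial guess
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# ground truth: two reflectors along the ultrasound propagation axis
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.6
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])       # toy ultrasound pulse shape
observed = np.convolve(truth, psf, mode="same")
recovered = richardson_lucy(observed, psf)
```

The multiplicative update keeps the estimate non-negative, which is part of why ML-style reconstructions remain usable even at low SNR.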
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control, in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images, and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, we show that this algorithm removes the suboptimal behavior.
Maximum power point tracking for photovoltaic applications by using two-level DC/DC boost converter
Moamaei, Parvin
Photovoltaic (PV) generation is becoming increasingly popular in industrial applications. As a renewable and alternative source of energy, it features superior characteristics, such as being clean and silent, along with fewer maintenance problems compared to other energy sources. In PV generation, employing a Maximum Power Point Tracking (MPPT) method is essential to obtain the maximum available solar energy. Among the several proposed MPPT techniques, the Perturbation and Observation (P&O) and Model Predictive Control (MPC) methods are adopted in this work. The components of the MPPT control system, namely the P&O and MPC algorithms, the PV module, and a high-gain DC-DC boost converter, are simulated in MATLAB Simulink. They are evaluated theoretically under rapidly and slowly changing solar irradiation and temperature, their performance is shown by the simulation results, and finally a comprehensive comparison is presented.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
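Under an erasure model with a uniform binary source, maximum-likelihood retrieval reduces to keeping every stored word that agrees with the probe on its non-erased positions; all such words are equally likely, so residual error comes from ambiguity. A minimal sketch (the toy memory and probe are invented for illustration, not from the paper):

```python
def ml_retrieve(stored, probe):
    """Maximum-likelihood retrieval under erasures: with uniform binary
    messages and an erasure channel, a stored word has nonzero likelihood
    iff it agrees with the probe on every non-erased position, and all
    such words are equally likely. Return the list of ML candidates."""
    return [w for w in stored
            if all(p is None or p == b for p, b in zip(probe, w))]

# toy memory of 4-bit words; None marks an erased coordinate in the probe
memory = [(0, 1, 1, 0), (1, 1, 0, 1), (0, 0, 1, 1)]
probe = (0, None, 1, None)
candidates = ml_retrieve(memory, probe)
# retrieval is ambiguous (a potential residual error) when more than
# one stored word matches the non-erased positions
```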
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Young chicken and squab slaughter... INSPECTION REGULATIONS Operating Procedures § 381.67 Young chicken and squab slaughter inspection rate... inspector per minute under the traditional inspection procedure for the different young chicken and...
Ostrovsky, Dmitry
2016-09-01
A new family of Barnes beta distributions on (0, ∞) is introduced and its infinite divisibility, moment determinacy, scaling, and factorization properties are established. The Morris integral probability distribution is constructed from Barnes beta distributions of types (1, 0) and (2, 2), and its moment determinacy and involution invariance properties are established. For application, the maximum distributions of the 2D Gaussian free field on the unit interval and circle with a non-random logarithmic potential are conjecturally related to the critical Selberg and Morris integral probability distributions, respectively, and expressed in terms of sums of Barnes beta distributions of types (1, 0) and (2, 2).
Application of the Principle of Maximum Conformality to Top-Pair Production
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /SLAC /Chongqing U.
2013-05-13
A major contribution to the uncertainty of finite-order perturbative QCD predictions is the perceived ambiguity in setting the renormalization scale μ_r. For example, by using the conventional way of setting μ_r ∈ [m_t/2, 2m_t], one obtains the total ttbar production cross-section σ_ttbar with the uncertainty Δσ_ttbar/σ_ttbar ≈ (+3%/-4%) at the Tevatron and LHC even at the present NNLO level. The Principle of Maximum Conformality (PMC) eliminates the renormalization scale ambiguity in precision tests of Abelian QED and non-Abelian QCD theories. By using the PMC, all nonconformal {β_i}-terms in the perturbative expansion series are summed into the running coupling constant, and the resulting scale-fixed predictions are independent of the renormalization scheme. The correct scale displacement between the arguments of different renormalization schemes is automatically set, and the number of active flavors n_f in the {β_i}-function is correctly determined. The PMC is consistent with the renormalization group property that a physical result is independent of the renormalization scheme and the choice of the initial renormalization scale μ_r^init. The PMC scale μ_r^PMC is unambiguous at finite order. Any residual dependence on μ_r^init for a finite-order calculation will be highly suppressed since the unknown higher-order {β_i}-terms will be absorbed into the PMC scales of higher-order perturbative terms. We find that such renormalization group invariance can be satisfied to high accuracy for σ_ttbar at the NNLO level. In this paper we apply PMC scale-setting to predict the ttbar cross-section σ_ttbar at the Tevatron and LHC colliders. It is found that σ_ttbar remains almost unchanged by varying μ_r^init.
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
Angular Rate Sensor Joint Kinematics Applications
Gregory W. Hall
1997-01-01
High-speed rotary motion of complex joints was quantified with triaxial angular rate sensors. Angular rate sensors were mounted to rigid links on either side of a joint to measure angular velocities about three orthogonal sensor axes. After collecting the data, the angular velocity vector of each sensor was transformed to local link axes and integrated to obtain the incremental change in angular position for each time step. Using the angular position time histories, a transformation matrix between the reference frames of the links was calculated. Incremental Eulerian rotations from the transformation matrix were calculated using an axis system defined for the joint. Summation of the incremental Eulerian rotations produced the angular position of the joint in terms of the standard axes. This procedure is illustrated by applying it to joint motion of the ankle, the spine, and the neck of crash dummies during impact tests. The methodology exhibited errors of less than 5%, improved flexibility over photographic techniques, and the ability to examine three-dimensional motion.
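The integration step described above (accumulating incremental rotations from body-frame angular velocity samples) can be sketched with Rodrigues' formula; this is a generic sketch of the idea, not the authors' exact processing chain:

```python
import numpy as np

def integrate_rates(omegas, dt):
    """Compose incremental rotations from body-frame angular velocity
    samples to track orientation over time. Each sample w (rad/s) over a
    step dt yields a rotation of angle |w|*dt about the axis w/|w|,
    built with Rodrigues' formula."""
    R = np.eye(3)
    for w in omegas:
        speed = np.linalg.norm(w)
        theta = speed * dt                     # incremental rotation angle
        if theta < 1e-12:
            continue
        k = w / speed                          # rotation axis (unit vector)
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])       # skew-symmetric cross matrix
        R_inc = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        R = R @ R_inc                          # accumulate orientation
    return R
```

For a constant rate of 90 deg/s about the z-axis sampled at 100 Hz for one second, the accumulated matrix is the 90-degree rotation about z, from which Euler angles could then be extracted as in the paper.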
[Heart rate variability. Applications in psychiatry].
Servant, D; Logier, R; Mouster, Y; Goudemand, M
2009-10-01
The autonomic nervous system sends messages through the sympathetic and parasympathetic nervous systems. The sympathetic nervous system innervates the cardioaccelerating center of the heart, the lungs (increasing ventilatory rhythm and dilating the bronchi) and the non-striated muscles (artery contraction). It releases adrenaline and noradrenaline. The parasympathetic nervous system, by contrast, innervates the cardiomoderator center of the heart, the lungs (slowing ventilatory rhythm and contracting the bronchi) and the non-striated muscles (artery dilatation). It uses acetylcholine (ACh) as its neurotransmitter. The sympathetic and parasympathetic divisions function antagonistically to preserve a dynamic modulation of vital functions. These systems act on the heart through the stellate ganglion and the vagus nerve, respectively. The interaction of these messages at the sinoauricular node is responsible for normal cardiac variability, which can be measured by monitoring heart rate variability (HRV). Heart rate is primarily controlled by vagal activity. Sensory data coming from the heart are fed back to the central nervous system. HRV is an indicator both of how the central nervous system regulates the autonomic nervous system and of how peripheral neurons feed information back to the central level. HRV measures are derived by estimating the variation among a set of temporally ordered interbeat intervals. The state of perfect symmetry, which, in medical parlance, is called respiratory sinus arrhythmia (RSA), can be described as a state of cardiac coherence. Obtaining a series of interbeat intervals requires a continuous measure of heart rate, typically electrocardiography (ECG). Commercially available software is then used to define the interbeat intervals within an ECG recording. The autonomic nervous system is highly adaptable and allows the organism to maintain its balance when experiencing strain or stress. Conversely, a lack of flexibility and a rigid
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, to Sigma_H2^1.3, and to Sigma_SFR^1.0. We rule out that these correlations result from sample size; hence, the variation in maximum cluster mass must be due to physical causes.
24 CFR 599.303 - Rating of applications.
2010-04-01
... the area poverty rate, area unemployment rate, and for urban areas, the percentage of families below... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Rating of applications. 599.303 Section 599.303 Housing and Urban Development Regulations Relating to Housing and Urban...
Lovell, Dale I; Cuneo, Ross; Gass, Greg C
2010-06-01
This study examined the effect of strength training (ST) and short-term detraining on maximum force and rate of force development (RFD) in previously sedentary, healthy older men. Twenty-four older men (70-80 years) were randomly assigned to a ST group (n = 12) and a C group (control, n = 12). Training consisted of three sets of six to ten repetitions on an incline squat at 70-90% of one repetition maximum, three times per week for 16 weeks, followed by 4 weeks of detraining. Regional muscle mass was assessed before and after training by dual-energy X-ray absorptiometry. Training increased RFD, maximum bilateral isometric force, force in 500 ms, upper leg muscle mass and strength above pre-training values (14, 25, 22, 7, and 90%, respectively; P < 0.05), indicating that ST improves the maximum force and RFD of older men. However, older individuals may lose some neuromuscular performance after a period of short-term detraining, and resistance exercise should be performed on a regular basis to maintain training adaptations.
Modelling predation as a capped rate stochastic process, with applications to fish recruitment
James, Alex; Baxter, Paul D; Pitchford, Jonathan W
2005-01-01
Many mathematical models use functions the value of which cannot exceed some physically or biologically imposed maximum value. A model can be described as ‘capped-rate’ when the rate of change of a variable cannot exceed a maximum value. This presents no problem when the models are deterministic but, in many applications, results from deterministic models are at best misleading. The need to account for stochasticity, both demographic and environmental, in models is therefore important but, as this paper shows, incorporating stochasticity into capped-rate models is not trivial. A method using queueing theory is presented, which allows randomness and spatial heterogeneity to be incorporated rigorously into capped rate models. The method is applied to the feeding and growth of fish larvae. PMID:16849207
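The core difficulty can be illustrated numerically: because min is concave, a stochastic encounter process pushed through a cap yields a lower mean intake than the deterministic model evaluated at the mean rate (Jensen's inequality). A Monte Carlo sketch under an assumed Poisson encounter model (not the paper's queueing formulation):

```python
import math
import random

def mean_capped_intake(lam, cap, steps=10000, seed=1):
    """Monte Carlo sketch: prey encounters per time step are Poisson(lam),
    but a larva can ingest at most `cap` items per step (the capped rate).
    Returns the average realized intake per step."""
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's algorithm; fine for small l
        L, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    total = sum(min(poisson(lam), cap) for _ in range(steps))
    return total / steps

# deterministic capped model at the same mean encounter rate:
# intake = min(lam, cap), which OVERestimates the stochastic mean
```

With lam = 2 and cap = 2, the deterministic model predicts an intake of 2 per step, while the stochastic mean is noticeably lower (analytically 2 - 4e^-2 ≈ 1.46), which is the kind of discrepancy the paper's queueing-theory treatment handles rigorously.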
Malekifarsani, A; Skachek, M A
2009-10-01
shown that the concentrations of the following radionuclides are limited by solubility and precipitate around the waste and buffer: U, Np, Ra, Sm, Zr, Se, Tc, and Pd. A sensitivity analysis of the maximum release rates with respect to precipitation shows that some nuclides, such as Cs-135, Nb-94, Nb-93m, Zr-93, Sn-126, Th-230, Pu-240, Pu-242, Pu-239, Cm-245, Am-243, U-233, Ac-227, Pb-210, Pa-231 and Th-229, change very little in the maximum release rate from the EBS when precipitation in the buffer material is eliminated. Other nuclides, such as Se-79, Tc-99, Pd-107, Th-232, U-236, U-233, Ra-226, Np-237, U-235, U-234, and U-238, change markedly in the maximum release rate compared to the case that takes precipitation into account. In the sensitivity analysis of maximum release rates with respect to stable isotopes (according to the inventory table), only some nuclides have stable isotopes in the vitrified waste, and the calculation shows that Pd-107 and Se-79 increase greatly when stable isotopes are eliminated. The sensitivity analysis of maximum release rates with respect to retardation by sorption shows that some nuclides, such as Pu-240, Pu-241, Pu-239, Cm-245, Am-241, Cm-246, and Am-243, increase at some times when retardation in the buffer material is eliminated. Some nuclides, such as U-235, U-233 and U-236, show a small decrease in the maximum release rate because their parents are short-lived and are released from the EBS before decaying to their daughters. If the characteristic time taken for a nuclide to diffuse across the buffer exceeds its half-life, then the release rate of that nuclide from the EBS will be attenuated by radioactive decay. Thus, the retardation of the diffusion process due to sorption tends to reduce the release rates of short-lived nuclides more effectively than those of long-lived ones. For example, the release rates of Pu-240, Cm-246 and Am-241, which are relatively short-lived and strongly sorbing, are very small
Luis Eduardo Cruz-Martínez
2014-10-01
Background. Formulas to predict maximum heart rate (MHR) have been used for many years in different populations. Objective. To verify the significance and the association of the Tanaka and 220-age formulas when compared to real maximum heart rate. Materials and methods. 30 subjects (22 men, 8 women) between 18 and 30 years of age were evaluated on a cycle ergometer, and their real MHR values were statistically compared with the values of formulas currently used to predict MHR. Results. The results demonstrate that neither Tanaka (p = 0.0026) nor 220-age (p = 0.000003) predicts real MHR, nor does a linear association exist between them. Conclusions. Due to the overestimation of the real MHR value by these formulas, we suggest a correction of 6 bpm to the final result; this value represents the median of the difference between the Tanaka value and the real MHR. Both Tanaka (r = 0.272) and 220-age (r = 0.276) are not adequate predictors of MHR during exercise at the elevation of Bogotá in subjects 18 to 30 years of age, although further study with a larger sample size is suggested.
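For reference, the two formulas compared in the study, plus the suggested 6 bpm correction, can be written directly. The corrected variant is my reading of the abstract's suggestion and, per the study, would apply only to 18-30 year olds at Bogotá's altitude:

```python
def mhr_220(age):
    """Classic prediction formula: MHR = 220 - age."""
    return 220 - age

def mhr_tanaka(age):
    """Tanaka formula: MHR = 208 - 0.7 * age."""
    return 208 - 0.7 * age

def mhr_corrected(age):
    """Sketch of the study's suggestion: subtract the 6 bpm median
    overestimation from the Tanaka value (an assumption drawn from the
    abstract, specific to the studied population)."""
    return mhr_tanaka(age) - 6
```

For a 25-year-old, 220-age gives 195 bpm, Tanaka gives 190.5 bpm, and the corrected value is 184.5 bpm, illustrating the overestimation the authors report.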
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology, and to use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models proved better for describing denitrification kinetics than assuming a fixed K(NO3) at low nitrate concentrations. The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data were analyzed from 56 denitrification rate tests and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentile, respectively) with an average of 0.47 mgN/L. In contrast to this, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
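The kinetics discussed above follow the Monod form, and the study's empirical model makes the half-saturation coefficient a function of the maximum rate. A sketch (the linear coefficients a and b are hypothetical placeholders, not the fitted values from the paper):

```python
def monod_rate(S, r_max, K):
    """Monod kinetics: r = r_max * S / (K + S).
    S  : substrate (e.g., nitrate) concentration, mgN/L
    r_max : maximum process rate
    K  : half-saturation coefficient (same units as S)."""
    return r_max * S / (K + S)

def extant_K(r_max, a=0.0, b=1.0):
    """Sketch of the study's empirical linear model: the apparent
    (extant) half-saturation coefficient grows with the maximum rate,
    K_NO3 = a + b * r_max. The constants a and b here are hypothetical,
    not the paper's fitted values."""
    return a + b * r_max
```

By construction, when S equals K the rate is exactly half of r_max, which is what "half-saturation" means; the study's point is that the K that fits batch-test data (the extant K) shifts upward with r_max because of diffusion limitations.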
SANDA ROȘCA
2014-06-01
Application of Soil Loss Scenarios Using the ROMSEM Model Depending on Maximum Land Use Pretability Classes. A Case Study. Practicing a modern agriculture that takes into consideration the favourability conditions and the natural resources of a territory represents one of the main national objectives. Due to the importance of the agricultural land, which prevails among the land use types of the Niraj river basin, as well as its pedological and geomorphological characteristics, different areas where soil erosion is above the accepted thresholds were identified by applying the ROMSEM model. To do so, a GIS database was used, comprising quantitative information regarding soil type, land use, climate and hydrogeology, used as indicators in the model. Estimations of the potential soil erosion have been made for the entire basin as well as for its subbasins. The essential role played by the morphometrical characteristics (concavity, convexity, slope length, etc.) has also been highlighted. Taking into account the strongly agricultural character of the analysed territory, the scoring method was employed to identify crop favourability for wheat, barley, corn, sunflower, sugar beet, potato, soy and pea-bean. The results have been used as input data for the C coefficient (crop/vegetation and management factor) in the ROMSEM model, which was applied for the present land use conditions as well as for four scenarios depicting the land use types with maximum favourability. The theoretical, modelled values of soil erosion were obtained as a function of land use, while the other variables of the model were kept constant.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Circuit and interconnect design for high bit-rate applications
Veenstra, H.
2006-01-01
This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications. Bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate applications.
Experiences with leak rate calculations methods for LBB application
Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G. [and others]
1997-04-01
In this paper, three leak rate computer programs for the application of leak before break analysis are described and compared. The programs are compared to each other and to results of an HDR Reactor experiment and two real crack cases. The programs analyzed are PIPELEAK, FLORA, and PICEP. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary to also use data from detailed crack investigations.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
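As a concrete illustration of the maximum entropy principle mentioned above: the least-biased distribution on {1..6} with a prescribed mean is exponential in the outcome, with the Lagrange multiplier found numerically (Jaynes' classic dice example; a generic illustration, not taken from the article):

```python
import math

def maxent_die(mean_target, tol=1e-10):
    """Maximum-entropy distribution on {1,...,6} subject to a mean
    constraint: p_i is proportional to exp(-lam * i), where lam is the
    Lagrange multiplier enforcing the mean, found here by bisection."""
    def mean_for(lam):
        w = [math.exp(-lam * i) for i in range(1, 7)]
        Z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / Z

    lo, hi = -50.0, 50.0          # mean_for is decreasing in lam on this range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * i) for i in range(1, 7)]
    Z = sum(w)
    return [wi / Z for wi in w]
```

With the unconstrained mean 3.5 the multiplier is zero and the result is uniform; a constrained mean of 4.5 tilts the distribution toward high faces while adding no other information, which is exactly the "least bias" property the article exploits.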
Sustainable Use of Pesticide Applications in Citrus: A Support Tool for Volume Rate Adjustment
Cruz Garcerá
2017-06-01
Rational application of pesticides, by properly adjusting the amount of product to the actual needs and specific conditions of application, is a key factor for sustainable plant protection. However, current plant protection product (PPP) labels registered for citrus in the EU are usually expressed as a concentration (%; rate/hl) and/or as the maximum dose of product per unit of ground surface, without taking those conditions into account. In this work, the fundamentals of a support tool, called CitrusVol, developed to recommend mix volume rates in PPP applications in citrus orchards using airblast sprayers, are presented. This tool takes into consideration crop characteristics (geometry, leaf area density), pests, and product and application efficiency, and it is based on scientific data obtained previously regarding the minimum deposit required to achieve maximum efficacy, the efficiency of airblast sprayers in citrus orchards, and the characterization of the crop. The use of this tool in several commercial orchards allowed a reduction of the volume rate and the PPPs used of between 11% and 74%, with an average of 31%, in comparison with those commonly used by farmers, without affecting efficacy. CitrusVol is freely available on a website and in an app for smartphones.
Ma, Jingxing; Mungoni, Lucy Jubeki; Verstraete, Willy; Carballa, Marta
2009-07-01
The maximum propionic acid (HPr) removal rate (R_HPr) was investigated in two lab-scale Upflow Anaerobic Sludge Bed (UASB) reactors. Two feeding strategies were applied, by modifying the hydraulic retention time (HRT) in the UASB_HRT and the influent HPr concentration in the UASB_HPr, respectively. The experiment was divided into three main phases: phase 1, influent with only HPr; phase 2, HPr with macro-nutrient supplementation; and phase 3, HPr with macro- and micro-nutrient supplementation. During phase 1, the maximum R_HPr achieved was less than 3 g HPr-COD L^-1 d^-1 in both reactors. However, the subsequent supplementation of macro- and micro-nutrients during phases 2 and 3 made it possible to increase R_HPr up to 18.1 and 32.8 g HPr-COD L^-1 d^-1, respectively, corresponding to an HRT of 0.5 h in the UASB_HRT and an influent HPr concentration of 10.5 g HPr-COD L^-1 in the UASB_HPr. Therefore, the high operational capacity of these reactor systems, specifically converting HPr at high throughput and high influent HPr levels, was demonstrated. Moreover, the presence of macro- and micro-nutrients is clearly essential for stable and high HPr removal in anaerobic digestion.
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6 and to Sigma_gas^0.5; there is no correlation with either Sigma_H2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Application of Kalman Filter on modelling interest rates
Long H. Vo
2014-03-01
This study aims to test the feasibility of using a data set of 90-day bank bill forward rates from the Australian market to predict spot interest rates. To achieve this goal, I applied the Kalman filter in a state-space model with a time-varying state variable. It is documented that, in the case of short-term interest rates, the state-space model yields robust predictive power. In addition, this predictive power of the implied forward rate is heavily affected by the existence of a time-varying risk premium in the term structure.
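A minimal local-level Kalman filter conveys the idea of filtering a latent spot rate from noisy forward-rate observations. This is a generic sketch, not the paper's time-varying state-space specification; the random-walk state and the variances q and r are assumptions:

```python
def kalman_local_level(observations, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Minimal local-level Kalman filter: the latent spot rate follows a
    random walk with process variance q, and each forward rate is a noisy
    observation of it with variance r. Returns the filtered estimates."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        # predict: random-walk state, variance grows by q
        p = p + q
        # update: blend prediction with observation via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Fed a stable series of forward-rate observations around 5%, the filtered estimate converges to that level while smoothing observation noise; in the paper's setting the state would instead evolve with time-varying dynamics and a risk-premium term.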
Everaarts, A.P.; Willigen, de, P.
1999-01-01
The effect of the rate and method of nitrogen application on nitrogen uptake and utilization by broccoli (Brassica oleracea var. italica) was studied in four field experiments. The methods of application were broadcast application vs band placement and split application. Maximum uptake of nitrogen by the crop was around 300 kg ha-1. In one experiment, band placement positively influenced nitrogen uptake. Split application did not influence nitrogen uptake. Nitrogen application resulted in a h...
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Jiang Zhu
2014-01-01
Some delta-nabla type maximum principles for second-order dynamic equations on time scales are proved. By using these maximum principles, the uniqueness theorems for the solutions, approximation theorems for the solutions, the existence theorem, and construction techniques for lower and upper solutions of second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved, the oscillation of second-order mixed delta-nabla differential equations is discussed, and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software, descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed that there was a significant difference between the means of MAWL across the frequencies of lift (p = 0.02). Tukey's post hoc test indicated that there was a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference between the mean heart rates across the frequencies of lift (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference between the mean MAWL or the mean heart rate across the lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean MAWL and the mean heart rate for the two box sizes (p = 0.000). Based on the results of
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax-rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
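For readers unfamiliar with MAF analysis, a minimal one-dimensional sketch follows: the factors solve a generalized eigenproblem between the covariance of the data and the covariance of their spatial differences, so the leading factor has maximum spatial autocorrelation. The 1-D transect, synthetic data, and unit-lag differencing are simplifications (the paper works with irregularly sampled 2-D stream sediment data):

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors for X (n samples x p variables)
    ordered along one spatial coordinate."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    diff = Xc[1:] - Xc[:-1]                 # unit-lag spatial differences
    cov_d = np.cov(diff, rowvar=False)
    # Generalized eigenproblem: small eigenvalues <-> high autocorrelation
    vals, vecs = eigh(cov_d, cov)
    scores = Xc @ vecs                      # MAF scores, most autocorrelated first
    autocorr = 1.0 - vals / 2.0
    return scores, autocorr

# Synthetic example: one smooth spatial signal shared by 3 noisy variables
rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 500)
signal = np.sin(t)
X = np.column_stack([signal + 0.1 * rng.normal(size=t.size) for _ in range(3)])
scores, ac = maf(X)
print("autocorrelation of leading MAF:", ac[0])
```

The leading MAF recovers the shared smooth signal; the trailing factors absorb the spatially uncorrelated noise.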
Blok, Chris; Jackson, Brian E; Guo, Xianfeng; de Visser, Pieter H B; Marcelis, Leo F M
2017-01-01
Growing on rooting media other than soil in situ (i.e., substrate-based growing) allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields, as transport rates of water and nutrients in water surpass those in substrate, even though the transport of oxygen may be more complex. Transport rates can only limit growth when they are below a rate corresponding to maximum plant uptake. Our first objective was to compare Chrysanthemum growth performance for three water-based growing systems with different irrigation: multi-point irrigation into a pond (DeepFlow), one-point irrigation resulting in a thin film of running water (NutrientFlow), and multi-point irrigation as droplets through air (Aeroponic). The second objective was to compare press pots as propagation medium with nutrient solution as propagation medium. The comparison included DeepFlow water-rooted cuttings with the stem either 1 cm into the nutrient solution or 1 cm above the nutrient solution. Measurements included fresh weight, dry weight, length, water supply, nutrient supply, and oxygen levels. To account for differences in radiation sum received, crop performance was evaluated with Radiation Use Efficiency (RUE), expressed as dry weight over the sum of Photosynthetically Active Radiation. The reference, DeepFlow with substrate-based propagation, showed the highest RUE, even though the oxygen supply provided by irrigation was potentially growth limiting. DeepFlow with water-based propagation showed 15-17% lower RUE than the reference. NutrientFlow showed 8% lower RUE than the reference, in combination with potentially limiting irrigation supply of nutrients and oxygen. Aeroponic showed RUE levels similar to the reference, and Aeroponic had non-limiting irrigation supply of water, nutrients, and oxygen. Water-based propagation affected the subsequent
Romero, Claudia; Mesa, Duvan
2015-04-01
L-Moments Regional Frequency Analysis methodology applied to maximum rainfall values over the Bogota River basin (Universidad Santo Tomas, Colombia). The application area of this methodology is the Bogota River basin, located in Cundinamarca, a Colombian department, with a total surface area of 589,143 hectares. This basin includes 19 sub-basins and is the most densely urbanized in the country. Including its metropolitan area, the region has a population of 9,000,000 inhabitants, which is approximately 23% of Colombia's population, and it hosts around 19% of the country's industries. The basin has shown a marked increase in the frequency of severe floods in recent years due to climatic variations. These climatic periods correspond to the weather pattern known as the La Niña phenomenon (2010-2011), which affected 57,000 citizens in this department and 4,900 people directly in Bogota city, with estimated economic damage of 277,121,052 USD. The Regional Frequency Analysis methodology is a statistical procedure that pools information from multiple samples into a single large sample, under the prior assumption that all of the samples come from the same probability model except for a scale factor. The samples are defined by a regionalization procedure known as the "flood index" method. This procedure groups several kinds of information that come from a common probability model, such as temperature, rainfall, and water flow; the model must be similar for all of the weather stations located in a homogeneous region. Maps for each of four return periods (5, 10, 50, and 100 years) were developed based on 120 weather stations located in this basin. The information used in this process comes from median monthly rainfall data, based on historical series averaging 30 to 40 years in length. An increase in the annual median rainfall was
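The building block of the regional frequency analysis above is the set of sample L-moments computed at each station, via probability-weighted moments. A minimal sketch (the annual-maximum rainfall values are simulated, not the basin's records):

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments from probability-weighted moments
    (Hosking's unbiased estimators)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # mean
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2       # mean, L-scale, L-skewness ratio t3

rng = np.random.default_rng(2)
annual_max = rng.gumbel(loc=60.0, scale=15.0, size=40)  # hypothetical station (mm)
l1, l2, t3 = sample_l_moments(annual_max)
print(f"l1={l1:.1f}  l2={l2:.1f}  t3={t3:.3f}")
```

In the regional procedure, L-moment ratios from all stations in a homogeneous region are pooled, and the regional growth curve is scaled by each station's flood index (here, its mean).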
Walker, Anthony P.; Quaife, Tristan; Van Bodegom, Peter M.; De Kauwe, Martin G.; Keenan, Trevor F.; Joiner, Joanna; Lomas, Mark R.; MacBean, Natasha; Xu, Chongang; Yang, Xiaojuan;
2017-01-01
The maximum photosynthetic carboxylation rate (Vcmax) is an influential plant trait with multiple scaling hypotheses, which is a source of uncertainty in predictive understanding of global gross primary production (GPP). Four trait-scaling hypotheses (plant functional type, nutrient limitation, environmental filtering, and plant plasticity) with nine specific implementations were used to predict global Vcmax distributions and their impact on global GPP in the Sheffield Dynamic Global Vegetation Model (SDGVM). Global GPP varied from 108.1 to 128.2 petagrams of carbon (PgC) per year, 65 percent of the range of a recent model intercomparison of global GPP. The variation in GPP propagated through to a 27 percent coefficient of variation in net biome productivity (NBP). All hypotheses produced global GPP that was highly correlated (r = 0.85-0.91) with three proxies of global GPP. Plant functional type-based nutrient limitation, underpinned by a core SDGVM hypothesis that plant nitrogen (N) status is inversely related to increasing costs of N acquisition with increasing soil carbon, adequately reproduced global GPP distributions. Further improvement could be achieved with accurate representation of water sensitivity and agriculture in SDGVM. Mismatch between environmental filtering (the most data-driven hypothesis) and GPP suggested that greater effort is needed to understand Vcmax variation in the field, particularly in northern latitudes.
Estimation of autotrophic maximum specific growth rate constant--experience from the long-term operation of a laboratory-scale sequencing batch reactor system.
Su, Yu-min; Makinia, Jacek; Pagilla, Krishna R
2008-04-01
The autotrophic maximum specific growth rate constant, muA,max, is a critical parameter for the design and performance of nitrifying activated sludge systems. In literature reviews (e.g., Henze et al., 1987; Metcalf and Eddy, 1991), a wide range of muA,max values has been reported (0.25 to 3.0 day⁻¹); however, recent data from several wastewater treatment plants across North America revealed that the estimated muA,max values remained in the narrow range of 0.85 to 1.05 day⁻¹. In this study, long-term operation of a laboratory-scale sequencing batch reactor system was investigated for estimating this coefficient according to the low food-to-microorganism ratio bioassay and simulation methods recommended in the Water Environment Research Foundation (Alexandria, Virginia) report (Melcer et al., 2003). The estimated muA,max values using steady-state model calculations for four operating periods ranged from 0.83 to 0.99 day⁻¹. Dynamic simulations with the International Water Association (London, United Kingdom) Activated Sludge Model No. 1 (ASM1) revealed that a single value of muA,max (1.2 day⁻¹) could be used, despite variations in the measured specific nitrification rates. However, the average muA,max gradually decreased during the activated sludge chlorination tests, until it reached 0.48 day⁻¹ at a dose of 5 mg chlorine/(g mixed liquor suspended solids·d). Significant discrepancies between the predicted XA/YA ratios were observed; in some cases, the ASM1 predictions were approximately two times higher than the steady-state model predictions. This implies that estimating this ratio from a complex activated sludge model and using it in simple steady-state model calculations should be accepted with great caution and requires further investigation.
Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.
1981-01-01
A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.
Rius, Jordi
2006-09-01
The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Σgas), molecular gas (ΣH2), neutral gas (ΣHI), and star formation rate (ΣSFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M3rd ∝ ΣHI^(0.4±0.2), where M3rd is the median of the five most massive clusters. There is no correlation with Σgas, ΣH2, or ΣSFR. For clusters younger than 10 Myr, M3rd ∝ ΣHI^(0.6±0.1) and M3rd ∝ Σgas^(0.5±0.2); there is no correlation with either ΣH2 or ΣSFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M3rd ∝ Σgas^(3.8±0.3), M3rd ∝ ΣH2^(1.2±0.1), and M3rd ∝ ΣSFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
Chiba Shigeru
2007-09-01
Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that present unstable visual images on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repeated exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and by the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repeated presentation of the same image, in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax is a good candidate for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems based on new image technologies.
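The ρmax index above, the maximum cross-correlation between heart rate and pulse wave transmission time, can be sketched as follows; the signals are synthetic and the lag window is an assumption:

```python
import numpy as np

def rho_max(hr, pwtt, max_lag=30):
    """Maximum (in absolute value) cross-correlation coefficient between
    two series over a window of lags -- an illustrative reconstruction of
    the rho-max index described in the abstract."""
    hr = (hr - hr.mean()) / hr.std()
    pwtt = (pwtt - pwtt.mean()) / pwtt.std()
    n = hr.size
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[:n - lag]
        else:
            a, b = hr[:n + lag], pwtt[-lag:]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return max(corrs, key=abs)

# Synthetic example: PWTT responds to heart rate with a delay and a sign flip
rng = np.random.default_rng(3)
t = np.arange(600)
hr = np.sin(2 * np.pi * t / 60) + 0.2 * rng.normal(size=t.size)
pwtt = -np.roll(hr, 5) + 0.2 * rng.normal(size=t.size)
r = rho_max(hr, pwtt)
print("rho_max =", r)
```

Tracking this coefficient in sliding windows over the recording is what lets the authors localize sickness-inducing segments of the video.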
Hubbard, S. M.; Coutts, D. S.; Matthews, W.; Guest, B.; Bain, H.
2015-12-01
In basins adjacent to continually active arcs, detrital zircon geochronology can be used to establish a high-resolution chronostratigraphic framework for deep-time strata. Large-n U-Pb geochronological datasets can yield a statistically significant signature from the youngest sub-population of detrital zircons, from which we deduce maximum depositional age (MDA) calculations. MDA is determined through numerous methods; the mean age of three or more overlapping grain ages at 2σ error is favored in this analysis. Positive identification of the youngest detrital zircon population in a rock is the limiting factor on precision and resolution. The Campanian-Paleogene Nanaimo Group of B.C., Canada, was deposited in a forearc basin, outboard of the Coast Mountain Batholith. The record of a deep-water sediment-routing system is exhumed at Denman and Hornby islands; sandstone- and conglomerate-dominated strata compose a composite sedimentary unit 20 km across and 1.5 km thick, in strike section. Volcanic ashes are absent from the succession, which has been constrained biostratigraphically. Eleven detrital zircon samples are analyzed to define stratigraphic architecture and provide insight into sedimentation rates. Our dataset (n=3081) constrains the overall duration of channelization to ~18 Ma. A series of at least five distinct composite channel fills, 3-6 km wide and 400-600 m thick, are identified. The MDAs of these units are statistically distinct and constrained to better than 3% precision. Sedimentation rates amongst the channel fills increase upward, from 60-100 m/Ma to >500 m/Ma. This is likely linked to the tendency of a slope channel system to be dominated by sediment bypass early in its evolution, and later dominated by aggradation as large-scale levees develop. Channel processes were not continuous, with the longest hiatus ~6 Ma. The large-n detrital zircon dataset provides unprecedented insight into long-term sediment routing, evidence for which is
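The MDA recipe favored above (mean of three or more grain ages overlapping at 2σ) can be sketched as follows; the ages and errors are invented, and published implementations differ in how they define overlap and weighting:

```python
import numpy as np

def mda_youngest_cluster(ages, two_sigma, min_n=3):
    """Maximum depositional age as the error-weighted mean of the youngest
    cluster of min_n consecutive grain ages whose 2-sigma intervals all
    mutually overlap (one common recipe; details vary between studies)."""
    order = np.argsort(ages)
    ages = np.asarray(ages, dtype=float)[order]
    two_sigma = np.asarray(two_sigma, dtype=float)[order]
    for i in range(len(ages) - min_n + 1):
        a, s = ages[i:i + min_n], two_sigma[i:i + min_n]
        # mutual overlap: the 2-sigma intervals share a common age range
        if np.max(a - s) <= np.min(a + s):
            w = 1.0 / s**2
            return float(np.sum(w * a) / np.sum(w))
    return None  # no qualifying cluster found

ages = [72.4, 72.9, 73.1, 75.0, 78.2, 80.5]   # hypothetical U-Pb ages (Ma)
errs = [0.8, 0.7, 0.9, 0.6, 0.8, 1.0]         # 2-sigma uncertainties (Ma)
mda = mda_youngest_cluster(ages, errs)
print("MDA =", mda, "Ma")
```

Here the three youngest grains overlap within error, so the MDA is their weighted mean rather than the single youngest (and potentially lead-loss-affected) grain.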
Murata, T; Sato, T; Nakamura, S X
2016-01-01
The maximum entropy method is examined as a new tool for solving the ill-posed inversion problem involved in the Lorentz integral transformation (LIT) method. As an example, we apply the method to the spin-dipole strength function of 4He. We show that the method can be successfully used for inversion of LIT, provided the LIT function is available with a sufficient accuracy.
Rousseau, Alain N.; Klein, Iris M.; Freudiger, Daphné; Gagnon, Patrick; Frigon, Anne; Ratté-Fortin, Claudie
2014-11-01
Climate change (CC) needs to be accounted for in the estimation of probable maximum floods (PMFs). However, there is no unique way to estimate PMFs; moreover, the challenge in estimating them is that they should neither be underestimated for safety reasons nor overestimated for economic ones. By estimating PMFs without accounting for CC, the risk of underestimation could be high for Quebec, Canada, since future climate simulations indicate that in all likelihood extreme precipitation events will intensify. In this paper, simulation outputs from the Canadian Regional Climate Model (CRCM) are used to develop a methodology for estimating probable maximum precipitations (PMPs) while accounting for changing climate conditions for the southern region of the Province of Quebec, Canada. The Kénogami and Yamaska watersheds are of particular interest here, since dam failures could lead to major downstream impacts. Precipitable water (w) represents one of the key variables in the estimation process of PMPs. Results of stationarity tests indicate that CC will affect not only precipitation and temperature but also the monthly maximum precipitable water, wmax, and the ensuing maximization ratio used for the estimation of PMPs. An up-to-date computational method is developed to maximize w using a non-stationary frequency analysis and then calculate the maximization ratios. The ratios estimated this way are deemed reliable since they rarely exceed threshold values set for Quebec, and therefore provide consistent PMP estimates. The results show an overall significant increase in the PMPs throughout the current century compared to the recent past.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars, and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day:night temperature contrast observed on the extrasolar planet HD 189733b.
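A minimal version of the two-box MEP calculation: each box satisfies an energy balance with a linearized outgoing longwave radiation law, and the inter-box heat transport F is selected to maximize the entropy production F(1/T_cold - 1/T_hot). All parameter values below are illustrative, not fitted to any planet:

```python
import numpy as np

def two_box_mep(s_eq=330.0, s_pole=170.0, a=200.0, b=2.0):
    """Two-box MEP sketch with linearized OLR = a + b*(T - 273 K).
    Returns the transport (W/m^2) maximizing entropy production and the
    resulting box temperatures (K).  Parameters are illustrative."""
    F = np.linspace(0.0, s_eq - a, 2000)            # candidate transports
    # Energy balance of each box fixes its temperature for a given F
    T_eq = 273.0 + (s_eq - a - F) / b               # exports heat F
    T_pole = 273.0 + (s_pole - a + F) / b           # imports heat F
    sigma = F * (1.0 / T_pole - 1.0 / T_eq)         # entropy production
    i = np.argmax(sigma)
    return F[i], T_eq[i], T_pole[i]

F_opt, T_eq, T_pole = two_box_mep()
print(f"MEP transport {F_opt:.1f} W/m^2, T_eq {T_eq:.0f} K, T_pole {T_pole:.0f} K")
```

The maximum lies strictly between F = 0 (no transport, maximal gradient) and the transport that equalizes the two temperatures (no gradient, zero entropy production), which is the core of the MEP argument.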
Pal, R; Das, P; Chakrabarti, K; Chakraborty, A; Chowdhury, A
2006-01-01
The degradation characteristics of butachlor (N-butoxymethyl-2-chloro-2',6'-diethylacetanilide) were studied under controlled laboratory conditions in a clay loam alluvial (AL) soil (Typic udifluvent) and a coastal saline (CS) soil (Typic endoaquept) from rice-cultivated fields. The application rates included the field rate (FR), twice the field rate (2FR), and ten times the field rate (10FR). The incubation study was carried out at 30 °C, with and without decomposed cow manure (DCM), at 60% of maximum water holding capacity (WHC) and under waterlogged soil conditions. The half-life values depended on the soil type and the initial concentration of butachlor. Butachlor degraded faster in AL soil and in soil amended with DCM under waterlogged conditions. Microbial degradation is the major avenue of butachlor degradation from soils.
Rate Constant Calculation for Thermal Reactions Methods and Applications
DaCosta, Herbert
2011-01-01
Providing an overview of the latest computational approaches to estimate rate constants for thermal reactions, this book addresses the theories behind various first-principle and approximation methods that have emerged in the last twenty years with validation examples. It presents in-depth applications of those theories to a wide range of basic and applied research areas. When doing modeling and simulation of chemical reactions (as in many other cases), one often has to compromise between higher-accuracy/higher-precision approaches (which are usually time-consuming) and approximate/lower-preci
Zagrouba, M.; Sellami, A.; Bouaicha, M. [Laboratoire de Photovoltaique, des Semi-conducteurs et des Nanostructures, Centre de Recherches et des Technologies de l' Energie, Technopole de Borj-Cedria, Tunis, B.P. 95, 2050 Hammam-Lif (Tunisia); Ksouri, M. [Unite de Recherche RME-Groupe AIA, Institut National des Sciences Appliquees et de Technologie (Tunisia)
2010-05-15
In this paper, we propose a numerical technique based on genetic algorithms (GAs) to identify the electrical parameters (Is, Iph, Rs, Rsh, and n) of photovoltaic (PV) solar cells and modules. These parameters were used to determine the corresponding maximum power point (MPP) from the illuminated current-voltage (I-V) characteristic. The one-diode approach is used to model the AM1.5 I-V characteristic of the solar cell. To extract the electrical parameters, the approach is formulated as a non-convex optimization problem. The GA approach was used as a numerical technique in order to overcome problems involving local minima in the case of non-convex optimization criteria. Compared to other methods, we find that the GA is a very efficient technique for estimating the electrical parameters of PV solar cells and modules. Indeed, the run of the algorithm stopped after five generations in the case of PV solar cells and seven generations in the case of PV modules. The identified parameters are then used to extract the maximum power working points for both cell and module. (author)
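Once the electrical parameters are identified, the MPP follows from scanning the modelled I-V curve. A sketch with Rs neglected so the one-diode equation is explicit (the paper's full model, with Rs and GA-fitted parameters, makes the current implicit); all parameter values are made up:

```python
import numpy as np

k_B, q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def mpp_one_diode(i_ph=5.0, i_s=1e-9, n=1.3, r_sh=200.0, cells=36, T=298.15):
    """Maximum power point of a simplified one-diode PV module model with
    series resistance Rs neglected.  All parameter values are illustrative."""
    v_t = n * cells * k_B * T / q                    # string thermal voltage
    V = np.linspace(0.0, 28.0, 5000)
    I = i_ph - i_s * (np.exp(V / v_t) - 1.0) - V / r_sh
    P = np.where(I > 0, V * I, 0.0)                  # clip past open circuit
    i = np.argmax(P)
    return V[i], I[i], P[i]

v_mp, i_mp, p_mp = mpp_one_diode()
print(f"MPP: {v_mp:.2f} V, {i_mp:.2f} A, {p_mp:.1f} W")
```

With Rs included, I appears on both sides of the diode equation and must be solved iteratively at each voltage, which is part of why the paper casts parameter extraction as a non-convex optimization.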
James O Lloyd-Smith
BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
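A maximum-likelihood fit of k of the kind studied above can be sketched with SciPy; the simulated dataset (true k = 0.5, i.e., highly overdispersed) is illustrative:

```python
import numpy as np
from scipy import optimize, stats

def fit_negbin(counts):
    """Maximum-likelihood fit of negative binomial mean m and dispersion k
    (variance = m + m^2/k); optimizing over log-parameters keeps both
    positive."""
    counts = np.asarray(counts)

    def nll(theta):
        m, k = np.exp(theta)
        # scipy parameterization: n = k, p = k / (k + m)
        return -stats.nbinom.logpmf(counts, k, k / (k + m)).sum()

    res = optimize.minimize(nll, x0=np.log([counts.mean() + 0.1, 1.0]),
                            method="Nelder-Mead")
    return np.exp(res.x)  # (m_hat, k_hat)

rng = np.random.default_rng(4)
true_m, true_k = 8.0, 0.5
data = rng.negative_binomial(true_k, true_k / (true_k + true_m), size=1000)
m_hat, k_hat = fit_negbin(data)
print(f"m_hat={m_hat:.2f}  k_hat={k_hat:.3f}")
```

Repeating this over many small or under-counted simulated datasets, and checking how often Wald-type intervals for k cover the truth, reproduces the kind of experiment the article describes.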
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. The spread of IDP ensembles thus highlights the need for better force fields. We distribute our algorithm as open-source Python code.
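The core of a maximum-entropy re-weighting can be sketched in a few lines for a single observable: the minimally perturbed weights take an exponential form, with one Lagrange multiplier tuned so the ensemble average matches the data. Real RDC refinement involves many coupled observables and experimental errors; everything below is a toy illustration:

```python
import numpy as np
from scipy.optimize import brentq

def maxent_weights(f, lam):
    e = -lam * f
    w = np.exp(e - e.max())          # shift exponent for numerical stability
    return w / w.sum()

def maxent_reweight(f, target):
    """Re-weight a uniform ensemble so that the weighted average of the
    back-calculated observable f matches the target, with weights of the
    maximum-entropy (exponential) form w_i ∝ exp(-lam * f_i)."""
    g = lambda lam: np.sum(maxent_weights(f, lam) * f) - target
    lam = brentq(g, -20.0, 20.0)     # weighted average is monotone in lam
    return maxent_weights(f, lam)

rng = np.random.default_rng(5)
f = rng.normal(0.0, 1.0, 2000)       # back-calculated observable per structure
w = maxent_reweight(f, target=0.3)
print("reweighted average:", float(np.sum(w * f)))
```

Because the exponential form maximizes entropy relative to the starting ensemble, the refined weights stay as close to the force-field prior as the data allow, which is exactly why the prior (the force field) still shows through in the result.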
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Al-Amoudi, A.; Zhang, L. [University of Leeds (United Kingdom). School of Electronic and Electrical Engineering
2000-09-01
A neural-network-based approach for solar array modelling is presented. The hidden layer of the proposed network consists of a set of nonlinear radial basis functions (RBFs) connected directly to the input vector. The links between hidden and output units are linear. The model can be trained using a random set of data collected from a real photovoltaic (PV) plant. The training procedures are fast, and the accuracy of the trained models is comparable with that of the conventional model. The principle and training procedures of RBF-network modelling applied to emulate the I/V characteristics of PV arrays are discussed. Simulation results of the trained RBF networks for modelling a PV array and predicting the maximum power points of a real PV panel are presented. (author)
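A minimal RBF-network emulator in the spirit described above: Gaussian hidden units with fixed centers and a linear output layer solved by least squares. The single-input I-V curve and all numbers are invented; the paper trains on real PV plant data and its center-selection procedure may differ:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian hidden-unit activations plus a bias column."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    return np.hstack([Phi, np.ones((x.size, 1))])

def train_rbf(x, y, centers, width):
    """Fit the linear output weights by least squares (fast, no iteration)."""
    w, *_ = np.linalg.lstsq(rbf_design(x, centers, width), y, rcond=None)
    return w

def rbf_predict(x, w, centers, width):
    return rbf_design(x, centers, width) @ w

# Emulate a PV I-V curve: current nearly flat, then exponential roll-off
v = np.linspace(0.0, 21.0, 200)
i_true = 5.0 * (1.0 - np.exp((v - 21.5) / 1.5))
centers = np.linspace(0.0, 21.0, 12)
w = train_rbf(v, i_true, centers, width=2.5)
i_fit = rbf_predict(v, w, centers, width=2.5)
err = np.max(np.abs(i_fit - i_true))
print("max abs error:", err)
```

Because only the output layer is trained and the problem is linear in the weights, training reduces to one least-squares solve, which is the speed advantage the abstract mentions.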
Lithium manganese spinel materials for high-rate electrochemical applications
Anna V. Potapenko; Sviatoslav A. Kirillov
2014-01-01
In order to compete successfully with supercapacitors, an ability to discharge quickly is a must for lithium-ion batteries. From this point of view, stoichiometric and substituted lithium manganese spinels as cathode materials are among the most promising candidates, especially in their nanosized form. In this article, an overview of the most recent data regarding the physico-chemical and electrochemical properties of lithium manganese spinels, especially LiMn2O4 and LiNi0.5Mn1.5O4, synthesized by various methods is presented, with special emphasis on their use in high-rate electrochemical applications. In particular, specific capacities and rate capabilities of spinel materials are analyzed. It is suggested that reduced specific capacity is determined primarily by the aggregation of material particles, whereas good high-rate capability is governed not only by the size of the crystallites but also by the perfectness of the crystals. The most technologically advantageous solutions are described, existing gaps in the knowledge of spinel materials are outlined, and ways of filling them are suggested, in the hope of keeping lithium batteries afloat in the struggle for a worthy place among the electrochemical energy systems of the 21st century.
Bhuiyan, M. A. E.; Wanik, D. W.; Scerbo, D.; Anagnostou, E. N.
2015-12-01
We have developed a tool, the Convection Risk Index (CRI), to represent the severity, timing, and location of convection for select geographic areas. The CRI is calculated from the Convection Risk Matrix (CRM), a tabulation of numerous meteorological parameters categorized into four broad factors that contribute to convection (surface and lower-level moisture, atmospheric instability, vertical wind shear, and lift); each of these factors has historically been used by meteorologists to predict the likelihood of thunderstorm development. The CRM ascribes a specific threshold value to each parameter, creating a unique tool for calculating the risk of thunderstorm development. The parameters are combined using a weighted formula which, when calculated, yields the Convection Risk Index on a 1-to-4 scale, with 4 being the highest risk of strong convection. In addition, we evaluated the performance of the parameters in the CRM and CRI for predicting the maximum wind speed in areas where we calculated the CRI, using a nonparametric tree-based model, Bayesian additive regression trees (BART). The CRI and the predicted wind speeds from BART can be used to better inform emergency preparedness efforts in government and industry.
Explore the Application of Financial Engineering in the Management of Exchange Rate Risk
Yang Liu
2015-01-01
Against a background in which domestic enterprises commonly have weak awareness of exchange rate risk, this article makes a deep analysis based on the definition of exchange rate risk and its causes. By comparing the traditional management method of exchange rate risk with one based on financial engineering tools, it analyzes in depth how to use financial engineering techniques in the management of exchange rate risk, and concludes that the primary purpose of exchange rate risk management is hedging. The article proposes an optimal analysis method in two respects, namely minimum risk and maximum efficiency, for forward-based optimal hedging, and proposes an optimal dynamic-hedging analysis method for option-based tools. Based on a description of the application of financial tools in foreign exchange futures, forward contracts, currency swaps, and foreign exchange options, it presents an empirical analysis of the management of foreign exchange risk, taking a hypothetical company T as the carrier and using forward foreign exchange and currency option trading tools, which describes the operation of the financial tools more directly and demonstrates the efficiency of the proposed optimal analysis method.
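The "minimum risk" criterion for forward-based hedging is conventionally the minimum-variance hedge ratio. A sketch with simulated rate changes (all numbers illustrative):

```python
import numpy as np

def min_variance_hedge_ratio(spot_changes, futures_changes):
    """Classic minimum-risk hedge: h* = Cov(dS, dF) / Var(dF), the number of
    forward/futures units per unit of exposure that minimizes the variance
    of the hedged position."""
    cov = np.cov(spot_changes, futures_changes)
    return cov[0, 1] / cov[1, 1]

rng = np.random.default_rng(6)
dF = rng.normal(0.0, 0.010, 500)               # daily forward-rate changes
dS = 0.9 * dF + rng.normal(0.0, 0.004, 500)    # spot tracks the forward imperfectly
h = min_variance_hedge_ratio(dS, dF)
hedged = dS - h * dF
print(f"h* = {h:.2f}, risk reduced from {dS.std():.4f} to {hedged.std():.4f}")
```

The residual volatility of the hedged position is the basis risk that no static forward hedge can remove, which motivates the dynamic, option-based methods the article goes on to discuss.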
Wang, Sheng-Quan; Brodsky, Stanley J; Mojaza, Matin
2016-01-01
We present improved pQCD predictions for Higgs boson hadroproduction at the Large Hadron Collider (LHC) by applying the Principle of Maximum Conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale $\mu_r$ of the QCD coupling $\alpha_s(\mu_r)$ is the Higgs mass, and then to vary this choice over the range $1/2 m_H < \mu_r < 2 m_H$ in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the non-conformal $\beta$ terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where the pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermor...
Ying-Yi Hong
2016-01-01
This work proposes an enhanced particle swarm optimization scheme that improves upon the performance of the standard particle swarm optimization algorithm. The proposed algorithm is based on chaos search to address stagnation, i.e., the problem of being trapped in a local optimum, and the risk of premature convergence. Type 1″ constriction is incorporated to help strengthen the stability and quality of convergence, and adaptive learning coefficients are utilized to intensify the exploitation and exploration characteristics of the algorithm. Several well-known benchmark functions are used to verify the effectiveness of the proposed method. The test performance of the proposed method is compared with those of other popular population-based algorithms in the literature. Simulation results clearly demonstrate that the proposed method exhibits faster convergence, escapes local minima, and avoids premature convergence and stagnation in a high-dimensional problem space. The validity of the proposed PSO algorithm is demonstrated using a fuzzy-logic-based maximum power point tracking control model for a standalone solar photovoltaic system.
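As a rough illustration of the constriction-plus-chaos idea described above, the sketch below (not the authors' implementation; the benchmark function, swarm parameters, and the 5% chaos-restart probability are all assumptions) combines the standard Clerc-Kennedy constriction factor with a logistic-map relocation of stagnating particles:

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n_particles=20, iters=200, seed=1):
    """Constriction-factor PSO with a logistic-map chaos restart for
    stagnating particles. chi follows Clerc-Kennedy constriction with
    c1 = c2 = 2.05 (phi = 4.1)."""
    rng = random.Random(seed)
    phi = 4.1
    chi = 2.0 / abs(2.0 - phi - (phi * phi - 4.0 * phi) ** 0.5)  # ~0.7298
    lo, hi = -5.0, 5.0
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]
    z = 0.7                                    # chaotic variable (logistic map)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = chi * (V[i][d]
                                 + 2.05 * r1 * (P[i][d] - X[i][d])
                                 + 2.05 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
            elif rng.random() < 0.05:
                # chaos search: relocate a non-improving particle using the
                # logistic map to help it escape a local optimum
                z = 4.0 * z * (1.0 - z)
                X[i] = [lo + z * (hi - lo) for _ in range(dim)]
    return gbest

best = pso(sphere)
```

The personal and global bests are never discarded on relocation, so the chaos restarts add exploration without losing convergence progress.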
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
周良明; 郭佩芳; 王强; 杜伊
2004-01-01
Based on the maximum entropy principle, a probability density function (PDF) is derived for the distribution of wave heights in a random wave field, without any additional hypotheses. The present PDF, being of non-Rayleigh form, involves two parameters: the average wave height H and the state parameter γ. The role of γ in the distribution of wave heights is examined. It is found that γ may be a certain measure of sea state. A least-squares method for determining γ from measured data is proposed. Using this method, the values of γ are determined for three sea states from data measured in the East China Sea. The present PDF is compared with the well-known Rayleigh PDF of wave height and is shown to fit the data much better than the Rayleigh PDF. It is expected that the present PDF would also fit some other wave variables, since its derivation is not restricted to the wave height.
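The paper's non-Rayleigh PDF and its state parameter γ are not given in the abstract, so the sketch below only illustrates the least-squares fitting strategy it describes, using the standard Rayleigh wave-height PDF, p(H) = (πH/2H̄²)·exp(−πH²/4H̄²), as the model being fitted; the sample size, bin count and grid range are arbitrary choices:

```python
import math
import random

def rayleigh_pdf(h, hbar):
    """Rayleigh wave-height PDF parameterized by the mean height hbar."""
    return (math.pi * h / (2.0 * hbar ** 2)) * math.exp(
        -math.pi * h * h / (4.0 * hbar ** 2))

def ls_fit_hbar(samples, n_bins=30):
    """Least-squares fit of hbar to the empirical histogram, mirroring the
    least-squares strategy the paper uses for its state parameter gamma."""
    hmax = max(samples)
    width = hmax / n_bins
    counts = [0] * n_bins
    for s in samples:
        counts[min(int(s / width), n_bins - 1)] += 1
    density = [c / (len(samples) * width) for c in counts]
    centers = [(i + 0.5) * width for i in range(n_bins)]
    mean = sum(samples) / len(samples)
    best_hbar, best_sse = None, float("inf")
    for k in range(50, 151):          # grid search around the sample mean
        hbar = mean * k / 100.0
        sse = sum((d - rayleigh_pdf(c, hbar)) ** 2
                  for c, d in zip(centers, density))
        if sse < best_sse:
            best_hbar, best_sse = hbar, sse
    return best_hbar

# synthetic Rayleigh-distributed wave heights with mean height 2.0
rng = random.Random(0)
hbar_true = 2.0
sigma = hbar_true * math.sqrt(2.0 / math.pi)
heights = [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
           for _ in range(5000)]
hbar_fit = ls_fit_hbar(heights)
```

With real measurements the same loop would simply substitute the paper's two-parameter PDF for `rayleigh_pdf` and search over γ.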
Anonymous
2010-01-01
A new noise reduction method for nonlinear signals based on maximum variance unfolding (MVU) is proposed. The noisy signal is first embedded into a high-dimensional phase space based on phase-space reconstruction theory, and the manifold learning algorithm MVU is then used to perform nonlinear dimensionality reduction on the phase-space data in order to separate the low-dimensional manifold representing the attractor from the noise subspace. Finally, the noise-reduced signal is obtained by reconstructing the low-dimensional manifold. Simulation results on the Lorenz system show that the proposed MVU-based noise reduction method outperforms the KPCA-based method and has the advantages of simple parameter estimation and low parameter sensitivity. The proposed method is applied to fault detection in a vibration signal from the rotor-stator of an aero engine with a slight rubbing fault. The denoised results show that slight rubbing features overwhelmed by noise can be effectively extracted by the proposed noise reduction method.
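A minimal sketch of this embed-reduce-reconstruct pipeline, with linear PCA standing in for MVU (MVU itself requires a semidefinite-program solver, which is beyond a short example); the test signal, noise level, embedding dimension and retained-component count are all assumptions:

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Phase-space reconstruction: map a scalar series into R^dim using
    delay coordinates (Takens embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def manifold_denoise(x, dim=10, keep=3):
    """Embed, project onto the leading 'keep' principal components
    (the low-dimensional manifold), then average each sample's multiple
    appearances across delay coordinates back into a scalar series."""
    X = delay_embed(x, dim)
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Xd = (U[:, :keep] * S[:keep]) @ Vt[:keep] + mu   # low-rank manifold
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(Xd.shape[0]):
        for d in range(dim):
            out[i + d] += Xd[i, d]   # x[i+d] appears in row i, column d
            cnt[i + d] += 1
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 2000)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = manifold_denoise(noisy)
```

The diagonal-averaging reconstruction step is shared with singular-spectrum analysis; swapping the SVD projection for an MVU embedding gives the nonlinear variant the abstract describes.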
Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick
2015-01-01
Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation, via unreduced gametes, to be its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism at both the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
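At the individual level, a method of this kind amounts to a likelihood-ratio comparison of the FDR and SDR hypotheses across independent markers. The sketch below uses made-up per-marker heterozygous-restitution probabilities rather than the paper's centromere-distance-derived values, and a simple Bernoulli model per marker; it is an illustration of the decision rule, not the paper's likelihood:

```python
import math

def loglik(obs_het, p_het):
    """Bernoulli log-likelihood of the observed heterozygosity pattern
    given one mechanism's per-marker heterozygosity probabilities."""
    ll = 0.0
    for o, p in zip(obs_het, p_het):
        ll += math.log(p if o else 1.0 - p)
    return ll

def classify(obs_het, p_fdr, p_sdr, threshold=math.log(100)):
    """Likelihood-ratio classification of a single 2n-gamete-derived hybrid.
    p_fdr / p_sdr give the probability that each marker stays heterozygous
    under FDR resp. SDR; declare a mechanism only when the likelihood ratio
    exceeds 100:1, otherwise return 'inconclusive'."""
    llr = loglik(obs_het, p_fdr) - loglik(obs_het, p_sdr)
    if llr > threshold:
        return "FDR"
    if llr < -threshold:
        return "SDR"
    return "inconclusive"

# five independent markers close to their centromeres (illustrative values):
# FDR tends to retain centromeric heterozygosity, SDR tends to lose it
p_fdr = [0.95, 0.90, 0.92, 0.90, 0.94]
p_sdr = [0.05, 0.10, 0.08, 0.10, 0.06]
sdr_like = classify([False] * 5, p_fdr, p_sdr)   # all markers homozygous
fdr_like = classify([True] * 5, p_fdr, p_sdr)    # all markers heterozygous
```

Markers far from the centromere push `p_fdr` and `p_sdr` toward each other, shrinking the likelihood ratio, which is why the paper stresses selecting markers very close to the centromere.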
N. Alavizadeh
2017-01-01
Aims: Apelin is an adipokine secreted from adipose tissue that has positive effects against insulin resistance. The aim of this study was to investigate the effect of 8 weeks of aerobic exercise on apelin levels and the insulin resistance index in sedentary men. Materials & Methods: In this semi-experimental study with a controlled-group pretest/posttest design in 2015, 27 healthy sedentary men living in Mashhad City, Iran, were selected by convenience sampling. They were divided into two groups: an experimental group (n=14) and a control group (n=13). The volunteers in the trained group participated in 8 weeks of aerobic exercise, 3 days/week, at 75-85% of maximum oxygen consumption, for 60 minutes per session. The research variables were assessed before and after the intervention in both groups. The collected data were analyzed in SPSS 20 using paired and independent-samples t-tests. Findings: The 8-week aerobic exercise significantly decreased weight, BMI, and the levels of apelin, insulin and the insulin resistance index, and increased maximum oxygen consumption, in the experimental-group sedentary men (p<0.05). Moreover, there were significant differences in the levels of FBS, insulin, apelin, the insulin resistance index and maximum oxygen consumption between the experimental and control groups (p<0.05). Conclusion: 8 weeks of aerobic exercise reduces apelin levels and the insulin resistance index in sedentary men.
Stochastic homogenization of rate-independent systems and applications
Heida, Martin
2017-05-01
We study the stochastic and periodic homogenization of 1-homogeneous convex functionals. We prove some convergence results with respect to stochastic two-scale convergence, which are related to classical Γ-convergence results. The main result is a general liminf-estimate for a sequence of 1-homogeneous functionals and a two-scale stability result for sequences of convex sets. We apply our results to the homogenization of rate-independent systems with 1-homogeneous dissipation potentials and quadratic energies. In these applications, both the energy and the dissipation potential have an underlying stochastic microscopic structure. We study the particular homogenization problems of Prandtl-Reuss plasticity, Tresca friction on a macroscopic surface and Tresca friction on microscopic fissures.
39 CFR 3015.3 - Decrease in rates of general applicability.
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Decrease in rates of general applicability. 3015.3... PRODUCTS § 3015.3 Decrease in rates of general applicability. (a) When the Postal Service determines to change a rate or rates of general applicability for any competitive product that results in a decrease in...
Response of Tomato on Calcareous Soils to Different Seedbed Phosphorus Application Rates
ZHANG Xiao-Sheng; LIAO Hong; CHEN Qing; P. CHRISTIE; LI Xiao-Lin; ZHANG Fu-Suo
2007-01-01
Field experiments were conducted with five rates (0, 75, 150, 225, and 450 kg P2O5 ha-1) of seedbed P fertilizer application to investigate the yield response of tomato to fertilizer P rate on calcareous soils with widely different levels of Olsen P (13-142 mg kg-1) at 15 sites in suburban counties of Beijing in 1999. Without P fertilizer application, tomato yield generally increased with increasing soil test P levels, and the agronomic soil test P level (Olsen method) required to achieve 85% or 95% of maximum tomato yield was 50 or 82 mg kg-1 soil, respectively. With regard to marketable yield, noticeable responses to applied P were observed in fields where Olsen-P levels were < 50 mg kg-1. On the basis of a linear-plateau regression, the optimum seedbed P application rate in the P-insufficient fields was 125 kg P2O5 ha-1, or about 1.5-2 times the P removal by harvested tomato plants. In contrast, in fields with moderate (50 < Olsen P < 90 mg kg-1) or high (Olsen P > 90 mg kg-1) available P, there was no marked effect on tomato fruit yield. Field survey data indicated that in most fields with conventional P management, a P surplus typically occurred. Thus, once the soil test P level reached the optimum for crop yield, it was recommended that P fertilizer application be restricted or eliminated to minimize negative environmental effects.
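A linear-plateau regression of the kind used to locate the optimum rate can be sketched as below; the yield numbers are hypothetical stand-ins, not the Beijing field data, and the join point is found by a simple grid search rather than a dedicated nonlinear solver:

```python
import numpy as np

def linear_plateau_fit(x, y, n_grid=200):
    """Fit y = a + b*x for x < c and y = a + b*c for x >= c by
    grid-searching the join point c and solving the linear part by
    ordinary least squares at each candidate c."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for c in np.linspace(x.min() + 1e-9, x.max(), n_grid):
        z = np.minimum(x, c)                       # kink at c
        A = np.column_stack([np.ones_like(z), z])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((y - A @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], c)
    _, a, b, c = best
    return a, b, c          # c plays the role of the optimum application rate

# hypothetical tomato-yield response (t/ha) to seedbed P rates (kg P2O5/ha)
rates = np.array([0.0, 75.0, 150.0, 225.0, 450.0])
yields = np.array([40.0, 58.0, 70.0, 71.0, 70.5])
a, b, c = linear_plateau_fit(rates, yields)
```

Rates beyond the fitted join point `c` buy no extra yield under this model, which is the statistical basis for capping P application in the high-P fields.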
Probabilistic pipe fracture evaluations for leak-rate-detection applications
Rahman, S.; Ghadiali, N.; Paul, D.; Wilkowski, G. [Battelle, Columbus, OH (United States)
1995-04-01
Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break.
Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin
2016-09-09
We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale $\mu_r$ of the QCD coupling $\alpha_s(\mu_r)$ is the Higgs mass and then to vary this choice over the range $m_H/2 < \mu_r < 2 m_H$ in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal $\beta$ terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking $m_H = 125$ GeV, the PMC predictions for the $pp \to HX$ Higgs inclusive hadroproduction cross sections for various LHC center-of-mass energies are $\sigma_{\rm Incl}|_{7\,\rm TeV} = 21.21^{+1.36}_{-1.32}$ pb, $\sigma_{\rm Incl}|_{8\,\rm TeV} = 27.37^{+1.65}_{-1.59}$ pb, and $\sigma_{\rm Incl}|_{13\,\rm TeV} = 65.72^{+3.46}_{-3.0}$ pb. We also predict the fiducial cross section $\sigma_{\rm fid}(pp \to H \to \gamma\gamma)$: $\sigma_{\rm fid}|_{7\,\rm TeV} = 30.1^{+2.3}_{-2.2}$ fb, $\sigma_{\rm fid}|_{8\,\rm TeV} = 38.3^{+2.9}_{-2.8}$ fb, and $\sigma_{\rm fid}|_{13\,\rm TeV} = 85.8^{+5.7}_{-5.3}$ fb. The error limits in these predictions include the small residual high...
Blok, Chris; Jackson, Brian E.; Guo, Xianfeng; Visser, De Pieter H.B.; Marcelis, Leo F.M.
2017-01-01
Growing on rooting media other than soils in situ (i.e., substrate-based growing) allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields as transport rates of...
A Novel Dual-Rate Sampling Switched-Capacitor Configuration and Its Application
YU Qi; YANG Mohua; CHENG Yu; WANG Xiangzhan; LIU Changxiao; LAN Jialong
2003-01-01
In order to realize an accurate bilinear transformation from the s- to the z-domain, a novel switched-capacitor configuration is proposed in light of the principles of dual-rate sampling and charge conservation; it has also been used to build a 5th-order elliptic lowpass filter. The filter is simulated and measured using typical 0.34 μm/3.3 V Si CMOS process models, special fully differential operational amplifiers and CMOS transfer-gate switches, and achieves an 80 MHz sampling rate, 17.8 MHz cutoff frequency, 0.052 dB maximum passband ripple, 42.1 dB minimum stopband attenuation and 74 mW quiescent power dissipation. At the same time, the dual-rate sampling topology breaks the traditional restrictions on filters introduced by the unity-gain bandwidth and slew rate of operational amplifiers, and effectively improves their performance in high-frequency applications. It has been applied in the design of an anti-alias filter in the analog front end of a video decoder IC with 15 MHz signal frequency.
Bush, R. T.; McInerney, F. A.; Baczynski, A. A.; Wing, S. L.
2011-12-01
Long chain n-alkanes (C21-C35) are well-known as biomarkers of terrestrial plants. They can be preserved across a wide range of terrestrial and marine environments, survive in the sedimentary record for millions of years, and can serve as proxies for ancient environments. Most n-alkane records are derived from sediments rather than directly from fossil leaves. However, little is known about the fidelity of the n-alkane record: how and where leaf preservation relates to n-alkane preservation and how patterns of n-alkane carbon isotope ratios (δ13C) compare to living relatives. To examine these questions, we analyzed n-alkanes from fluvial sediments and individual leaf fossils collected in the Bighorn Basin, Wyoming, across the Paleocene-Eocene Thermal Maximum (PETM) carbon isotope excursion. We assessed the fidelity of the n-alkane signature from individual fossil leaves via three separate means. 1) Spatial variations were assessed by comparing n-alkane concentrations on a fossil leaf and in sediments both directly adjacent to the leaf and farther away. Absolute concentrations were greater within the compression fossil than in the directly adjacent sediment, which were in turn greater than in more distant sediment. 2) n-Alkane abundances and distributions were examined in fossil leaves having a range of preservational quality, from fossils with intact cuticle to carbonized fossils lacking cuticle and higher-order venation. The best preserved fossils preserved a higher concentration of n-alkanes and showed the most similar n-alkane distribution to living relatives. However, a strong odd over even predominance suggests a relatively unmodified plant source occurred in all samples regardless of preservation state. 3) n-Alkane δ13C values were measured for both fossil leaves and their living relatives. Both the saw-tooth pattern of δ13C values between odd and even chain lengths and the general decrease in δ13C values with increasing chain length are consistent with
Weiser, Deborah Anne
Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.
Sander, Pia; Mouritsen, L; Andersen, J Thorup
2002-01-01
OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume as part of a "minimal care" assessment programme for women with urinary incontinence in detecting clinically significant bladder emptying problems. MATERIAL AND METHODS: … female urinary incontinence. Thus, primary health care providers can assess women based on simple guidelines, without expensive equipment for assessment of urine flow rate and residual urine. …
Schiefelbein, Sarah; Fröhlich, Alexander; John, Gernot T; Beutler, Falco; Wittmann, Christoph; Becker, Judith
2013-08-01
Dissolved oxygen plays an essential role in aerobic cultivation, especially due to its low solubility. Under unfavorable conditions of mixing and vessel geometry it can become limiting. This, however, is difficult to predict, and thus the right choice of an optimal experimental set-up is challenging. To overcome this, we developed a method which allows a robust prediction of the dissolved oxygen concentration during aerobic growth. It integrates newly established mathematical correlations for the determination of the volumetric gas-liquid mass transfer coefficient (kLa) in disposable shake flasks from the filling volume, the vessel size and the agitation speed. Tested for the industrial production organism Corynebacterium glutamicum, this enabled a reliable design of culture conditions and allowed prediction of the maximum possible cell concentration without oxygen limitation.
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to establish reliable guide values for heart rate (HF) and blood pressure (RR) at defined submaximal exertion levels, considering age, gender and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. For interpretation, data from 28 women and 59 men were ultimately used. We found gender differences for HF and RR. Further, we noted significant correlations between HF and age, as well as between RR and body mass, at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HF and RR at submaximal exercise levels.
Effects of Nitrogen Rates and Application Method on Grain Yield and Yield
Sh Babazadeh
2012-06-01
Proper application of N fertilizer and its optimization for increasing the economic yield of rice is definitely important. In order to determine the best N application method and amount according to the growth stages of hybrid rice, an experiment was carried out at the experimental farm of RRII as a factorial experiment based on a randomized complete block design with 3 replications. The treatments included 6 application methods as follows: total nitrogen at the transplanting stage; 50% at transplanting + 50% at early tillering; 50% at transplanting + 50% at panicle initiation; 50% at transplanting + 25% at maximum tillering + 25% at booting; 34% at transplanting + 33% at early tillering + 33% at booting; and 70% at transplanting + 30% at panicle initiation. Three levels of nitrogen (90, 120 and 150 kg/ha) from a urea source were also used. Recorded traits were grain yield and yield components. Results showed significant interactions between split methods and N rates on yield, flag leaf area, filled and unfilled grain number per panicle and percentage fertility (p
Video rate color region segmentation for mobile robotic applications
de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline
2005-08-01
Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But, whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and comparisons with other methods, in terms of result quality and temporal performance, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.
Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper investigates the volatility of, and conditional relationships among, inflation rates, exchange rates and interest rates, and constructs models using multivariate GARCH DCC and BEKK specifications on Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2%, and the yearly weighted depreciation of the cedi against the US dollar over the period was 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
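DCC and BEKK are multivariate extensions built on the univariate GARCH(1,1) recursion h_t = ω + α·r_{t−1}² + β·h_{t−1}: DCC adds a dynamic correlation on top of univariate recursions, while BEKK makes the recursion matrix-valued. A minimal sketch of that shared building block follows (simulation plus Gaussian log-likelihood; the parameter values are arbitrary, not estimates from the Ghana data):

```python
import math
import random

def garch11_simulate(n, omega, alpha, beta, seed=0):
    """Simulate a GARCH(1,1) return series:
    r_t = sqrt(h_t) * z_t,  h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)          # start at unconditional variance
    r = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        r.append(math.sqrt(h) * z)
        h = omega + alpha * r[-1] ** 2 + beta * h
    return r

def garch11_loglik(r, omega, alpha, beta):
    """Gaussian log-likelihood of a GARCH(1,1) model for a return series;
    maximizing this over (omega, alpha, beta) is the estimation step that
    DCC and BEKK generalize to several series at once."""
    h = sum(x * x for x in r) / len(r)        # initialize at sample variance
    ll = 0.0
    for x in r:
        ll += -0.5 * (math.log(2.0 * math.pi * h) + x * x / h)
        h = omega + alpha * x * x + beta * x * 0 + beta * h - beta * x * 0
    return ll

r = garch11_simulate(2000, omega=0.1, alpha=0.1, beta=0.8)
ll_true = garch11_loglik(r, 0.1, 0.1, 0.8)
ll_flat = garch11_loglik(r, 1.0, 0.0, 0.0)    # constant-variance alternative
```

On data generated with volatility clustering, the GARCH likelihood at the true parameters exceeds that of a constant-variance model with the same unconditional variance, which is what makes the recursion identifiable.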
49 CFR 260.25 - Additional information for Applicants not having a credit rating.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Additional information for Applicants not having a... Financial Assistance § 260.25 Additional information for Applicants not having a credit rating. Each application submitted by Applicants not having a recent credit rating from one or more nationally...
Xu, Xiaohong; Chen, Yu; Jia, Haiwei
2009-07-01
The paper studies the relationship between the interest rate and the inflation rate. We use the stepwise regression method to build a mathematical model of the relationship between the two, and the model has passed the significance test. We then use the model to discuss the influence on the economy of adjusting the deposit rate, providing theoretical support for government policy-making.
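A sketch of forward stepwise selection of the kind the abstract describes, run on synthetic data (not the paper's series); the variable names, signal strength and SSE-based selection criterion are all assumptions for illustration:

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Forward stepwise selection: greedily add the regressor that most
    reduces the residual sum of squares of an OLS fit with intercept."""
    selected, remaining = [], list(range(X.shape[1]))

    def sse(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.sum((y - A @ beta) ** 2))

    current = sse([])
    while remaining and len(selected) < max_terms:
        best_c, best_s = None, current
        for c in remaining:
            s = sse(selected + [c])
            if s < best_s:
                best_c, best_s = c, s
        if best_c is None:
            break                              # no candidate improves the fit
        selected.append(best_c)
        remaining.remove(best_c)
        current = best_s
    return [names[c] for c in selected]

# synthetic monthly data: inflation driven by the deposit rate plus noise;
# the unrelated series should not be selected ahead of the deposit rate
rng = np.random.default_rng(42)
n = 120
deposit_rate = rng.normal(5.0, 1.0, n)
unrelated = rng.normal(0.0, 1.0, n)
inflation = 1.5 + 0.8 * deposit_rate + 0.1 * rng.normal(size=n)
chosen = forward_stepwise(np.column_stack([unrelated, deposit_rate]),
                          inflation, ["unrelated", "deposit_rate"], max_terms=1)
```

A production version would gate each addition on an F-test or information criterion rather than any SSE reduction, but the greedy loop is the same.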
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic knee extension training was performed 3 x week(-1) for 10 weeks. Maximal isometric strength (+11.2%) and RFD (measured from 0-30/50/100/200 ms, respectively; +10.5%-20.5%) increased after 10 weeks (P … training mode). Peak EMG amplitude and rate of EMG rise were not significantly altered with training or detraining. Subjects with below-median normalized RFD (RFD/MVC) at 0 weeks significantly increased RFD after 5 and 10 weeks of training, which was associated with increased neuromuscular activity. Subjects who maintained their higher RFD after detraining …
Thornley, John H M; Parsons, Anthony J
2014-02-07
Treating resource allocation within plants, and between plants and associated organisms, is essential for plant, crop and ecosystem modelling. However, it is still an unresolved issue. It is also important to consider quantitatively when it is efficient and to what extent a plant can invest profitably in a mycorrhizal association. A teleonomic model is used to address these issues. A six state-variable model giving exponential growth is constructed. This represents carbon (C), nitrogen (N) and phosphorus (P) substrates with structure in shoot, root and mycorrhiza. The shoot is responsible for uptake of substrate C, the root for substrates N and P, and the mycorrhiza also for substrates N and P. A teleonomic goal, maximizing proportional growth rate, is solved analytically for the allocation fractions. Expressions allocating new dry matter to shoot, root and mycorrhiza are derived which maximize growth rate. These demonstrate several key intuitive phenomena concerning resource sharing between plant components and associated mycorrhizae. For instance, if root uptake rate for phosphorus is equal to that achievable by mycorrhiza and without detriment to root uptake rate for nitrogen, then this gives a faster growing mycorrhizal-free plant. However, if root phosphorus uptake is below that achievable by mycorrhiza, then a mycorrhizal association may be a preferred strategy. The approach offers a methodology for introducing resource sharing between species into ecosystem models. Applying teleonomy may provide a valuable short-term means of modelling allocation, avoiding the circularity of empirical models, and circumventing the complexities and uncertainties inherent in mechanistic approaches. However it is subjective and brings certain irreducible difficulties with it.
Denisov, S. L.; Korolkov, A. I.
2017-07-01
A study of the phenomenon of diffraction of acoustic waves, applied to the problem of noise shielding, has been carried out using the maximum-length-sequence method. Rectangular plates and an aircraft model of integrated layout are used as the screens. In the study of noise shielding by the aircraft model, the reciprocity theorem is used. A comparison of experimental results with calculations performed in the framework of the geometrical theory of diffraction (GTD) is presented. On the basis of these calculations, the contributions of different areas of the shielding surface to the full acoustic field are identified. For the aircraft model, the shielding factor is calculated as a function of frequency.
Dai, Teng-fei; Xi, Ben-ye; Yan, Xiao-li; Jia, Li-ming
2015-06-01
A field experiment was conducted to investigate the effects of fertilization methods, i.e., drip (DF) and furrow fertilization (GF), and nitrogen (N) application rates (25, 50, 75 g N·plant⁻¹·time⁻¹) on the dynamics of soil N vertical migration in a Populus × euramericana cv. 'Guariento' plantation. The results showed that soil NH₄⁺-N and NO₃⁻-N contents decreased with increasing soil depth under all fertilization methods and N application rates. In the DF treatment, soil NH₄⁺-N and NO₃⁻-N were mainly concentrated in the 0-40 cm soil layer, and their contents first rose and then fell, reaching maximum values at the 5th day (211.1 mg·kg⁻¹) and 10th day (128.8 mg·kg⁻¹) after fertilization, respectively. In the GF treatment, soil NH₄⁺-N and NO₃⁻-N were mainly concentrated in the 0-20 cm layer; the NO₃⁻-N content rose gradually and reached its maximum at the 20th day (175.7 mg·kg⁻¹) after fertilization, while the NH₄⁺-N content did not change significantly. Overall, the N fertilizer had an effect within 20 days in the DF treatment, and for more than 20 days in the GF treatment. In the DF treatment, the content and migration depth of soil NH₄⁺-N and NO₃⁻-N increased with the N application rate. In the GF treatment, the NO₃⁻-N content increased with the N application rate, but the NH₄⁺-N content was not influenced. Under the DF treatment, the hydrolysis rate, nitrification rate and migration depth of urea were higher or larger than under the GF treatment, and more N accumulated in deep soil as the N application rate increased. Considering the distribution characteristics of fine roots and soil N, DF would be the better fertilization method in the P. × euramericana cv. 'Guariento' plantation, since it could supply N to a larger distribution area of fine roots. When the N application rate was 50 g·tree⁻¹ each time, nitrogen was mainly distributed in the zone of fine roots and
Clavery Tungaraza
2003-09-01
The influence of bacterial activities on inorganic nutrients has always affected total phytoplankton uptake rates, owing to the absence of a reliable method that can exclude these effects. The use of natural samples to determine the contribution of bacterial activities has been based on the size fractionation method which, unfortunately, is encumbered with uncertainties, especially because of the size overlap between bacteria and phytoplankton communities. In this paper, the results are reported of an estimation of bacterial activities by the use of inhibitors (antibiotics). It was shown that the contribution of bacterial activities to the uptake of nitrogenous nutrients was highest for ammonium (79%), followed by nitrate (72%) and urea (62%). In a second set of experiments the concentration of ammonium was raised by 5 µM. This was done to avoid nutrient limitation resulting from the absence of recycled nutrients following the addition of antibiotics, and the maximum contribution of bacterial activity to the uptake rate of ammonium increased to 87%. It can be concluded that the use of inhibitors is a good method, a reliable alternative to the fractionation method. However, it is important to note that inhibitors can affect both phytoplankton growth and the nutrient recycling process. Our results indicate that the application of antibiotics had measurable effects not only on the target bacteria but also on the uptake behaviour of phytoplankton. Our observations were therefore limited to the period when there was no effect on the phytoplankton, as demonstrated by a carbon protein incorporation experiment.
Kemmler, Wolfgang; Schliffka, Rebecca; Mayhew, Jerry L; von Stengel, Simon
2010-07-01
We evaluated the effect of whole-body electromyostimulation (WB-EMS) during dynamic exercises over 14 weeks on anthropometric, physiological, and muscular parameters in postmenopausal women. Thirty women (64.5 +/- 5.5 years) with experience in physical training (>3 years) were randomly assigned either to a control group (CON, n = 15) that maintained their general training program (2 x 60 min/wk of endurance and dynamic strength exercise) or to an electromyostimulation group (WB-EMS, n = 15) that additionally performed 20-minute WB-EMS training (2 x 20 min/10 d). Resting metabolic rate (RMR) determined from spirometry was selected to indicate muscle mass. In addition, body circumferences, subcutaneous skinfolds, strength, power, and dropout and adherence values were assessed. Resting metabolic rate was maintained in WB-EMS (-0.1 +/- 4.8 kcal/h) and decreased in CON (-3.2 +/- 5.2 kcal/h, p = 0.038); although group differences were not significant (p = 0.095), there was a moderately strong effect size (ES = 0.62). Sum of skinfolds (28.6%) and waist circumference (22.3%) significantly decreased in WB-EMS (p = 0.001; ES = 1.37 and 1.64, respectively), whereas both parameters increased in CON (1.4% and 0.1%, respectively). Isometric strength changes of the trunk extensors and leg extensors differed significantly (p ≤ 0.006) between WB-EMS and CON (9.9% vs. -6.4%, ES = 1.53; 9.6% vs. -4.5%, ES = 1.43, respectively). In summary, adjunct WB-EMS training significantly exceeds the effect of isolated endurance and resistance type exercise on fitness and fatness parameters. Further, we conclude that for elderly subjects unable or unwilling to perform dynamic strength exercises, electromyostimulation may be a gentle alternative for maintaining lean body mass, strength, and power.
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte-Carlo simulations verify our new analytical results.
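Why the maximum of two variates governs selection combining can be checked with a small Monte-Carlo sketch: the combiner picks the stronger branch, so it is in outage only when both branches fade below threshold, and for independent branches the outage probability is the square of the single-branch value. The single (not double) generalized-gamma sampler and all parameter values here are illustrative stand-ins for the paper's model with pointing errors.

```python
import random

random.seed(1)

def gen_gamma(a, d, p):
    """Sample a generalized gamma variate X = a * G**(1/p) with
    G ~ Gamma(d/p, 1). Illustrative stand-in for the paper's modified
    double generalized gamma channel gain."""
    return a * random.gammavariate(d / p, 1.0) ** (1.0 / p)

def single_outage_prob(threshold, n=100_000):
    """Outage probability of one branch: P(gain < threshold)."""
    return sum(1 for _ in range(n) if gen_gamma(1.0, 2.0, 1.5) < threshold) / n

def sc_outage_prob(threshold, n=100_000):
    """Dual-branch selection combining keeps the stronger branch, so the
    combiner output is the MAXIMUM of the two gains: outage requires both
    branches to fall below the threshold."""
    return sum(
        1 for _ in range(n)
        if max(gen_gamma(1.0, 2.0, 1.5), gen_gamma(1.0, 2.0, 1.5)) < threshold
    ) / n
```

For independent branches P(max < t) = P(branch < t)², which the simulation reproduces; the paper's closed-form distributions of the maximum make the same quantity analytic.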
Eduardo Marcel Fernandes Nascimento
2011-08-01
The objective of this study was to analyze the heart rate (HR) profile plotted against incremental workloads (IWL) during a treadmill test using three mathematical models [linear, linear with 2 segments (Lin2), and sigmoidal], and to determine the best model for identifying the HR threshold that could be used as a predictor of the ventilatory thresholds (VT1 and VT2). Twenty-two men underwent a treadmill incremental test (retest group: n=12) at an initial speed of 5.5 km·h⁻¹, with increments of 0.5 km·h⁻¹ at 1-min intervals until exhaustion. HR and gas exchange were continuously measured and subsequently converted to 5-s and 20-s averages, respectively. The best model was chosen based on residual sum of squares and mean square error. The HR/IWL ratio was better fitted with the Lin2 model in the test and retest groups (p < 0.05). During a treadmill incremental test, the HR/IWL ratio seems to be better fitted with a Lin2 model, which makes it possible to determine the HR threshold that coincides with VT1.
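The two-segment ("Lin2") fit can be sketched as a continuous hinge regression with a scanned breakpoint; this is an illustrative reconstruction of the idea, not the authors' exact fitting procedure, and the HR-like data below are invented.

```python
import numpy as np

def fit_two_segment(x, y):
    """Continuous two-segment linear fit y ≈ a + b*x + c*max(0, x - x0),
    scanning candidate breakpoints x0 and solving least squares for each.
    Returns (x0, coeffs, sse). Illustrative reconstruction of the 'Lin2'
    idea, not the authors' exact procedure."""
    best = None
    for x0 in x[2:-2]:  # keep at least two points on each side
        hinge = np.maximum(0.0, x - x0)
        A = np.column_stack([np.ones_like(x), x, hinge])
        coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coeffs - y) ** 2))
        if best is None or sse < best[2]:
            best = (float(x0), coeffs, sse)
    return best

# Invented HR-like data: slope 6 per unit workload, deflecting by +4
# above workload 10 (the breakpoint plays the role of the HR threshold).
x = np.linspace(5.5, 14.5, 19)
y = 90.0 + 6.0 * (x - 5.5) + 4.0 * np.maximum(0.0, x - 10.0)
x0, coeffs, sse = fit_two_segment(x, y)
```

On these noise-free data the scan recovers the breakpoint at workload 10 with essentially zero residual; with real HR data the breakpoint minimizing the residual sum of squares is reported as the HR threshold.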
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe, and a method thereof, for real-time en-face topographic mapping of near-surface heterogeneity for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses at its maximum 128 copper-coated 750 µm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field-of-view in real time. Visualization of a subsurface margin of strong attenuation contrast at a depth of up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
Denzer-Lippmann, Melanie Y.; Bachlechner, Stephan; Wielopolski, Jan; Fischer, Marie; Buettner, Andrea; Doerfler, Arndt; Schöfl, Christof; Münch, Gerald; Kornhuber, Johannes; Thürauf, Norbert
2017-01-01
Stomach distension and energy per time are factors influencing satiety. Moreover, different rates of nutrient intake induce different stomach distension. The goal of our studies was to elucidate the influence of different oral rates of nutrient intake (normal rate versus slow intervalled rate; study I) and intravenous low rate macronutrient application (protein, carbohydrate, fat) or placebo (study II) on psychophysical function. The pilot studies investigated the effects of 1) study I: a mixed nutrient solution (1/3 protein, 1/3 fat, 1/3 carbohydrates) 2) study II: intravenous macronutrient infusions (protein, carbohydrate, fat) or placebo on psychophysical function (mood, hunger, food craving, alertness, smell intensity ratings and hedonic ratings) in human subjects. In study I 10 male subjects (age range: 21–30 years) completed the study protocol participating in both test conditions and in study II 20 male subjects (age range: 19–41 years) completed the study protocol participating in all test conditions. Additionally, metabolic function was analyzed and cognitive and olfactory tests were conducted twice starting 100 min before the beginning of the intervention and 240 min after. Psychophysical (mood, hunger, fat-, protein-, carbohydrate-, sweets- and vegetable-craving), alertness and metabolic function tests were performed seven times on each examination day. Greater effects on hunger and food cravings were observed for normal rate of intake compared to slow intervalled rate of intake and intravenous low rate macronutrient application. Our findings potentially confirm that volume of the food ingested and a higher rate of energy per time contribute to satiety during normal rate of food intake, while slow intervalled rate of food intake and intravenous low rate macronutrient application showed no effects on satiation. Our results motivate the view that a certain amount of volume of the food ingested and a certain energy per time ratio are necessary to reduce
Gomez-Paccard, Miriam; Osete, Maria Luisa; Chauvin, Annick; Pérez-Asensio, Manuel; Jimenez-Castillo, Pedro
2014-05-01
Available European data indicate that during the past 2500 years there have been periods of rapid geomagnetic intensity fluctuations interspersed with periods of little change. The challenge now is to describe these rapid changes precisely. Because precisely dated heated materials are difficult to obtain, new high-quality archeomagnetic data from archeological heated materials found in well-defined superposed stratigraphic units are particularly valuable for a high-resolution description of past geomagnetic field intensity changes. In this work we report the archeomagnetic study of several groups of ceramic fragments from southeastern Spain that belong to 14 superposed stratigraphic levels corresponding to a surface no bigger than 3 m by 7 m. Between four and eight ceramic fragments were selected per stratigraphic unit. The ages of the pottery fragments range from the second half of the 7th to the 11th centuries. The dates were established by three radiocarbon dates and by archeological/historical constraints, including typological comparisons and well-controlled stratigraphic constraints. Between two and four specimens per pottery fragment were studied. The classical Thellier and Thellier method, including pTRM checks and TRM anisotropy and cooling rate corrections, was used to estimate paleointensities at the specimen level. All accepted results correspond to well-defined single components of magnetization going toward the origin and to high-quality paleointensity determinations. From these experiments nine new high-quality mean intensities have been obtained. The new data provide an improved description of the sharp, abrupt intensity changes that took place in this region between the 7th and the 11th centuries. The results confirm that several rapid intensity changes (of about 15-20 µT/century) took place in Western Europe during the recent history of the Earth.
Francis, Dawn L
2011-03-01
The adenoma detection rate (ADR) is a quality benchmark for colonoscopy. Many practices find it difficult to determine the ADR because it requires a combination of endoscopic and histologic findings. It may be possible to apply a conversion factor to estimate the ADR from the polyp detection rate (PDR).
Can mock interviewers' personalities influence their personality ratings of applicants?
Hilliard, Thomas; Macan, Therese
2009-03-01
The authors examined individual difference and self-regulatory variables to understand how an interviewer rates a candidate's personality. Participants were undergraduate students at a large midwestern university in the United States who completed measures of individual differences, read an employment interview transcript involving a candidate applying for a customer service job, and rated the candidate's personality. Participants' agreeableness, social skills, and communion striving were positively associated with their ratings of the candidate's helpfulness and obedience. The authors provide a foundation for further research on interviewer effectiveness and the processes underlying the employment interview.
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models
Nortey, Ezekiel NN; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper investigated the volatility of and conditional relationships among inflation rates, exchange rates and interest rates, and constructed models using multivariate GARCH DCC and BEKK specifications with Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 was 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period was 20.4%. There was evidence that, t...
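The relation between the quoted cumulative and yearly depreciation figures can be checked with compound-growth arithmetic; the 24-year span (January 1990 to December 2013) is an assumption, and a simple geometric average will not exactly reproduce the paper's weighted 20.4% figure.

```python
def annualized_rate(cumulative_pct, years):
    """Constant yearly rate r that compounds to the observed cumulative
    percentage change: (1 + r)**years = 1 + cumulative_pct/100."""
    return (1.0 + cumulative_pct / 100.0) ** (1.0 / years) - 1.0

# The abstract's cumulative cedi/USD depreciation over 1990-2013,
# taken here as a 24-year span (an assumption):
r = annualized_rate(7010.2, 24)   # simple geometric average, ~19.4%/year
```

The geometric average (about 19.4% per year) sits close to the paper's weighted yearly figure of 20.4%; the residual gap reflects the weighting scheme rather than an inconsistency.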
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
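The Mean Energy Model mentioned in the abstract has a standard computational form: among all distributions with a prescribed mean "energy", the entropy maximizer is a Gibbs distribution p_i ∝ exp(−βE_i), with β fixed by the constraint. A minimal sketch over a finite state space (bisection on β; the three-level example is invented):

```python
import math

def maxent_gibbs(energies, mean_energy, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over finite states under a mean-energy
    constraint: p_i ∝ exp(-beta * E_i), with beta fixed by bisection so
    that sum_i p_i * E_i equals the prescribed mean (Mean Energy Model)."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    for _ in range(iters):          # mean_at(beta) is decreasing in beta
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return beta, [wi / z for wi in w]

# Three-level toy system: constraining the mean to its unconstrained value
# recovers the uniform (entropy-maximizing) distribution, beta ≈ 0.
beta, p = maxent_gibbs([0.0, 1.0, 2.0], mean_energy=1.0)
```

Lowering the prescribed mean below the unconstrained value drives β positive and tilts probability toward low-energy states, the familiar thermodynamic picture that the Code Length Game reinterprets in coding terms.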
Säwén, Elin; Massad, Tariq; Landersjö, Clas; Damberg, Peter; Widmalm, Göran
2010-08-21
The conformational space available to the flexible molecule α-D-Manp-(1→2)-α-D-Manp-OMe, a model for the α-(1→2)-linked mannose disaccharide in N- or O-linked glycoproteins, is determined using experimental data and molecular simulation combined with a maximum entropy approach that leads to a converged population distribution utilizing different input information. A database survey of the Protein Data Bank, in which structures containing the constituent disaccharide were retrieved, resulted in an ensemble with >200 structures. Subsequent filtering removed erroneous structures and gave the database (DB) ensemble, having three classes of mannose-containing compounds, viz., N- and O-linked structures, and ligands to proteins. A molecular dynamics (MD) simulation of the disaccharide revealed a two-state equilibrium with a major and a minor conformational state, i.e., the MD ensemble. These two different conformation ensembles of the disaccharide were compared to experimental spectroscopic data measured for the molecule in water solution. However, neither of the two populations was compatible with experimental data from optical rotation, NMR ¹H,¹H cross-relaxation rates, or homo- and heteronuclear ³J couplings. The conformational distributions were subsequently used as background information to generate priors that were used in a maximum entropy analysis. The resulting posteriors, i.e., the population distributions after application of the maximum entropy analysis, still showed notable deviations that were not anticipated based on the prior information. Therefore, reparameterization of homo- and heteronuclear Karplus relationships for the glycosidic torsion angles Φ and Ψ was carried out, in which the importance of electronegative substituents on the coupling pathway was deemed essential, resulting in four derived equations, two ³J(COCC) and two ³J(COCH), being different for the Φ and Ψ torsions, respectively. These Karplus relationships are denoted
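The Karplus relationships being reparameterized here all share the standard three-term form; a sketch of that form (the A, B, C coefficients below are illustrative placeholders, not the values derived in the study):

```python
import math

def karplus(phi_deg, a, b, c):
    """Three-term Karplus relation 3J(phi) = A*cos^2(phi) + B*cos(phi) + C,
    relating a vicinal coupling constant (Hz) to a torsion angle. The
    coefficients are placeholders, not the reparameterized values the
    study derives for the glycosidic Φ and Ψ torsions."""
    phi = math.radians(phi_deg)
    return a * math.cos(phi) ** 2 + b * math.cos(phi) + c

# Hypothetical parameterization: anti-periplanar couplings come out large
# and gauche couplings small, the qualitative content of any Karplus curve.
j_anti = karplus(180.0, a=7.0, b=-1.0, c=0.7)
j_gauche = karplus(60.0, a=7.0, b=-1.0, c=0.7)
```

Fitting such curves to couplings measured across known geometries is what "reparameterization" amounts to; electronegative substituents on the coupling pathway shift the fitted A, B, C, which is why the study derives separate equations for Φ and Ψ.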
Application of semiclassical methods to reaction rate theory
Hernandez, R.
1993-11-01
This work is concerned with the development of approximate methods to describe relatively large chemical systems. This effort has been divided into two primary directions: First, we have extended and applied a semiclassical transition state theory (SCTST) originally proposed by Miller to obtain microcanonical and canonical (thermal) rates for chemical reactions described by a nonseparable Hamiltonian, i.e., most reactions. Second, we have developed a method to describe the fluctuations of decay rates of individual energy states from the average RRKM rate in systems where the direct calculation of individual rates would be impossible. Combined with the semiclassical theory, this latter effort has provided a direct comparison to the experimental results of Moore and coworkers. In SCTST, the Hamiltonian is expanded about the barrier and the "good" action-angle variables are obtained perturbatively; a WKB analysis of the effectively one-dimensional reactive direction then provides the transmission probabilities. The advantages of this local approximate treatment are that it includes tunneling effects and anharmonicity, and it systematically provides a multi-dimensional dividing surface in phase space. The SCTST thermal rate expression has been reformulated, providing increased numerical efficiency (as compared to a naive Boltzmann average), an appealing link to conventional transition state theory (involving a "prereactive" partition function depending on the action of the reactive mode), and the ability to go beyond the perturbative approximation.
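The WKB transmission step has a closed form in the simplest, parabolic-barrier case, which makes the structure of the rate ingredients easy to see; SCTST replaces the parabolic exponent with the perturbatively computed barrier action of the full anharmonic, nonseparable Hamiltonian. Units and parameter values below are illustrative.

```python
import math

def parabolic_transmission(E, V0, hbar_omega_b):
    """WKB transmission probability through a parabolic barrier of height V0
    with imaginary-frequency energy scale hbar*omega_b (same units as E):
        P(E) = 1 / (1 + exp(2*pi*(V0 - E) / hbar_omega_b)).
    SCTST generalizes the exponent to the barrier action theta(E) of the
    full Hamiltonian; the parabolic case is only the simplest illustration."""
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * (V0 - E) / hbar_omega_b))

# Illustrative values: finite tunneling below the barrier, exactly 1/2
# transmission at the barrier top, near-unity far above it.
p_below = parabolic_transmission(0.8, V0=1.0, hbar_omega_b=0.1)
p_top = parabolic_transmission(1.0, V0=1.0, hbar_omega_b=0.1)
```

Boltzmann-averaging P(E) over energies then yields the thermal rate, which is the step whose reformulation the abstract says was made more efficient than a naive average.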
Weissbart, Steven J; Kim, Soo Jeong; Feinn, Richard S; Stock, Jeffrey A
2015-03-01
There has been an increase in the number of applications medical students submit for the National Residency Matching Program (NRMP). These additional applications are associated with significant costs and may contribute to match inefficiency. We explored whether match rates improved in years when an increased number of applications were submitted. We analyzed yearly published data from the NRMP and the Electronic Residency Application Service for 13 specialties. A generalized linear model was used to assess the relationship between the annual match rate and the mean number of applications submitted per applicant, while controlling for the number of positions available and the number of applicants in the given year. Over the last 13 years there has been an increase in the mean number of applications submitted per applicant (P < .05). There was no improvement in the match rate in years when medical students submitted an increased number of applications. Therefore, it would appear that applicants do not benefit from the larger number of applications submitted. Further study is required to assess the cost and benefit of these additional applications.
章顺虎; 赵德文; 陈晓东
2015-01-01
In order to overcome the nonlinearity of the Mises criterion, a new linear yield criterion with a dodecagonal shape of the same perimeter as the Mises locus was derived by means of geometrical analysis. Its specific plastic work rate, expressed as a linear function of the yield stress and the maximum and minimum strains, was also deduced and compared with that of the Mises criterion. The physical meaning of the proposed yield criterion is that yielding of materials begins when the shear yield stress τs reaches the magnitude of 0.594σs. By introducing the Lode parameter, comparison of the evolution expressions of the proposed yield criterion with those based on the Tresca, Mises and TSS criteria, as well as with available classical yield experiments on various metals, shows that the present results intersect the Mises results and coincide well with the experimental data. Moreover, a further application to the limit analysis of a circular plate is performed as an example to demonstrate the effectiveness of the proposed yield criterion; the subsequent comparison of limit loads with the Tresca analytical solutions and the Mises numerical results shows that the present results are higher than the Tresca analytical results and in good agreement with the Mises numerical results.
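The stated τs = 0.594σs can be placed against the standard criteria with a one-line check of the shear-to-tensile yield ratios (the 0.594 figure is quoted from the abstract; the Tresca and Mises values are textbook results):

```python
import math

# Shear-to-tensile yield stress ratio tau_s / sigma_s under each criterion.
# Tresca and Mises values are textbook results; 0.594 for the proposed
# equal-perimeter dodecagonal criterion is quoted from the abstract.
tau_ratio = {
    "Tresca": 0.5,                   # tau_s = sigma_s / 2
    "Mises": 1.0 / math.sqrt(3.0),   # tau_s = sigma_s / sqrt(3), ~0.577
    "proposed": 0.594,               # linear dodecagonal criterion
}

# The proposed criterion predicts the largest shear strength of the three,
# about 2.9% above Mises.
excess_over_mises = tau_ratio["proposed"] / tau_ratio["Mises"] - 1.0
```

That the proposed ratio sits slightly above Mises is consistent with the abstract's finding that its limit loads bracket the Tresca solutions from above and track the Mises numerical results closely.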
Rate Compatible Protocol for Information Reconciliation: An application to QKD
Elkouss, David; Lancho, Daniel; Martin, Vicente
2010-01-01
Information Reconciliation is a mechanism that allows two parties to weed out the discrepancies between two correlated variables. It is an essential component of every key agreement protocol in which the key has to be transmitted through a noisy channel. The typical case is the satellite scenario described by Maurer in the early 1990s. Recently the need has arisen in relation to Quantum Key Distribution (QKD) protocols, where it is very important not to reveal unnecessary information in order to maximize the shared key length. In this paper we present an information reconciliation protocol based on a rate-compatible construction of Low-Density Parity-Check codes. Our protocol improves the efficiency of the reconciliation over the whole range of error rates in the discrete-variable QKD context. Its adaptability, together with its low interactivity, makes it especially well suited for QKD reconciliation.
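Reconciliation efficiency in this discrete-variable setting is conventionally scored against the Shannon limit: a rate-R code discloses n(1−R) syndrome bits, while the minimum disclosure is n·h(e) at quantum bit error rate e. A sketch of that textbook metric (the rate and QBER values are illustrative, not the paper's codes):

```python
import math

def binary_entropy(p):
    """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def reconciliation_efficiency(code_rate, qber):
    """Efficiency f = (1 - R) / h(e) of one-way syndrome-based
    reconciliation with a rate-R code at error rate e. f = 1 is the
    Shannon limit; practical LDPC codes achieve f slightly above 1.
    Textbook definition, not the paper's specific construction."""
    return (1.0 - code_rate) / binary_entropy(qber)

# Illustrative operating point: a rate-0.8 code at 3% QBER.
f = reconciliation_efficiency(code_rate=0.8, qber=0.03)
```

Rate-compatible puncturing and shortening let a single mother code keep f close to 1 across a range of error rates, which is the adaptability the paper exploits.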
Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)
2006-05-21
This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.
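The SNR "enhancement factor" scoring used above can be made concrete with a small sketch: rate a filter by the ratio of output to input SNR against a known reference. A plain 7-point moving average stands in for TPMEM here, purely to make the metric concrete; TPMEM itself is a regularized, entropy-based method.

```python
import math
import random

def snr(reference, estimate):
    """Linear SNR of an estimate against a noise-free reference:
    signal power divided by residual-noise power."""
    num = sum(s * s for s in reference)
    den = sum((e - s) ** 2 for s, e in zip(reference, estimate))
    return num / den

# Illustrative sketch of the enhancement-factor metric: compare the SNR of
# a filtered signal to that of the raw noisy one. A plain moving average
# stands in for TPMEM; only the scoring is the point here.
random.seed(0)
truth = [math.sin(0.1 * i) for i in range(500)]
noisy = [t + random.gauss(0.0, 0.3) for t in truth]
filtered = [sum(noisy[max(0, i - 3):i + 4]) / len(noisy[max(0, i - 3):i + 4])
            for i in range(500)]
enhancement = snr(truth, filtered) / snr(truth, noisy)
```

A naive smoother buys its SNR gain by blurring detail; the study's point is that TPMEM achieves far larger factors (>15) while the error histograms and Fourier data show minimal distortion of image detail.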
Performance evaluation of a newly developed variable rate sprayer for nursery liner applications
An experimental variable-rate sprayer designed for liner applications was tested by comparing its spray deposit, coverage, and droplet density inside canopies of six nursery liner varieties with constant-rate applications. Spray samplers, including water sensitive papers (WSP) and nylon screens, wer...
Medium Repetition Rate TEA Laser For Industrial Applications
Walter, Bruno
1987-09-01
The design and performance of an inexpensive, compact, repetitively pulsed TEA CO2 laser is described. The device uses a modified corona preionization technique and a fast transverse gas flow to achieve high repetition rates. An output energy of 500 mJ per pulse and an output power of 6.2 W at 40 Hz have been obtained. Due to the small energy needed for preionization, the efficiency of the device is high, whereas the gas dissociation is low compared with commercial laser systems. This results in a relatively small fresh-laser-gas exchange of 20 l·h⁻¹ for long-term operation.
Safety Characteristics in System Application Software for Human Rated Exploration
Mango, E. J.
2016-01-01
NASA and its industry and international partners are embarking on a bold and inspiring development effort to design and build an exploration class space system. The space system is made up of the Orion system, the Space Launch System (SLS) and the Ground Systems Development and Operations (GSDO) system. All are highly coupled together and dependent on each other for the combined safety of the space system. A key area of system safety focus needs to be in the ground and flight application software system (GFAS). In the development, certification and operations of GFAS, there are a series of safety characteristics that define the approach to ensure mission success. This paper will explore and examine the safety characteristics of the GFAS development.
Grain, milling, and head rice yields as affected by nitrogen rate and bio-fertilizer application
Saeed FIROUZI
2015-11-01
To evaluate the effects of nitrogen rate and bio-fertilizer application on grain, milling, and head rice yields, a field experiment was conducted at the Rice Research Station of Tonekabon, Iran, in 2013. The experimental design was a factorial treatment arrangement in a randomized complete block with three replicates. Factors were three N rates (0, 75, and 150 kg ha-1) and two bio-fertilizer treatments (inoculation and no inoculation with Nitroxin, a liquid bio-fertilizer containing Azospirillum spp. and Azotobacter spp. bacteria). Analysis of variance showed that rice grain yield, panicle number per m2, grain number per panicle, flag leaf area, biological yield, grain N concentration and uptake, grain protein concentration, and head rice yield were significantly affected by N rate, while bio-fertilizer application had a significant effect on rice grain yield, grain number per panicle, flag leaf area, biological yield, harvest index, grain N concentration and uptake, and grain protein concentration. Results showed that, regardless of bio-fertilizer application, rice grain and biological yields increased significantly as the N application rate increased from 0 to 75 kg ha-1, but did not increase significantly at the higher N rate (150 kg ha-1). Grain yield increased significantly following bio-fertilizer application when averaged across N rates. Grain N concentration and uptake increased significantly as the N rate increased up to 75 kg ha-1, but further increases in N rate had no significant effect on these traits. Bio-fertilizer application significantly increased grain N concentration and uptake when averaged across N rates. Regardless of bio-fertilizer application, head rice yield increased significantly from 56 % to 60 % when the N rate increased from 0 to 150 kg ha-1. Therefore, this experiment illustrated that rice grain and head yields increased with increasing N rate, while bio-fertilizer application increased only rice grain
An integrated CMOS high data rate transceiver for video applications
Yaping, Liang; Dazhi, Che; Cheng, Liang; Lingling, Sun
2012-07-01
This paper presents a 5 GHz CMOS radio frequency (RF) transceiver built in 0.18 μm RF-CMOS technology using a proprietary protocol, which combines new IEEE 802.11n features such as multiple-input multiple-output (MIMO) technology with other wireless technologies to provide high-data-rate, robust, real-time high-definition television (HDTV) distribution within a home environment. The RF frequencies cover 4.9 to 5.9 GHz, the industrial, scientific and medical (ISM) band. Each RF channel bandwidth is 20 MHz. The transceiver utilizes a direct up-conversion transmitter and a low-IF receiver architecture. A dual-quadrature direct up-conversion mixer achieves better than 35 dB image rejection without any on-chip calibration. Measurements show a 6 dB typical receiver noise figure and a better than 33 dB transmitter error vector magnitude (EVM) at -3 dBm output power.
邓春亮; 胡南辉
2012-01-01
In this paper, we study the solution β̂n of the quasi-maximum likelihood equation for generalized linear models (GLMs). Under the assumption of an unnatural link function and some other mild regularity conditions, we prove the weak consistency of β̂n as λn → ∞ and show that its rate of convergence to the true value β0 is Op(λn^(-1/2)), where λn (λ̄n) denotes the smallest (largest) eigenvalue of the matrix Sn = ∑_{i=1}^n XiXi′.
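Restated in standard notation (a reconstruction from the abstract; the mean function μ and the transpose convention are assumptions, not spelled out in the record):

```latex
% Quasi-maximum likelihood equation for a GLM (reconstruction):
\sum_{i=1}^{n} X_i \left( y_i - \mu(X_i^{\top} \beta) \right) = 0,
\qquad
S_n = \sum_{i=1}^{n} X_i X_i^{\top},
\qquad
\lambda_n = \lambda_{\min}(S_n).
% Stated result: weak consistency of the solution with rate
\hat{\beta}_n - \beta_0 = O_p\!\left( \lambda_n^{-1/2} \right)
\quad \text{as } \lambda_n \to \infty .
```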
房祥忠; 陈家鼎
2011-01-01
Nonhomogeneous Poisson processes, whose intensity varies with time, are widely applied in many fields. For the exponential polynomial model, a widely used class of nonhomogeneous Poisson processes, the optimal convergence rate of the maximum likelihood estimate (MLE) of the parameters is obtained as the observation time tends to infinity.
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
Marcelo Vicensi
2016-01-01
Applications of phosphogypsum (PG) provide nutrients to the soil and reduce Al3+ activity, favoring soil fertility and root growth, but allow Mg2+ mobilization through the soil profile, resulting in variations in the PG rate required to achieve the optimum crop yield. This study evaluated the effect of application rates and splitting of PG on the soil fertility of a Typic Hapludox, as well as the influence on annual crops under no-tillage. Using a (4 × 3 + 1) factorial structure, the treatments consisted of four PG rates (3, 6, 9, and 12 Mg ha-1) and three split applications (P1 = 100 % in 2009; P2 = 50+50 % in 2009 and 2010; P3 = 33+33+33 % in 2009, 2010 and 2011), plus a control without PG. The soil was sampled six months after the last PG application, in stratified layers to a depth of 0.8 m. Corn, wheat and soybean were sown between November 2011 and December 2012, and leaf samples were collected for analysis when at least 50 % of the plants showed reproductive structures. The application of PG increased Ca2+ concentrations in all sampled soil layers and the soil pH between 0.2 and 0.8 m, and reduced the concentrations of Al3+ in all layers and of Mg2+ to a depth of 0.6 m, without any effect of splitting the applications. The soil Ca/Mg ratio increased linearly with rate to a depth of 0.6 m and was higher in the 0.0-0.1 m layer of the P2 and P3 treatments than without splitting (P1). Sulfur concentrations increased linearly with application rate to a depth of 0.8 m, decreasing in the order P3>P2>P1 to a depth of 0.4 m, and were higher in treatments P3 and P2 than in P1 between 0.4-0.6 m, whereas no differences were observed in the 0.6-0.8 m layer. No effect was recorded for K, P and potential acidity (H+Al). Leaf Ca and S concentrations increased, while Mg decreased, for all crops treated with PG, and there was no effect of splitting the application. The yield response of corn to PG rates was quadratic, with the maximum
Evaluation and refinement of sprinkler application rate models used in frost protection
Perry, K.B.
1979-01-01
Two models of the sprinkled orchard that predict the application rates required for successful frost protection were evaluated. The Sprinkling Application Rate (SPAR79) model used the heat budget approach to determine the rate of heat lost by the plant part through radiative, convective, and latent heat transfer processes at the actual plant part temperature and at the plant part's critical temperature. The difference between these two rates of heat loss is the rate at which heat must be added by the latent heat of fusion liberated as the applied water freezes. This model added consideration of humidity and ice accumulation to a refinement of the heat budget configuration of earlier models. It showed that humidity is not a contributing factor in the determination of application rates. Ice accumulation was shown to decrease the required application rate by 67% when it increased the characteristic plant part size from 0.2 to 2.0 cm. A distribution factor, a component of a factor previously only estimated, was shown to increase by 30% (from 1.0 to 1.3) as blossom and leaf development progressed. Pulsed sprinkling for frost protection was carried out during six frost nights. Blossom temperature, application rate, pulsing cycle, wind speed and air temperature were recorded simultaneously. These observations illustrated that, to provide adequate protection, an appropriate application rate and pulsing cycle must be provided by the model. It was concluded that by varying the distribution factor through the growing season and varying the application rate through a single frost night by pulsing, according to atmospheric parameters and ice accumulation, a significant decrease in the amount of water applied may be realized. This decrease in water applied will alleviate the ice buildup, water cost, soil drainage and nutrient leaching problems associated with sprinkling for frost protection.
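The heat-budget logic described above (the water rate must supply, via the latent heat of fusion, the deficit between heat loss at the critical and at the actual plant-part temperature) can be sketched as follows. The linear loss law and every number are illustrative assumptions, not the SPAR79 formulation:

```python
# Sketch of the heat-budget idea behind sprinkler application-rate models:
# the applied water must supply, through its latent heat of fusion, the
# difference between the plant part's heat loss if held at its critical
# temperature and its loss at the actual temperature. The simple linear
# loss law and all numbers below are illustrative, not SPAR79's model.

L_FUSION = 334_000.0  # latent heat of fusion, J per kg of water frozen

def required_application_rate(h, area, t_air, t_critical, t_plant):
    """kg of water per second needed to keep the plant part at t_critical.

    h          : bulk heat-transfer coefficient, W m-2 K-1 (illustrative)
    area       : exposed plant-part area, m2
    t_air      : air temperature, deg C
    t_critical : temperature below which tissue is damaged, deg C
    t_plant    : current plant-part temperature, deg C
    """
    loss_at_critical = h * area * (t_critical - t_air)  # W lost if held at t_critical
    loss_at_actual = h * area * (t_plant - t_air)       # W currently being lost
    deficit = loss_at_critical - loss_at_actual         # extra W freezing must supply
    return max(deficit, 0.0) / L_FUSION                 # kg s-1 of applied water

# Example: plant part 2 deg C below its critical temperature on a -5 deg C night
rate = required_application_rate(h=25.0, area=0.01, t_air=-5.0,
                                 t_critical=-2.0, t_plant=-4.0)
```

When the plant part already sits at its critical temperature the deficit is zero and no water is required, which matches the model's logic of adding heat only to close the gap between the two loss rates.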
An Experimentally Validated SOA Model for High-Bit Rate System Applications
Hasan I. Saleheen
2003-01-01
A comprehensive model of the semiconductor optical amplifier, with experimental validation results, is presented. The model accounts for the various physical behaviors of the device that are necessary for high-bit-rate system applications.
The principle of maximum entropy and its applications in ecology
邢丁亮; 郝占庆
2011-01-01
The principle of maximum entropy (MaxEnt) was originally studied in information theory and statistical mechanics, and has been widely employed in a variety of contexts. MaxEnt provides a statistical inference of unknown distributions on the basis of partial knowledge, without assuming anything about the information that is unknown. Recently there has been growing interest in the use of MaxEnt in ecology. In this review, to provide an intuitive understanding of this principle, we first employ the example of dice throwing to demonstrate the underlying basis of MaxEnt, and list the steps one should take when applying this principle. We then focus on its applications in some fields of ecology and biodiversity, including the prediction of species relative abundances using community aggregated traits (CATs), the MaxEnt niche model of biogeography based on environmental factors, the study of macroecological patterns such as the species abundance distribution (SAD) and the species-area relationship (SAR), inference of species interactions using species abundance matrices or mere occurrence (presence/absence) data, and the prediction of food web degree distributions. We also highlight the main debates about these applications and some recent tests of these models' strengths and limitations. We conclude with a discussion of some issues ecologists should keep in mind when using MaxEnt.
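The dice example invoked above has a compact numerical form: among all distributions on the faces 1-6 with a prescribed mean, the MaxEnt distribution is the exponential-family one, p_i ∝ exp(λi), with λ fixed by the mean constraint. A minimal sketch (bisection on λ, standard library only):

```python
import math

def maxent_die(target_mean, faces=range(1, 7), tol=1e-12):
    """Maximum-entropy distribution over `faces` with the given mean.

    The solution has the form p_i ∝ exp(lam * i); `lam` is found by
    bisection, since the implied mean is monotone increasing in lam.
    """
    def mean_for(lam):
        w = [math.exp(lam * f) for f in faces]
        z = sum(w)
        return sum(f * wi for f, wi in zip(faces, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # biased die whose average roll is 4.5
```

With the fair-die mean of 3.5 the constraint carries no information and MaxEnt returns the uniform distribution, which is exactly the "no unknown information assumed" property described in the abstract.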
Mia, S.; van Groeningen, J.W.; Van de Voorde, T.F.J.; Oram, N.J.; Bezemer, T.M.; Mommer, L.; Jeffery, S.
2014-01-01
Increased biological nitrogen fixation (BNF) by legumes has been reported following biochar application to soils, but the mechanisms behind this phenomenon remain poorly elucidated. We investigated the effects of different biochar application rates on BNF in red clover (Trifolium pratense L.). Red
Pre-plant soil application of Brassica seed meal (SM) formulations can provide fumigant level control of apple replant disease. However, due to high cost of the SM treatment relative to non-tarped soil fumigation, reduced application rates would likely accelerate commercial adoption of this technolo...
As remote sensing and variable rate technology are becoming more available for aerial applicators, practical methodologies on effective integration of these technologies are needed for site-specific aerial applications of crop production and protection materials. The objectives of this study were to...
Quality and efficiency of apple orchard protection affected by sprayer type and application rate
A. D. Sedlar
2013-11-01
The goal of this work was to evaluate the potential of reduced application rates in apple trees, as well as the potential of selective spray application using sensor-based tree detection techniques, in Serbian fruit production. Their economic and biological effects were evaluated based on the quality and efficiency of the crop protection and a techno-economic analysis. Results showed that during suitable weather conditions and with properly adjusted sprayer settings, a reduced application rate of 381 L ha-1 gave the same quality of crop protection as a medium application rate of 759 L ha-1. A two-year efficiency trial on Venturia inaequalis and Podosphaera leucotricha infecting apple also showed no significant difference in crop protection results among the different types of orchard application techniques and application rates. The techno-economic analysis showed that selective application should be introduced in practice in areas >3 ha, given that the cost of its introduction pays off after 2-3 seasons. Every subsequent season would give a clear economic profit. Besides the economic benefits, the selective application technique also has a significant positive ecological effect due to the reduction of spray losses and of the amount of plant protection products used.
Application of the maximum entropy algorithm in meteorological rainfall prediction
王海燕
2014-01-01
The maximum entropy principle is applied to meteorological rainfall prediction; simulation experiments demonstrate the feasibility of the maximum entropy method for this task.
Application of time-hopping UWB range-bit rate performance in the UWB sensor networks
Nascimento, J.R.V. do; Nikookar, H.
2008-01-01
In this paper, the achievable range-bit rate performance is evaluated for Time-Hopping (TH) UWB networks complying with the FCC outdoor emission limits in the presence of Multiple Access Interference (MAI). Application of TH-UWB range-bit rate performance is presented for UWB sensor networks. Result
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice.
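A heavily simplified stand-in for the rate computation described above: phoneme boundaries are taken as local maxima of frame-to-frame spectral change, and the speaking rate is the phoneme count divided by the duration. The Euclidean frame distance and the fixed threshold are illustrative assumptions, not the paper's spectral transition measure:

```python
import math

def phoneme_count(frames, threshold):
    """Count phonemes by detecting boundaries between spectral frames.

    `frames` is a list of spectral vectors (one per analysis frame).
    A boundary is a local maximum of frame-to-frame spectral change that
    exceeds `threshold` (boundaries at the very ends are ignored). This
    is a crude stand-in for the spectral transition measure in the paper.
    """
    change = [math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
              for f1, f2 in zip(frames, frames[1:])]
    boundaries = 0
    for i in range(1, len(change) - 1):
        if (change[i] > threshold
                and change[i] >= change[i - 1]
                and change[i] > change[i + 1]):
            boundaries += 1
    return boundaries + 1  # n boundaries separate n + 1 phonemes

def speaking_rate(frames, frame_period_s, threshold=1.0):
    """Phonemes per second over the analyzed window."""
    duration = len(frames) * frame_period_s
    return phoneme_count(frames, threshold) / duration

# Synthetic input: three steady spectral segments -> two boundaries.
frames = [[0.0, 0.0]] * 10 + [[5.0, 0.0]] * 10 + [[5.0, 5.0]] * 10
rate = speaking_rate(frames, frame_period_s=0.01)
```

With thirty 10-ms frames and three detected phonemes, the sketch reports 10 phonemes per second; the real algorithm operates on actual spectra and validated its counts against human experts.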
Zhao, W.; Cella, M.; Pasqua, O. Della; Burger, D.M.; Jacqz-Aigrain, E.
2012-01-01
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in
Yield performance of upland rice cultivars at different rates and times of nitrogen application
José Hildernando Bezerra Barreto
2012-04-01
Nitrogen is the most important nutrient for rice (Oryza sativa L.) yields. This study aimed to evaluate the response of upland rice cultivars to N rates and application times in a randomized block design with split plots and four replications. The studied factors were five rice cultivars (BRS MG Curinga, BRS Monarca, BRS Pepita, BRS Primavera, and BRS Sertaneja), three application times (100 % at planting; 50 % at planting and 50 % at tillering; and 100 % at tillering) and four N rates (0, 50, 100, and 150 kg ha-1). All cultivars responded to increased rates and different times of N application, especially BRS Primavera and BRS Sertaneja, which were the most productive when 50 % of the N was applied at sowing and 50 % at tillering. The response of cultivar BRS Monarca to N fertilization was best when 100 % of the fertilizer was applied at tillering.
Iqbal, Javed; Mitchell, David C; Barker, Daniel W; Miguez, Fernando; Sawyer, John E; Pantoja, Jose; Castellano, Michael J
2015-05-01
Little information exists on the potential for N fertilizer application to corn (Zea mays L.) to affect N2O emissions during subsequent unfertilized crops in a rotation. To determine if N fertilizer application to corn affects N2O emissions during subsequent crops in rotation, we measured N2O emissions for 3 yr (2011-2013) in an Iowa corn-soybean [Glycine max (L.) Merr.] rotation with three N fertilizer rates applied to corn (0 kg N ha-1, the recommended rate of 135 kg N ha-1, and a high rate of 225 kg N ha-1); soybean received no N fertilizer. We further investigated the potential for a winter cereal rye (Secale cereale L.) cover crop to interact with N fertilizer rate to affect N2O emissions from both crops. The cover crop did not consistently affect N2O emissions. Across all years and irrespective of cover crop, N fertilizer application above the recommended rate resulted in a 16% increase in mean N2O flux rate during the corn phase of the rotation. In 2 of the 3 yr, N fertilizer application to corn (0-225 kg N ha-1) did not affect mean N2O flux rates from the subsequent unfertilized soybean crop. However, in 1 yr after a drought, mean N2O flux rates from the soybean crops that received 135 and 225 kg N ha-1 in the corn year were 35 and 70% higher than those from the soybean crop that received no N application in the corn year. Our results are consistent with previous studies demonstrating that cover crop effects on N2O emissions are not easily generalizable. When N fertilizer affects N2O emissions during a subsequent unfertilized crop, it will be important to determine if total fertilizer-induced N2O emissions are altered or only spread across a greater period of time.
Liu, Weidong
2009-01-01
In this paper, Cramér-type moderate deviations for the maximum of the periodogram and its studentized version are derived. The results are then applied to a simultaneous testing problem in gene expression time series. It is shown that the level of the simultaneous tests is accurate provided that the number of genes $G$ and the sample size $n$ satisfy $G=\exp(o(n^{1/3}))$.
Evaluation of mHealth Applications Quality Based on User Ratings.
Pustozerov, Evgenii; von Jan, Ute; Albrecht, Urs-Vito
2016-01-01
The presented study covers the evaluation of ratings of a sample set of 46,430 medical applications available on Google's Play Store. It was discovered that the distribution of user ratings given to applications has a log-normal form and correlates with many application characteristics one would not expect to be directly related, among them the time of the app's last update and the vocabulary of its description. Popular applications with a large number of downloads and reviews tend to have average ratings, while the ratings of rarely downloaded apps tend to be either highly positive or negative. Despite the huge diversity and number of applications available, the market is highly dominated by just a few apps: 90.7% of the overall number of user ratings assigned to the apps in our sample is distributed among just 1,601 apps, corresponding to 4.1% of all apps provided within the "Medical" and "Health and fitness" categories at the time of our evaluation.
Livingston, Richard A.; Jin, Shuang
2005-05-01
Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Ricotti, Rosalinda; Vavassori, Andrea; Bazani, Alessia; Ciardo, Delia; Pansini, Floriana; Spoto, Ruggero; Sammarco, Vittorio; Cattani, Federica; Baroni, Guido; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja
2016-12-01
Dosimetric assessment of high dose rate (HDR) brachytherapy applicators printed in 3D with acrylonitrile butadiene styrene (ABS) at different infill percentages. A low-cost desktop 3D printer (Hamlet 3DX100, Hamlet, Dublin, IE) was used to manufacture simple HDR applicators reproducing typical geometries in brachytherapy: cylindrical (common in vaginal treatment) and flat configurations (generally used to treat superficial lesions). Printer accuracy was investigated through physical measurements. The dosimetric consequences of varying the applicator's density by tuning the printing infill percentage were analysed experimentally by measuring depth dose profiles and superficial dose distributions with Gafchromic EBT3 films (International Specialty Products, Wayne, NJ). Dose distributions were compared to those obtained with a commercial superficial applicator. Measured printing accuracy was within 0.5 mm. Dose attenuation was not sensitive to the density of the material. Comparison of the surface dose distributions of the 3D printed flat applicators against the commercial superficial applicator showed an overall passing rate greater than 94% for gamma analysis with a 3% dose difference criterion, a 3 mm distance-to-agreement criterion and a 10% dose threshold. Low-cost 3D printers are a promising solution for the customization of HDR brachytherapy applicators. However, further assessment of 3D printing techniques and regulatory approval of the materials are required for clinical application. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
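The gamma analysis cited in the results (3% dose difference, 3 mm distance-to-agreement, 10% dose threshold) can be sketched in one dimension as follows; real film analysis is 2-D with finer spatial interpolation, so this is only an illustration of the criterion:

```python
import math

def gamma_pass_rate(positions, ref_dose, eval_dose,
                    dose_crit=0.03, dta_crit=3.0, threshold=0.10):
    """Fraction of evaluated points with gamma <= 1 (1-D, global criteria).

    positions : sample positions in mm (same grid for both distributions)
    dose_crit : dose-difference criterion, fraction of the global maximum
    dta_crit  : distance-to-agreement criterion in mm
    threshold : skip points below this fraction of the reference maximum
    """
    d_max = max(ref_dose)
    passed = total = 0
    for i, (x, d_eval) in enumerate(zip(positions, eval_dose)):
        if ref_dose[i] < threshold * d_max:
            continue  # low-dose region excluded from the analysis
        # gamma: minimum combined dose/distance mismatch over reference points
        gamma = min(
            math.sqrt(((x - xr) / dta_crit) ** 2
                      + ((d_eval - dr) / (dose_crit * d_max)) ** 2)
            for xr, dr in zip(positions, ref_dose)
        )
        total += 1
        passed += gamma <= 1.0
    return passed / total if total else 0.0

pos = [float(i) for i in range(11)]                 # 1 mm grid
ref = [100.0] * 11                                  # flat reference profile
rate_ok = gamma_pass_rate(pos, ref, [102.0] * 11)   # +2% uniform error
rate_bad = gamma_pass_rate(pos, ref, [110.0] * 11)  # +10% uniform error
```

A uniform +2% error passes everywhere (dose term 2/3 at zero distance), while a uniform +10% error fails everywhere, since no spatial shift can compensate a pure dose offset on a flat profile.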
Klein, I. M.; Rousseau, A. N.; Gagnon, P.; Frigon, A.
2012-12-01
Probable Maximum Snow Accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood. A robust methodology for evaluating the PMSA is imperative so that the resulting spring probable maximum flood is neither overestimated, which would mean financial losses, nor underestimated, which could affect public safety. In addition, the impact of climate change needs to be considered, since solid precipitation in some Nordic landscapes will in all likelihood intensify over the next century. In this paper, outputs from different simulations produced by the Canadian Regional Climate Model are used to estimate PMSAs for southern Quebec, Canada (44.1°N - 49.1°N; 68.2°W - 75.5°W). Moisture maximization represents the core concept of the proposed methodology, precipitable water being the key variable. Results of stationarity tests indicate that climate change will affect not only precipitation and temperature but also the monthly maximum precipitable water and the ensuing maximization ratio r. The maximization ratio r is used to maximize "efficient" snowfall events and represents the 100-year precipitable water of a given month divided by the snowstorm precipitable water. A computational method was developed to maximize precipitable water using a non-stationary frequency analysis. The method was carefully adapted to the spatial and temporal constraints embedded in the resolution of the available simulation data. For example, for a given grid cell and time step, snow and rain may occur simultaneously. In this case, the focus is restricted to snow and snowstorm conditions only; rainfall and humidity that could lead to rainfall are neglected. Also, the temporal resolution cannot necessarily capture the full duration of actual snow storms. The threshold for a snowstorm to be maximized and the duration resulting from the considered time steps are adjusted in order to obtain a high percentage of maximization ratios below
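The moisture-maximization step described above reduces to scaling an "efficient" snowfall event by the ratio r. A minimal sketch with illustrative numbers (the optional cap on r is an assumption for the sketch, not part of the study):

```python
def maximization_ratio(pw_100yr, pw_storm, r_cap=None):
    """Moisture maximization ratio r = PW_100 / PW_storm.

    pw_100yr : 100-year monthly maximum precipitable water (mm)
    pw_storm : precipitable water observed during the snowstorm (mm)
    r_cap    : optional upper bound on r (an assumption of this sketch)
    """
    r = pw_100yr / pw_storm
    return min(r, r_cap) if r_cap is not None else r

def maximized_snowfall(snowfall, pw_100yr, pw_storm, r_cap=None):
    """Scale an 'efficient' snowfall event by the maximization ratio."""
    return snowfall * maximization_ratio(pw_100yr, pw_storm, r_cap)

# Illustrative numbers (not taken from the study): a 45 mm snowfall whose
# storm precipitable water was 12 mm, against an 18 mm 100-year value.
pmsa_candidate = maximized_snowfall(snowfall=45.0, pw_100yr=18.0,
                                    pw_storm=12.0)
```

The PMSA for a location would then be the maximum of such scaled events over the storm catalogue; the non-stationary frequency analysis in the paper enters through the 100-year precipitable water term.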
Convergence rates for rank-based models with applications to portfolio theory
Ichiba, Tomoyuki; Shkolnikov, Mykhaylo
2011-01-01
We determine rates of convergence of rank-based interacting diffusions and semimartingale reflecting Brownian motions to equilibrium. Convergence rate for the total variation metric is derived using Lyapunov functions. Sharp fluctuations of additive functionals are obtained using Transportation Cost-Information inequalities for Markov processes. We work out various applications to the rank-based abstract equity markets used in Stochastic Portfolio Theory. For example, we produce quantitative bounds, including constants, for fluctuations of market weights and occupation times of various ranks for individual coordinates. Another important application is the comparison of performance between symmetric functionally generated portfolios and the market portfolio. This produces estimates of probabilities of "beating the market".
Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Theory and Practical Applications
Suleimanov, Yury V; Guo, Hua
2016-01-01
This Feature Article presents an overview of the current status of Ring Polymer Molecular Dynamics (RPMD) rate theory. We first analyze the theory and its connection to quantum transition-state theory. We then focus on its practical application to prototypical chemical reactions in the gas phase, which demonstrates how accurate and reliable RPMD is for calculating thermal chemical reaction rates in multifarious cases. This review serves as an important checkpoint in RPMD rate theory development, showing that RPMD is shifting from being just one of several recent novel ideas to a well-established and validated alternative to conventional techniques for calculating thermal chemical rates. We also hope it will motivate further applications of RPMD to various chemical reactions.
Suleimanov, Yury V; Aoiz, F Javier; Guo, Hua
2016-11-03
This Feature Article presents an overview of the current status of ring polymer molecular dynamics (RPMD) rate theory. We first analyze the RPMD approach and its connection to quantum transition-state theory. We then focus on its practical applications to prototypical chemical reactions in the gas phase, which demonstrate how accurate and reliable RPMD is for calculating thermal chemical reaction rate coefficients in multifarious cases. This review serves as an important checkpoint in RPMD rate theory development, which shows that RPMD is shifting from being just one of recent novel ideas to a well-established and validated alternative to conventional techniques for calculating thermal chemical rate coefficients. We also hope it will motivate further applications of RPMD to various chemical reactions.
Wang, Xiao-Juan; Jia, Zhi-Kuan; Liang, Lian-You; Ding, Rui-Xia; Wang, Min; Li, Han
2012-02-01
A 4-year field experiment was conducted at the Heyang Research Station in the Weibei dryland to study the effects of organic fertilizer application rate on the leaf photosynthetic characteristics and grain yield of dryland maize. Compared with chemical fertilizer application, organic fertilizer application significantly increased the leaf photosynthetic rate and stomatal conductance, but decreased the leaf intercellular CO2 concentration, at each growth stage of maize. With increasing application rate of organic fertilizer, the leaf photosynthetic rate and stomatal conductance at each growth stage increased gradually, while the leaf intercellular CO2 concentration decreased gradually. The leaf photosynthesis of maize at each growth stage was controlled by non-stomatal factors, and the application of organic fertilizer significantly reduced the non-stomatal limitation on photosynthetic performance. The 4-year application of organic fertilizer improved soil nutrient status, and soil nutrients were no longer the main factors limiting the leaf photosynthetic rate and grain yield of maize.
Rainfall timing and poultry litter application rate effects on phosphorus loss in surface runoff.
Schroeder, P D; Radcliffe, D E; Cabrera, M L
2004-01-01
Phosphorus (P) in runoff from pastures amended with poultry litter may be a significant contributor to eutrophication of lakes and streams in Georgia and other areas in the southeastern United States. The objectives of this research were to determine the effects of litter application rate and initial runoff timing on the long-term loss of P in runoff from surface-applied poultry litter and to develop equations that predict P loss in runoff under these conditions. Litter application rates of 2, 7, and 13 Mg ha(-1) and three rainfall scenarios were applied to 1- x 2-m plots in a 3 x 3 randomized complete block design with three replications. The rainfall scenarios included (i) sufficient rainfall to produce runoff immediately after litter application; (ii) no rainfall for 30 d after litter application; and (iii) small rainfall events every 7 d (5 min at 75 mm h(-1)) for 30 d. Phosphorus loss was greatest from the high litter rate and immediate runoff treatments. Nonlinear regression equations based on the small plot study produced fairly accurate (r(2) = 0.52-0.62) predictions of P concentrations in runoff water from larger (0.75 ha) fields over a 2-yr period. Predicted P concentrations were closest to observed values for events that occurred shortly after litter application, and the relative error in predictions increased with time after litter application. In addition, previously developed equations relating soil test P levels to runoff P concentrations were ineffective in the presence of surface-applied litter.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Solomon Tesfamariam
2015-10-01
This paper presents a seismic performance evaluation framework using two engineering demand parameters, i.e. maximum and residual inter-story drift ratios, and with consideration of mainshock-aftershock (MSAS) earthquake sequences. The evaluation is undertaken within a performance-based earthquake engineering framework in which seismic demand limits are defined with respect to the earthquake return period. A set of 2-, 4-, 8-, and 12-story non-ductile reinforced concrete buildings, located in Victoria, British Columbia, Canada, is considered as a case study. Using 50 mainshock and MSAS earthquake records (two horizontal components per record), incremental dynamic analysis is performed, and the joint probability distribution of maximum and residual inter-story drift ratios is modeled using a novel copula technique. The results are assessed both for collapse and non-collapse limit states. From the results, it can be shown that the collapse assessment of the 4- to 12-story buildings is not sensitive to the consideration of MSAS seismic input, whereas for the 2-story building, a 13% difference in the median collapse capacity is caused by the MSAS. For the unconditional probability of unsatisfactory seismic performance, which accounts for both collapse and non-collapse limit states, the life safety performance objective is achieved, but the collapse prevention performance objective is not satisfied. The results highlight the need to consider seismic retrofitting for these non-ductile reinforced concrete structures.
Neurodevelopmental Delay Diagnosis Rates Are Increased in a Region with Aerial Pesticide Application
Steven D. Hicks
2017-05-01
A number of studies have implicated pesticides in childhood developmental delay (DD) and autism spectrum disorder (ASD). The influence of the route of pesticide exposure on neurodevelopmental delay is not well defined. To study this factor, we examined ASD/DD diagnosis rates in an area near our regional medical center that employs yearly aerial pyrethroid pesticide applications to combat mosquito-borne encephalitis. The aim of this study was to determine if areas with aerial pesticide exposure had higher rates of ASD/DD diagnoses. This regional study identified higher rates of ASD/DD diagnoses in an area with aerial pesticide application. Zip codes with aerial pyrethroid exposure were 37% more likely to have higher rates of ASD/DD (adjusted RR = 1.37, 95% CI = 1.06–1.78, p = 0.02). A Poisson regression model controlling for regional characteristics (poverty, pesticide use, population density, and distance to medical center), subject characteristics (race and sex), and local birth characteristics (prematurity, low birthweight, and birth rates) identified a significant relationship between aerial pesticide use and ASD/DD rates. The relationship between pesticide application and human neurodevelopment deserves additional study to develop safe and effective methods of mosquito prevention, particularly as communities develop plans for Zika virus control.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Boyaval, S.
2000-06-15
This PhD thesis presents a study of a series of high-pressure swirl atomizers dedicated to Gasoline Direct Injection (GDI). Measurements are performed under stationary and pulsed working conditions. A major aspect of this thesis is the development of an original experimental set-up to correct the multiple light scattering that biases drop size distribution measurements obtained with a laser diffraction technique (Malvern 2600D). This technique makes it possible to study drop size characteristics near the injector tip. Correction factors on the drop size characteristics and on the diffracted intensities are defined from the developed procedure. Another contribution consists in applying the Maximum Entropy Formalism (MEF) to calculate drop size distributions. Comparisons between experimental distributions corrected with the correction factors and the calculated distributions show good agreement. This work points out that the mean diameter D43, which is also the mean of the volume drop size distribution, and the relative volume span factor Δv are important characteristics of volume drop size distributions. The end of the thesis proposes to determine local drop size characteristics from a new development of a deconvolution technique for line-of-sight scattering measurements. The first results show reliable behaviour of the radial evolution of local characteristics. In the GDI application, we notice that the critical point is the opening stage of the injection. This study clearly shows the effects of injection pressure and nozzle internal geometry on the working characteristics of these injectors, in particular the influence of the pre-spray. This work points out important behaviours that improvements of the GDI principle ought to take into account. (author)
Valter Abrantes Pereira da Silva
2007-03-01
OBJECTIVE: This study sought to compare maximum heart rate (HRmax) values measured during a graded exercise test (GXT) with those calculated from prediction equations in Brazilian elderly women. METHODS: A treadmill maximal graded exercise test in accordance with the modified Bruce protocol was used to obtain reference values for maximum heart rate (HRmax) in 93 elderly women (mean age 67.1 ± 5.16 years). Measured values were compared with those estimated from the "220 - age" and Tanaka et al formulas using repeated-measures ANOVA. Correlation and agreement between measured and estimated values were tested. Also evaluated was the correlation between measured HRmax and volunteers' age. RESULTS: Results were as follows: (1) mean HRmax reached during GXT was 145.5 ± 12.5 beats per minute (bpm); (2) both the "220 - age" and Tanaka et al (2001) equations significantly overestimated (p < 0.001) HRmax, by a mean difference of 7.4 and 15.5 bpm, respectively; (3)
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant hydraulic head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case of a constant recharge rate boundary condition, is found to be applicable to the constant hydraulic head boundary condition as well, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant recharge rate boundary, we find that a constant hydraulic head boundary often yields larger estimates of the maximum pumping rate, and that when the domain size is more than five times the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Yuan-Hong Jiang
OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding to storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). We found that the IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p<0.001). When IPSS-V/S >1 or >2 was factored into the equation instead of IPSS-T, PPV were 91.4% and 97.3%, respectively, and NPV were 54.8% and 49.8%, respectively. CONCLUSIONS: Combination of IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S >1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S >1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
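The PPV and NPV reported above follow the standard 2x2 diagnostic-table definitions. The minimal sketch below (with hypothetical counts, not the study's data) shows how such values are computed.

```python
def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple:
    """Standard definitions: PPV = TP / (TP + FP), NPV = TN / (TN + FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts for illustration only:
ppv, npv = ppv_npv(tp=90, fp=10, tn=40, fn=60)
print(ppv, npv)  # 0.9 0.4
```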
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and for clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiograms (ECG) from the MIT-BIH Arrhythmia Database to extract the R wave. Through study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve detection accuracy. After segmentation modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-range data is 96.61%. Test results prove that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, but some beats may remain undetected due to the algorithm implementation.
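A minimal sketch of the double-search idea described above (an illustration, not the authors' implementation): within each data segment, the steepest upslope is located via the maximum first derivative, and the R peak is then taken as the maximum amplitude in a short window after it. The sampling rate, segment length, and window size are assumed values.

```python
import numpy as np

def detect_r_peaks(ecg, fs=360, seg_len=None, search_win=0.06):
    """Double search per segment: (1) maximum first derivative locates the QRS
    upslope; (2) maximum value in a short window after it is taken as the R peak.
    Parameters are illustrative; fs=360 Hz matches MIT-BIH recordings."""
    ecg = np.asarray(ecg, dtype=float)
    seg_len = seg_len or fs            # assume roughly 1-second segments
    win = int(search_win * fs)         # second-stage search window (samples)
    peaks = []
    for start in range(0, len(ecg) - seg_len + 1, seg_len):
        seg = ecg[start:start + seg_len]
        d = np.diff(seg)
        i = int(np.argmax(d))                         # steepest upslope
        j = i + int(np.argmax(seg[i:i + win + 1]))    # local maximum = R peak
        peaks.append(start + j)
    return peaks
```

On a real record, a thresholding step would be needed to reject segments without a beat; that refinement is omitted here.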
Value at risk using financial copulas: Application to the Mexican exchange rate (2002-2011
Tania Nadiezhda Plascencia Cuevas
2012-12-01
Nowadays, exchange rate volatility is a crucial and transcendental issue for all transactions, negotiations and operations taking place in foreign currency, with objective and accurate prediction being the cornerstone. Therefore, the main objective of this research is to analyze whether, in the Mexican exchange rate market, risk assessments using traditional VaR and copula-based VaR methodologies are more accurate when the estimates are made over a long historical time series or over certain shorter periods, helping to predict the maximum losses that may occur, with the main motivation of supporting an efficient hedging strategy. The principal conclusion is that, when assessing risk with these methodologies, the series does not necessarily have to include more than five years of data, considering that the use of copulas as a dependence measure makes the prediction fit the movements of the real returns better.
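For reference, the "traditional" VaR baseline that copula-based approaches are typically compared against can be as simple as a historical-simulation quantile. The sketch below is illustrative (confidence level and data are made up), not the paper's methodology.

```python
import numpy as np

def historical_var(returns, alpha=0.99):
    """One-day historical-simulation VaR at confidence level alpha: the loss
    magnitude exceeded with probability 1 - alpha. Illustrative sketch only."""
    returns = np.asarray(returns, dtype=float)
    return -np.quantile(returns, 1.0 - alpha)
```

A higher confidence level probes further into the left tail and therefore yields a larger VaR; copula-based methods differ mainly in how the joint tail dependence is modeled before the quantile is taken.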
TAO Wen-liang; WEI Tao
2006-01-01
This research is carried out on the basis of the Constant Strain Rate (CSR) method to measure creep internal stress. Measurements of creep internal stress are conducted on a material test machine using the CSR method. A mathematical model of creep internal stress is also proposed and its application is presented in this paper.
CERTAINTY EQUIVALENCE FOR DETERMINATION OF OPTIMAL FERTILIZER APPLICATION RATES WITH CARRY-OVER
Taylor, C. Robert
1983-01-01
This note demonstrates that a certain class of stochastic problems for the determination of optimal fertilizer application rates in the presence of fertilizer carry-over can be simplified to static, certainty-equivalent problems. Conditions required for certainty equivalence to hold are: (1) fertilizer carry-over is agronomically equivalent to applied fertilizer; and (2) some addition of fertilizer is optimal in every decision period.
Statistical rate theory and kinetic energy-resolved ion chemistry: theory and applications.
Armentrout, P B; Ervin, Kent M; Rodgers, M T
2008-10-16
Ion chemistry, first discovered 100 years ago, has profitably been coupled with statistical rate theories, developed about 80 years ago and refined since. In this overview, the application of statistical rate theory to the analysis of kinetic-energy-dependent collision-induced dissociation (CID) reactions is reviewed. This procedure accounts for and quantifies the kinetic shifts that are observed as systems increase in size. The statistical approach developed allows straightforward extension to systems undergoing competitive or sequential dissociations. Such methods can also be applied to the reverse of the CID process, association reactions, as well as to quantitative analysis of ligand exchange processes. Examples of each of these types of reactions are provided and the literature surveyed for successful applications of this statistical approach to provide quantitative thermochemical information. Such applications include metal-ligand complexes, metal clusters, proton-bound complexes, organic intermediates, biological systems, saturated organometallic complexes, and hydrated and solvated species.
Cross‐Layer optimization of the packet loss rate in mobile videoconferencing applications
R. Rivera‐Rodríguez
2010-04-01
Videoconferencing transmission over wireless channels presents relevant challenges in mobile scenarios at vehicular speeds. Previous contributions have focused on the optimization of the transmission of multimedia and delay-sensitive applications over the forward link. In this paper, a new Quality of Service (QoS) parameter adaptation scheme is proposed. This scheme applies the Cross-Layer Design technique on the reverse link of a 1xEV-DO Revision 0 channel. As the wireless channel parameters and the vehicle speed have a significant influence on the network-layer packet loss rate, it is proposed that the data rate generated by the application adapt itself to the throughput offered by the lower layers as a function of that packet loss rate. Simulations of the proposed model show a significant reduction in losses caused by wireless channel impairments and vehicle mobility, resulting in an improvement in the performance of the mobile videoconferencing session.
Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation
Villari, R., E-mail: rosaria.villari@enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Fischer, U. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Moro, F. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Pereslavtsev, P. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Petrizzi, L. [European Commission, DG Research and Innovation K5, CDMA 00/030, B-1049 Brussels (Belgium); Podda, S. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Serikov, A. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2014-10-15
Highlights: Development of Advanced-D1S for shutdown dose rate calculations; Recent applications of the tool to tokamaks; Summary of the results of benchmarking with measurements and R2S calculations; Limitations and further development. Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices.
Cullen, Michael W; Reed, Darcy A; Halvorsen, Andrew J; Wittich, Christopher M; Kreuziger, Lisa M Baumann; Keddis, Mira T; McDonald, Furman S; Beckman, Thomas J
2011-03-01
To determine whether standardized admissions data in residents' Electronic Residency Application Service (ERAS) submissions were associated with multisource assessments of professionalism during internship. ERAS applications for all internal medicine interns (N=191) at Mayo Clinic entering training between July 1, 2005, and July 1, 2008, were reviewed by 6 raters. Extracted data included United States Medical Licensing Examination scores, medicine clerkship grades, class rank, Alpha Omega Alpha membership, advanced degrees, awards, volunteer activities, research experiences, first author publications, career choice, and red flags in performance evaluations. Medical school reputation was quantified using U.S. News & World Report rankings. Strength of comparative statements in recommendation letters (0 = no comparative statement, 1 = equal to peers, 2 = top 20%, 3 = top 10% or "best") were also recorded. Validated multisource professionalism scores (5-point scales) were obtained for each intern. Associations between application variables and professionalism scores were examined using linear regression. The mean ± SD (minimum-maximum) professionalism score was 4.09 ± 0.31 (2.13-4.56). In multivariate analysis, professionalism scores were positively associated with mean strength of comparative statements in recommendation letters (β = 0.13; P = .002). No other associations between ERAS application variables and professionalism scores were found. Comparative statements in recommendation letters for internal medicine residency applicants were associated with professionalism scores during internship. Other variables traditionally examined when selecting residents were not associated with professionalism. These findings suggest that faculty physicians' direct observations, as reflected in letters of recommendation, are useful indicators of what constitutes a best student. Residency selection committees should scrutinize applicants' letters for strongly favorable
Application of the Maximum Entropy Method to Combined Option Pricing
董莹; 季鑫
2012-01-01
On the basis of European options, combined options are priced by using the maximum entropy method to obtain an unbiased probability distribution. The principle of a self-financing, no-arbitrage market is taken as the basis of the formulation and, under the condition that a risk-free asset also exists, the price is solved through the combined application of the penalty function method and the BFGS algorithm, which makes the combined option pricing method more accurate.
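The core step described above, finding a maximum-entropy distribution consistent with no-arbitrage pricing, can be sketched in closed form: the maximum-entropy distribution over discrete terminal prices with a fixed discounted mean is exponential-family, p_i ∝ exp(λ·s_i), with λ found by bisection. This is a deliberately simplified alternative to the penalty-function/BFGS scheme in the abstract; all names and numbers are illustrative.

```python
import numpy as np

def maxent_risk_neutral(states, s0, r, T, lam_lo=-1.0, lam_hi=1.0):
    """Maximum-entropy probabilities over terminal prices `states` whose
    discounted mean equals the spot price s0 (martingale/no-arbitrage
    constraint). Solved in exponential-family form with bisection on the
    Lagrange multiplier; a simplified stand-in for penalty + BFGS."""
    states = np.asarray(states, dtype=float)
    target = s0 * np.exp(r * T)  # undiscounted risk-neutral mean

    def mean(lam):
        w = np.exp(lam * (states - states.max()))  # shift for numerical safety
        return np.dot(w, states) / w.sum()

    # bracket, then bisect on the monotone map lam -> mean(lam)
    while mean(lam_lo) > target:
        lam_lo *= 2
    while mean(lam_hi) < target:
        lam_hi *= 2
    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if mean(mid) < target:
            lam_lo = mid
        else:
            lam_hi = mid
    w = np.exp(lam_lo * (states - states.max()))
    return w / w.sum()

# Illustrative grid of terminal prices; with r = 0 and a symmetric grid the
# maxent distribution is uniform, and any payoff prices as its plain average.
p = maxent_risk_neutral(states=[80, 90, 100, 110, 120], s0=100, r=0.0, T=1.0)
```

Once p is known, any combined option is priced as the discounted expected payoff under p.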
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
QIN Xiao-min
2015-08-01
Field trials were carried out to investigate the effects of different nitrogen application rates N0 (0 kg·hm-2), N1 (125 kg·hm-2), N2 (250 kg·hm-2) and N3 (375 kg·hm-2) on the rhizosphere microbial population and metabolic function diversity of maize and potato under intercropping, using the plate culture method and the BIOLOG technique. The results indicated that nitrogen application (N1, N2 and N3) increased the amounts of bacteria, actinomyces and total microbes, but significantly decreased the quantity of fungi, in the rhizosphere soil of intercropped maize and potato, with the highest increment under the N2 treatment. In comparison with N0, nitrogen fertilizer application significantly increased the diversity of the soil microbial community, the utilization rate of carbon sources, and the richness of the soil microbial community. The AWCD value, Shannon-Wiener index (H), Simpson index (D), Evenness index (E) and Richness index (S) in the rhizosphere soil of intercropped maize were highest under the N3 treatment, while those of potato were highest under the N2 treatment, but the effects of different N application rates on the ability of rhizospheric microbes to utilize the six types of carbon sources differed. Principal component analysis (PCA) and cluster analysis showed that there were differences in the carbon substrate utilization patterns and metabolic characteristics of the soil microbes in maize and potato intercropping with different N application rates. This suggests that applying N could regulate the rhizosphere soil microbial communities and promote the functional diversity of crop intercropping.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
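A minimal sketch of the MCC idea for a linear predictor, under stated assumptions: the Gaussian-kernel correntropy between predictions and labels is maximized by gradient ascent, with an L2 penalty on the weights as the regularizer. This illustrates the criterion only, not the authors' alternating-optimization algorithm; all hyperparameters are made-up values.

```python
import numpy as np

def fit_mcc(X, y, sigma=1.0, lam=1e-3, lr=0.1, iters=2000):
    """Maximize sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2
    by gradient ascent. Large residuals get exponentially small weight, so
    outlying labels barely influence the fit (the robustness behind MCC)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        e = y - X @ w                                  # residuals
        k = np.exp(-e**2 / (2 * sigma**2))             # per-sample correntropy kernel
        grad = (X.T @ (k * e)) / (n * sigma**2) - 2 * lam * w
        w += lr * grad
    return w
```

Compared with least squares, which would be dragged toward an outlying label, the kernel weight k downweights large residuals, so a single corrupted label leaves the fit almost unchanged.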
Maximum likelihood for genome phylogeny on gene content.
Zhang, Hongmei; Gu, Xun
2004-01-01
With the rapid growth of entire-genome data, reconstructing the phylogenetic relationship among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of the various approaches, and has been very successful. However, there is no reported study of its application to genome tree-making, mainly due to the lack of an analytical form of the probability model and/or the complicated calculation burden. In this paper we study the mathematical structure of the stochastic model of genome evolution, and then develop a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome situation using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high rate of correct identification. Real data application provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
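The MISO result above (equal power across all antennas with uncorrelated signals when the transmitter has no CSI) can be checked numerically. The Monte Carlo sketch below estimates the ergodic rate E[log2(1 + (SNR/nt)·Σ|h_i|²)] for i.i.d. Rayleigh block fading; the trial count and seed are arbitrary choices.

```python
import numpy as np

def miso_throughput(nt, snr_db, trials=200_000, seed=0):
    """Monte Carlo estimate of the MISO ergodic rate with equal power P/nt per
    antenna and uncorrelated signals: R = E[log2(1 + (SNR/nt) * sum_i |h_i|^2)],
    with h_i i.i.d. CN(0, 1) (Rayleigh fading), perfect CSI at the receiver only."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    h = (rng.standard_normal((trials, nt))
         + 1j * rng.standard_normal((trials, nt))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)
    return float(np.mean(np.log2(1 + snr / nt * gain)))
```

Adding antennas hardens the effective channel (the normalized gain concentrates around its mean), so by Jensen's inequality the ergodic rate increases toward log2(1 + SNR) even though the average received SNR is unchanged.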
徐革锋; 尹家胜; 韩英; 刘洋; 牟振波
2014-01-01
This study examined the effects of water temperature on the metabolic characteristics and aerobic exercise capacity of juvenile Manchurian trout, Brachymystax lenok (Pallas). The resting metabolic rate (RMR), maximum metabolic rate (MMR), metabolic scope (MS) and critical swimming speed (UCrit) of juveniles were measured at different temperatures (4, 8, 12, 16, 20 °C). The results showed that both the RMR and the MMR increased significantly with increasing water temperature (P<0.05). Compared with the test group at 4 °C, the RMR at 8, 12, 16 and 20 °C increased by 62%, 165%, 390% and 411%, respectively, and the MMR increased by 3%, 34%, 111% and 115%, respectively. However, the MS decreased with increasing water temperature, with the highest MS occurring at 4 °C. UCrit was significantly affected by water temperature (P<0.05), but the variations of UCrit did not follow a clear pattern with temperature. In the aerobic exercise test, the MMR at each temperature level occurred at a swimming speed of 70% UCrit, probably due to the onset of anaerobic metabolism, which caused excessive creatine in the body and consequently hindered aerobic metabolism.
Low-noise multichannel ASIC for high count rate X-ray diffractometry applications
Szczygiel, R. [AGH University of Science and Technology, Department of Measurement and Instrumentation, al. Mickiewicza 30, Krakow (Poland)], E-mail: robert.szczygiel@agh.edu.pl; Grybos, P.; Maj, P. [AGH University of Science and Technology, Department of Measurement and Instrumentation, al. Mickiewicza 30, Krakow (Poland); Tsukiyama, A.; Matsushita, K.; Taguchi, T. [Rigaku Corporation, 3-9-12 Matsubara-cho, Akishima-shi, Tokyo (Japan)
2009-08-01
RG64 is a 64-channel ASIC designed for the silicon strip detector readout and optimized for high count rate X-ray imaging applications. In this paper we report on the test results referring to the RG64 noise level, channel uniformity and the operation with a high rate of input signals. The parameters of the RG64-based diffractometry system are compared with the ones based on the scintillation counter. Diffractometry measurement results with silicon strip detectors of different strip lengths and strip pitch are also presented.
A constant air flow rate control of blower for residential applications
Yang, S.M. [Tamkang Univ., Taipei (Taiwan, Province of China). Dept. of Mechanical Engineering
1998-03-01
This paper presents a technique to control a blower for residential applications at constant air flow rate using an induction motor drive. The control scheme combines a variable volt/hertz ratio inverter drive and an average motor current regulation loop to achieve control of the motor torque-speed characteristics, consequently controlling the air flow rate of the blower which the motor is driving. The controller is simple to implement and practical for commercialization. It is also reliable, since no external pressure or air flow sensor is required. Both a theoretical derivation and an experimental verification for the control scheme are presented in this paper.
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the ¹⁹²Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm³ versus 2-mm³ phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations, which does not require a prebuilt applicator model, is developed for HDR brachytherapy treatments that use CT-compatible applicators. Tissue and nontissue heterogeneities should be taken into account in modern HDR
Ho, Chi-Lin; Fu, Yun-Ching; Lin, Ming-Chih; Chan, Sheng-Ching; Hwang, Betau; Jan, Sheng-Ling
2014-04-01
Heart rate (HR) measurement is essential for children with abnormal heart beats. The purpose of this study was to determine whether HR measurement by smartphone applications (apps) could be a feasible alternative to an electrocardiography (ECG) monitor. A total of 40 children, median age of 4.3 years, were studied. Using four free smartphone apps, pulse rates were measured at the finger (or toe) and earlobe, and compared with baseline HRs measured by ECG monitors. Significant correlations between measured pulse rates and baseline HRs were found. Both correlation and accuracy rate were higher in the earlobe group than in the finger/toe group. When HR was < 120 bpm, the accuracy rate of the finger/toe group was lower than that of the earlobe group (median of 65 vs 76%). The accuracy rates in the finger/toe group were significantly lower than those in the earlobe group for all apps when HR was ≥ 120 bpm (27 vs 65%). There were differences among apps in their abilities to measure pulse rates. Taking a child's pulse rate from the earlobe is more accurate, especially for tachycardia. However, we do not recommend that smartphone apps be used for routine medical purposes or as the sole form of HR measurement, because their accuracy is not good enough.
Voropaeva, Z. I.
2010-01-01
The comparative assessment of methods for the calculation of the gypsum application rates based on the exchangeable sodium (Gedroits, Schollenberger), the estimated sodium (Schoonover), and the soil’s requirement for calcium (the version of the Omsk State Agrarian University) showed that, for the chemical amelioration of solonetzes with different contents of exchangeable sodium in Western Siberia, it is economically and ecologically advisable to calculate the ameliorant application rates from the estimated sodium. It was experimentally shown that the content of displaced magnesium used by Schoonover is a more efficient unified criterion than the value of the calcium adsorption by zonal soils. For improving the method’s accuracy, it was proposed to change the conditions of the soil preparation by regulating the concentration of the displacing solution, the interaction time, and the temperature.
Improved DCT-based image coding and decoding methods for low-bit-rate applications
Jung, Sung-Hwan; Mitra, Sanjit K.
1994-05-01
The discrete cosine transform (DCT) is well known for highly efficient coding performance, and it is widely used in many image compression applications. However, in low-bit rate coding, it produces undesirable block artifacts that are visually not pleasing. In addition, in many applications, faster compression and easier VLSI implementation of DCT coefficients are also important issues. The removal of the block artifacts and faster DCT computation are therefore of practical interest. In this paper, we outline a modified DCT computation scheme that provides a simple efficient solution to the reduction of the block artifacts while achieving faster computation. We also derive a similar solution for the efficient computation of the inverse DCT. We have applied the new approach for the low-bit rate coding and decoding of images. Initial simulation results on real images have verified the improved performance obtained using the proposed method over the standard JPEG method.
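As an illustration of the block-DCT coding the abstract builds on (not the authors' modified scheme), here is a minimal 2-D DCT round trip with crude low-frequency coefficient truncation, using SciPy's DCT-II; the 8x8 block values are assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

# hypothetical 8x8 image block (values chosen for illustration)
block = np.arange(64, dtype=float).reshape(8, 8)

coeffs = dctn(block, norm='ortho')           # forward 2-D DCT-II
# crude low-bit-rate step: keep only the top-left 4x4 low frequencies
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1.0
approx = idctn(coeffs * mask, norm='ortho')  # reconstruction from truncated coeffs

# without truncation the orthonormal transform round-trips losslessly
err = np.max(np.abs(idctn(coeffs, norm='ortho') - block))
```

Discarding high-frequency coefficients is what produces the block artifacts at low bit rates that the paper's modified scheme targets.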
Two-component mixture model: Application to palm oil and exchange rate
Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-12-01
Palm oil is a seed crop widely used for food and non-food products such as cookies, vegetable oil, cosmetics, and household products. It is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years, straining supply. This phenomenon encourages illegal logging of trees and destroys natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
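The paper fits the two-component mixture by Newton-Raphson maximum likelihood; as an illustrative stand-in that climbs the same likelihood, here is a minimal EM fit of a two-component normal mixture on synthetic data (all numbers are assumptions, not the palm oil/exchange rate series):

```python
import numpy as np

def fit_two_normal_mixture(x, n_iter=200):
    """EM fit of a two-component univariate normal mixture.
    (Illustrative stand-in for the paper's Newton-Raphson MLE.)"""
    mu = np.percentile(x, [25, 75]).astype(float)   # rough initial means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi / (sigma * np.sqrt(2 * np.pi)) * \
               np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
pi, mu, sigma = fit_two_normal_mixture(x)
```

With well-separated synthetic components, the recovered means land close to the true values −2 and 3.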
LI, Xueliang
2010-01-01
Let $\\mathcal {T}^{\\Delta}_n$ denote the set of trees of order $n$, in which the degree of each vertex is bounded by some integer $\\Delta$. Suppose that every tree in $\\mathcal {T}^{\\Delta}_n$ is equally likely. For any given subtree $H$, we show that the number of occurrences of $H$ in trees of $\\mathcal {T}^{\\Delta}_n$ is with mean $(\\mu_H+o(1))n$ and variance $(\\sigma_H+o(1))n$, where $\\mu_H$, $\\sigma_H$ are some constants. As an application, we estimate the value of the Estrada index $EE$ for almost all trees in $\\mathcal {T}^{\\Delta}_n$, and give an explanation in theory to the approximate linear correlation between $EE$ and the first Zagreb index obtained by quantitative analysis.
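The two indices in question can be computed directly for a small tree; a minimal sketch for the 4-vertex path (an assumed example, not one of the paper's random trees):

```python
import numpy as np

# adjacency matrix of the 4-vertex path P4, a small example tree
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

eigvals = np.linalg.eigvalsh(A)
EE = np.sum(np.exp(eigvals))      # Estrada index: sum of e^{lambda_i}
degrees = A.sum(axis=1)
M1 = int(np.sum(degrees ** 2))    # first Zagreb index: sum of deg(v)^2
```

For P4 the degree sequence is (1, 2, 2, 1), so M1 = 10; the approximate linear EE-Zagreb correlation the abstract explains holds across many such trees.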
Dual-rate MIL-STD-1773 fiber optic transceiver for satellite applications
Thelen, Donald C., Jr.; Rankin, Stephen L.; Marshall, Paul W.; LaBel, Kenneth A.; Krainak, Michael A.
1994-06-01
A dual rate 1773 fiber optic transceiver chip for space applications is presented. The transceiver will work with either 1 Mbps, or 20 Mbps Manchester data. The receiver features first bit capture with no preamble for 1 Mbps data, and clock recovery for 20 Mbps data. Single event effects in the photo diode are considered in the receiver design. A transmitter switch is included on the chip for driving an LED. The chip will be fabricated in a radiation hard CMOS process.
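A minimal sketch of the Manchester (bi-phase) coding that the 1773 transceiver transmits and recovers; the polarity convention below is an assumption chosen for illustration, not taken from the MIL-STD-1773 specification:

```python
def manchester_encode(bits):
    """Encode each bit as a half-bit chip pair. Convention (an assumption):
    1 -> (1, 0), 0 -> (0, 1); every bit has a mid-bit transition."""
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def manchester_decode(chips):
    """Recover bits from chip pairs; a missing mid-bit transition is invalid."""
    bits = []
    for i in range(0, len(chips), 2):
        first, second = chips[i], chips[i + 1]
        if first == second:
            raise ValueError("invalid Manchester pair (no mid-bit transition)")
        bits.append(1 if first == 1 else 0)
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```

The guaranteed mid-bit transition is what lets the receiver recover a clock from 20 Mbps data, and why a 1 Mbps word can be captured from its first bit without a preamble.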
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
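The classic-versus-generalized distinction can be illustrated on a toy die with a mean constraint: classic MaxEnt treats the observed mean as exact, while Gaussian uncertainty on the mean induces a spread over MaxEnt solutions. The die example, the 0.1 standard deviation, and the sampling-based propagation below are illustrative assumptions, not the paper's worked case:

```python
import numpy as np
from scipy.optimize import brentq

values = np.arange(1, 7)  # faces of a die

def maxent_probs(mean_target):
    """Classic MaxEnt distribution over die faces with a fixed mean:
    p_i proportional to exp(-lam * i), lam chosen to hit the constraint."""
    def mean_gap(lam):
        w = np.exp(-lam * values)
        return (w * values).sum() / w.sum() - mean_target
    lam = brentq(mean_gap, -10, 10)
    w = np.exp(-lam * values)
    return w / w.sum()

p_uniform = maxent_probs(3.5)   # exact mean 3.5 -> the uniform die
# Gaussian uncertainty on the observed mean propagates through the MaxEnt
# map to a density over solutions; here approximated by sampling:
rng = np.random.default_rng(0)
samples = np.array([maxent_probs(m) for m in rng.normal(3.5, 0.1, 200)])
p_mean, p_sd = samples.mean(axis=0), samples.std(axis=0)
```

The nonzero spread `p_sd` is the toy analogue of the MaxEnt posterior density the paper derives analytically.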
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e., the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
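The contrast between maximum-likelihood (ground-state) and finite-temperature maximum-entropy (sign of the Boltzmann marginals) decoding can be reproduced by brute force on a toy Ising chain; the coupling, field values, and temperature below are assumptions, not the paper's annealer instance:

```python
import numpy as np
from itertools import product

def decode(h, J=1.0, beta=1.0):
    """Brute-force comparison of ML and MaxEnt decoding for a small
    Ising chain with energy E(s) = -J*sum s_i s_{i+1} - sum h_i s_i."""
    n = len(h)
    states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    energy = -J * np.sum(states[:, :-1] * states[:, 1:], axis=1) - states @ h
    ml = states[np.argmin(energy)]                # ML = ground state
    w = np.exp(-beta * (energy - energy.min()))   # Boltzmann weights
    marginals = (w[:, None] * states).sum(axis=0) / w.sum()
    maxent = np.sign(marginals)                   # bit-by-bit decision
    return ml, maxent

h = np.array([0.5, -0.2, 0.1, 0.4, -0.3])  # hypothetical noisy field
ml_bits, me_bits = decode(h)
```

At low noise the two decoders agree; the paper's point is that at finite temperature the excited-state weights can tip marginal bits the right way more often than the single ground state does.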
Influence of Lime and Phosphorus Application Rates on Growth of Maize in an Acid Soil
Peter Asbon Opala
2017-01-01
The interactive effects of lime and phosphorus on maize growth in an acid soil were investigated in a greenhouse experiment. A completely randomized design with 12 treatments, consisting of four lime levels (0, 2, 10, and 20 t ha−1) in factorial combination with three phosphorus rates (0, 30, and 100 kg ha−1), was used. Maize was grown in pots for six weeks; its heights and dry matter yield were determined, and soils were analyzed for available P and exchangeable acidity. Liming significantly reduced the exchangeable acidity in the soils. The effect of lime on available P was not significant, but available P increased with increasing P rates. There were significant effects of lime, P, and P by lime interactions on plant heights and dry matter. Without lime application, dry matter increased with increasing P rates; with lime, dry matter increased from 0 to 30 kg P ha−1 but declined from 30 to 100 kg P ha−1. The highest dry matter yield (13.8 g pot−1) was obtained by combining 2 t ha−1 of lime with 30 kg P ha−1, suggesting that lime application at low rates combined with moderate amounts of P would be appropriate in this soil.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Anisimov, M. P.
2016-12-01
A relatively recent idea in the scientific literature is to design nucleation rate surfaces over diagrams of phase equilibria. This idea is promising both for the development of nucleation theory and for practical applications in which theoretical predictions are not yet accurate enough. Classical thermodynamics cannot by itself predict the parameters of a first-order phase transition. Nucleation experiments probe only very local conditions, even though nucleation takes place from the critical line (in the two-component case) down to the absolute-zero temperature limit, and from zero nucleation rate at phase equilibrium up to spinodal conditions. Theoretical predictions are, as a rule, of low reliability. Computational chemistry has a chance to make this problem more tractable once its set of axiomatic statements adopts sufficiently progressive assumptions [1]. Semiempirical design of nucleation rate surfaces over diagrams of phase equilibria can potentially provide reasonable-quality information on the nucleation rate for each nucleation channel. Using the topology of nucleation rate surfaces to optimize the synthesis of a given phase of a target material will become possible once a database of nucleation rates over diagrams of phase equilibria has been created.
Cloern, J.E.
1978-01-01
An empirical model of Skeletonema costatum photosynthetic rate is developed and fit to measurements of photosynthesis selected from the literature. Because the model acknowledges the existence of: 1) a light-temperature interaction (by allowing optimum irradiance to vary with temperature), 2) light inhibition, 3) temperature inhibition, and 4) a salinity effect, it accurately estimates photosynthetic rates measured over a wide range of temperature, light intensity, and salinity. Integration of the predicted instantaneous rate of photosynthesis over time and depth yields daily net carbon assimilation (pg C cell-1 day-1) in a mixed layer of specified depth, when salinity, temperature, daily irradiance and extinction coefficient are known. The assumption of a constant carbon quota (pg C cell-1) allows for prediction of the mean specific growth rate (day-1), which can be used in numerical models of Skeletonema costatum population dynamics. Application of the model to northern San Francisco Bay clearly demonstrates the limitation of growth by low light availability, and suggests that the large population densities of S. costatum observed during summer months are not the result of active growth in the central deep channels (where growth rates are consistently predicted to be negative). But predicted growth rates in the lateral shallows are positive during summer and fall, thus offering a testable hypothesis that shoals are the only sites of active population growth by S. costatum (and perhaps other neritic diatoms) in the northern reach of San Francisco Bay. © 1978.
Low Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Tur, Ronen; Friedman, Zvi
2010-01-01
Signals comprised of a stream of short pulses appear in many applications including bio-imaging, radar, and ultrawideband communication. Recently, a new framework, referred to as finite rate of innovation, has paved the way to low rate sampling of such pulses by exploiting the fact that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing approaches are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams of pulses from a minimal number of samples. This extends previous work which assumes that the sampling kernel is an ideal low-pass filter. A compactly supported class of filters, satisfying the mathematical condition, is then introduced, leading to a sampling framework based on compactly supported kernels. We then exte...
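The finite-rate-of-innovation recovery the paper builds on can be sketched with the standard annihilating-filter method: a K-pulse periodic stream is recovered exactly from 2K+1 Fourier coefficients. The pulse locations and amplitudes are illustrative assumptions, and this is the ideal-low-pass baseline the paper extends, not the authors' compactly supported kernels:

```python
import numpy as np

# ground-truth pulse stream: K Diracs per period T=1 (illustrative values)
t_true = np.array([0.2, 0.6])
a_true = np.array([1.0, 0.5])
K = len(t_true)

# low-rate measurements: 2K+1 consecutive Fourier coefficients
M = 2 * K + 1
m = np.arange(M)
X = (a_true[None, :] *
     np.exp(-2j * np.pi * m[:, None] * t_true[None, :])).sum(axis=1)

# annihilating filter [1, c1, ..., cK]: its convolution with X[m] vanishes,
# so solve the linear system for c with the leading coefficient fixed to 1
A = np.array([[X[r - i] for i in range(1, K + 1)] for r in range(K, M)])
b = -X[K:M]
c = np.linalg.lstsq(A, b, rcond=None)[0]

# the filter's roots are exp(-2*pi*j*t_k); read the delays off their angles
roots = np.roots(np.concatenate(([1.0], c)))
t_rec = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
```

Only 2K parameters (locations and amplitudes) describe the signal per period, which is why so few samples suffice; the paper's contribution is keeping this recovery numerically stable when K grows.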
Alixon David Reyes Rodríguez
2011-06-01
theoretical points of reference that responded to earlier scientific needs but are insufficient now. It has been observed in national and international conferences, seminars, research meetings, in our universities, and in different kinds of scientific gatherings that some obsolete assumptions are still being taught, which slows progress in Education Sciences and Sports Science. We recognize that some predictive formulas used to calculate the estimated maximum heart rate (EMHR) represented progress for Exercise Science and Exercise Physiology at some point; however, there are important aspects that should be considered. It is not that we dismiss them, but we intend to demonstrate and demystify the use of the traditional formula as almost the only calculation and measurement pattern for EMHR and to offer, from the perspective of other researchers, better possibilities of exercise dosage for certain populations with particular characteristics.
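As a concrete instance of the issue raised here, the traditional formula (220 − age) can be set against one of the alternative regressions from the literature, Tanaka et al.'s 208 − 0.7 × age; a minimal sketch, with the age values chosen for illustration:

```python
def hrmax_traditional(age):
    """Traditional estimate, HRmax = 220 - age."""
    return 220 - age

def hrmax_tanaka(age):
    """Tanaka et al. (2001) regression, HRmax = 208 - 0.7 * age."""
    return 208 - 0.7 * age

for age in (20, 40, 60):
    print(age, hrmax_traditional(age), hrmax_tanaka(age))
```

The two formulas agree only near age 40 and diverge for younger and older populations, which is exactly the kind of population-dependent discrepancy the article argues against treating a single formula as universal.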
OPTIMAL FEED STRATEGY FOR FED-BATCH GLYCEROL FERMENTATION DETERMINED BY MAXIMUM PRINCIPLE
无
2000-01-01
1 Introduction Glycerol fed-batch fermentation is attractive for commercial application since it can control the glucose concentration by changing the feed rate and obtain a high glycerol yield; it is therefore essential to develop an optimal glucose feed strategy. For most fed-batch fermentations, optimization of the feed rate has been based on Pontryagin's maximum principle [1]. Since the feed rate term appears linearly in the Hamiltonian, the optimal feed rate profile usually consists of bang-bang intervals and singular ...
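The bang-bang/singular structure asserted in the last sentence follows directly from the linearity of the Hamiltonian in the feed rate; a generic sketch (the symbols φ₀, φ₁, F, and F_max are assumed notation, not the paper's):

```latex
% Feed rate F enters the fed-batch mass balances linearly, so
H(x,\lambda,F) \;=\; \phi_0(x,\lambda) \;+\; \phi_1(x,\lambda)\,F,
\qquad 0 \le F \le F_{\max}.
% Pontryagin's maximum principle maximizes H pointwise in F, hence
F^*(t) \;=\;
\begin{cases}
F_{\max}, & \phi_1(x,\lambda) > 0,\\
0, & \phi_1(x,\lambda) < 0,\\
\text{singular arc}, & \phi_1(x,\lambda) \equiv 0 \ \text{on an interval.}
\end{cases}
```

The switching function φ₁ changes sign at the bang-bang switch times, and intervals where it vanishes identically give the singular arcs.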
Deng, Fang; Finer, Gal; Haymond, Shannon; Brooks, Ellen; Langman, Craig B
2015-03-01
Estimating glomerular filtration rate (eGFR) has become popular in clinical medicine as an alternative to measured GFR (mGFR), but there are few studies comparing them in clinical practice. We determined mGFR by iohexol clearance in 81 consecutive children in routine practice and calculated eGFR from 14 standard equations using serum creatinine, cystatin C, and urea nitrogen that were collected at the time of the mGFR procedure. Nonparametric Wilcoxon test, Spearman correlation, Bland-Altman analysis, bias (median difference), and accuracy (P15, P30) were used to compare mGFR with eGFR. For the entire study group, the mGFR was 77.9 ± 38.8 mL/min/1.73 m(2). Eight of the 14 estimating equations demonstrated values without a significant difference from the mGFR value and demonstrated a lower bias in Bland-Altman analysis. Three of these 8 equations based on a combination of creatinine and cystatin C (Schwartz et al. New equations to estimate GFR in children with CKD. J Am Soc Nephrol 2009;20:629-37; Schwartz et al. Improved equations estimating GFR in children with chronic kidney disease using an immunonephelometric determination of cystatin C. Kidney Int 2012;82:445-53; Chehade et al. New combined serum creatinine and cystatin C quadratic formula for GFR assessment in children. Clin J Am Soc Nephrol 2014;9:54-63) had the highest accuracy with approximately 60% of P15 and 80% of P30. In 10 patients with a single kidney, 7 with kidney transplant, and 11 additional children with short stature, values of the 3 equations had low bias and no significant difference when compared with mGFR. In conclusion, the 3 equations that used cystatin C, creatinine, and growth parameters performed in a superior manner over univariate equations based on either creatinine or cystatin C and also had good applicability in specific pediatric patients with single kidneys, those with a kidney transplant, and short stature. Thus, we suggest that eGFR calculations in pediatric clinical practice
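For orientation, one of the univariate creatinine-based members of the family compared here is the bedside Schwartz (2009) equation; a minimal sketch with assumed input values. The study's conclusion stands that combined creatinine-cystatin C equations performed better than univariate ones:

```python
def egfr_bedside_schwartz(height_cm, creatinine_mg_dl):
    """Bedside Schwartz (2009) pediatric estimate:
    eGFR (mL/min/1.73 m^2) = 0.413 * height (cm) / serum creatinine (mg/dL)."""
    return 0.413 * height_cm / creatinine_mg_dl

# hypothetical child: 140 cm tall, serum creatinine 0.7 mg/dL
print(egfr_bedside_schwartz(140, 0.7))
```

Such single-marker estimates are exactly what the multivariate equations in the study improve upon by adding cystatin C and growth parameters.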
O. M. Pshinko
2016-12-01
. At this, the control of separate elements is realized through the construction of individual mental models and the application of functioning processes. A procedure for evaluating prediction reliability, based on the multivariate linear extrapolation method, was proposed. Practical value. The proposed method of strategic planning of complex systems' development, based on rating models and the developed information technology, represents a complex of automated tools to ensure effective economic and technological control of non-uniform sets of multiparameter objects. The new solutions of typical strategic planning tasks and the procedure for managing the development of complex objects are implemented in the information technology of rating estimation (rating definition, sensitivity analysis, clustering, diagnostics, forecasting, resource allocation, multi-criteria analysis, etc.). Application of the proposed information technology can automate the tasks of analysis and strategic planning of administrative-territorial complexes. The technology can be used for monitoring, analysis, strategic planning and management of several types of complex systems simultaneously.
Primary Estimation of Rare Earth Element Maximum Application Quantity in Red Soil
褚海燕; 朱建国; 谢祖彬; 曹志洪; 李振高; 曾青
2001-01-01
The effects of the rare earth element lanthanum (La) on soil microbial activities were studied through an incubation experiment, and the rare earth element maximum application rate in red soil was primarily estimated. La decreased soil microbial activities, and the sensitivity of microbial activities to La decreased in the order: phenol decomposition > dehydrogenase activity > microbial biomass. From the standpoint of soil microbiology, the rare earth element maximum application rate in red soil should be below 30 mg/kg.
2010-01-01
... rate provisions, applicable in leasing arrangements? 714.8 Section 714.8 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS LEASING § 714.8 Are the early payment provisions, or interest rate provisions, applicable in leasing arrangements? You are not subject to the...
M Nahvi
2011-02-01
In order to determine the optimum nitrogen rate and timing for split N application based on phenological stages of Bahar hybrid rice, an experiment was conducted over two years (2007-2008) using a randomized complete block design with three replications at the Rice Research Institute of Iran (RRII) farm, Rasht. A factorial arrangement of the following factors was used: three N rates (90, 120 and 150 kg pure N per hectare) and six different phenological timings (T1: using leaf color chart with number 4; T2: 1/3 base fertilizer, 1/3 at primary bud rise stage, 1/3 at booting stage; T3: 1/3 base fertilizer, 1/3 at tillering stage, 1/3 at primary bud rise stage; T4: 1/2 base fertilizer, 1/4 at tillering stage, 1/4 at booting stage; T5: 1/3 base fertilizer, 1/3 at tillering stage, 1/3 at booting stage; T6: 2/3 base fertilizer, 1/3 at primary bud rise stage), with a control plot in each replication. After sowing in a nursery bed, twenty-day-old seedlings were transplanted into the experimental plots. Analysis of variance clearly showed significant differences between the phenological timings for many characteristics. Mean differences were highly significant for grain yield. The T6 treatment produced the highest yields of 8.373 and 7.920 t ha-1 in the two years, respectively. However, 120 kg N ha-1, when applied as T5, produced the highest yield (8.760 t ha-1). Combined analysis of variance showed that yield and yield components were affected by the time of N application. Combined comparison of means also showed that the T5 treatment with 120 kg N ha-1 produced the highest yields of 7.925 and 7.514 t ha-1 in the first and second year, respectively. According to the results, it can be recommended that maximum yield of Bahar hybrid rice could be achieved by application of 120 kg N ha-1 split as defined in T5 (1/3 base fertilizer + 1/3 at first tillering stage + 1/3 at booting stage). Keywords: Hybrid rice, Leaf Color Chart (LCC), Nitrogen fertilizer
Mathew J. Gregoski
2012-01-01
Objective. Current generation smartphones' video camera technologies enable photoplethysmographic (PPG) acquisition and heart rate (HR) measurement. The study objective was to develop an Android application and compare HRs derived from a Motorola Droid to electrocardiograph (ECG) and Nonin 9560BT pulse oximeter readings during various movement-free tasks. Materials and Methods. HRs were collected simultaneously from 14 subjects, ages 20 to 58, healthy or with clinical conditions, using the 3 devices during 5-minute periods while at rest, reading aloud under observation, and playing a video game. Correlation between the 3 devices was determined, and Bland-Altman plots for all possible pairs of devices across all conditions assessed agreement. Results. Across conditions, all device pairs showed high correlations. Bland-Altman plots further revealed the Droid as a valid measure for HR acquisition: when the Droid was compared to the ECG across all conditions, 95% of the data points (differences between devices) fell within the limits of agreement. Conclusion. The Android application provides valid HRs at varying levels of movement-free mental/perceptual motor exertion. The lack of electrode patches or wireless sensor telemetric straps makes it advantageous for use in mobile-phone-delivered health promotion and wellness programs. Further validation is needed to determine its applicability during physical movement-related activities.
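The Bland-Altman agreement analysis used in this study can be sketched in a few lines; the paired readings below are assumed illustrative values, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement methods:
    bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired readings: smartphone-derived HR vs ECG HR (bpm)
hr_app = [72, 80, 65, 91, 77, 88, 69, 74]
hr_ecg = [71, 82, 64, 90, 78, 87, 70, 75]
bias, (lo, hi) = bland_altman(hr_app, hr_ecg)
```

A device pair agrees in this sense when roughly 95% of the paired differences fall between `lo` and `hi`, which is the criterion the study applies across devices and conditions.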
Tang, Sanyi; Tang, Guangyao; Cheke, Robert A
2010-05-21
Many factors, including pest:natural enemy ratios, starting densities, timings of natural enemy releases, dosages and timings of insecticide applications, and instantaneous killing rates of pesticides on both pests and natural enemies, can affect the success of IPM control programmes. To address how such factors influence successful pest control, hybrid impulsive pest-natural enemy models with different frequencies of pesticide sprays and natural enemy releases were proposed and analyzed. With releases either more or less frequent than the sprays, a stability threshold condition for a pest-eradication periodic solution is provided. Moreover, the effects of the times of spraying pesticides (or releasing natural enemies) and control tactics on the threshold condition were investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications. Multiple attractors, from which the pest population oscillates with different amplitudes, can coexist for a wide range of parameters, and the switch-like transitions among these attractors show that varying the dosages and frequencies of insecticide applications and the numbers of natural enemies released is crucial. To see how pesticide applications could be reduced, we developed a model involving periodic releases of natural enemies with chemical control applied only when pest densities reached the given Economic Threshold. The results indicate that the pest outbreak period or frequency largely depends on the initial densities and the control tactics.
Yesim Altay
2014-03-01
Aim: To determine the success rate of different applications of mitomycin-C (MMC) in endocanalicular multidiode laser dacryocystorhinostomy (ECL-DCR). Material and Method: A prospective comparative study was conducted on 89 patients with primary acquired nasolacrimal duct obstruction undergoing ECL-DCR procedures. Group 1 was composed of 44 patients undergoing ECL-DCR with intraoperative 0.4 mg/ml MMC application for 2 minutes, and group 2 was composed of 45 patients undergoing ECL-DCR with intraoperative 0.4 mg/ml MMC application for 5 minutes. Patients were followed up for at least 12 months. The main outcome measure for success was resolution or improvement of epiphora and patency of the nasolacrimal duct on irrigation. Results: Final success was 26/44 (59.1%) for group 1 and 36/45 (80%) for group 2. The difference was statistically significant (Chi-square, p=0.03). Discussion: Diode laser ECL-DCR with 0.4 mg/ml intraoperative MMC application for 5 minutes appears to be an effective treatment modality for primary nasolacrimal duct obstruction.
Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms
1980-09-01
AFWAL-TR-80-3101, Volume II. Multi-Rate Digital Control Systems with Simulation Applications. Volume II: Computer Algorithms. Dennis G. J... [Report cover and form fields garbled in extraction.] ... additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some
Rate dependent constitutive behavior of dielectric elastomers and applications in legged robotics
Oates, William; Miles, Paul; Gao, Wei; Clark, Jonathan; Mashayekhi, Somayeh; Hussaini, M. Yousuff
2017-04-01
Dielectric elastomers exhibit novel electromechanical coupling that has been exploited in many adaptive structure applications. Whereas the quasi-static, one-dimensional constitutive behavior can often be accurately quantified by hyperelastic functions and linear dielectric relations, accurate prediction of electromechanical, rate-dependent deformation during multiaxial loading is non-trivial. In this paper, an overview of multiaxial electromechanical membrane finite element modeling is formulated. Viscoelastic constitutive relations are extended to include fractional-order time derivatives. It is shown that fractional-order viscoelastic constitutive relations are superior to conventional integer-order models. This knowledge is critical for the transition to control of legged robotic structures that exhibit advanced mobility.
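Fractional-order viscoelasticity replaces the integer time derivative in the stress relaxation law with a derivative of order α between 0 and 1. Below is a minimal numerical sketch of such an operator using the Grünwald-Letnikov approximation; the discretization choice and the function name are assumptions of this sketch, and the paper's finite element formulation is not reproduced.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of
    samples f on a uniform grid with spacing h (0 < alpha < 1).
    Fractional viscoelastic models replace d/dt with this operator."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        # recursive binomial weights: w_j = (-1)^j * C(alpha, j)
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
    return d / h ** alpha
```

As α approaches 1 the operator tends to the ordinary first derivative, which is why fractional models interpolate smoothly between elastic and viscous limiting behavior.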
Relationship between polarization characteristics and hemolysis rate and its potential application
Wang, Xuezhen; Lai, Jiancheng; Li, Zhenhua
2017-01-01
Mueller polarimetry has been widely investigated in the biomedical field, but its application to hemolysis monitoring is unavailable. This study deals with the backscattering polarimetric characterization of erythrocyte suspensions under low-osmolarity-induced hemolysis. Scattering and absorption are observed to decrease with increasing hemolysis. An increasing degree of polarization (DOP) corresponds to a decrease in scattering, and increasing diattenuation corresponds to a decrease in scattering. However, as absorption decreases, the DOP in erythrocyte suspensions, which are typical Mie scatterers, first increases and then decreases; reduced diattenuation corresponds to a decrease in absorption. Hence it is demonstrated that a higher DOP is preserved under more serious hemolysis conditions. In addition, the DOP increasing trends differ below and above a 6% hemolysis rate. Diattenuation shows an increasing trend as the blood hemolysis percentage increases, with an exception when the hemolysis rate ranges from 5% to 6%. These results may be helpful for the monitoring of hemolysis by polarimetric optical methods.
Turk M.A.
2002-01-01
Field experiments were conducted during the winter seasons of 1998-1999, 1999-2000, and 2000-2001 in the semi-arid region of northern Jordan to study the effect of seeding dates (14 January, 28 January, and 12 February), seeding rates (50, 75, and 100 plants per square metre), phosphorus levels (0, 17.5, 35.0, and 52.5 kg P per ha), and two methods of P placement (banding and broadcast). Seeding rate, seeding date, and rate of phosphorus had a significant effect on most of the measured traits and the yield determinants. Method of phosphorus application had a significant effect only on seed yield and seed weight per plant. In general, high yields were obtained by early seeding (14 January), a high seeding rate (100 plants per square metre), and P application (52.5 kg P per ha) drilled with the seed after cultivation (banded).
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Herry Pintardi Chandra
2014-04-01
During the last ten years, the growth of apartment buildings in Surabaya has encountered the bitter experience of global warming, resource depletion, energy scarcity, and other environmental impacts. We cannot avoid them, but we can minimize the negative impacts of global warming. The green building concept is one of the methods to minimize environmental impact. It takes into account principles of sustainable development in planning, construction, operation, and maintenance. The Greenship Rating Tools are used to evaluate and calculate green achievements prior to green building certification. The aim of this research is to represent the perceptions of contractors and consultants toward the application of the Greenship Rating Tools to apartment buildings in Surabaya. Based on data obtained from a questionnaire survey of 41 respondents, the mean value ranking method is used to evaluate the main factors of Greenship. These factors are Appropriate Site Development, Energy Efficiency and Conservation, Water Conservation, Material Resource and Cycle, Indoor Health and Comfort, and Building Environmental Management. In general, the results of this research show a number of differences between the perceptions of contractors and consultants toward the application of the Greenship Rating Tools to apartment buildings in Surabaya. According to the contractors' perception, Visual Comfort is a factor that could easily be applied, while according to the consultants it is Landscape. On the other hand, there are factors that would be difficult to apply: according to the contractors' perception, Climate Change; according to the consultants', Renewable Energy. In summary, the Greenship Rating Tools can be applied in the perceptions of both contractors and consultants, although there are some variables that cannot be applied.
Ortiz, Brenda V; Perry, Calvin; Sullivan, Dana; Lu, Ping; Kemerait, Robert; Davis, Richard F; Smith, Amanda; Vellidis, George; Nichols, Robert
2012-03-01
Field tests were conducted to determine if differences in response to nematicide application (i.e., root-knot nematode (RKN) populations, cotton yield, and profitability) occurred among RKN management zones (MZ). The MZ were delineated using fuzzy clustering of five terrain (TR) and edaphic (ED) field features related to soil texture: apparent soil electrical conductivity, shallow (ECa-shallow) and deep (ECa-deep); elevation (EL); slope (SL); and changes in bare soil reflectance. Zones with the lowest mean values of ECa-shallow, ECa-deep, NDVI, and SL were designated as at greater risk for high RKN levels. Nematicide-treated plots (4 rows wide and 30 m long) were established in a randomized complete block design within each zone, but the number of replications in each zone varied from four to six depending on the size of the zone. The nematicides aldicarb (Temik 15 G) and 1,3-dichloropropene (1,3-D, Telone II) were applied at two rates (0.51 and 1.0 kg a.i./ha for aldicarb, and 33.1 and 66.2 kg a.i./ha for 1,3-D) to RKN MZ in commercial fields between 2007 and 2009. A consolidated analysis over the entire season showed that, regardless of the zone, there were no differences between aldicarb rates or between 1,3-D rates. The results across zones showed that 1,3-D provided better RKN control than aldicarb in zones with low ECa values (high RKN risk zones exhibiting more coarse-textured sandy soils). In contrast, in low-risk zones with relatively higher ECa values (heavier-textured soil), the effects of 1,3-D and aldicarb were equal, and application of any of the treatments provided sufficient control. In low RKN risk zones, a farmer would often have lost money if a high rate of 1,3-D was applied. This study showed that the effect of nematicide type and rate on RKN control and cotton yield varied across management zones (MZ), with the most expensive treatment likely to provide economic benefit only in zones with coarser soil texture. This study demonstrates the value of site
Ivan M. Buzurovic
2016-08-01
Purpose: In this study, we present the clinical implementation of a novel transoral balloon centering esophageal applicator (BCEA) and the initial clinical experience in high-dose-rate (HDR) brachytherapy treatment of esophageal cancer using this applicator. Material and methods: Acceptance testing and commissioning of the BCEA were performed prior to clinical use. Full performance testing was conducted, including measurements of the dimensions and the catheter diameter, evaluation of the inflatable balloon consistency, visibility of the radio-opaque markers, congruence of the markers, absolute and relative accuracy of the HDR source in the applicator using radiochromic film and a source position simulator, visibility and digitization of the applicator on computed tomography (CT) images under clinical conditions, and reproducibility of the offset. Clinical placement of the applicator, treatment planning, treatment delivery, and the patient's response to treatment were elaborated as well. Results: The experiments showed sub-millimeter accuracy in the source positioning, with the distal position at 1270 mm. The digitization (catheter reconstruction) was uncomplicated due to the good visibility of the markers. The treatment planning resulted in a favorable dose distribution. This finding was pronounced for the treatment of the curvy anatomy of the lesion due to the improved repeatability and consistency of the delivered fractional dose to the patient, since the radioactive source was placed centrally within the lumen with respect to the clinical target by means of the five inflatable balloons. Conclusions: The consistency of the BCEA positioning made it possible to deliver an optimized non-uniform dose along the catheter, which resulted in an increased dose to the cancerous tissue and lower doses to healthy tissue. A larger number of patients and long-term follow-up will be required to investigate if the delivered optimized treatment can
Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun
2017-06-01
High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with the aid of preplanned templates. In general, the standard of care would be elevated by an automated process that improves treatment planning efficiency, eliminates human error, and reduces plan quality variations. Thus, our group is developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our proof-of-concept module for vaginal cylinder HDR planning, which has been fully developed. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point, and vaginal surface point, as well as the central applicator channel coordinates and the corresponding dwell positions, are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module was able to generate treatment plans of clinical-grade quality for these cases. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately. The applicator channel tip positions agreed with the manually identified positions with submillimeter deviations, and the channel orientations between the plans agreed to within less than 1 degree. The automatically generated plans were of clinically acceptable quality.
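The dwell-time computation described above is a non-negative quadratic optimization: find dwell times that best reproduce a prescribed dose at a set of calculation points. The toy sketch below shows that structure under strong simplifying assumptions: a bare inverse-square point-source kernel with no TG-43 radial dose or anisotropy corrections, and illustrative geometry (`optimize_dwell_times` is not part of AutoBrachy).

```python
import numpy as np
from scipy.optimize import nnls

def optimize_dwell_times(dwell_xyz, calc_xyz, d_presc, strength=1.0):
    """Least-squares fit of dose at calculation points to a prescription,
    with non-negative dwell times (solved as an NNLS quadratic program).
    dwell_xyz: (n, 3) dwell positions; calc_xyz: (m, 3) calc points.
    The inverse-square kernel here is an illustrative simplification."""
    diff = calc_xyz[:, None, :] - dwell_xyz[None, :, :]
    r2 = np.sum(diff ** 2, axis=2)              # squared distances, (m, n)
    A = strength / np.maximum(r2, 1e-6)         # dose-rate kernel matrix
    times, _residual = nnls(A, d_presc)         # enforces times >= 0
    return times, A @ times
```

The non-negativity constraint is what makes this a constrained quadratic program rather than a plain linear solve: an unconstrained least-squares solution can demand physically meaningless negative dwell times.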
KHAN Sardar; CAO Qing; HESHAM Abd El-Latif; XIA Yue; HE Ji-zheng
2007-01-01
This study focused on the changes in soil microbial diversity and the potential inhibitory effects of heavy metals on soil enzymatic activities at different application rates of Cd and Pb. The soil used for the experiments was collected from Beijing and classified as an endoaquept. Pots containing 500 g of the soil with different Cd and/or Pb application rates were incubated for periods of 0, 2, 9, and 12 weeks in a glasshouse, and the soil samples were analyzed for individual enzymes, including catalase, alkaline phosphatase, and dehydrogenase, and for changes in microbial community structure. Results showed that heavy metals slightly inhibited the enzymatic activities in all samples spiked with heavy metals. The extent of inhibition increased significantly with increasing levels of heavy metals and varied with the incubation period. The soil bacterial community structure, as determined by polymerase chain reaction-denaturing gradient gel electrophoresis techniques, differed in the contaminated samples compared to the control. The highest community change was observed in the samples amended with a high level of Cd. Positive correlations were observed among the three enzymatic activities, but negative correlations were found between the amounts of heavy metals and the enzymatic activities.
Ra isotopes in trees: Their application to the estimation of heartwood growth rates and tree ages
Hancock, Gary J.; Murray, Andrew S.; Brunskill, Gregg J.; Argent, Robert M.
2006-12-01
The difficulty in estimating growth rates and ages of tropical and warm-temperate tree species is well known. However, this information has many important environmental applications, including the proper management of native forests and calculating uptake and release of atmospheric carbon. We report the activities of Ra isotopes in the heartwood, sapwood and leaves of six tree species, and use the radial distribution of the 228Ra/226Ra activity ratio in the stem of the tree to estimate the rate of accretion of heartwood. A model is presented in which dissolved Ra in groundwater is taken up by tree roots, translocated to sapwood in a chemically mobile (ion-exchangeable) form, and rendered immobile as it is transferred to heartwood. Uptake of 232Th and 230Th (the parents of 228Ra and 226Ra) is negligible. The rate of heartwood accretion is determined from the radioactive decay of 228Ra (half-life 5.8 years) relative to long-lived 226Ra (half-life 1600 years), and is relevant to growth periods of up to 50 years. By extrapolating the heartwood accretion rate to the entire tree ring record the method also appears to provide realistic estimates of tree age. Eight trees were studied (three of known age, 72, 66 and 35 years), including three Australian hardwood eucalypt species, two mangrove species, and a softwood pine (P. radiata). The method indicates that the rate of growth ring formation is species and climate dependent, varying from 0.7 rings yr-1 for a river red gum (E. camaldulensis) to around 3 rings yr-1 for a tropical mangrove (X. mekongensis).
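Once the radial ratio profile is measured, the dating step reduces to radioactive-decay bookkeeping on the 228Ra/226Ra activity ratio (226Ra decay over decades is negligible given its 1600-year half-life). A minimal sketch, assuming the ratio at fixation is known from the sapwood-heartwood boundary; the function and argument names are illustrative, and the groundwater-uptake model itself is not reproduced.

```python
import math

# Half-life of 228Ra as quoted in the abstract (years)
HALF_LIFE_RA228 = 5.8
LAMBDA_RA228 = math.log(2.0) / HALF_LIFE_RA228   # decay constant, yr^-1

def heartwood_age(ratio_now, ratio_at_fixation):
    """Years since Ra was immobilized in a heartwood layer, from the decay
    of its 228Ra/226Ra activity ratio relative to the ratio in fresh
    sapwood at the heartwood boundary."""
    return math.log(ratio_at_fixation / ratio_now) / LAMBDA_RA228
```

For example, a layer whose ratio has halved is one half-life (5.8 years) old; the roughly 50-year practical limit quoted above reflects the ratio becoming too small to measure after several half-lives.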
Minatti, Lorenzo; Nicoletta De Cicco, Pina; Paris, Enio
2014-05-01
In common engineering practice, rating curves are obtained from direct stage-discharge measurements or, more often, from stage measurements coupled with flow simulations. The present work mainly focuses on the latter technique, where stage-measuring gauges are usually installed on bridges with flow conditions likely to be influenced by local geometry constraints. In such cases, backwater flow and flow transition to a supercritical state may occur, influencing sediment transport capacity and triggering more intense changes in river morphology. The unsteadiness of the flow hydrograph may play an important role too, according to the velocity of its rising and falling limbs. Nevertheless, the simulations conducted to build a rating curve are often carried out with steady flow and fixed bed conditions, where the aforementioned effects are not taken into account at all. Numerical simulations with a mobile bed and different unsteady flow conditions have been conducted on some real case studies in the rivers of Tuscany (Italy), in order to assess how rating curves change with respect to the "standard" one (that is, the classical steady-flow rating curve). A 1D finite volume numerical model (REMo, River Evolution Modeler) has been employed for the simulations. The model solves the 1D Shallow Water equations coupled with the sediment continuity equation in composite channels, where the overbanks are treated with fixed bed conditions while the main channel can either aggrade or be scoured. The model employs an explicit scheme with second-order accuracy in both space and time: this allows the correct handling of moderately stiff source terms via a local corrector step. Such capability is very important for the applications of the present work, as it allows the modelling of abrupt contractions and jumps in bed bottom elevation which often occur near bridges. The outcomes of the simulations are critically analyzed in order to provide a first insight on the conditions inducing
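For contrast with the mobile-bed unsteady simulations, the "standard" steady-flow rating curve can be sketched with the Manning formula for uniform flow. The rectangular geometry, roughness coefficient, and slope below are illustrative assumptions, not values from the Tuscan case studies.

```python
def manning_rating_curve(stage_m, bed_width_m=20.0, slope=1e-3, n=0.035):
    """Steady uniform-flow rating curve for a rectangular channel via the
    Manning formula Q = (1/n) * A * R**(2/3) * sqrt(S): the classical
    steady-flow rating the paper compares its unsteady results against.
    stage_m: water depth above the bed (m); returns discharge (m^3/s)."""
    A = bed_width_m * stage_m            # flow area
    P = bed_width_m + 2.0 * stage_m      # wetted perimeter
    R = A / P                            # hydraulic radius
    return (A / n) * R ** (2.0 / 3.0) * slope ** 0.5
```

A single-valued, monotonically increasing Q(h) like this is exactly what backwater, hysteresis in unsteady hydrographs, and bed aggradation/scour cause real stage-discharge relations to depart from.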
Effect of beef cattle manure application rate on CH4 and CO2 emissions
Phan, Nhu-Thuc; Kim, Ki-Hyun; Parker, David; Jeon, Eui-Chan; Sa, Jae-Hwan; Cho, Chang-Sang
2012-12-01
In a series of field experiments, emissions of two major greenhouse gases (GHGs), methane (CH4) and carbon dioxide (CO2), were measured using a closed chamber technique in summer 2010 to evaluate the effects of solid beef cattle manure land application techniques. The treatments included a control (C: no manure), two manure application rates (40 and 80 T ha-1), and two injection layers (surface vs. subsurface (5 cm)): (1) 40 T ha-1 on the surface (S40), (2) 80 T ha-1 on the surface (S80), (3) 40 T ha-1 at the subsurface (D40), and (4) 80 T ha-1 at the subsurface (D80). The exchange patterns of CH4 and CO2 in the control were variable and showed both emission and deposition. However, only emissions were seen in the manure treatments. Emissions of CH4 increased systematically in the ascending order of 5.35 (C), 59.3 (S40), 68.7 (D40), 188 (S80), and 208 μg m-2 h-1 (D80), while those of CO2 showed a similar trend: 12.9 (C), 37.6 (S40), 55.8 (D40), 82.4 (S80), and 95.4 mg m-2 h-1 (D80). The overall results of our study suggest that the emissions of CH4 and CO2 are affected most noticeably by differences in the amount of manure applied.
Rubik, Beverly
2017-01-01
This study investigated whether short-term exposure to a passive online software application of purported subtle energy technology would affect heart rate variability (HRV) and associated autonomic nervous system measures. This was a randomized, double-blinded, sham-controlled clinical trial (RCT). The study took place in a nonprofit laboratory in Emeryville, California. Twenty healthy, nonsmoking subjects (16 females), aged 40-75 years, participated. Quantum Code Technology™ (QCT), a purported subtle energy technology, was delivered through a passive software application (Heart+ App) on a smartphone placed <1 m from subjects. The following parameters were calculated and analyzed: heart rate, total power, standard deviation node-to-node, root mean square sequential difference, low frequency to high frequency ratio (LF/HF), low frequency (LF), and high frequency (HF). Paired samples t-tests showed that for the Heart+ App, mean LF/HF decreased (p = 9.5 × 10^-4), while mean LF decreased in a trend (p = 0.06), indicating reduced sympathetic dominance. Root mean square sequential difference increased for the Heart+ App, showing a possible trend (p = 0.09). Post-pre differences in LF/HF for sham compared with the Heart+ App were also significant (p < 0.008), indicating clinical relevance. These changes were observed following 35 min of exposure to the Heart+ App, which was working in the background on an active smartphone untouched by the subjects. This may be the first RCT to show that specific frequencies of a purported non-Hertzian type of subtle energy conveyed by software applications broadcast from personal electronic devices can be bioactive and beneficially impact autonomic nervous system balance.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP, and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential. (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR; under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage. (3) Without such a "training" period, the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high. (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage. (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved; yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
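The bookkeeping behind the global EPR is simple: Joule power dissipated into a thermal bath at temperature T produces entropy at rate P/T. A one-line sketch of that relation (this is the textbook formula, not the paper's measurement protocol, and the example values are illustrative):

```python
def joule_entropy_production_rate(voltage_V, current_A, temperature_K):
    """Entropy production rate (W/K) of a resistive element: the Joule
    power V*I dumped into a bath at absolute temperature T produces
    entropy at rate P/T."""
    return voltage_V * current_A / temperature_K
```

For instance, a cloud carrying 3 mA at 10 V into a room-temperature (300 K) bath produces entropy at 1e-4 W/K; maximizing DC-EPR at fixed voltage amounts to maximizing the current drawn by the cloud itself.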
Cai, J; Jiang, D; Wollenweber, Bernd
2012-01-01
The harmonious combination of malting barley yield, quality, and nitrogen (N) use efficiency under different N application rates is highly relevant to production in China. The malting barley cultivar Supi 3 was planted during the growing seasons 2005 and 2006 at two contrasting sites in China …, and decreased with 300 kg N ha−1. Net photosynthetic rate (PN) and the amount of accumulated dry matter distributed into grains showed the same response to N application as grain yield. Grain protein content increased with increasing N application rates. Moreover, based on further analysis of these results …
Ps-LAMBDA: Ambiguity success rate evaluation software for interferometric applications
Verhagen, Sandra; Li, Bofeng; Teunissen, Peter J. G.
2013-04-01
Integer ambiguity resolution is the process of estimating the unknown ambiguities of carrier-phase observables as integers. It applies to a wide range of interferometric applications of which Global Navigation Satellite System (GNSS) precise positioning is a prominent example. GNSS precise positioning can be accomplished anytime and anywhere on Earth, provided that the integer ambiguities of the very precise carrier-phase observables are successfully resolved. As wrongly resolved ambiguities may result in unacceptably large position errors, it is crucial that one is able to evaluate the probability of correct integer ambiguity estimation. This ambiguity success rate depends on the underlying mathematical model as well as on the integer estimation method used. In this contribution, we present the Matlab toolbox Ps-LAMBDA for the evaluation of the ambiguity success rates. It allows users to evaluate all available success rate bounds and approximations for different integer estimators. An assessment of the sharpness of the bounds and approximations is given as well. Furthermore, it is shown how the toolbox can be used to assess the integer ambiguity resolution performance for design and research purposes, so as to study for instance the impact of using different GNSS systems and/or different measurement scenarios.
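One of the success-rate measures such a toolbox evaluates is Teunissen's integer-bootstrapping probability, which is also a sharp lower bound for the integer least-squares success rate. A minimal sketch, assuming the conditional standard deviations of the (preferably decorrelated) ambiguities are already available from the LtDL factorization of the ambiguity covariance matrix; this is one formula among the several bounds and approximations the toolbox covers.

```python
import math

def bootstrapped_success_rate(cond_std_devs):
    """Integer-bootstrapping success rate:
        P = prod_i (2 * Phi(1 / (2 * sigma_i)) - 1),
    where sigma_i are the conditional standard deviations of the
    ambiguities and Phi is the standard normal CDF."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for sigma in cond_std_devs:
        p *= 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    return p
```

The product form makes the role of precision explicit: a single poorly determined ambiguity (large sigma) drags the overall probability of fixing the whole integer vector toward zero.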
Choi, T. J. [Keimyung Univ., Taegu (Korea); Kim, S. W. [Fatima Hospital, Taegu (Korea); Kim, O. B.; Lee, H. J.; Won, C. H. [Keimyung Univ., Taegu (Korea); Yoon, S. M. [Dong-a Univ., Pusan (Korea)
2000-04-01
To design and fabricate the high-dose-rate source and applicators (tandem, ovoids, and colpostat) for OB/Gyn brachytherapy, including a computerized dose planning system. The high-dose-rate Ir-192 source was designed with nuclide irradiation in an atomic power reactor, and the dose characteristics of the fabricated brachytherapy source were investigated. We examined the effect of self-absorption, determined the gamma constant and output factor, and determined the apparent activity of the designed source. The automated computer planning system provided 2D distributions and 3D analysis programs. The high-dose-rate Ir-192 source, 10 Ci (370 GBq), was created. The effective attenuation factor from self-absorption and the source wall was found to be 0.55 of the activity of the bare source; this factor is useful for determining the apparent activity and the gamma constant, 4.69 R cm^2/(mCi hr). The dose distributions of the fabricated colpostat were investigated in the frontal, axial, and sagittal planes for intracavitary radiation therapy of cervical cancer. The reduced dose at the bladder and rectum areas was found to be about 20% of the original dose. The computerized brachytherapy planning system provides 2-dimensional isodose distributions and 3D analysis including the dose-volume histogram (DVH) in a graphic-user-interface mode. A remote afterloading device was built for experiments with the created Ir-192 source, with film dosimetry agreeing within ±1 mm. 34 refs., 25 figs., 11 tabs. (Author)
Bahadur, Yasir A; Constantinescu, Camelia; Hassouna, Ashraf H; Eltaher, Maha M; Ghassal, Noor M; Awad, Nesreen A
2015-01-01
To retrospectively compare the potential dosimetric advantages of a multichannel vaginal applicator vs. a single channel one in intracavitary vaginal high-dose-rate (HDR) brachytherapy after hysterectomy, and evaluate the dosimetric advantage of fractional re-planning. We randomly selected 12 patients with endometrial carcinoma, who received adjuvant vaginal cuff HDR brachytherapy using a multichannel applicator. For each brachytherapy fraction, two inverse treatment plans (for central channel and multichannel loadings) were performed and compared. The advantage of fractional re-planning was also investigated. Dose-volume-histogram (DVH) analysis showed limited, but statistically significant difference (p = 0.007) regarding clinical-target-volume dose coverage between single and multichannel approaches. For the organs-at-risk rectum and bladder, the use of multichannel applicator demonstrated a noticeable dose reduction, when compared to single channel, but statistically significant for rectum only (p = 0.0001). For D2cc of rectum, an average fractional dose of 6.1 ± 0.7 Gy resulted for single channel vs. 5.1 ± 0.6 Gy for multichannel. For D2cc of bladder, an average fractional dose of 5 ± 0.9 Gy occurred for single channel vs. 4.9 ± 0.8 Gy for multichannel. The dosimetric benefit of fractional re-planning was demonstrated: DVH analysis showed large, but not statistically significant differences between first fraction plan and fractional re-planning, due to large inter-fraction variations for rectum and bladder positioning and filling. Vaginal HDR brachytherapy using a multichannel vaginal applicator and inverse planning provides dosimetric advantages over single channel cylinder, by reducing the dose to organs at risk without compromising the target volume coverage, but at the expense of an increased vaginal mucosa dose. Due to large inter-fraction dose variations, we recommend individual fraction treatment plan optimization.
2017-01-01
Abstract Objective: This study investigated whether short-term exposure to a passive online software application of purported subtle energy technology would affect heart rate variability (HRV) and associated autonomic nervous system measures. Methods: This was a randomized, double-blinded, sham-controlled clinical trial (RCT). The study took place in a nonprofit laboratory in Emeryville, California. Twenty healthy, nonsmoking subjects (16 females), aged 40-75 years, participated. Quantum Code Technology™ (QCT), a purported subtle energy technology, was delivered through a passive software application (Heart+ App) on a smartphone placed <1 m from subjects who were seated and reading a catalog. HRV was measured for 5 min in triplicate for each condition via finger plethysmography using a Food and Drug Administration medically approved HRV measurement device. Measurements were made at baseline and 35 min following exposure to the software applications. The following parameters were calculated and analyzed: heart rate, total power, standard deviation node-to-node, root mean square sequential difference, low frequency to high frequency ratio (LF/HF), low frequency (LF), and high frequency (HF). Results: Paired samples t-tests showed that for the Heart+ App, mean LF/HF decreased (p = 9.5 × 10^-4), while mean LF decreased in a trend (p = 0.06), indicating reduced sympathetic dominance. Root mean square sequential difference increased for the Heart+ App, showing a possible trend (p = 0.09). Post-pre differences in LF/HF for sham compared with the Heart+ App were also significant (p < 0.008) by independent t-test, indicating clinical relevance. Conclusions: Significant beneficial changes in mean LF/HF, along with possible trends in mean LF and root mean square sequential difference, were observed in subjects following 35 min of exposure to the Heart+ App working in the background on an active smartphone untouched by the subjects.
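The LF, HF, and LF/HF measures reported above are standard spectral HRV indices computed from the sequence of RR intervals. A minimal sketch of that computation follows; the 4 Hz linear-interpolation resampling and Welch settings are assumptions of this sketch (the study used a proprietary FDA-approved device), while the band edges (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) follow common HRV conventions.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from RR intervals (ms): resample the tachogram onto a
    uniform fs grid by linear interpolation, estimate the PSD with Welch's
    method, and take the ratio of power summed over the LF (0.04-0.15 Hz)
    and HF (0.15-0.40 Hz) bands."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                    # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr_ms)              # evenly sampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()
    hf = pxx[(f >= 0.15) & (f <= 0.40)].sum()
    return lf / hf
```

A decrease in this ratio, as reported for the Heart+ App, is conventionally interpreted as a shift away from sympathetic dominance toward parasympathetic (vagal) activity.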
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Nophawan Bunchu
2012-01-01
Hemipyrellia ligurriens (Diptera: Calliphoridae) is a forensically important blow fly species present in many countries. In this study, we determined the morphology of all stages and the developmental rate of H. ligurriens reared under natural ambient conditions in Phitsanulok province, northern Thailand. Morphological features of all stages, observed under a light microscope, are described and illustrated for identification purposes, and the development time of each stage is given. The developmental time of H. ligurriens to complete metamorphosis, from egg through larva and pupa to adult, was 270.71 h for one cycle of development. The results of this study may be useful not only for application in forensic investigation but also for future studies of the species' biology.
On the symmetric α-stable distribution with application to symbol error rate calculations
Soury, Hamza
2016-12-24
The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single-input single-output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for generalized fading distributions, such as the extended generalized-K distribution. Simpler expressions of these error rates are then deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
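To make the starting point of that derivation concrete: before any Fox-H manipulation, the PDF of a standard symmetric α-stable law is just the cosine transform of its characteristic function exp(−|t|^α). A minimal numerical sketch of that inversion (plain trapezoidal quadrature; the truncation point and grid size are ad hoc choices, not from the paper):

```python
import numpy as np

def sas_pdf(x, alpha, t_max=50.0, n=200_001):
    """f(x) = (1/pi) * integral_0^inf exp(-t**alpha) * cos(x*t) dt,
    the inverse Fourier transform of the characteristic function
    exp(-|t|**alpha) of a standard symmetric alpha-stable law."""
    t = np.linspace(0.0, t_max, n)
    g = np.exp(-t ** alpha) * np.cos(x * t)
    dt = t[1] - t[0]
    # trapezoidal rule written out explicitly
    return dt * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi

# Sanity checks against known members of the family:
# alpha = 1 is the Cauchy law, so f(0) = 1/pi
# alpha = 2 is a Gaussian with variance 2, so f(0) = 1/(2*sqrt(pi))
print(round(sas_pdf(0.0, 1.0), 4))  # → 0.3183
print(round(sas_pdf(0.0, 2.0), 4))  # → 0.2821
```

The brute-force integral is slow and numerically delicate for small α, which is precisely what motivates the closed-form Fox-H expressions in the paper.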
UWB multi-pulse position modulation for high data-rate wireless application
WANG Ye-qiu; LU Ying-hua; ZHANG Hong-xin; HE Peng-fei; ZHANG Li-kun
2006-01-01
A new ultra-wideband (UWB) modulation scheme, L-ary ultra-wideband time-hopping multi-pulse position modulation (UWB-TH-MPPM), is proposed for high data-rate wireless applications and can provide better communication performance. A constant-weight code is introduced to construct the MPPM signal, and MPPM is compared with single pulse position modulation (SPPM) in three aspects: power efficiency, bandwidth efficiency, and probability of symbol error. The theoretical analysis and numerical results show that, when the constant-weight code is appropriately chosen, MPPM achieves a lower probability of symbol error and higher power efficiency than SPPM at the cost of more bandwidth under the same conditions. The proposed MPPM can be a good candidate in UWB system design.
Adam, Tijjani; Hashim, U.
2017-03-01
Achieving optimum flow in a microchannel for sensing purposes is challenging. In this study, fluid sample flows were optimized through the design and characterization of novel microfluidic architectures to achieve the optimal flow rate in the microchannels. The biocompatibility of the polydimethylsiloxane (Sylgard 184 silicone elastomer) polymer used to fabricate the device allows it to serve as a universal fluidic delivery system for biomolecule sensing in various biomedical applications. The study used the following methodological approach: designing novel microfluidic architectures by integrating the devices on a single 4-inch silicon substrate, fabricating the designed microfluidic devices using a low-cost soft lithography technique, and characterizing and validating the flow throughput of urine samples in the microchannels by generating pressure gradients through the devices' inlets. The characterization of urine sample flow showed constant flow throughout the devices.
Gustavo Migliorini de Oliveira
2015-02-01
The aim of this paper was to study the role of dose and rate of application, and the effect of the fungicide concentration in the spray solution resulting from the interaction of these factors, on the control of leaf rust and yellow spot of wheat. Two experiments were conducted. The first used the cultivar CD 104 (susceptible to leaf rust and yellow spot), with a 3 x 3 factorial design plus untreated control involving the factors dose (0.25, 0.30 and 0.35 L/ha) and application rate (143, 286 and 429 L/ha). The second used the cultivar BRS 208 (resistant to leaf rust and moderately resistant to yellow spot), with a 2 x 2 factorial design plus untreated control involving the factors dose (0.2 and 0.3 L/ha) and application rate (143 and 286 L/ha). Applications were made with a CO2-pressurized backpack sprayer at 250 kPa with an XR 110-02 nozzle, which generated an application rate of 143 L/ha; the rates of the other treatments were obtained by changing the number of sprayers per area. A spore trap (Siga), combined with meteorological data and weather forecasts, detected spores of rust and yellow spot before symptoms appeared on the plants, helping to identify the diseases and time the applications. There was no interaction between dose and rate of application in either experiment, and therefore no effect of fungicide concentration on control. Dose and application rate influenced only the control of yellow spot, with higher doses and rates being more effective. However, no difference in yield or hectoliter weight was observed among treatments, except for the untreated control.
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
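For readers unfamiliar with the EM family referenced above, the classic MLEM update for emission tomography illustrates the multiplicative-correction structure these algorithms share. This is a toy sketch with an invented 3x2 system matrix, not the AM algorithm developed in the paper:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Classic MLEM update: x <- x * (A^T (y / A x)) / (A^T 1).
    Each iteration increases the Poisson log-likelihood of y given A x."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x)            # measured / predicted counts
        x *= (A.T @ ratio) / norm      # multiplicative correction
    return x

# Toy 2-pixel "object" observed through a 3-measurement system matrix
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                         # noise-free measurements
print(np.round(mlem(A, y), 3))
```

With noise-free data the iterates converge to the true image; the acceleration techniques the paper discusses (ordered subsets, aggressive corrections, spatially variant updates) all modify this basic fixed-point loop to reduce the number of passes needed.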
Li, C. J.; Xu, Z. H.; Dong, Z. X.; Shi, S. L.; Zhang, J. G.
2016-01-01
Whole-crop wheat (Triticum aestivum L.) is extensively used as forage throughout the world. In this study, the effects of N application rate on yield, nutritive value and silage quality were investigated. The N application rates were 0, 75, 150, 225, and 300 kg/ha. The results indicated that the dry matter yield of whole-crop wheat increased significantly with increasing N rate up to 150 kg/ha, and then leveled off. The crude protein content and in vitro dry matter digestibility of whole-crop wheat increased significantly with increasing N up to 225 kg/ha, but no further at 300 kg/ha. In contrast, the content of the various fibers tended to decrease with increasing N application. The content of lactic acid, acetic acid and propionic acid in silages increased with increasing N rate (p < 0.05), and the […] of silages at higher N application rates (≥225 kg/ha) was significantly higher than at lower rates (≤150 kg/ha). Whole-crop wheat receiving high levels of N accumulated more nitrate-N. In conclusion, taking account of yield, nutritive value, silage quality and safety, the optimum N application to whole-crop wheat should be about 150 kg/ha under the present experimental conditions. PMID:26954126
A remote sensing and variable rate application system was configured for agricultural aircraft. This combination system has the potential of providing a completely integrated solution for all aspects of aerial site-specific application and includes remote sensing, image processing and georegistratio...
Dale G. Brockway; Kenneth W. Outcalt; R. Neal. Wilkins
1998-01-01
A longleaf pine-wiregrass ecosystem in the sandhills of north central Florida, in which turkey oak gained dominance following a wildfire, was treated with low-rate (1.1 or 2.2 kg/ha) applications of the herbicide hexazinone during the 1991 growing season. All applications successfully reduced oak in the overstory and understory, with mortality ranging from 83 to 93...
Let's go formative: continuous student ratings with Web 2.0 application Twitter.
Stieger, Stefan; Burger, Christoph
2010-04-01
Student ratings have been a controversial but important method for the improvement of teaching quality during the past several decades. Most universities rely on summative evaluations conducted at the end of a term or course. A formative approach in which each course unit is evaluated may be beneficial for students and teachers but has rarely been applied. This is most probably due to the time constraints associated with various procedures inherent in formative evaluation (numerous evaluations, high amounts of aggregated data, high administrative investment). In order to circumvent these disadvantages, we chose the Web 2.0 Internet application Twitter as evaluation tool and tested whether it is useful for the implementation of a formative evaluation. After a first pilot and subsequent experimental study, the following conclusions were drawn: First, the formative evaluation did not come to the same results as the summative evaluation at the end of term, suggesting that formative evaluations tap into different aspects of course evaluation than summative evaluations do. Second, the results from an offline (i.e., paper-pencil) summative evaluation were identical with those from an online summative evaluation of the same course conducted a week later. Third, the formative evaluation did not influence the ratings of the summative evaluation at the end of the term. All in all, we can conclude that Twitter is a useful tool for evaluating a course formatively (i.e., on a weekly basis). Because of Twitter's simple use and the electronic handling of data, the administrative effort remains small.
Application of a Spatial Intelligent Decision System on Self-Rated Health Status Estimation.
Calzada, Alberto; Liu, Jun; Wang, Hui; Nugent, Chris; Martinez, Luis
2015-11-01
Self-assessed general health status is a commonly used survey technique, since it can serve as a predictor for several public health risks such as mortality, deprivation, and fear of crime or poverty. It is therefore a useful alternative measure for assessing the public health situation of a neighborhood or town, and can be used by authorities in many decision support situations related to public health, budget allocation and general policy-making, among others. It can be considered a spatial decision problem, since both data location and spatial relationships have a prominent impact on the decision-making process. This paper utilizes a recently developed spatial intelligent decision system, named Spatial RIMER+, to model the self-rated health estimation decision problem using real data from areas of Northern Ireland, UK. The goal is to learn from past or partial observations of self-rated health status in order to predict its future or neighborhood behavior and reference it on the map. Three scenarios in line with this goal are discussed in detail: estimation of unknowns, downscaling, and prediction over time. They are used to demonstrate the flexibility and applicability of the spatial decision support system and its capabilities in terms of accuracy, efficiency and visualization.
Reaction Rate Theory in Coordination Number Space: An Application to Ion Solvation
Roy, Santanu; Baer, Marcel D.; Mundy, Christopher J.; Schenter, Gregory K.
2016-04-14
Understanding reaction mechanisms in many chemical and biological processes requires the application of rare event theories. In these theories, an effective choice of reaction coordinate to describe a reaction pathway is essential. To this end, we study ion solvation in water using molecular dynamics simulations and explore the utility of the coordination number (n = number of water molecules in the first solvation shell) as the reaction coordinate. We compute the potential of mean force W(n) using umbrella sampling, predicting multiple metastable n-states for both cations and anions. We find that, with increasing ionic size, these states become more stable and structured for cations compared to anions. We have extended transition state theory (TST) to calculate transition rates between n-states. TST overestimates the rate constant because of solvent-induced barrier recrossings that it does not account for. We correct the TST rates by calculating transmission coefficients using the reactive flux method. This approach enables a new way of understanding rare events involving coordination complexes. We gratefully acknowledge Liem Dang and Panos Stinis for useful discussion. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. SR, CJM, and GKS were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. MDB was supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.
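For orientation, the TST estimate described above has the familiar one-dimensional flux-over-population form, which the reactive-flux transmission coefficient then corrects multiplicatively. A toy sketch on a model free-energy profile in coordination-number space (the double-well shape, thermal energy, mean velocity and transmission coefficient below are all illustrative numbers, not values from the study):

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

def tst_rate(n, W, n_barrier, v_mean):
    """k_TST = (1/2) <|v|> exp(-W(n*)/kT) / sum_reactant exp(-W(n)/kT) dn,
    the one-dimensional flux-over-population rate for crossing n = n_barrier."""
    p = np.exp(-W / kT)
    dn = n[1] - n[0]
    z_reactant = p[n < n_barrier].sum() * dn          # reactant-state population
    p_barrier = p[np.argmin(np.abs(n - n_barrier))]   # weight at the dividing surface
    return 0.5 * v_mean * p_barrier / z_reactant

# Model W(n): two minima at n = 6 and n = 7 separated by a 2 kcal/mol barrier
n = np.linspace(5.5, 7.5, 2001)
W = 2.0 * (1.0 - ((n - 6.5) / 0.5) ** 2) ** 2

k_tst = tst_rate(n, W, n_barrier=6.5, v_mean=1.0)
kappa = 0.4                        # reactive-flux transmission coefficient (illustrative)
k_corrected = kappa * k_tst        # recrossings reduce the true rate below k_TST
print(k_corrected < k_tst)
```

The paper's point is precisely the last line: barrier recrossings make κ < 1, so the raw TST rate is an upper bound that the reactive-flux calculation corrects.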
Oyewole, M.; King, J. Y.; Cleveland, D. A.
2015-12-01
Though greenhouse gas emissions (GHGEs) from mineral fertilizer application in agriculture have been well studied, the effect of organic amendment (OA) application rate on GHGEs is not yet understood. Application of multiple OAs can improve different properties that control soil fertility, including nutrient availability, aggregate stability, and water-holding capacity. We measured nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4) fluxes at an organic farm in Goleta, CA in order to understand how OA application rate affects GHGEs and crop yield from agricultural soils. Based on management practices in the region, we asked farm managers to establish high compost (HC) and low compost (LC) treatments during the growing season of an annual crop (18.2 and 9.13 Mg ha-1, respectively), and we measured GHGEs in beds and furrows using static chambers. Organic fertilizer (672 kg ha-1) was applied equally to HC and LC beds six weeks after compost application. Overall, emissions of N2O and CO2 were higher in HC than LC, but yield-scaled emissions were higher in LC. Importantly, treatment differences in both N2O and CO2 emissions were not apparent until after the mid-season fertilizer application. Net CH4 uptake was higher in HC than LC in the furrows, but there was no difference in the beds. Our data suggest that high compost application rates likely increased soil organic matter (SOM) mineralization, soil water content, and nitrification and denitrification rates in HC relative to LC, which led to higher N2O emissions during the growing season. Fertilization primed SOM decomposition and increased soil respiration, which led to increased CO2 emissions. Our results suggest that improved management of application rate and timing during use of multiple OAs could reduce GHGEs while maintaining high crop yield. Understanding the mechanisms by which OA application rates alter the balance between GHGEs and yield is an important step toward reducing agriculture's contribution to climate change through
张戈
2015-01-01
We study the issue raised by Reference [3]. Under appropriate assumptions and other smoothness conditions, and with a simpler method, we prove the asymptotic existence of solutions to the quasi-likelihood equations in the quasi-likelihood nonlinear model, and establish the rate at which the solution converges to the true value.
Marta Ferrater
2015-11-01
Seismic hazard assessment of strike-slip faults is based partly on the identification and mapping of landforms laterally offset by fault activity. Characterizing these features along slow-moving faults is challenging compared with studies of rapidly slipping faults. We propose a methodology for scoring fault offsets based on subjective and objective qualities. We apply this methodology to the Alhama de Murcia fault (SE Iberian Peninsula), where we identify 138 offset features mapped on a high-resolution (0.5 × 0.5 m pixel size) Digital Elevation Model (DEM). The amount of offset, the uncertainty of the measurement, the subjective and objective qualities, and the parameters that affect objective quality are independent variables, suggesting that our scoring approach is sound. Based on the offset measurements and qualifications, we calculate the Cumulative Offset Probability Density (COPD) for the entire fault and for each fault segment. The COPDs of the segments differ from each other. Tentative interpretation of the COPDs implies that the slip rate varies from one segment to another (assuming that channels with the same amount of offset were incised synchronously). We compare the COPD with climate proxy curves (aligned using the very limited age control) to test whether entrenchment events coincide with climatic changes. Channel incision along one of the traces in the Lorca-Totana segment may be related to transitions from glacial to interglacial periods.
Khuziakhmetov, Anvar N.; Amin, Azimi Sayed
2015-01-01
The aim of the present research is to study the application rate of learning technologies in KFU and VIIU electronic courses to improve students' self-regulation. To this end, and building on Kitsantas' research, the rate of use of effective learning technologies for students' self-regulation in electronic courses at these two…
Optimal nitrogen fertilizer rate for corn can vary substantially within and among fields. Current N management practices do not address this variability. Crop reflectance sensors offer the potential to diagnose crop N need and control N application rates at a fine spatial scale. Our objective was...
Yen, Hong-Wei; Liu, Yi Xian
2014-08-01
The high cost of microbial oils produced from oleaginous microorganisms is the major obstacle to commercial production. In this study, the operation of an airlift bioreactor is examined for the cultivation of the oleaginous yeast Rhodotorula glutinis, owing to its low process cost. The results suggest that a high aeration rate can enhance cell growth. The maximum biomass concentration of 25.40 g/L was observed in the batch with a 2.0 vvm aeration rate. In addition, a higher aeration rate of 2.5 vvm achieved the maximum growth rate of 0.46 g/L·h, about twice the 0.22 g/L·h obtained in an agitation tank. However, an increase in tank pressure instead of aeration rate did not enhance cell growth. The airlift bioreactor operation described in this work has the advantages of simple operation and low energy consumption, making it suitable for the accumulation of microbial oils.
Fei LENG
2008-09-01
This paper discusses the seismic analysis of concrete dams with consideration of material nonlinearity. Based on a consistent rate-dependent model and two thermodynamics-based models, two thermodynamics-based rate-dependent constitutive models were developed that account for the influence of the strain rate. They can describe the dynamic behavior of concrete and be applied to nonlinear seismic analysis of concrete dams that takes into account the rate sensitivity of concrete. With the two models, a nonlinear analysis of the seismic response of the Koyna Gravity Dam and the Dagangshan Arch Dam was conducted. The results were compared with those of a linear elastic model and two rate-independent thermodynamics-based constitutive models, and the influences of the constitutive model and strain rate on the seismic response of concrete dams were discussed. The analysis shows that, during seismic response, the tensile stress is the controlling stress in the design and seismic safety evaluation of concrete dams. In the different models, the plastic strain and plastic strain rate of the dams show a similar distribution. When the influence of the strain rate is considered, the maximum plastic strain and plastic strain rate decrease.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
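The statement of the law itself is easy to check empirically. A small simulation sketch (the sample size and thresholds are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF sampling of a Pareto law with tail exponent 1:
# if U ~ Uniform(0, 1), then 1/U satisfies P(size > S) = 1/S for S >= 1,
# which is exactly Zipf's law for firm sizes.
sizes = 1.0 / rng.random(100_000)

def firms_larger_than(s):
    """Count firms with size greater than s."""
    return int(np.sum(sizes > s))

# Doubling the threshold should roughly halve the count,
# the defining signature of Zipf's law.
print(firms_larger_than(10.0), firms_larger_than(20.0))
```

The paper's contribution is explaining why this particular tail exponent emerges: Gibrat's proportional growth plus a balance between incumbent growth (net of demise) and investment in entrants.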
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
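The background-versus-background-plus-source comparison at the heart of MLE can be illustrated with Poisson statistics on a toy 1D pixel grid. The Gaussian "PSF", background level and source amplitude below are invented for illustration; the real tool fits per-observation Chandra PSFs and background models through Sherpa:

```python
import numpy as np

def loglike_poisson(counts, model):
    """Poisson log-likelihood, dropping the data-only log(counts!) term."""
    return np.sum(counts * np.log(model) - model)

def source_significance(counts, bkg, psf, amp):
    """Log-likelihood ratio of (background + amp * psf) vs background only.

    Larger values favour the presence of a real source at the PSF position.
    """
    return (loglike_poisson(counts, bkg + amp * psf)
            - loglike_poisson(counts, bkg))

rng = np.random.default_rng(1)
x = np.arange(-5, 6)
psf = np.exp(-x ** 2 / 2.0)
psf /= psf.sum()                       # unit-normalised toy PSF
bkg = np.full(x.size, 3.0)             # flat background, counts/pixel

counts_with_source = rng.poisson(bkg + 40.0 * psf)
counts_background = rng.poisson(bkg)

print(source_significance(counts_with_source, bkg, psf, 40.0) >
      source_significance(counts_background, bkg, psf, 40.0))
```

Thresholding this ratio is what separates real sources from the false positives that aggressive low-count detection necessarily produces.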
Pellizzon, A. Cassio A.; Miziara, Daniela; Lima, Flavia Pedroso de; Miziara, Miguel
2014-07-01
Purpose: Advances in technology and the commercial production of Leipzig applicators have allowed high-dose-rate afterloading brachytherapy (HDR-BT) to address a number of the challenges associated with the delivery of superficial radiation to treat localized non-melanoma skin cancer (NMSC). We reviewed our single-institution experience of treating NMSC with HDR-BT. Methods: Data were collected retrospectively from patients attending the Radiation Oncology Department at the AV Carvalho Institute, Sao Paulo, Brazil. HDR-BT was delivered using the stepping-source HDR 192Ir Microselectron (Nucletron BV). The planning target volume consisted of the macroscopic lesion plus a 5 mm to 10 mm margin. The treatment depth was 5 mm for smaller (<2.0 cm) tumors and 10 to 15 mm for larger lesions. Results: Thirteen patients were treated with HDR-BT from June 2007 to June 2013. The median age and follow-up time were 72 (range, 38-90) years and 36 (range, 7-73) months, respectively. There was a predominance of males (61.5%) and of patients referred for adjuvant treatment because of positive surgical margins or because they had undergone only an excisional biopsy without safety margins (61.5%). Six (46.2%) patients presented with squamous cell carcinoma and 7 (53.8%) with basal cell carcinoma. The median tumor size was 20 (range, 5-42) mm. Patients were treated with a median total dose of 40 Gy (range, 20-60), given in 10 (range, 2-15) fractions, delivered daily or twice a week. All patients responded very well to treatment, and only one patient has failed locally so far, 38 months after the end of irradiation. The crude and actuarial 3-year local control rates were 100% and 80%, respectively. Moist desquamation, RTOG grade 2, was observed in 4 (30.8%) patients. Severe late complications, radiation-induced dyspigmentation, occurred in 2 patients, one of whom also showed telangiectasia in the irradiated area. The cosmetic result was considered good in 84% (11/13) of patients.
Hu Yang
A Soil-Plant Analysis Development (SPAD) chlorophyll meter can be used as a simple tool for evaluating leaf N concentration and investigating the combined effects of nitrogen rate and leaf age on N distribution. We conducted experiments in a paddy field over two consecutive years (2008-2009) using rice plants treated with six different N application levels. The N distribution pattern was determined from SPAD readings based on the temporal dynamics of N concentrations in individual leaves. At 62 days after transplantation (DAT) in 2008 and DAT 60 in 2009, leaf SPAD readings increased from the upper to the lower canopy in rice that received N levels of 150 to 375 kg ha-1. The differences in SPAD readings between upper and lower leaves were larger under higher N application rates. However, as the plants grew, this atypical distribution of SPAD readings in canopy leaves quickly reversed to the general order. In addition, the temporal dynamics of leaf SPAD readings (N concentrations) were fitted with a piecewise function. In our model, changes in leaf SPAD readings were divided into three stages: growth, functioning, and senescence periods. The leaf growth period lasted approximately 6 days, and its cumulative growing days were not affected by N application rate. The leaf functioning period was characterized by a relatively stable SPAD reading related to N application rate, and its cumulative growing days were extended with increasing N application rate. A quadratic equation was used to describe the relationship between SPAD readings and leaf age during the leaf senescence period. The rate of decrease in SPAD readings increased with leaf age, but was slowed by N application. As leaves in the lower canopy were physiologically older than leaves in the upper canopy, the decrease in SPAD readings was faster in the lower leaves.
Gervais, V.
2004-11-01
The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilaterality constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm adapted to the unilateral constraint used to solve the schemes is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy for solving the linear system with an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which one variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Yulin LIAO; Xiangmin RONG; Shengxian ZHENG; Qiang LIU; Meirong FAN; Jianwei PENG; Guixian XIE
2009-01-01
Radishes (Raphanus sativus L.) were grown in plastic pots in a screenhouse to investigate the influence of nitrogen fertilizer application rate (NFAR) on yield, nitrate content, nitrate reductase (NR) activity, nutritional quality, and nitrogen recovery efficiency (NRE) at the commercial mature stage. Five N-rate treatments, 0.644, 0.819, 0.995, 1.170, and 1.346 g·pot-1, were set up in the screenhouse pot experiments, and nitrogen fertilizer (unlabeled N and 15N-labeled fertilizer) was applied as basal dressing and topdressing, respectively. The results indicated that the fresh and dry weight yields of radish increased with increasing NFAR over the range 0.099 to 0.180 g N·kg-1 soil and decreased at 0.207 g N·kg-1 soil; accordingly, there was a significant quadratic relationship between the fresh and dry weight yields of radish and the NFAR. At high additions of urea-N fertilizer, nitrate accumulated in the fleshy roots and leaves owing to the decline in NR activity. NR increased most rapidly from 0.644 to 0.819 g N·pot-1; the highest NR activity occurred at 0.819 g N·pot-1 and the lowest at 1.346 g N·pot-1. Soluble sugar and ascorbic acid initially increased to their highest values and then decreased, whereas crude fiber decreased rapidly with increasing NFAR. Total N uptake (TNU), N derived from fertilizer (Ndff), and N derived from soil (Ndfs) in radish increased, except that Ndfs decreased slightly at the rate of 0.207 g N·kg-1 soil. The ratio of Ndff to TNU increased, but the ratio of Ndfs to TNU, as well as the NRE of N fertilizer, decreased with increasing NFAR. Therefore, an appropriate NFAR should be recommended for improving the yield and nutritional quality of radish and the NRE of N fertilizer.
Flatt, Andrew A; Esco, Michael R
2013-12-18
The purpose of this investigation was to cross-validate the ithlete™ heart rate variability smartphone application against an electrocardiograph for determining the ultra-short-term root mean square of successive R-R intervals. The root mean square of successive R-R intervals was determined simultaneously via electrocardiograph and ithlete™ at rest in twenty-five healthy participants. There were no significant differences between the electrocardiograph- and ithlete™-derived root mean square of successive R-R interval values (p > 0.05), and the correlation was near perfect (r = 0.99, p < 0.001). In addition, the ithlete™ showed a standard error of the estimate of 1.47, and a Bland-Altman plot showed that the limits of agreement ranged from 2.57 below to 2.63 above the constant error of -0.03. In conclusion, the ithlete™ appeared to provide a suitably accurate measure of the root mean square of successive R-R intervals when compared with the electrocardiograph measures obtained in the laboratory within the current sample of healthy adult participants. The current study lays groundwork for future research determining the efficacy of ithlete™ for reflecting athletic training status over a chronic conditioning period.
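The statistic validated above, the root mean square of successive R-R interval differences (RMSSD), has a simple closed form; a minimal sketch, where the function name and sample intervals are illustrative and not data from the study:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: five R-R intervals in milliseconds
print(round(rmssd([800, 810, 790, 805, 800]), 2))  # 13.69
```

Ultra-short-term measurements such as the one studied here apply this same formula to a recording of roughly a minute rather than the conventional five minutes.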
Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.
Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping
2013-06-21
In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines, and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem rather than by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement is treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly when this constraint is imposed during the rotation, which helps speed up the initialization of attitude parameters under dynamic circumstances. This paper studies only the applications of this new constraint to land vehicles. The impacts of measurement errors on the effect of the new constraint are assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
A “twisted” microfluidic mixer suitable for a wide range of flow rate applications
Sivashankar, Shilpa; Agambayev, Sumeyra; Mashraei, Yousof; Li, Er Qiang; Thoroddsen, Sigurdur T.; Salama, Khaled Nabil
2016-01-01
This paper proposes a new “twisted” 3D microfluidic mixer fabricated by a laser writing/microfabrication technique. Effective and efficient mixing using the twisted micromixers can be obtained by combining two general chaotic mixing mechanisms: splitting/recombining and chaotic advection. The lamination of mixer units provides the splitting and recombination mechanism when the quadrant of circles is arranged in a two-layered serial arrangement of mixing units. The overall 3D path of the microchannel introduces the advection. An experimental investigation using chemical solutions revealed that these novel 3D passive microfluidic mixers were stable and could be operated at a wide range of flow rates. This micromixer finds application in the manipulation of tiny volumes of liquids that are crucial in diagnostics. The mixing performance was evaluated by dye visualization, and using a pH test that determined the chemical reaction of the solutions. A comparison of the tornado-mixer with this twisted micromixer was made to evaluate the efficiency of mixing. The efficiency of mixing was calculated within the channel by acquiring intensities using ImageJ software. Results suggested that efficient mixing can be obtained when more than 3 units were consecutively placed. The geometry of the device, which has a length of 30 mm, enables the device to be integrated with micro total analysis systems and other lab-on-chip devices. PMID:27453767
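The intensity-based evaluation of mixing efficiency mentioned above is commonly computed from the standard deviation of pixel intensities across the channel cross-section. A minimal sketch of one standard formulation follows; the paper's exact index may differ, and the function name and sample data are illustrative:

```python
import statistics

def mixing_index(intensities):
    """1 - (std dev of normalized pixel intensities / sigma_max).

    Intensities are normalized to [0, 1]; a perfectly unmixed cross-section
    of two equal streams has sigma_max = 0.5, so the index runs from
    0 (unmixed) to 1 (fully mixed).
    """
    sigma = statistics.pstdev(intensities)
    return 1.0 - sigma / 0.5

# Fully unmixed: half the pixels at 0, half at 1
print(mixing_index([0, 0, 1, 1]))          # 0.0
# Perfectly mixed: uniform intermediate intensity
print(mixing_index([0.5, 0.5, 0.5, 0.5]))  # 1.0
```

In practice the intensities would come from a grayscale line profile exported from ImageJ, as in the dye-visualization experiments described above.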
A "twisted" microfluidic mixer suitable for a wide range of flow rate applications.
Sivashankar, Shilpa; Agambayev, Sumeyra; Mashraei, Yousof; Li, Er Qiang; Thoroddsen, Sigurdur T; Salama, Khaled Nabil
2016-05-01
This paper proposes a new "twisted" 3D microfluidic mixer fabricated by a laser writing/microfabrication technique. Effective and efficient mixing using the twisted micromixers can be obtained by combining two general chaotic mixing mechanisms: splitting/recombining and chaotic advection. The lamination of mixer units provides the splitting and recombination mechanism when the quadrant of circles is arranged in a two-layered serial arrangement of mixing units. The overall 3D path of the microchannel introduces the advection. An experimental investigation using chemical solutions revealed that these novel 3D passive microfluidic mixers were stable and could be operated at a wide range of flow rates. This micromixer finds application in the manipulation of tiny volumes of liquids that are crucial in diagnostics. The mixing performance was evaluated by dye visualization, and using a pH test that determined the chemical reaction of the solutions. A comparison of the tornado-mixer with this twisted micromixer was made to evaluate the efficiency of mixing. The efficiency of mixing was calculated within the channel by acquiring intensities using ImageJ software. Results suggested that efficient mixing can be obtained when more than 3 units were consecutively placed. The geometry of the device, which has a length of 30 mm, enables the device to be integrated with micro total analysis systems and other lab-on-chip devices.
High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications
Yates, G.J.; King, N.S.P.
1994-08-01
This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.
Mayeur, Jason R.; Mourad, Hashem M.; Luscher, Darby J.; Hunter, Abigail; Kenamond, Mark A.
2016-05-01
This paper details a numerical implementation of a single crystal plasticity model with dislocation transport for high strain rate applications. Our primary motivation for developing the model is to study the influence of dislocation transport and conservation on the mesoscale response of metallic crystals under extreme thermo-mechanical loading conditions (e.g. shocks). To this end we have developed a single crystal plasticity theory (Luscher et al (2015)) that incorporates finite deformation kinematics, internal stress fields caused by the presence of geometrically necessary dislocation gradients, advection equations to model dislocation density transport and conservation, and constitutive equations appropriate for shock loading (equation of state, drag-limited dislocation velocity, etc). In the following, we outline a coupled finite element-finite volume framework for implementing the model physics, and demonstrate its capabilities in simulating the response of a [1 0 0] copper single crystal during a plate impact test. Additionally, we explore the effect of varying certain model parameters (e.g. mesh density, finite volume update scheme) on the simulation results. Our results demonstrate that the model performs as intended and establishes a baseline of understanding that can be leveraged as we extend the model to incorporate additional and/or refined physics and move toward a multi-dimensional implementation.
A “twisted” microfluidic mixer suitable for a wide range of flow rate applications
Sivashankar, Shilpa
2016-06-27
This paper proposes a new “twisted” 3D microfluidic mixer fabricated by a laser writing/microfabrication technique. Effective and efficient mixing using the twisted micromixers can be obtained by combining two general chaotic mixing mechanisms: splitting/recombining and chaotic advection. The lamination of mixer units provides the splitting and recombination mechanism when the quadrant of circles is arranged in a two-layered serial arrangement of mixing units. The overall 3D path of the microchannel introduces the advection. An experimental investigation using chemical solutions revealed that these novel 3D passive microfluidic mixers were stable and could be operated at a wide range of flow rates. This micromixer finds application in the manipulation of tiny volumes of liquids that are crucial in diagnostics. The mixing performance was evaluated by dye visualization, and using a pH test that determined the chemical reaction of the solutions. A comparison of the tornado-mixer with this twisted micromixer was made to evaluate the efficiency of mixing. The efficiency of mixing was calculated within the channel by acquiring intensities using ImageJ software. Results suggested that efficient mixing can be obtained when more than 3 units were consecutively placed. The geometry of the device, which has a length of 30 mm, enables the device to be integrated with micro total analysis systems and other lab-on-chip devices.
Grigoriev, I. S.; Grigoriev, K. G.
2003-05-01
The necessary first-order conditions of strong local optimality (conditions of the maximum principle) are considered for problems of optimal control over a set of dynamic systems. To derive them, a method is suggested based on the Lagrange principle of removing constraints in problems on a conditional extremum in a functional space. An algorithm of conversion from the problem of optimal control of an aggregate of dynamic systems to a multipoint boundary value problem is suggested for a set of systems of ordinary differential equations with the complete set of conditions necessary for its solution. An example of application of the proposed methods and algorithm is considered: the solution of the problem of constructing the trajectories of a spacecraft flight at a constant altitude above a preset area (or above a preset point) of a planet's surface in a vacuum (for a planet with an atmosphere, the flight takes place above the atmosphere). The spacecraft is launched from a certain circular orbit of a planet's satellite. This orbit is to be determined (optimized). Then the satellite is injected into the desired trajectory segment (or desired point) of a flyby above the planet's surface at a specified altitude. After the flyby the satellite is returned to the initial circular orbit. A method is proposed for correctly accounting for constraints imposed on overload (mixed restrictions of the inequality type) and on the distance from the planet center: extended (non-pointlike) intermediate (phase) restrictions of the equality type.
Calculation of Maximum Waste Heat and Recovery Rate of Liquid and Gas Fuels%液气燃料烟气的最大余热量与节能率计算研究
丛永杰
2016-01-01
The consumption of various liquid oil and gas fuels is growing rapidly in China's energy structure. The flue gas discharged after these fuels are combusted is generally at 160°C~180°C. This energy can be used as secondary energy, even though its grade is low. Much of the hydrogen in liquid and gas fuels ends up as water vapour, which is a main ingredient of the flue gas. In this paper, the waste heat quantity and recovery rate of the flue gas of 0# light diesel oil and natural gas are calculated as the gas is cooled from 180°C to 25°C at 1 atm. In the flue gas of 0# light diesel, the vapour's heat accounts for about 55.08% of the residual heat; in the flue gas of natural gas, the proportion is about 79.41%. Moreover, the vapour's latent heat makes up about 3/4 of the vapour's heat. Therefore, recovering the latent heat of the vapour is of great significance for recovering low-temperature waste heat. Effective recovery constitutes secondary utilization of primary energy and accords with the national energy-conservation and emission-reduction policies of the 13th Five-Year Plan period.
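The dominance of the vapour's latent heat can be illustrated with a back-of-the-envelope energy balance over the cooling range quoted above; all numerical values below are round illustrative figures, not the paper's data:

```python
# Hypothetical per-kg-of-dry-flue-gas figures, for illustration only
cp_gas = 1.05    # kJ/(kg*K), mean specific heat of dry flue gas
cp_vap = 1.9     # kJ/(kg*K), water vapour (superheated, low pressure)
h_fg   = 2442.0  # kJ/kg, latent heat of vaporization of water near 25 C
w      = 0.12    # kg water vapour per kg dry flue gas (assumed)

t_in, t_out = 180.0, 25.0  # cooling range from the paper, deg C

sensible = (cp_gas + w * cp_vap) * (t_in - t_out)  # heat from cooling the gas
latent   = w * h_fg                                # heat from condensing vapour
total    = sensible + latent

print(f"latent share of recoverable heat: {latent / total:.1%}")
```

With these assumed numbers the latent term alone is roughly 60% of the recoverable heat, consistent in spirit with the 55-79% vapour-heat proportions reported above.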
The lone inventor: low success rates and common errors associated with pro-se patent applications.
Kate S Gaudry
A pro-se patent applicant is an inventor who chooses to represent himself while pursuing ("prosecuting") a patent application. To the author's knowledge, this paper is the first empirical study addressing how applications filed by pro-se inventors fare compared to applications in which inventors were represented by patent attorneys or agents. The prosecution history of 500 patent applications filed at the United States Patent and Trademark Office were analyzed: inventors were represented by a patent professional for 250 of the applications ("represented applications") but not in the other 250 ("pro-se applications"). 76% of the pro-se applications became abandoned (not issuing as a patent), as compared to 35% of the represented applications. Further, among applications that issued as patents, pro-se patents' claims appear to be narrower and therefore of less value than claims in the represented patent set. Case-specific data suggests that a substantial portion of pro-se applicants unintentionally abandon their applications, terminate the examination process relatively early, and/or fail to take advantage of interview opportunities that may resolve issues stalling allowance of the application.
The lone inventor: low success rates and common errors associated with pro-se patent applications.
Gaudry, Kate S
2012-01-01
A pro-se patent applicant is an inventor who chooses to represent himself while pursuing ("prosecuting") a patent application. To the author's knowledge, this paper is the first empirical study addressing how applications filed by pro-se inventors fare compared to applications in which inventors were represented by patent attorneys or agents. The prosecution history of 500 patent applications filed at the United States Patent and Trademark Office were analyzed: inventors were represented by a patent professional for 250 of the applications ("represented applications") but not in the other 250 ("pro-se applications"). 76% of the pro-se applications became abandoned (not issuing as a patent), as compared to 35% of the represented applications. Further, among applications that issued as patents, pro-se patents' claims appear to be narrower and therefore of less value than claims in the represented patent set. Case-specific data suggests that a substantial portion of pro-se applicants unintentionally abandon their applications, terminate the examination process relatively early, and/or fail to take advantage of interview opportunities that may resolve issues stalling allowance of the application.
An Application of Durkheim's Theory of Suicide to Prison Suicide Rates in the United States
Tartaro, Christine; Lester, David
2005-01-01
E. Durkheim (1897) suggested that the societal rate of suicide might be explained by societal factors, such as marriage, divorce, and birth rates. The current study examined male prison suicide rates and suicide rates for men in the total population in the United States and found that variables based on Durkheim's theory of suicide explained…
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Emission of volatile organic compounds as affected by rate of application of cattle manure
Beef cattle manure can serve as a valuable nutrient source for crop production. However, emissions of volatile organic compounds (VOCs) following land application may pose a potential off-site odor concern. This study was conducted to evaluate the effects of land application method, N- application...
魏新华; 蒋杉; 张进敏; 李青林
2013-01-01
In order to study the application rate control characteristics of a blended-pulse variable-rate application system, such a system was constructed on a commercial boom sprayer by integrating high-speed direct-acting solenoid valves (type 6013), hollow-cone nozzles (type TR80-05), a pilot-operated proportional relief valve (type DBEE6-1X/50), and a self-designed PWM-based variable-rate application controller. The influences of the diaphragm pump's input shaft rotating speed, nozzle position, spray pressure, and PWM signal frequency and duty cycle on the nozzle's spray flow rate were tested. Computational models of the nozzle's spray flow rate under spray pressures of 0.2 MPa, 0.3 MPa, and 0.4 MPa were fitted by single-factor linear regression, and the corresponding application rate control models were deduced. The application rate control characteristics of the blended-pulse variable-rate application system were also tested experimentally. Test results show that the influences of nozzle position and PWM signal frequency on the nozzle's spray flow rate are very small, while spray pressure and PWM signal duty cycle have crucial influences; the flow regulation ratio is about 10:1 with the combination of solenoid valve and nozzle used in the system; the control error of spray flow rate is less than ±4% when the target spray flow rate is greater than 0.3 L/min; and the control error of application rate is less than ±6%.
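The duty-cycle-based flow control described above amounts to a fitted linear model and its inverse; a minimal sketch at one fixed spray pressure, where the coefficients k and q0 are hypothetical placeholders and not the paper's regression results:

```python
def nozzle_flow_lpm(duty_cycle, k=1.2, q0=0.05):
    """Fitted linear flow model Q = k * duty + q0 (L/min) at fixed pressure.

    k and q0 are illustrative coefficients only; the paper fits its own
    models at 0.2, 0.3, and 0.4 MPa by single-factor linear regression.
    """
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in [0, 1]")
    return k * duty_cycle + q0

def duty_for_rate(target_lpm, k=1.2, q0=0.05):
    """Invert the model: PWM duty cycle needed for a target flow, clamped."""
    duty = (target_lpm - q0) / k
    return min(max(duty, 0.0), 1.0)

d = duty_for_rate(0.65)
print(round(nozzle_flow_lpm(d), 2))  # recovers the 0.65 L/min target
```

A controller of this kind would select the model for the current spray pressure and then command the solenoid valves with the computed duty cycle.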
Healthy adults maximum oxygen uptake prediction from a six minute walking test
Nury Nusdwinuringtyas
2011-08-01
Background: A parameter is needed in medical services to determine functional capacity. This study aimed to produce a functional capacity parameter for Indonesian adults, expressed as maximum oxygen uptake (VO2 max). Methods: This study used 123 healthy Indonesian adult subjects (58 males and 65 females) with a sedentary lifestyle, in a cross-sectional design. Results: Using distance, body height, body weight, sex, age, maximum heart rate during a six-minute walking test, and lung capacity (FEV and FVC), the study revealed a good correlation (except for body weight) with maximum VO2. Three new formulas were proposed, consisting of eight, six, and five variables, respectively. Tests of the new formulas gave maximum VO2 results consistent with the gold-standard maximum VO2 measured using the Cosmed® C-Pex. Conclusion: The Nury formula is an appropriate predictor of maximum oxygen uptake for healthy Indonesian adults, as it was designed using Indonesian (Mongoloid) subjects, in contrast to Cahalin's formula (Caucasian). The Nury formula with five variables is the most applicable because it requires neither measurement tools nor specific competency. (Med J Indones 2011;20:195-200) Keywords: maximum VO2, Nury's formula, six minute walking test
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Application of validated radiation model in flame spread rate over solid fuels
Ivisic, Ivan
In this thesis the radiative effects of opposed-flow flames spreading over solid fuels are discussed, as well as the coupling of a radiation program and a CFD program. The coupled programs are used to show the radiative heat transfer mechanisms and how they affect the flame globally. A radiation program is used to calculate radiation properties of the flame, such as the heat flux distribution, net heat flow, and mean Planck absorptivity constant for a particular flame. The radiation program imports the temperature fields from a CFD program. Trends in the mean Planck absorptivity constant with varying ambient conditions are analyzed, and an application of the radiation program to simulate a physical radiometer is demonstrated for a test case. The CFD program can import radiation results to help improve the accuracy of the simulation. A script was written to automate the update process and produce more accurate results for flame simulations. Flux distributions, stability, and relative error are analyzed to show that the coupled programs produce results within an acceptable error. Trends in error and stability are discussed, and stable regions with low enough error are determined. The coupled programs are used to gather data on flame spread rate and to find how neglecting certain radiation mechanisms changes flame structure and properties. The case with no radiation produced the hottest, fastest-moving flame, while the case with no gas-to-surface radiation produced the coolest flame. Including the gas-to-surface radiation produced a slightly hotter, faster-moving flame. This trend was studied across different opposed-flow velocities and sample widths. The radiative heat fluxes are analyzed for these cases as well. All the flame simulations in this thesis were run for microgravity, 21% oxygen, and PMMA fuel.
Shwetha, Bondel [Department of Radiation Physics, Kidwai, Memorial Institute of Oncology, Bangalore (India); Ravikumar, Manickam, E-mail: drravikumarm@gmail.com [Department of Radiation Physics, Kidwai, Memorial Institute of Oncology, Bangalore (India); Supe, Sanjay S.; Sathiyan, Saminathan [Department of Radiation Physics, Kidwai, Memorial Institute of Oncology, Bangalore (India); Lokesh, Vishwanath [Department of Radiotherapy, Kidwai, Memorial Institute of Oncology, Bangalore (India); Keshava, Subbarao L. [Department of Radiation Physics, Kidwai, Memorial Institute of Oncology, Bangalore (India)
2012-04-01
Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of the 2 treatment planning systems from Varian medical systems, namely ABACUS and BrachyVision. The dose distribution of Ir-192 source generated with a single dwell position was compared using ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and reference points RU, LU, RM, LM, bladder, and rectum. For single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient case there was approximately 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4-1.5%. For bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. It is seen that these discrepancies are caused by the differences in the calculation methodology adopted by the 2 systems.
Shwetha, Bondel; Ravikumar, Manickam; Supe, Sanjay S; Sathiyan, Saminathan; Lokesh, Vishwanath; Keshava, Subbarao L
2012-01-01
Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of the 2 treatment planning systems from Varian medical systems, namely ABACUS and BrachyVision. The dose distribution of Ir-192 source generated with a single dwell position was compared using ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and reference points RU, LU, RM, LM, bladder, and rectum. For single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient case there was approximately 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4-1.5%. For bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. It is seen that these discrepancies are caused by the differences in the calculation methodology adopted by the 2 systems.
Application of a Multi-Agent System (MAS) to Rational Credit Rating
无
2006-01-01
A Multi-Agent System (MAS) is a promising approach to building complex systems. This paper introduces research on the Inner-Enterprise Credit Rating MAS (IECRMAS). To raise rating accuracy, we consider not only the rating target's information but also the evaluators' feature information, and we propose a rational rating-group formation algorithm based on an anti-bias measurement of the group. We also propose the rational rating individual, which consists of the evaluator and an assistant rating agent. A rational group formation protocol is designed to coordinate autonomous agents to perform the rating job.
Peart Daniel J.
2015-08-01
Study aim: the aim of this study was to compare the accuracy of a contactless photoplethysmographic mobile application (CPA) for recording post-exercise heart rate and estimating maximal aerobic capacity after the Queen's College Step Test. It was hypothesised that the CPA may present a cost-effective heart rate measurement tool for educators and practitioners with limited access to specialised laboratory equipment.
Seyed Mohsen Hosseini Daghigh
2012-03-01
Introduction: Intraluminal brachytherapy is one of the important methods of esophageal cancer treatment. The effect of applicator attenuation is not considered in the dose calculation method released by AAPM TG-43. In this study, the effect of a High-Dose-Rate (HDR) brachytherapy esophageal applicator on dose distribution was surveyed. Materials and Methods: A cylindrical PMMA phantom was built to accept various sizes of esophageal applicators. EDR2 films were placed at 33 mm from the Ir-192 source and irradiated with 1.5 Gy after planning with the treatment planning system for all applicators. Results: The film dosimetry results at the reference point for the 6, 8, 10, and 20 mm applicators were 1.54, 1.53, 1.48, and 1.50 Gy, respectively. The difference between the measured and treatment planning system results was 0.023 Gy (
44 CFR 208.12 - Maximum Pay Rate Table.
2010-10-01
... reimbursement and Backfill, for the System Member's actual compensation or the actual compensation of the... OF HOMELAND SECURITY DISASTER ASSISTANCE NATIONAL URBAN SEARCH AND RESCUE RESPONSE SYSTEM General...
Entrainment and maximum vapour flow rate of trays
Van Sinderen, AH; Wijn, EF; Zanting, RWJ
This is a report on free entrainment measurements in a small (0.20 m x 0.20 m) air-water column. An adjustable weir controlled the liquid height on the test tray. Several sieve and valve trays were studied. The results were interpreted with a two- or three-layer model of the two-phase mixture on the
Nitric-glycolic flowsheet testing for maximum hydrogen generation rate
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is preparing to implement a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.
MAXIMUM PRODUCTION OF TRANSMISSION MESSAGES RATE FOR SERVICE DISCOVERY PROTOCOLS
Intisar Al-Mejibli
2011-12-01
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue poses serious problems for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages and suggests minimum queue sizes for the routers, to manage the traffic and minimise the number of dropped messages caused by congestion, queue overflow, or both. The algorithm was applied to the Universal Plug and Play (UPnP) protocol using the ns2 simulator and tested with the routers connected in two configurations, centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show an improvement in the number of dropped messages among the routers.
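The quantities the algorithm reasons about can be illustrated with a back-of-envelope sketch (this is a hedged illustration, not the paper's algorithm; the message size, link bandwidth, and queue size below are made-up values):

```python
# Illustrative queueing arithmetic for spacing consecutive UDP messages.
# All parameter values are assumptions, not figures from the paper.

def min_period(msg_bits: float, bandwidth_bps: float) -> float:
    """Service time of one message; sending faster than this grows the queue."""
    return msg_bits / bandwidth_bps

def max_burst(queue_size: int) -> int:
    """Back-to-back messages a router can absorb without drops:
    `queue_size` waiting slots plus one message in service."""
    return queue_size + 1

msg = 4096 * 8            # 4 KiB message, in bits
link = 10e6               # 10 Mbit/s router link
print(f"minimum inter-message period: {min_period(msg, link) * 1e3:.2f} ms")
print(f"burst tolerated by an 8-slot queue: {max_burst(8)} messages")
```

Spacing messages by at least the per-message service time keeps the queue from building up; anything faster must be absorbed by the queue, which is why the paper tunes both the inter-message period and the router queue sizes.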
veteran athletes exercise at higher maximum heart rates than are ...
questionnaire, a full medical examination and a routine sECG. Thereafter ... activities than during stress testing in the laboratory (P < 0.01). ... After the risks and procedures involved ... for the first time in either rehabilitation or sporting activities. ...
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
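As a quick numerical check of the conjectured maximum force F_max = c^4/4G itself (a minimal sketch using CODATA constants):

```python
# Numerical value of the conjectured maximum force F_max = c^4 / (4G).
c = 299_792_458.0        # speed of light, m/s (exact)
G = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2 (CODATA)

F_max = c**4 / (4 * G)   # newtons
print(f"F_max ~ {F_max:.3e} N")   # on the order of 3e43 N
```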
Wollenberg, H A; Revzan, K L; Smith, A R
1994-01-01
We examined the applicability of radioelement data from the National Aerial Radiometric Reconnaissance, an element of the National Uranium Resource Evaluation, to estimate terrestrial gamma-ray absorbed dose rates, by comparing dose rates calculated from aeroradiometric surveys of uranium, thorium, and potassium concentrations with dose rates calculated from a radiogeologic data base and the distribution of lithologies in California. Gamma-ray dose rates increase generally from north to south following lithological trends, with low values of 25-30 nGy h-1 in the northernmost 1 x 2 degrees quadrangles between 41 and 42 degrees N to high values of 75-100 nGy h-1 in southeastern California. Lithologic-based estimates of mean dose rates in the quadrangles generally match those from aeroradiometric data, with statewide means of 63 and 60 nGy h-1, respectively. These are intermediate between a population-weighted global average of 51 nGy h-1 reported in 1982 by UNSCEAR and a weighted continental average of 70 nGy h-1, based on the global distribution of rock types. The concurrence of lithologically and aeroradiometrically determined dose rates in California, with its varied geology and topography encompassing settings representative of the continents, indicates that the National Aerial Radiometric Reconnaissance data are applicable to estimates of terrestrial absorbed dose rates from natural gamma emitters.
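The concentration-to-dose-rate conversion underlying such estimates can be sketched with widely quoted UNSCEAR-style coefficients (treat the exact coefficient values, and the crustal concentrations used below, as assumptions of this sketch):

```python
# Terrestrial gamma dose rate in air from K, U, Th concentrations.
# Coefficients (nGy/h per unit concentration) are widely quoted
# UNSCEAR-style values; treat them as an assumption here.
def terrestrial_dose_rate(k_pct: float, u_ppm: float, th_ppm: float) -> float:
    """Absorbed dose rate in air at 1 m above ground, nGy/h."""
    return 13.08 * k_pct + 5.67 * u_ppm + 2.49 * th_ppm

# Concentrations roughly typical of average crustal rock (illustrative).
print(f"{terrestrial_dose_rate(2.0, 3.0, 10.0):.1f} nGy/h")
```

For these illustrative concentrations the sketch lands in the 60-70 nGy/h range, consistent with the statewide means quoted in the abstract.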
Zhenxie YI; Pu WANG; Hongbin TAO; Hongfang ZHANG; Lixia SHEN
2008-01-01
To reduce nitrogen fertilizer (NF) loss and improve nitrogen use efficiency (NUE) in summer maize, the effects of different application rates of three types of NF (urea, coated urea and compound fertilizer) on the growth, development and NUE of summer maize (cultivars Zhengdan958 and Nongda108) were studied in 2004. The main findings were: (1) The yields of the two cultivars increased significantly with each increase in N application rate. The yield increase of summer maize treated with compound fertilizer was greater than that of maize treated with either of the other two fertilizers at the same application rate, although the differences among the three types of NF were not significant. (2) Grain number per ear of the two cultivars rose with increasing N application rate, while its relationship with the type of NF was very weak. The type of NF had a greater impact on 1000-grain weight, and a difference between cultivars was observed. (3) Leaf area index (LAI), dry matter weight and leaf chlorophyll content grew with increasing N application rate, and were improved more sharply by compound fertilizer or coated urea than by urea alone. (4) Compared to the results achieved with urea, the NUEs of summer maize treated with coated urea and compound fertilizer were higher, but the nitrogen harvest index was not improved. In addition, the NUEs of the three types of NF exhibited a genotypic difference between the summer maize cultivars.
The use of toxic baits to kill adult Aedes albopictus (Skuse) mosquitoes is a safe and potentially effective alternative to the use of synthetic chemical insecticides. This study was made to identify effective application rates for boric acid-sugar solution baits sprayed onto plant surfaces and to ...
LIN Xian-qing; ZHU De-feng; CHEN Hui-zhe; ZHANG Yu-ping
2009-01-01
The nitrogen uptake, yield and yield components of two super-high-yielding hybrid rice combinations, Guodao 6 and Eryou 7954, were investigated under different plant densities (15, 18, and 21 plants/m2) and nitrogen application rates (120, 150, 180, and 210 kg/hm2). The experiment was conducted on loam soil during 2004-2006 at the experimental farm of the China National Rice Research Institute in Hangzhou, China. In all three years, the two hybrid rice combinations clearly showed the highest yield at a plant density of 15 plants/m2 with a nitrogen application rate of 180 kg/hm2. Guodao 6 produced an average grain yield of 10 215.6 kg/hm2 across the three years, while the yield of Eryou 7954 was 9 633.0 kg/hm2. With fewer, larger plants per unit area, the two hybrid rice combinations produced more panicles per plant in all three years. The highest nitrogen uptake of the two combinations also occurred at a plant density of 15 plants/m2 with a nitrogen application rate of 180 kg/hm2; further increasing the nitrogen application rate was not advantageous for nitrogen uptake in super-high-yielding rice at the same plant density.
'Enzyme Test Bench': A biochemical application of the multi-rate modeling
Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.
2008-11-01
The technique allows for both the enzyme screening with regard to long-term stability and the choice of the optimal process temperature. The presented article gives a successful example of the application of multi-rate modeling, experimental design and parameter estimation within biochemical engineering. At the same time, it shows the limitations of the methods at the state of the art and addresses the current problems to the applied mathematics community.
Duley, Aaron R; Janelle, Christopher M; Coombes, Stephen A
2004-11-01
The cardiovascular system has been extensively measured in a variety of research and clinical domains. Despite technological and methodological advances in cardiovascular science, the analysis and evaluation of phasic changes in heart rate persists as a way to assess numerous psychological concomitants. Some researchers, however, have pointed to constraints on data analysis when evaluating cardiac activity indexed by heart rate or heart period. Thus, an off-line application toolkit for heart rate analysis is presented. The program, written with National Instruments' LabVIEW, incorporates a variety of tools for off-line extraction and analysis of heart rate data. Current methods and issues concerning heart rate analysis are highlighted, and how the toolkit provides a flexible environment to ameliorate common problems that typically lead to trial rejection is discussed. Source code for this program may be downloaded from the Psychonomic Society Web archive at www.psychonomic.org/archive/.
38 CFR 61.13 - Rating criteria for capital grant applications.
2010-07-01
... rehabilitation or construction of housing; (5) If applicable, administering a rental assistance program; (6... effect(s); and (3) Establishing usefulness as a model for other projects. (g) Leveraging. VA will award... the project at the time of application. (h) Cost-effectiveness. VA will award up to 100 points...
A simulation of the economic and environmental impact of variable rate nitrogen application
Pedersen, Søren Marcus; Pedersen, Jørgen Lindgaard
2003-01-01
on the field. However, findings from this study also indicate that changing climatic conditions have a significant impact on the yield response to variable nitrogen application. The study also establishes that site-specific N application seems to have a positive but small impact on nitrate leaching in cereals....
Rong Wang
2016-01-01
This paper investigated the effect of the application of sewage sludge on the growth rates and the absorption of Pb and As in potted water spinach. Our results indicated that application of sewage sludge promoted vegetable growth, and the dry weight of water spinach reached a maximal value (4.38 ± 0.82 g) upon 8% sludge application. We also found that the dry weights of water spinach after treatment were all greater than those of the control systems (CK). Treatment with sludge promoted the absorption of Pb and As in water spinach, with a significant (p < 0.05) increase of absorbed Pb at treatment concentrations above 10%, and a peak absorption of As at 8%. Finally, we found that concentrations of Pb and As were higher in rhizosphere-attached soil than in free pot soil.
刘祥臣; 乔利; 唐元才; 刘春增; 卢兆成; 李本银; 丰大清; 赵海英
2012-01-01
Objective: To explore the effect of different one-time N application rates on rice growth and yield under film mulching cultivation. Method: The high-quality rice "Liang-you 6326" was used as plant material. Five nitrogen fertilizer rates (0 (CK), 75, 150, 225 and 300 kg/hm2) were applied before film mulching. The tiller number was counted once every week after transplanting, and the net photosynthetic rate, chlorophyll content and yield were measured one week before the initial heading stage, at full heading and at harvest. The best N application rate was determined from the economic benefit by curve fitting. Result: With increasing N application rate, the net photosynthetic rate, chlorophyll content and tiller number all increased, but beyond a certain point the yield showed a decreasing tendency. N application had no significant effect on photosynthetic parameters such as stomatal conductance, intercellular CO2 concentration and transpiration rate, except for the net photosynthetic rate. At the maximum N application rate of 300 kg/hm2, the chlorophyll content at the initial heading stage was the highest at 48.38 mg/g and the tillers per hill reached 44.42; this did not differ significantly from the 225 kg/hm2 treatment, in which the effective panicle number reached 225.60 x 10^4 plants/hm2 and the yield reached 10 062.57 kg/hm2. Conclusion: Nonlinear curve fitting showed that 285.37 kg/hm2 was the best one-time N application rate under film mulching, with a highest theoretical yield of 10 062.57 kg/hm2.
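The curve-fitting step behind such an optimum can be illustrated with a quadratic yield-response sketch (the three (N, yield) points below are invented, not the paper's data; only the vertex formula N* = -b/(2c) is the point):

```python
# Fit an exact quadratic y = a + b*N + c*N^2 through three (N, yield)
# points and take the vertex as the optimal application rate.
# The data points are illustrative assumptions.

def quadratic_through(p0, p1, p2):
    """Coefficients (a, b, c) of the quadratic through three points,
    via divided differences (Newton form expanded)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    c = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 ** 2
    return a, b, c

a, b, c = quadratic_through((0, 6000), (150, 9500), (300, 10000))
n_opt = -b / (2 * c)                       # vertex of the parabola
y_opt = a + b * n_opt + c * n_opt ** 2
print(f"optimal N ~ {n_opt:.1f} kg/hm2, predicted yield ~ {y_opt:.0f} kg/hm2")
```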
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
S. Fatih Ergin
2016-10-01
In this study, the effect of mycorrhiza on growth criteria and phosphorus nutrition of lettuce (Lactuca sativa L.) under different phosphorus fertilization rates was investigated. Phosphorus was added into the growing media at 0, 50, 100 and 200 mg P2O5/kg with and without mycorrhiza application. Phosphorus applications statistically significantly increased the yield criteria of lettuce relative to the control treatment. Mycorrhiza application also significantly increased plant diameter, plant dry weight and phosphorus uptake. The highest phosphorus uptakes were determined in the 200 mg P2O5/kg treatments: 88.8 mg P/pot with mycorrhiza and 83.1 mg P/pot without. At the 0 phosphorus dose, the phosphorus uptake (69.9 mg P/pot), edible weight (84.36 g), dry weight (8.64 g) and leaf number (28) of lettuce with mycorrhiza were higher than the corresponding values without mycorrhiza (47.7 mg P/pot, 59.33 g, 6.75 g and 20, respectively). It was determined that mycorrhiza had a positive effect on growth criteria and phosphorus nutrition of lettuce, and that this effect decreased at higher phosphorus application rates.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
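The extreme-value setup can be sketched in textbook form (a generic Gutenberg-Richter plus Poisson illustration, not the authors' derivation; the a and b values below are invented):

```python
# Distribution of the maximum magnitude observed in a T-year window,
# from a Gutenberg-Richter rate law with Poisson occurrence.
# a and b are illustrative assumptions, not fitted values.
import math

a, b = 4.0, 1.0            # log10 annual rate of events with M >= m

def p_max_le(m: float, T: float) -> float:
    """P(maximum magnitude in T years <= m) = exp(-rate(m) * T)."""
    rate = 10 ** (a - b * m)   # annual rate of events with magnitude >= m
    return math.exp(-rate * T)

for m in (6.0, 7.0, 8.0):
    print(f"P(M_max <= {m}) over 50 yr: {p_max_le(m, 50):.3f}")
```

Testing an M estimate amounts to asking where observed maxima fall in this distribution, which is exactly where the abstract's sensitivity concerns arise: for rare, large m the distribution is extremely flat.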
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Nielsen, Thor Pajhede
2017-01-01
. (2016) in examining the conditional independence hypothesis of Lando and Nielsen (2010). Empirically we find that: (1) the current default rate influences the default rate of the following periods even when conditioning on explanatory variables; (2) the 12-month lag is highly significant in explaining...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
Yu Hsing
2009-12-01
Extending the open-economy loanable funds model, this paper finds that a larger government deficit as a percentage of GDP does not lead to a higher government bond yield. In addition, a higher real Treasury bill rate, a higher expected inflation rate, a higher EU government bond yield, or an expected depreciation of the euro against the U.S. dollar would increase Slovenia's long-term interest rate. The negative coefficient of the percentage change in real GDP is insignificant at the 10% level. Applying the standard closed-economy or open-economy loanable funds model without including the world interest rate and the expected exchange rate, we find similar conclusions, except that the positive coefficient of the ratio of net capital inflow to GDP has the wrong sign and is insignificant at the 10% level.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
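The hypothesis-selection step can be illustrated with a toy example (a sketch only: the patented estimator-correlator's phase-estimation stage is not reproduced, and the phase patterns, prior, and noise level below are all assumptions):

```python
# Toy MAP hypothesis selection: pick the phase-coded signal whose
# posterior (prior times Gaussian likelihood of the received phase
# samples) is largest. All numbers are illustrative assumptions.
import math

HYPOTHESES = {                       # assumed phase patterns, radians
    "s0": [0.0, math.pi, 0.0, math.pi],
    "s1": [0.0, 0.0, math.pi, math.pi],
}
PRIOR = {"s0": 0.5, "s1": 0.5}
SIGMA = 0.4                          # assumed phase-noise std dev

def map_decode(received):
    def log_post(name):
        ll = sum(-((r - p) ** 2) / (2 * SIGMA ** 2)
                 for r, p in zip(received, HYPOTHESES[name]))
        return math.log(PRIOR[name]) + ll
    return max(HYPOTHESES, key=log_post)

noisy = [0.1, 3.0, -0.2, 3.3]        # perturbed version of s0's pattern
print(map_decode(noisy))
```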
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
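The maximum entropy principle at the core of the method can be illustrated with Jaynes' classic die example (a minimal stdlib sketch of the principle only; the PDF-projection machinery itself is not reproduced here):

```python
# MaxEnt distribution on {1..6} with a prescribed mean: the solution is
# exponential-family, p_i proportional to exp(t*i), with t found by
# bisection so the mean constraint holds.
import math

def maxent_dice(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    def mean(t):
        w = [math.exp(t * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    while hi - lo > tol:             # mean(t) is increasing in t
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    w = [math.exp(t * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_dice(4.5)                 # Jaynes' "loaded die" example
print([round(pi, 4) for pi in p])
```

Among all distributions with mean 4.5, this one has the highest entropy; MaxEnt PDF projection applies the same principle with the feature constraint z = T(x) in place of the mean.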
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
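The Poisson ML idea can be sketched generically (a hedged illustration in the same spirit, not CORA's actual fixed-point equation; the counts, line profile, and background below are made up):

```python
# Poisson maximum-likelihood line-flux estimate: counts n_i follow
# Poisson(A*p_i + b_i) for a known profile p and background b. The ML
# equation sum(n_i*p_i/(A*p_i+b_i)) = sum(p_i) is solved by bisection.

def ml_flux(n, p, b, lo=1e-9, hi=1e6, tol=1e-10):
    def score(A):        # derivative of the Poisson log-likelihood in A
        return sum(ni * pi / (A * pi + bi)
                   for ni, pi, bi in zip(n, p, b)) - sum(p)
    while hi - lo > tol:  # score is decreasing in A
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

counts  = [2, 5, 11, 6, 3]           # illustrative low-count spectrum
profile = [0.1, 0.2, 0.4, 0.2, 0.1]  # normalized line profile
bg      = [1.0, 1.0, 1.0, 1.0, 1.0]  # assumed flat background
print(f"ML line flux: {ml_flux(counts, profile, bg):.3f} counts")
```

With zero background the equation collapses to the familiar closed form A = sum(n)/sum(p), which makes a handy sanity check.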
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Nitrogen (N) losses associated with fertilizer application have negative economic and environmental consequences, but urease and nitrification inhibitors have potential to reduce N losses. The effectiveness of these inhibitors has been studied extensively in irrigated but not rainfed systems. Theref...
Sun, Jindong; Feng, Zhaozhong; Leakey, Andrew D B; Zhu, Xinguang; Bernacchi, Carl J; Ort, Donald R
2014-09-01
The responses of CO2 assimilation to [CO2] (A/Ci curves) were investigated at two developmental stages (R5 and R6) in several soybean cultivars grown under two levels of CO2: ambient (370 μbar) versus elevated (550 μbar). The A/Ci data were analyzed and compared by either combined or separate iteration of the Rubisco-limited photosynthesis (Ac) and/or the RuBP-limited photosynthesis (Aj) using various curve-fitting methods: the linear 2-segment model, the non-rectangular hyperbola model, the rectangular hyperbola model, the constant rate of electron transport (J) method and the variable J method. Inconsistency was found among the various methods in the estimates of the maximum rate of carboxylation (Vcmax), the mitochondrial respiration rate in the light (Rd) and mesophyll conductance (gm). The analysis showed that the inconsistency was due to inconsistent estimates of gm values, which decreased with an instantaneous increase in [CO2] and varied with the transition Ci cut-off between Rubisco-limited and RuBP-regeneration-limited photosynthesis, and to over-parameterization of the non-linear curve fitting when gm is included. We propose an alternate approach to A/Ci curve fitting for estimating Vcmax, Rd, Jmax and gm with the various methods. The study indicated that down-regulation of photosynthetic capacity by elevated [CO2] and leaf aging was due partly to the decrease in the maximum rate of carboxylation and partly to the decrease in gm. Mesophyll conductance lowered photosynthetic capacity by 18% on average for the soybean plants studied.
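For readers unfamiliar with the model being fitted, here is a minimal sketch of the FvCB equations behind A/Ci fitting (the kinetic constants are commonly quoted 25 °C literature values; the leaf parameters Vcmax, Jmax and Rd are illustrative, not the paper's estimates, and mesophyll conductance is omitted):

```python
# FvCB-style net assimilation: the minimum of the Rubisco-limited (Ac)
# and RuBP-regeneration-limited (Aj) rates, minus day respiration.
# Kinetic constants are commonly quoted 25 C values (an assumption here).

GAMMA_STAR = 42.75                 # CO2 compensation point, ubar
KC, KO, O = 404.9, 278.4, 210.0    # Kc (ubar), Ko (mbar), O2 (mbar)

def a_net(ci, vcmax, jmax, rd):
    ac = vcmax * (ci - GAMMA_STAR) / (ci + KC * (1 + O / KO))  # Rubisco-limited
    aj = jmax * (ci - GAMMA_STAR) / (4 * ci + 8 * GAMMA_STAR)  # RuBP-limited
    return min(ac, aj) - rd

for ci in (100, 300, 700):         # ubar
    print(f"Ci = {ci:4d}: A = {a_net(ci, vcmax=90, jmax=170, rd=1.5):.1f}")
```

The Ci at which the minimum switches from Ac to Aj is the transition cut-off the abstract discusses: estimates of Vcmax and gm are sensitive to where that switch is placed.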
Naghdi Yazdan
2015-08-01
Given the recent fluctuation in the exchange rate and the presence of several factors such as various economic and political sanctions (mainly embargoes on oil and banking), extreme volatility in different economic fields, and the consequent devaluation of the national currency and reduced public purchasing power stemming from exchange rate fluctuation, two points should be noted. First, it is essential to review the effect of exchange rate fluctuation on macroeconomic variables such as inflation and to provide appropriate policies. Second, this situation provides the chance to study the relation between the exchange rate and inflation in a non-linear and asymmetric framework. Hence, the present study uses a TAR model, on the basis of monthly time series data over the period March 2002 to March 2014, to analyze the asymmetric and non-linear pass-through of the exchange rate to the consumer price index (CPI) in Iran. The results show the presence of an asymmetric long-term relationship between these variables (exchange rate and CPI). Also, in the Iranian economy, the effect of negative exchange rate shocks on inflation is more sustained than that of positive shocks.
Yan-jie Ni
2016-04-01
A 30 mm electrothermal-chemical (ETC) gun experimental system is employed to research the burning rate characteristics of 4/7 high-nitrogen solid propellant. Enhanced gas generation rates (EGGR) of propellants during and after electrical discharges are verified in the experiments. A modified 0D internal ballistic model is established to simulate the ETC launch. According to the measured pressure and electrical parameters, a transient burning rate law including the influence of the EGGR coefficient by electric power and pressure gradient (dp/dt) is added into the model. The EGGR coefficient of 4/7 high-nitrogen solid propellant is equal to 0.005 MW−1. Both the simulated breech pressure and the projectile muzzle velocity accord well with the experimental results. Compared with Woodley's modified burning rate law, the breech pressure curves acquired by the transient burning rate law are more consistent with test results. Based on the parameters calculated in the model, the relationship among propellant burning rate, pressure gradient (dp/dt) and electric power is analyzed. Depending on the transient burning rate law and experimental data, the burning of solid propellant under plasma conditions is described more accurately.
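One plausible reading of the enhancement can be sketched as follows (heavily hedged: the paper's transient law also involves a dp/dt term whose form is not given in the abstract, and the Vieille-law coefficients a and n below are invented; only the EGGR coefficient 0.005 MW^-1 comes from the abstract):

```python
# Assumed-form sketch: a Vieille-law burning rate a*p^n multiplied by an
# electric-power enhancement factor (1 + k_E * P_el), with the reported
# EGGR coefficient k_E = 0.005 per MW. a and n are illustrative only.

def burning_rate(p_mpa: float, power_mw: float,
                 a: float = 1.0, n: float = 0.85, k_e: float = 0.005) -> float:
    """Burning rate (arbitrary units) enhanced by plasma electric power."""
    return a * p_mpa ** n * (1.0 + k_e * power_mw)

base = burning_rate(200.0, 0.0)      # conventional launch, no plasma
etc  = burning_rate(200.0, 40.0)     # 40 MW discharge
print(f"enhancement factor: {etc / base:.2f}")
```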
Full-Field Spectroscopy at Megahertz-frame-rates: Application of Coherent Time-Stretch Transform
DeVore, Peter Thomas Setsuda
Outliers or rogue events are found extensively in our world and have outsized effects. Also called rare events, they arise in the distribution of wealth (e.g., the Pareto index), finance, network traffic, ocean waves, and e-commerce (selling less of more). Interest in rare optical events exploded after the sighting of optical rogue waves in laboratory experiments at UCLA. Detecting such tail events in fast streams of information necessitates real-time measurements. The Coherent Time-Stretch Transform chirps a pulsed source of radiation so that its temporal envelope matches its spectral profile (analogous to the far-field regime of spatial diffraction), and the mapped spectral electric field is slow enough to be captured by a real-time digitizer. Combined with spectral encoding, the time-stretch technique has enabled a new class of ultra-high-performance spectrometers and cameras (30+ MHz), and analog-to-digital converters that have led to the discovery of optical rogue waves and the detection of cancer cells in blood with one-in-a-million sensitivity. Conventionally, the Coherent Time-Stretch Transform maps the spectrum into the temporal electric field, but the time-dilation process, along with inherent fiber losses, results in reduced peak power and loss of sensitivity, a problem exacerbated by extremely narrow molecular linewidths. The loss issue notwithstanding, in many cases the requisite dispersive optical device is not available. By extending the Coherent Time-Stretch Transform to the temporal near field, I have demonstrated, for the first time, phase-sensitive absorption spectroscopy of a gaseous sample at millions of frames per second. As the Coherent Time-Stretch Transform may capture both near- and far-field optical waves, it is a complete spectro-temporal optical characterization tool. This is manifested as an amplitude-dependent chirp, which implies the ability to measure the complex refractive index dispersion at megahertz frame rates. This
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Hernández S, A., E-mail: h.s.alfonso@gmail.com, E-mail: meduardo2001@hotmail.com; Cano, M. E., E-mail: h.s.alfonso@gmail.com, E-mail: meduardo2001@hotmail.com [Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Jalisco (Mexico); Torres-Arenas, J., E-mail: torresare@gmail.com [Division de Ciencias e Ingenierías, Universidad de Guanajuato, León, Guanajuato (Mexico)
2014-11-07
Currently the absorption of electromagnetic radiation by magnetic nanoparticles is studied for biomedical applications of cancer thermotherapy. Several experiments are conducted following the framework of the Rosensweig model in order to estimate their specific absorption rate. Nevertheless, this linear approximation involves strong simplifications which constrain its accuracy and validity range. The main aim of this work is to incorporate the deviation of particle shapes from the sphericity assumption, to improve the determination of the specific absorption rate. The correction to the effective particle volume is computed as a measure of the apparent amount of magnetic material interacting with the external AC magnetic field. Preliminary results using the physical properties of Fe3O4 nanoparticles exhibit an important correction to the estimated specific absorption rate as a function of the apparent mean particle radius. Indeed, for a small deviation (6% of the apparent radius), we have observed changes of up to 40% in the specific absorption rate predicted by the Rosensweig linear approximation.
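For reference, the linear-response dissipation formula at the core of the Rosensweig model can be sketched as follows; the parameter values are illustrative only, not the Fe3O4 properties used in the article:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def rosensweig_sar(chi0, H0, f, tau, rho):
    """Volumetric power dissipation of the Rosensweig linear-response
    model, divided by density to give SAR (W/kg).
    chi0: equilibrium susceptibility, H0: field amplitude (A/m),
    f: frequency (Hz), tau: effective relaxation time (s),
    rho: density of the magnetic material (kg/m^3)."""
    omega_tau = 2 * math.pi * f * tau
    p = MU0 * math.pi * chi0 * H0**2 * f * omega_tau / (1 + omega_tau**2)
    return p / rho

# Illustrative numbers only (not fitted to experimental Fe3O4 data):
sar = rosensweig_sar(chi0=5.0, H0=10e3, f=300e3, tau=1e-6, rho=5180.0)
```

The Lorentzian factor peaks when the field period matches the relaxation time (2*pi*f*tau = 1), which is why shape corrections that shift the effective relaxation time can change the predicted SAR so strongly.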
Ballarini, Ilaria; Corrado, Vincenzo [Dipartimento di Energetica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)
2009-07-15
The objective of this work is to contribute to the recent standardisation activity aimed at applying the Energy Performance of Buildings Directive (EPBD). Through the energy assessment of some residential buildings in Turin (Italy), the work investigates the application of the calculation methods specified in the recent European standard for the so-called ''standard energy rating''. A comparison of the ''calculated energy rating'' with the ''measured energy rating'' is used to investigate the effect of user behaviour and weather conditions. Moreover, in order to draft the energy certificate and make an appropriate classification, the last part of the work investigates how to derive energy reference values for the building stock, through the study of the correlation between the input and output data of an energy rating and the comparison of the analysed buildings. (author)
GU Bo-hong; PAN Xiong-qi
2002-01-01
The rate-dependent property of a material is very important in the analysis of ballistic impact. The tensile property of Twaron(R) filaments over the strain rate range from 0.01/s to 1000/s was obtained by MTS materials testing and a split Hopkinson tension bar. The rate sensitivity of Twaron(R) filaments is discussed. Application of the high strain rate property to ballistic perforation of multi-layered fabrics conforms better to the actual situation than that of the quasi-static property. The revised analytical model can be used to calculate the process of ballistic penetration and perforation of soft armour, such as a fabric target plate, with an intuitive approach and a simple algorithm requiring little computer processing time. Predictions of the residual velocities and the energy absorbed by the multi-layered fabric show good agreement with experimental data.
120 MB/S and 240 MB/S bit synchronizer-signal conditioners for NASA high data rate applications
Gray, J. S.
1976-01-01
Two bit synchronizer-signal conditioners (BSSC) developed for NASA high data rate applications such as earth resources monitoring are described. One BSSC is centered at 120 Mb/s and the other at 240 Mb/s. These subsystems are featured out of the total hardware developed because the BSSC is a key subsystem in determining overall system statistical performance. These units represent an evolution of the BSSCs available at low data rates. Numerous inputs/outputs, control functions, and indicators are provided, along with the ability to minimize the effects of various signal perturbations. Examples of allowed perturbations are input level variations, static and dynamic bit rate variance, baseline, transition density, bandlimiting, etc., as well as noise. Past emphasis had been concerned primarily with noise alone.
Application of new genetic approaches to obtain population vital rate parameters in leatherbacks
National Oceanic and Atmospheric Administration, Department of Commerce — This project addresses major gaps in knowledge on vital rates such as age to maturity, survival, sex ratios, and population size (including the males) which have made...
Searching threshold effects in the interest rate: An application to Turkey case
Yavuz, Nilgun Cil; Guris, Burak; Yilanci, Veli
2007-06-01
This paper investigates the behaviour of interest rates in Turkey using a two-regime TAR model with an autoregressive unit root. This method, recently developed by Caner and Hansen [Threshold autoregression with a unit root, Econometrica 69 (6) (2001) 1555-1596], allows non-stationarity and non-linearity to be considered simultaneously. Our findings indicate that the interest rate is a non-linear series and is characterized by a unit root process over the period 1990:1-2006:5.
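A stripped-down sketch of the two-regime threshold autoregression idea: simulate a TAR(1) series and recover the threshold and regime slopes by grid search over candidate thresholds. The Caner-Hansen procedure additionally handles the unit-root null and bootstrap inference, which this toy example omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple two-regime TAR(1): the AR coefficient depends on
# whether the lagged level is below or above the threshold (here 0).
n, c_true = 500, 0.0
y = np.zeros(n)
for t in range(1, n):
    rho = 0.3 if y[t - 1] < c_true else 0.9
    y[t] = rho * y[t - 1] + rng.normal()

def fit_tar(y, thresholds):
    """Grid-search the threshold; OLS slope in each regime (no intercept)."""
    best = None
    ylag, ycur = y[:-1], y[1:]
    for c in thresholds:
        lo = ylag < c
        if lo.sum() < 10 or (~lo).sum() < 10:
            continue  # require a minimum number of observations per regime
        ssr, rhos = 0.0, []
        for mask in (lo, ~lo):
            x, z = ylag[mask], ycur[mask]
            rho = (x @ z) / (x @ x)
            rhos.append(rho)
            ssr += ((z - rho * x) ** 2).sum()
        if best is None or ssr < best[0]:
            best = (ssr, c, rhos)
    return best

ssr, c_hat, (rho_lo, rho_hi) = fit_tar(y, np.quantile(y, np.linspace(0.15, 0.85, 71)))
```

With the threshold found, the low regime behaves like a stationary series (slope well below 1) while the high regime is much more persistent, mirroring the regime asymmetry the paper tests for.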
Effects of the application of targeting the exchange rate policy in Macedonia
KRUME NIKOLOSKI; SANJA PANOVA; M-R.VLATKO PACESKOSKI
2016-01-01
The monetary system and monetary-credit policy in the Republic of Macedonia were built after the country gained independence from the previous federal community, when Macedonia faced problems such as the closure of many plants, increasing unemployment, growing budget and foreign trade deficits, and a high inflation rate. Macroeconomic stability, narrowly understood as reducing the inflation rate, was the first measure of economic policy, undertaken along with the monetary independence of Macedonia. In a small and open economy, the exchange rate policy has particular importance in the control of the inflation rate and beyond, in real economic trends. The strategy of targeting the denar exchange rate was accepted and applied with the expectation that it would act in that direction; hence the monetary policy was focused on maintaining a fixed exchange rate against the euro. The determination of the country to join the European Union and to become a member of other international financial organizations is yet another reason for choosing this strategy.
Neutron dose rate for {sup 252} Cf AT source in medical applications
Paredes, L.; Balcazar, M. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico); Azorin, J. [UAM-I, 09340 Mexico D.F. (Mexico); Francois, J.L. [FI-UNAM, 04510 Mexico D.F. (Mexico)
2006-07-01
The AAPM TG-43 modified protocol was used for the calculation of the neutron dose rate of {sup 252}Cf sources for two tissue substitute materials, five normal tissues and six tumours. The {sup 252}Cf AT source model was simulated using the Monte Carlo MCNPX code in spherical geometry for the following factors: a) neutron air kerma strength conversion factor, b) dose rate constant, c) radial dose function, d) geometry factor, e) anisotropy function and f) neutron dose rate. The calculated dose rate in water at 1 cm and 90 degrees from the source long axis, using the Watt fission spectrum, was D{sub n}(r{sub 0}, {theta}{sub 0})= 1.9160 cGy/h-{mu}g. When this value is compared with Rivard et al. calculation using MCNP4B code, 1.8730 cGy/h-{mu}g, a difference of 2.30% is obtained. The results for the reference neutron dose rate in other media show how small variations in the elemental composition between the tissues and malignant tumours, produce variations in the neutron dose rate up to 12.25%. (Author)
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
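The Kirchhoff index can be computed without enumerating resistance distances, via the standard identity $Kf(G) = n \sum_i 1/\lambda_i$ over the nonzero Laplacian eigenvalues of a connected graph; a small sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian
    eigenvalues, equivalent to summing resistance distances over
    all vertex pairs (connected graphs only)."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj  # graph Laplacian L = D - A
    eig = np.linalg.eigvalsh(lap)
    nonzero = eig[eig > 1e-9]             # drop the zero eigenvalue
    return adj.shape[0] * np.sum(1.0 / nonzero)

# Triangle C3 (= K3): every resistance distance is 2/3, so Kf = 3 * 2/3 = 2
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
kf_tri = kirchhoff_index(tri)  # → 2.0
```

The triangle is itself the smallest cactus with one cycle, so this routine could be used to check extremal candidates in $Cat(n;t)$ numerically.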
Application of long-range order to predict unfolding rates of two-state proteins.
Harihar, B; Selvaraj, S
2011-03-01
Prediction of the experimental unfolding rates of two-state proteins, and models describing these unfolding rates, remain quite limited because of the complexity of the unfolding mechanism and the lack of experimental unfolding data compared with folding data. In this work, 25 two-state proteins characterized by Maxwell et al. (Protein Sci 2005;14:602–616) using a consensus set of experimental conditions were taken, and the parameter long-range order (LRO) derived from their three-dimensional structures was related to their experimental unfolding rates ln(k(u)). From the total data set of 30 proteins used by Maxwell et al., five slow-unfolding proteins with very low unfolding rates were considered to be outliers and were not included in our data set. Except for the all-beta structural class, the LRO of both the all-alpha and mixed-class proteins showed a strong inverse correlation with experimental ln(k(u)), of r = -0.99 and -0.88, respectively. LRO shows a correlation of -0.62 with experimental ln(k(u)) for all-beta proteins. For predicting the unfolding rates, a simple statistical method was used and linear regression equations were developed for individual structural classes of proteins using LRO; the results obtained showed good agreement with experiment. Copyright © 2010 Wiley-Liss, Inc.
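A sketch of how LRO is typically computed from C-alpha coordinates. The sequence-separation (>12 residues) and distance (≤8 Å) cutoffs follow common usage in the LRO literature and should be checked against the paper's exact definition:

```python
import numpy as np

def long_range_order(ca_coords, seq_sep=12, cutoff=8.0):
    """LRO = (number of residue pairs with sequence separation > seq_sep
    and C-alpha distance <= cutoff) / chain length.
    The cutoffs of 12 residues and 8 Angstroms are the commonly used
    values; verify against the original definition before relying on them."""
    xyz = np.asarray(ca_coords, dtype=float)
    n = len(xyz)
    contacts = 0
    for i in range(n):
        for j in range(i + 1 + seq_sep, n):  # enforce |i - j| > seq_sep
            if np.linalg.norm(xyz[i] - xyz[j]) <= cutoff:
                contacts += 1
    return contacts / n

# Toy chain: an extended straight chain has no long-range contacts (LRO = 0)
line = [(3.8 * i, 0.0, 0.0) for i in range(30)]
lro_line = long_range_order(line)  # → 0.0
```

A folded chain that brings sequence-distant residues close in space (e.g., a hairpin) gives a positive LRO, which is the structural signal correlated with ln(k(u)) above.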
Modelling nonlinear behavior of labor force participation rate by STAR: An application for Turkey
Sibel Cengiz
2014-04-01
The aim of this paper is to contribute to the understanding of the behavior of participation rates in terms of gender differences. We employed smooth transition autoregressive (STAR) models for the quarterly Turkish labor force participation rate (LFPR) data between 2000:Q1 and 2011:Q4 to capture asymmetric participation behavior. The smoothness parameter indicates a gradual transition from the low to the high regime; it is higher for female workers than for male workers. Participation rates diminish during a recession but increase smoothly during periods of expansion. The estimation results of Enders et al. (1998) also verified the asymmetry and nonlinearity in participation rates. During periods of economic expansion, participation rates are above the threshold and the low-regime indicator function takes the value zero. The results of the paper have economic implications for policy makers. Due to the discouraged-worker and added-worker effects, the LFPR should be observed together with the unemployment rate when evaluating the tightness of the labor market.
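The smoothness parameter discussed above is the gamma in the logistic STAR transition function; a minimal sketch (all parameter values are illustrative, not the paper's estimates):

```python
import math

def lstar_transition(s, gamma, c):
    """Logistic STAR transition G(s; gamma, c) in [0, 1]; gamma controls
    how smoothly the series moves between the low and high regimes
    (large gamma approaches a sharp threshold switch)."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def lstar_fitted(s, phi_low, phi_high, gamma, c):
    """One-lag LSTAR conditional mean: a weighted blend of the two
    regime AR coefficients (illustrative form only)."""
    g = lstar_transition(s, gamma, c)
    return ((1 - g) * phi_low + g * phi_high) * s

g_mid = lstar_transition(0.0, gamma=2.0, c=0.0)  # → 0.5 at the threshold
```

Comparing estimated gammas across series (e.g., female vs. male LFPR) is exactly how the paper judges whose participation adjusts more abruptly.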
Stein, P D; Sabbath, H N
1975-02-01
The ratio of the instantaneous isovolumic rate of change of power, normalized to instantaneous isovolumic power, appears to be an expression of physiologic and practical significance. This ratio, termed the isovolumic fractional rate of change of power, describes the capability of the ventricle to sustain, during isovolumic contraction, an acceleration of energy production relative to instantaneous rates of energy production. The expression is independent of assumptions of ventricular geometry, fiber orientation, symmetry of contraction or elasticity of muscle fibers. It was derived upon the basis of established principles of fluid dynamics. The expression serves in an integrative fashion by demonstrating a simple relation between characteristics of performance derived on the basis of fluid dynamics and those derived on the basis of muscle mechanics. In this study, the isovolumic fractional rate of change of power permitted distinction between patients with normal and abnormal ventricular performance (as characterized by the ejection fraction, mean velocity of circumferential fiber shortening and end-diastolic volume index) (P less than 0.01). The firm theoretical basis of the isovolumic fractional rate of change of power, and its demonstrated capability to permit identification of patients with normal or abnormal left ventricular performance, recommend it as a meaningful and useful hemodynamic expression.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Mostafa Heidari
2012-06-01
Cucurbitaceae is one of the largest families in the plant kingdom, containing the largest number of edible species. Momordica charantia is one such important vegetable that belongs to the family Cucurbitaceae. In order to evaluate the effect of rate and time of nitrogen application on M. charantia, a field experiment was conducted at the University of Zabol in Iran during the 2011 growing season. The experiment was laid out as a split plot based on a randomized complete block design with three replications. Three nitrogen rates, N1 = 75, N2 = 150 and N3 = 225 kg N ha−1, were used as the main plot, and three application timings, T1 = 1/2 at the 3-4 leaf stage and 1/2 before flowering, T2 = 1/2 at the 3-4 leaf stage and 1/2 after fruit set, and T3 = 1/3 at the 3-4 leaf stage, 1/3 before flowering, and 1/3 after fruit set, were used as the sub plot. The results revealed that both the rate and the time of nitrogen application had a significant effect on fruit yield. The highest fruit yield was recorded at the N3 rate with the T3 application timing. In this study, by increasing the nitrogen level from 75 to 225 kg N ha−1, the nitrogen, phosphorus and potassium content in the fruit increased. The time of nitrogen application and the interaction between rate and time of nitrogen had no significant effect on the amounts of these three elements. Nitrogen level had a significant effect on the amounts of calcium, manganese and zinc. The highest values of calcium and zinc were obtained at the N2 level and of manganese at the N3 level. The time of nitrogen application had a significant effect only on the amounts of calcium and zinc, and no significant effect on the other elements.
Krafty, Robert T; Hall, Martica
2013-03-01
Although many studies collect biomedical time series signals from multiple subjects, there is a dearth of models and methods for assessing the association between frequency domain properties of time series and other study outcomes. This article introduces the random Cramér representation as a joint model for collections of time series and static outcomes where power spectra are random functions that are correlated with the outcomes. A canonical correlation analysis between cepstral coefficients and static outcomes is developed to provide a flexible yet interpretable measure of association. Estimates of the canonical correlations and weight functions are obtained from a canonical correlation analysis between the static outcomes and maximum Whittle likelihood estimates of truncated cepstral coefficients. The proposed methodology is used to analyze the association between the spectrum of heart rate variability and measures of sleep duration and fragmentation in a study of older adults who serve as the primary caregiver for their ill spouse.
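The cepstral coefficients at the heart of the method are Fourier coefficients of the log spectrum; a minimal sketch using the raw periodogram (the article uses truncated Whittle-likelihood estimates, which this simplification only approximates):

```python
import numpy as np

def cepstral_coefficients(x, k):
    """First k cepstral coefficients of a time series: Fourier
    coefficients of the log periodogram. A smoothed or Whittle-likelihood
    spectral estimate would replace the raw periodogram in practice."""
    n = len(x)
    pgram = np.abs(np.fft.fft(x - np.mean(x))) ** 2 / n
    # avoid log(0) at frequencies with (numerically) no power
    log_p = np.log(np.maximum(pgram, 1e-12))
    ceps = np.real(np.fft.ifft(log_p))
    return ceps[:k]

rng = np.random.default_rng(1)
x = rng.normal(size=512)        # stand-in for a heart-rate-variability series
c = cepstral_coefficients(x, 4)
```

Because the coefficients decay quickly for smooth spectra, a short truncated vector per subject is a natural input to the canonical correlation analysis with static outcomes described above.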
Influence of nitrogen rate and drip application method on pomegranate fruit yield and quality
Currently, 98% of domestic commercial pomegranate fruit (Punica granatum L.) are produced in California on over 13,000 ha. Developing more efficient methods of water and fertilizer application is important in reducing production costs. In 2012, a pomegranate orchard established in 2010 with a den...
A simulation of the economic and environmental impact of variable rate nitrogen application
Pedersen, Søren Marcus; Pedersen, Jørgen Lindgaard
2003-01-01
This analysis shows that there is some potential yield benefit from site-specific application of nitrogen in winter wheat, barley and rape seed based on static knowledge about the soil structure and soil conductivity. A presumption is that some spatial variability of the soil structure occurs on ...
Bardsley, Nicholas; Büchs, Milena; Schnepf, Sylke V
2017-01-01
Consumption surveys often record zero purchases of a good because of a short observation window. Measures of distribution are then precluded and only mean consumption rates can be inferred. We show that Propensity Score Matching can be applied to recover the distribution of consumption rates. We demonstrate the method using the UK National Travel Survey, in which c.40% of motorist households purchase no fuel. Estimated consumption rates are plausible judging by households' annual mileages, and highly skewed. We apply the same approach to estimate CO2 emissions and outcomes of a carbon cap or tax. Reliance on means apparently distorts analysis of such policies because of skewness of the underlying distributions. The regressiveness of a simple tax or cap is overstated, and redistributive features of a revenue-neutral policy are understated.
Ivanov, Mikhail V; Babikov, Dmitri
2012-05-14
An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from the reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for collisional energy transfer and ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate vs. the experimental result is presented.
Deduction of plastic work rate per unit volume for unified yield criterion and its application
ZHAO De-wen; LI Jing; LIU Xiang-hua; WANG Guo-dong
2009-01-01
A unified linear expression of the plastic work rate per unit volume is deduced from the unified linear yield criterion and the associated flow rule. The expression is suitable for the various linear yield loci in the error triangle between the Tresca and twin shear stress yield loci on the π-plane. It exhibits generalization in that each value of the criterion parameter b corresponds to a specific linear formula for the plastic work rate per unit volume. Finally, with the unified linear expression of the plastic work rate and an upper-bound parallel velocity field, strip forging without bulge is successfully analyzed and an analytical result is obtained. Comparison with traditional solutions shows that when b=1/(1+√3) the result is the same as the upper-bound result by the Mises yield criterion, and it is also identical to that by the slab method with m=1, σ0=0.
The application of a linear algebra to the analysis of mutation rates.
Jones, M E; Thomas, S M; Clarke, K
1999-07-07
Cells and bacteria growing in culture are subject to mutation, and as this mutation is the ultimate substrate for selection and evolution, the factors controlling the mutation rate are of some interest. The mutational event is not observed directly, but is inferred from the phenotype of the original mutant or of its descendants; the rate of mutation is inferred from the number of such mutant phenotypes. Such inference presumes a knowledge of the probability distribution for the size of a clone arising from a single mutation. We develop a mathematical formulation that assists in the design and analysis of experiments which investigate mutation rates and mutant clone size distribution, and we use it to analyse data for which the classical Luria-Delbrück clone-size distribution must be rejected.
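A minimal illustration of the kind of inference the article formalizes: the classical p0 (null-class) estimator, which infers the mutation rate from the fraction of parallel cultures containing no mutants. This is a textbook fluctuation-test method, not the authors' linear-algebra formulation:

```python
import math

def mutation_rate_p0(cultures_without_mutants, total_cultures, final_n):
    """Luria-Delbruck p0 estimator: if mutations occur at rate mu per
    cell per generation, the probability that a culture grown to final
    size N contains no mutants is approximately exp(-mu * N), so
    mu = -ln(p0) / N. Assumes independent cultures grown from
    mutant-free inocula and no selection against mutants."""
    p0 = cultures_without_mutants / total_cultures
    return -math.log(p0) / final_n

# Illustrative numbers: 11 of 20 cultures show no mutant colonies, N = 1e8
mu = mutation_rate_p0(11, 20, 1e8)
```

The p0 method discards the information in the nonzero counts; the clone-size distribution machinery the article develops is precisely what is needed to use those counts as well.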
Allison, Thomas C
2016-03-03
Rate constants for reactions of chemical compounds with hydroxyl radical are a key quantity used in evaluating the global warming potential of a substance. Experimental determination of these rate constants is essential, but it can also be difficult and time-consuming to produce. High-level quantum chemistry predictions of the rate constant can suffer from the same issues. Therefore, it is valuable to devise estimation schemes that can give reasonable results on a variety of chemical compounds. In this article, the construction and training of an artificial neural network (ANN) for the prediction of rate constants at 298 K for reactions of hydroxyl radical with a diverse set of molecules is described. Input to the ANN consists of counts of the chemical bonds and bends present in the target molecule. The ANN is trained using 792 (•)OH reaction rate constants taken from the NIST Chemical Kinetics Database. The mean unsigned percent error (MUPE) for the training set is 12%, and the MUPE of the testing set is 51%. It is shown that the present methodology yields rate constants of reasonable accuracy for a diverse set of inputs. The results are compared to high-quality literature values and to another estimation scheme. This ANN methodology is expected to be of use in a wide range of applications for which (•)OH reaction rate constants are required. The model uses only information that can be gathered from a 2D representation of the molecule, making the present approach particularly appealing, especially for screening applications.
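A toy sketch of the approach: bond-count features feeding a one-hidden-layer network trained by plain gradient descent. The feature set, network size, and synthetic targets below are all illustrative stand-ins for the article's descriptors and NIST training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "bond count" features: columns might be counts of C-H, C-C, C=C,
# O-H bonds, etc. (the real descriptor set in the article is richer).
X = rng.integers(0, 6, size=(200, 4)).astype(float)
# Synthetic targets standing in for log10(k): a smooth function of counts.
y = 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2] * X[:, 3] - 11.0

# One hidden layer of tanh units, linear output, mean-squared-error loss.
H = 16
W1 = rng.normal(0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0
lr = 1e-3
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # network output
    err = pred - y
    # backpropagation of the squared-error gradient
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The appeal noted in the abstract carries over directly: the input vector needs nothing beyond what a 2D structure drawing provides, so screening a large compound list is just a matrix of counts.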
Derivation of Plastic Work Rate Done per Unit Volume for Mean Yield Criterion and Its Application
Dewen ZHAO; Yingjie XIE; Xiaowen WANG; Xianghua LIU
2005-01-01
In Haigh-Westergaard stress space, a linear combination of the twin shear stress and Tresca yield functions is called the mean yield (MY) criterion. The mathematical relationship of the criterion and its plastic work rate done per unit volume were derived. A generalized worked example of slab forging was analyzed by the criterion and its corresponding plastic work rate done per unit volume. The precision of the solution was then compared with those given by the Mises and twin shear stress yield criteria, respectively. It turned out that the calculated results by the MY criterion were in good agreement with those by the Mises criterion.
Desh Deepak
1998-04-01
The ultrasonic pulse-echo technique has been applied for the measurement of instantaneous burn rate of aluminised composite solid propellants. The tests have been carried out on end-burning 30 mm thick propellant specimens at a nearly constant pressure of about 1.9 MPa. Necessary software for post-test data processing and instantaneous burn rate computations has been developed. The burn rates measured by the ultrasonic technique have been compared with those obtained from ballistic evaluation motor tests on propellant from the same mix. An accuracy of about ±1 per cent in instantaneous burn rate measurements and reproducibility of results have been demonstrated by applying the ultrasonic technique.
Ling, Tsz Yan
Nanoparticles are often found in liquid-borne dispersed phases, in addition to the airborne and surface-borne phases. Characterization techniques for nanoparticles are needed for the environmental, health and safety studies of nanomaterials. The objectives of this thesis are to 1) explore methods for characterizing liquid-borne nanoparticles and 2) apply these methods to study nanoparticle filtration problems. In Chapter 2, calibration results of the Nanoparticle Tracking Analysis (NTA) technique in our lab are reported. The concentration measurements agree well with those estimated from suspension mass concentration within the range of 10^8-10^10 particles/ml. The particles generally have a most probable size of 100-200 nm. The filtration systems of the AWM and EDM processes were found to remove 70 and 90% of the nanoparticles present, respectively. However, the particle concentration of the filtered water from the AWM was still four times higher than that found in regular tap water. These nanoparticles are mostly agglomerated, according to the microscopy analysis. Since AWM and EDM are widely used, the handling and disposal of used filters loaded with nanoparticles, the release of nanoparticles to the sewer, and the potential use of higher performance filters for these processes deserve further consideration. The development of an aerosolization technique to measure liquid-borne nanoparticles down to 30 nm and its application to filter evaluation is discussed in Chapter 3. This technique involves dispersing nanoparticle suspensions into airborne form with an atomizer or electrospray aerosol generator, and measuring the size and concentration with a differential mobility analyzer coupled to a condensation particle counter. With the electrospray aerosol generator, residue particles can be kept below 10 nm, allowing particles as small as 30 nm to be clearly distinguished in the size distribution measurements. Comparing to NTA, the aerosolization
Reconstruction of disease transmission rates: Applications to measles, dengue, and influenza.
Lange, Alexander
2016-07-07
Transmission rates are key in understanding the spread of infectious diseases. Using the framework of compartmental models, we introduce a simple method to reconstruct time series of transmission rates directly from incidence or disease-related mortality data. The reconstruction employs differential equations, which model the time evolution of infective stages and strains. Being sensitive to initial values, the method produces asymptotically correct solutions. The computations are fast, with quadratic time complexity. We apply the reconstruction to data on measles (England and Wales, 1948-1967), dengue (Thailand, 1982-1999), and influenza (U.S., 1910-1927). The measles example offers comparison with earlier work; here we re-investigate reporting corrections, and include and exclude demographic information. The dengue example deals with the failure of vector-control measures in reducing dengue hemorrhagic fever (DHF) in Thailand. Two competing mechanisms have been held responsible: strain interaction and demographic transitions. Our reconstruction reveals that both explanations are possible, showing that the increase in DHF cases is consistent with decreasing transmission rates resulting from reduced vector counts. The flu example focuses on the 1918/1919 pandemic, examining the transmission rate evolution for an invading strain. Our analysis indicates that the pandemic strain could have circulated in the population for many months before the pandemic was initiated by an event of highly increased transmission.
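A discrete-time sketch of the reconstruction idea for a single-strain SIR model. The article's continuous-time, multi-strain machinery is richer, but the inversion step is the same: track the state forward and solve the incidence equation for the transmission rate at each step:

```python
import numpy as np

def reconstruct_beta(incidence, n_pop, gamma, s0, i0):
    """Invert a discrete-time SIR model: given new cases per step C_t,
    propagate S_t and I_t forward and solve C_t = beta_t * S_t * I_t / N
    for beta_t. As in the continuous-time method of the article, the
    result is sensitive to the assumed initial values s0, i0."""
    s, i = float(s0), float(i0)
    betas = []
    for c in incidence:
        betas.append(c * n_pop / (s * i))
        s, i = s - c, i + c - gamma * i   # SIR bookkeeping
    return np.array(betas)

# Round-trip check: simulate with a known constant beta, then recover it.
n_pop, gamma, beta_true = 1e6, 0.2, 0.5
s, i = n_pop - 10, 10.0
cases = []
for _ in range(50):
    c = beta_true * s * i / n_pop
    cases.append(c)
    s, i = s - c, i + c - gamma * i
beta_rec = reconstruct_beta(cases, n_pop, gamma, n_pop - 10, 10.0)
```

With real incidence data, beta_rec would be a time series rather than a constant, which is exactly the object the article interprets for measles, dengue, and influenza.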
Alster, Charlotte J.; Koyama, Akihiro; Johnson, Nels G.; Wallenstein, Matthew D.; Fischer, Joseph C.
2016-06-01
There is compelling evidence that microbial communities vary widely in their temperature sensitivity and may adapt to warming through time. To date, this sensitivity has been largely characterized using a range of models relying on versions of the Arrhenius equation, which predicts an exponential increase in reaction rate with temperature. However, there is growing evidence from laboratory and field studies that observe nonmonotonic responses of reaction rates to variation in temperature, indicating that Arrhenius is not an appropriate model for quantitatively characterizing temperature sensitivity. Recently, Hobbs et al. (2013) developed macromolecular rate theory (MMRT), which incorporates thermodynamic temperature optima as arising from heat capacity differences between isoenzymes. We applied MMRT to measurements of respiration from soils incubated at different temperatures. These soils were collected from three grassland sites across the U.S. Great Plains and reciprocally transplanted, allowing us to isolate the effects of microbial community type from edaphic factors. We found that microbial community type explained roughly 30% of the variation in the CO2 production rate from the labile C pool but that temperature and soil type were most important in explaining variation in labile and recalcitrant C pool size. For six out of the nine soil × inoculum combinations, MMRT was superior to Arrhenius. The MMRT analysis revealed that microbial communities have distinct heat capacity values and temperature sensitivities sometimes independent of soil type. These results challenge the current paradigm for modeling temperature sensitivity of soil C pools and understanding of microbial enzyme dynamics.
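A sketch contrasting the two models discussed above. The MMRT expression follows the transition-state form with temperature-dependent activation enthalpy and entropy through the activation heat capacity dCp; all parameter values are illustrative, not fitted to the soil data:

```python
import math

R, KB, H = 8.314, 1.380649e-23, 6.62607015e-34  # J/mol/K, J/K, J*s

def mmrt_lnk(T, dH0, dS0, dCp, T0=298.15):
    """Macromolecular rate theory: transition-state ln k where the
    activation enthalpy and entropy vary with T through dCp
    (dH0, dS0 are values at the reference temperature T0)."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * math.log(T / T0)
    return math.log(KB * T / H) + dS / R - dH / (R * T)

def arrhenius_lnk(T, lnA, Ea):
    """Classical Arrhenius form for comparison: monotonic in T."""
    return lnA - Ea / (R * T)

# With a negative dCp, MMRT predicts a rate optimum near
# T_opt = (dCp*T0 - dH0) / (dCp + R); for these numbers, about 314 K.
temps = [280 + t for t in range(0, 61, 5)]
mmrt = [mmrt_lnk(T, dH0=60e3, dS0=-50.0, dCp=-4e3) for T in temps]
arrh = [arrhenius_lnk(T, lnA=30.0, Ea=60e3) for T in temps]
```

The nonmonotonic MMRT curve versus the strictly increasing Arrhenius curve is the qualitative difference that made MMRT the better fit for six of the nine soil-by-inoculum combinations.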
Application of fuzzy theory on earthquake damage rate estimation of buildings
邵扬威; 吴玉祥; 高士峰; 黄麒然; 张宽勇
2014-01-01
Variations between earthquakes involve many factors that influence post-earthquake building damage (e.g., ground motion parameters, building structure, site information, and quality of construction). Consequently, it is necessary to develop an appropriate building damage-rate estimation model. Building damage survey data for the 1999 Chi-Chi earthquake in the Nantou region, recorded and compiled by the Architecture and Building Research Institute (ABRI), Taiwan, were used as the basis for developing a building damage rate estimation model, applying fuzzy theory to express the fragility curves of buildings as a membership function. Empirical verification was performed using post-earthquake building damage data from Taichung city, which suffered relatively severe damage. Results indicate that fuzzy theory can be applied to predict building damage rates and that the estimated results are similar to actual disaster figures. Prediction of disaster damage using building damage rates can provide a reference for immediate disaster response during earthquakes and for regular disaster prevention and rescue planning.
Christodoulou, C.A.; Fotis, G.P.; Gonos, I.F.; Stathopulos, I.A. [National Technical University of Athens, School of Electrical and Computer Engineering, High Voltage Laboratory, 9 Iroon Politechniou St., Zografou Campus, 157 80 Athens (Greece)]; Ekonomou, L. [A.S.PE.T.E. - School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece)]
2010-02-15
The use of transmission line surge arresters to improve the lightning performance of transmission lines is becoming more common. Especially in areas with high soil resistivity and ground flash density, surge arresters constitute the most effective means of protection. In this paper a methodology for assessing the surge arrester failure rate based on the electrogeometrical model is presented. Critical currents that exceed the arresters' rated energy stress were estimated by means of a simulation tool. The methodology is applied to operating Hellenic 150 kV transmission lines. Several case studies are analyzed by installing surge arresters at different intervals, in relation to the region's tower footing resistance and ground flash density. The obtained results are compared with real outage records, showing the effectiveness of the surge arresters in reducing the recorded failure rate. The presented methodology can prove valuable to electric power system designers seeking more effective lightning protection, reduced operational costs and continuity of service. (author)
Development of a Rating Form to Evaluate Grant Applications to the Hogg Foundation for Mental Health
Whaley, Arthur L.; Rodriguez, Reymundo; Alexander, Laurel A.
2006-01-01
Reliance on subjective grant proposal review methods leads private philanthropies to underfund mental health programs, even when foundations have mental health focuses. This article describes a private mental health foundation's efforts to increase the objectivity of its proposal review process by developing a reliable, valid proposal rating form.…
Teacher Ratings of Principal Applicants: The Significance of Gender and Leadership Style
Burdick, Deborah; Danzig, Arnold
2006-01-01
This paper focuses on the results of a study examining the relationship among gender, leadership style and principal selection. A sample of 64 Arizona elementary teachers participated in the study. Key issues related to gender and leadership style were identified through a literature review, teacher ratings of four fictitious principals, coded…
An Exploratory Study of the Application of Early Childhood Environment Rating Scale Criteria
Warash, Barbara G.; Ward, Corina; Rotilie, Sally
2008-01-01
This study examined whether attending a one day training on the Early Childhood Environment Rating Scale-Revised (ECERS-R) corresponded to pre-k classroom changes. Teachers attended an ECERS-R module training and six months later completed a questionnaire to report any classroom changes. The questionnaire consisted of listing the subscales and…
Chen, Ya-Mei; Chen, Duan-Rung; Chiang, Tung-Liang; Tu, Yu-Kang; Yu, Hsiao-Wei
2016-01-01
Our aim was to identify disablement factors, including predisposing, intra-individual, and extra-individual factors, which predict the rate of change in general functional disability (GFD) in older adults. This study utilized the Taiwan Longitudinal Study on Aging Survey in 1996-2007 (N=3,186). Multiple-indicator latent growth curve modeling was used to examine how 12 disablement factors predicted the rate of change in GFD. GFD trajectories were modeled using Nagi's functional limitations, activities of daily living, and instrumental activities of daily living. Greater age (B=.025), female gender (B=.114), and greater numbers of comorbidities (B=.038) were associated with faster increase in GFD. Education (B=-.005) and participation in physically active leisure time activities (B=-.031) were associated with slower increase in GFD. Our findings add to the understanding of how disablement factors contribute to the rate of change in GFD. Predisposing factors played the main role. However, the factors we found to be associated with the rate of change in GFD in older adults were slightly different from the factors reported in the literature. Decreasing the number of comorbidities and increasing the level of physically active leisure time activity should be considered priorities for preventing disability as people age.
An OFDM System Using Polyphase Filter and DFT Architecture for Very High Data Rate Applications
Kifle, Muli; Andro, Monty; Vanderaar, Mark J.
2001-01-01
This paper presents a conceptual architectural design of a four-channel Orthogonal Frequency Division Multiplexing (OFDM) system with an aggregate information throughput of 622 megabits per second (Mbps). Primary emphasis is placed on the generation and detection of the composite waveform using polyphase filter and Discrete Fourier Transform (DFT) approaches to digitally stack and bandlimit the individual carriers. The four-channel approach enables the implementation of a system that can be both power and bandwidth efficient, yet retains enough parallelism to meet higher data rate goals. It also enables a DC power efficient transmitter that is suitable for on-board satellite systems, and a moderately complex receiver that is suitable for low-cost ground terminals. The major advantage of the system over a single-channel system is lower complexity and DC power consumption: the highest sample rate is half that of the single-channel system, and synchronization can occur at, depending on the synchronization technique, as little as a quarter of the rate of a single-channel system. The major disadvantage is the increased peak-to-average power ratio over the single-channel system. Simulation results in the form of bit-error-rate (BER) curves are presented.
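The carrier stacking-and-separation idea can be illustrated with a toy DFT round trip. This is a sketch only: the actual design adds polyphase filtering, bandlimiting, and real channel effects, all omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
nch, nsym = 4, 64                     # four stacked channels, 64 symbol slots
bits = rng.integers(0, 4, size=(nch, nsym))
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))  # unit-energy QPSK symbols

tx = np.fft.ifft(qpsk, axis=0)        # inverse DFT stacks the four carriers
rx = np.fft.fft(tx, axis=0)           # DFT at the receiver separates them again
```

Over an ideal channel the DFT pair recovers each channel's symbols exactly, which is the orthogonality property that lets the four carriers share one composite waveform.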
76 FR 75661 - Change in Rates and Classes of General Applicability for Competitive Products
2011-12-02
... in classifications as are necessary to define the new prices. The changes are described generally..., and Package Services, will have new ``Return Service'' branding. E. Commercial First-Class Package... inches Sizes: Flat Rate Box Various sizes as defined in the DMM. -- not to exceed .35 cu. ft...
Shen, Z.; Zeng, Y.
2010-12-01
We present an algorithm to calculate horizontal strain rates through interpolation of a geodetically derived velocity field. Deriving a smoothly distributed strain rate field from discrete geodetic observations is an under-determined inverse problem, so a priori information, in the form of weighted smoothing, is required to facilitate the solution. Our method is revised from the previous approaches of Shen et al. (1996, 2007). At a given location, the velocity field in its vicinity is approximated by a linear function of position and can be represented by two velocity components, three strain rate components, and a rotation rate at that point. The velocity data in the neighborhood, after re-weighting, are used to estimate the field parameters through a least-squares procedure. Data weighting is done with the following considerations: (a) data are weighted according to either the Voronoi cell area of each neighboring site, or the station azimuthal span of two azimuthally adjacent neighboring sites; (b) a distance weighting factor is assigned according to site-to-station distances, in the form of either a Gaussian or quadratic decay function; (c) the distance decay coefficient is determined by setting a minimum total weighting threshold, defined as the sum of the weighting coefficients of all the data input. We also developed an algorithm to exclude contributions of the non-elastic strain associated with fault creep, such as creep along the creeping segment of the San Andreas fault in central California. We apply this method to derive the strain rate field for southern California using the SCEC CMM4 velocity field.
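A minimal sketch of the interpolation step, in the spirit of the Shen et al. approach: fit the six local parameters (two velocities, three strain rates, one rotation rate) by Gaussian-distance-weighted least squares. The synthetic field is noise-free and the Voronoi/azimuthal weighting and creep correction are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: v = v0 + G x, with G holding strain + rotation
v0 = np.array([1.0, -0.5])
exx, exy, eyy, rot = 50e-9, 20e-9, -30e-9, 10e-9   # illustrative values, /yr
G = np.array([[exx, exy - rot], [exy + rot, eyy]])

X = rng.uniform(-100.0, 100.0, size=(30, 2))        # station coordinates (km)
V = v0 + X @ G.T                                    # noise-free velocities

def strain_at(x0, X, V, D=50.0):
    """Estimate velocity and velocity gradient at x0 from nearby stations,
    with a Gaussian distance-decay weight of scale D (a simplified sketch)."""
    d = np.linalg.norm(X - x0, axis=1)
    wgt = np.sqrt(np.exp(-d**2 / (2 * D**2)))
    n = len(X)
    dX = X - x0
    # Rows: [1, 0, dx, dy, 0, 0] for vx; [0, 1, 0, 0, dx, dy] for vy
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    W = np.zeros(2 * n)
    A[0::2, 0] = 1; A[0::2, 2:4] = dX
    A[1::2, 1] = 1; A[1::2, 4:6] = dX
    b[0::2], b[1::2] = V[:, 0], V[:, 1]
    W[0::2] = W[1::2] = wgt
    p, *_ = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)
    gxx, gxy, gyx, gyy = p[2], p[3], p[4], p[5]
    # Symmetric part = strain rate; antisymmetric part = rotation rate
    return gxx, 0.5 * (gxy + gyx), gyy, 0.5 * (gyx - gxy)

est = strain_at(np.zeros(2), X, V)
```

On exact synthetic data the weighted fit recovers the imposed strain and rotation rates; with real, noisy velocities the distance-decay scale D controls the smoothing trade-off described in the abstract.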
Hanson, Bradley D; Gerik, James S; Schneider, Sally M
2010-08-01
Producers of perennial crop nursery stock in California use preplant soil fumigation to meet state phytosanitary requirements. Although methyl bromide (MB) has been phased out in many agricultural industries, it is still the preferred treatment in the perennial nursery industry and is used under Critical Use Exemptions and Quarantine/Preshipment provisions of the Montreal Protocol. The present research was conducted to evaluate reduced-rate MB applications sealed with conventional and low-permeability plastic films compared with the primary alternative material. Reduced rates (100-260 kg ha(-1)) of MB applied in combination with chloropicrin (Pic) and sealed with a low-permeability plastic film provided weed and nematode control similar to the industry standard rate of 392 kg ha(-1) MB:Pic (98:2) sealed with high-density polyethylene (HDPE) film. However, the primary alternative chemical, 1,3-dichloropropene (1,3-D), tended to provide slightly lower pest control even on sites with relatively low plant parasitic nematode, soil-borne pathogen and weed pest pressure. If California regulations change to allow the use of low-permeability films in broadcast fumigant applications, the results of this research suggest that reduced rates of MB in perennial crop nurseries could serve as a bridge strategy until more technically, economically and environmentally acceptable alternatives are developed. Published 2010 by John Wiley & Sons, Ltd.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
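The quantity being maximized, the mutual information between responses and labels, can be estimated from a contingency table of discrete outcomes. This stand-alone sketch is not the paper's entropy-estimation and gradient machinery, only the measure itself:

```python
import math
from collections import Counter

def mutual_information(y_true, y_pred):
    """Empirical mutual information (in nats) between two discrete label lists."""
    n = len(y_true)
    pxy = Counter(zip(y_true, y_pred))
    px, py = Counter(y_true), Counter(y_pred)
    # I(X;Y) = sum_{x,y} p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

labels = [0] * 20 + [1] * 20
informative = labels[:]          # responses that match the labels exactly
uninformative = [0] * 40         # responses that ignore the input
mi_hi = mutual_information(labels, informative)    # -> log 2 (full information)
mi_lo = mutual_information(labels, uninformative)  # -> 0 (no information)
```

A perfectly informative response attains the label entropy (log 2 for balanced binary labels), while a constant response carries zero information; the regularizer pushes the learned classifier toward the former.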
Candela-Juan, C., E-mail: ccanjuan@gmail.com [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026, Spain and National Dosimetry Centre (CND), Valencia 46009 (Spain); Niatsetski, Y. [Elekta Brachytherapy, Veenendaal 3905 TH (Netherlands); Laarse, R. van der [Quality Radiation Therapy BV, Zeist 3707 HB (Netherlands); Granero, D. [Department of Radiation Physics, ERESA, Hospital General Universitario, Valencia 46014 (Spain); Ballester, F. [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Perez-Calatayud, J. [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026, Spain and Department of Radiotherapy, Clínica Benidorm, Benidorm 03501 (Spain); Vijande, J. [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100, Spain and Instituto de Física Corpuscular (UV-CSIC), Burjassot 46100 (Spain)
2016-04-15
Purpose: The aims of this study were (i) to design a new high-dose-rate (HDR) brachytherapy applicator for treating surface lesions with planning target volumes larger than 3 cm in diameter and up to 5 cm in size, using the microSelectron-HDR or Flexitron afterloader (Elekta Brachytherapy) with a ¹⁹²Ir source; (ii) to calculate by means of the Monte Carlo (MC) method the dose distribution for the new applicator when it is placed against a water phantom; and (iii) to validate experimentally the dose distributions in water. Methods: The PENELOPE2008 MC code was used to optimize dwell positions and dwell times. Next, the dose distribution in a water phantom and the leakage dose distribution around the applicator were calculated. Finally, MC data were validated experimentally for a ¹⁹²Ir mHDR-v2 source by measuring (i) dose distributions with radiochromic EBT3 films (ISP); (ii) percentage depth–dose (PDD) curve with the parallel-plate ionization chamber Advanced Markus (PTW); and (iii) absolute dose rate with EBT3 films and the PinPoint T31016 (PTW) ionization chamber. Results: The new applicator is made of tungsten alloy (Densimet) and consists of a set of interchangeable collimators. Three catheters are used to allocate the source at prefixed dwell positions with preset weights to produce a homogenous dose distribution at the typical prescription depth of 3 mm in water. The same plan is used for all available collimators. PDD, absolute dose rate per unit of air kerma strength, and off-axis profiles in a cylindrical water phantom are reported. These data can be used for treatment planning. Leakage around the applicator was also scored. The dose distributions, PDD, and absolute dose rate calculated agree within experimental uncertainties with the doses measured: differences of MC data with chamber measurements are up to 0.8% and with radiochromic films are up to 3.5%. Conclusions: The new applicator and the dosimetric data provided here will be a valuable
Candela-Juan, C; Niatsetski, Y; van der Laarse, R; Granero, D; Ballester, F; Perez-Calatayud, J; Vijande, J
2016-04-01
The aims of this study were (i) to design a new high-dose-rate (HDR) brachytherapy applicator for treating surface lesions with planning target volumes larger than 3 cm in diameter and up to 5 cm in size, using the microSelectron-HDR or Flexitron afterloader (Elekta Brachytherapy) with a (192)Ir source; (ii) to calculate by means of the Monte Carlo (MC) method the dose distribution for the new applicator when it is placed against a water phantom; and (iii) to validate experimentally the dose distributions in water. The PENELOPE2008 MC code was used to optimize dwell positions and dwell times. Next, the dose distribution in a water phantom and the leakage dose distribution around the applicator were calculated. Finally, MC data were validated experimentally for a (192)Ir mHDR-v2 source by measuring (i) dose distributions with radiochromic EBT3 films (ISP); (ii) percentage depth-dose (PDD) curve with the parallel-plate ionization chamber Advanced Markus (PTW); and (iii) absolute dose rate with EBT3 films and the PinPoint T31016 (PTW) ionization chamber. The new applicator is made of tungsten alloy (Densimet) and consists of a set of interchangeable collimators. Three catheters are used to allocate the source at prefixed dwell positions with preset weights to produce a homogenous dose distribution at the typical prescription depth of 3 mm in water. The same plan is used for all available collimators. PDD, absolute dose rate per unit of air kerma strength, and off-axis profiles in a cylindrical water phantom are reported. These data can be used for treatment planning. Leakage around the applicator was also scored. The dose distributions, PDD, and absolute dose rate calculated agree within experimental uncertainties with the doses measured: differences of MC data with chamber measurements are up to 0.8% and with radiochromic films are up to 3.5%. The new applicator and the dosimetric data provided here will be a valuable tool in clinical practice, making treatment of
APPLICATION OF ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM IN INTEREST RATES EFFECTS ON STOCK RETURNS
ELEFTHERIOS GIOVANIS
2011-02-01
In the current study we examine the effects of interest rate changes on common stock returns of the Greek banking sector. We examine the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) process and an Adaptive Neuro-Fuzzy Inference System (ANFIS). Our findings are that the changes of interest rates, based on the GARCH model, are insignificant for common stock returns during the period we examine. On the other hand, with ANFIS we can extract the rules, and in each case we can have positive or negative effects depending on the conditions and the firing rules of the inputs, information that cannot be retrieved with traditional econometric modelling. Furthermore, we examine the forecasting performance of both models and conclude that ANFIS outperforms the GARCH model in both in-sample and out-of-sample periods.
Application of Calspan pitch rate control system to the Space Shuttle for approach and landing
Weingarten, N. C.; Chalk, C. R.
1983-01-01
A pitch rate control system designed for use in the shuttle during approach and landing was analyzed and compared with a revised control system developed by NASA and the existing OFT control system. The design concept control system uses filtered pitch rate feedback with proportional plus integral paths in the forward loop. Control system parameters were designed as a function of flight configuration. Analysis included time and frequency domain techniques. Results indicate that both the Calspan and NASA systems significantly improve the flying qualities of the shuttle over the OFT. Better attitude and flight path control and less time delay are the primary reasons. The Calspan system is preferred because of reduced time delay and simpler mechanization. Further testing of the improved flight control systems in an in-flight simulator is recommended.
Hunt, Allen G; Ghanbarian, Behzad
2013-01-01
We apply our theory of conservative solute transport, based on concepts from percolation theory, directly and without modification to reactive solute transport. This theory has previously been shown to predict the observed range of dispersivity values for conservative solute transport over ten orders of magnitude of length scale. We now show that the temporal dependence derived for the solute velocity accurately predicts the time-dependence for the weathering of silicate minerals over nine orders of magnitude of time scale, while its predicted length dependence agrees with data obtained for reaction rates over five orders of magnitude of length scale. In both cases, it is possible to unify lab and field results. Thus, net reaction rates appear to be limited by solute transport velocities. We suggest the possible relevance of our results to landscape evolution of the earth's terrestrial surface.
Ambus, P.; Kure, L.K.; Jensen, E.S.
2002-01-01
Gross N mineralization and immobilization were examined in soil amended with compost and sewage sludge on seven occasions during a year, using N-15 pool dilution and enrichment techniques. Gross N mineralization was initially stimulated by both wastes and accelerated through the first 112 days of incubation, peaking at 5 mg N.kg(-1).d(-1) with compost, compared with 4 mg N.kg(-1).d(-1) in control and sludge-treated soil. Mineralization rates exceeded immobilization rates by, on average, 6.3 (compost) and 11.4 (sludge) times, leading to a persistent net N mineralization cumulating up to 160 mg N.kg(-1) soil (compost) and 54 mg N.kg(-1) soil (sludge) over the season from May to November. The numerical model FLUAZ comprehensively predicted rates of gross mineralization and immobilization. Sludge exhibited an early season N release, whereas compost released only 10% of the N...
Hernando, David; Hernando, Alberto; Casajús, Jose A; Laguna, Pablo; Garatachea, Nuria; Bailón, Raquel
2017-09-26
Standard methodologies of heart rate variability analysis, and its physiological interpretation as a marker of autonomic nervous system condition, have been widely published for rest conditions, but much less so for exercise. A methodological framework for heart rate variability (HRV) analysis during exercise is proposed, which deals with the non-stationary nature of HRV during exercise, includes respiratory information, and identifies and corrects spectral components related to cardiolocomotor coupling (CC). This is applied to 23 male subjects who underwent different tests: maximal and submaximal, running and cycling; the ECG, respiratory frequency and oxygen consumption were simultaneously recorded. High-frequency (HF) power estimates obtained with the proposed methodology differ largely from those obtained with the standard fixed band. For medium and high levels of exercise and recovery, HF power increases by 20 to 40%. When cycling, HF power increases by around 40% with respect to running, while CC power is around 20% stronger in running.
Open-source hardware and software and web application for gamma dose rate network operation.
Luff, R; Zähringer, M; Harms, W; Bleher, M; Prommer, B; Stöhlker, U
2014-08-01
The German Federal Office for Radiation Protection operates a network of about 1800 gamma dose rate stations as a part of the national emergency preparedness plan. Each of the six network centres is capable of operating the network alone. Most of the used hardware and software have been developed in-house under open-source license. Short development cycles and close cooperation between developers and users ensure robustness, transparency and fast maintenance procedures, thus avoiding unnecessary complex solutions. This also reduces the overall costs of the network operation. An easy-to-expand web interface has been developed to make the complete system available to other interested network operators in order to increase cooperation between different countries. The interface is also regularly in use for education during scholarships of trainees supported, e.g. by the 'International Atomic Energy Agency' to operate a local area dose rate monitoring test network.
Recurrence Plot Based Measures of Complexity and its Application to Heart Rate Variability Data
Marwan, N; Meyerfeldt, U; Schirdewan, A; Kurths, J
2002-01-01
In complex systems the knowledge of transitions between regular, laminar or chaotic behavior is essential to understand the processes going on there. Linear approaches are often not sufficient to describe these processes and several nonlinear methods require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart rate variability data. For the logistic map these measures enable us to detect transitions between chaotic and periodic states, as well as to identify additional laminar states, i.e. chaos-chaos transitions. Traditional recurrence quantification analysis fails to detect these latter transitions. Applying our new measures to the heart rate variability data, we are able to detect and quantify laminar phases before a life-threatening cardiac arrhythmia and, thus, to enable a prediction of such an event. Our findings could be of importance for the therapy of mal...
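The vertical-structure measures the abstract introduces can be sketched directly: build a recurrence matrix by thresholding pairwise distances, then count the fraction of recurrence points on vertical lines (laminarity). The series and threshold below are illustrative, not the paper's HRV data:

```python
import numpy as np

def recurrence_matrix(x, eps):
    # R[i, j] = 1 when states i and j are within eps of each other
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def laminarity(R, vmin=2):
    """Share of recurrence points lying on vertical lines of length >= vmin."""
    lens = []
    for col in R.T:
        run = 0
        for v in col:
            if v:
                run += 1
            elif run:
                lens.append(run)
                run = 0
        if run:
            lens.append(run)
    total = sum(lens)
    return sum(l for l in lens if l >= vmin) / total if total else 0.0

# Laminar signal: nearly constant, so long vertical structures dominate
lam_flat = laminarity(recurrence_matrix(np.full(100, 0.5), eps=0.01))

# Chaotic signal: logistic map at r = 4, mostly isolated recurrence points
x = np.empty(100)
x[0] = 0.4
for i in range(99):
    x[i + 1] = 4 * x[i] * (1 - x[i])
lam_chaos = laminarity(recurrence_matrix(x, eps=0.01))
```

A laminar (slowly varying) signal yields laminarity near 1, while the fully chaotic logistic map yields a much lower value; tracking such vertical-line statistics over time is what lets the method flag laminar phases before an arrhythmia.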
Touati, Sarah; Naylor, Mark; Main, Ian
2016-02-01
The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global `background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or due to redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes Factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and `runs' tests, and a variety of Z-score tests. The results demonstrate that effectiveness varies widely among these tests. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, and the Kolmogorov-Smirnov and runs tests proved relatively ineffective in reliably detecting a change point. We then apply the methods tested to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large
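The Information Criteria route the abstract favours can be sketched for the simplest case it considers: a single change point in the background rate of a Poisson (purely random) catalogue. The counts and the AIC parameter bookkeeping below are illustrative, not the paper's procedure:

```python
import math

def poisson_loglik(counts, lam):
    # Log-likelihood of i.i.d. Poisson(lam) counts per time bin
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def aic_pair(counts):
    """AIC of a constant-rate model vs. the best single-change-point model."""
    lam = sum(counts) / len(counts)
    aic_const = 2 * 1 - 2 * poisson_loglik(counts, lam)     # one parameter
    aic_change = math.inf
    for k in range(1, len(counts)):                          # scan change points
        a, b = counts[:k], counts[k:]
        la = max(sum(a) / len(a), 1e-9)   # guard against log(0) rates
        lb = max(sum(b) / len(b), 1e-9)
        ll = poisson_loglik(a, la) + poisson_loglik(b, lb)
        aic_change = min(aic_change, 2 * 3 - 2 * ll)         # 2 rates + 1 break
    return aic_const, aic_change

stepped = [1] * 30 + [6] * 30   # synthetic counts with a clear rate change
steady = [2] * 60               # homogeneous counts, no change
```

For the stepped counts the change-point model wins (lower AIC) despite its extra parameters; for homogeneous counts the penalty makes the constant-rate model win, which is the behavior that lets such criteria discriminate a genuine background-rate change from noise.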
Application of Artificial Neural Network in Predicting the Survival Rate of Gastric Cancer Patients
Biglarian, A; E. Hajizadeh; Kazemnejad, A; Zali, MR
2011-01-01
Background: The aim of this study was to predict the survival rate of Iranian gastric cancer patients using the Cox proportional hazard and artificial neural network models, as well as comparing the ability of these approaches in predicting the survival of these patients. Methods: In this historical cohort study, the data gathered from 436 registered gastric cancer patients who had surgery between 2002 and 2007 at the Taleghani Hospital (a referral center for gastrointestinal...